AI-driven 3D video analysis in facial palsy: validity across face angles toward an assessment of spontaneous smile

Event: PSTM 2024
Thu, 9/19/2024: 8:00 AM - 10:00 AM
Background: Recently, artificial intelligence (AI) has become a major trend in scientific research and has been applied to automatic facial keypoint detection, which enables quantitative facial palsy assessment. One limitation of these systems is that measurement accuracy depends on the face angle in the images: yaw or pitch rotation is unacceptable. Movements during spontaneous smile have been less accessible to objective evaluation, because they often involve head movements and patients seldom show a spontaneous smile in the medical office, especially with the face rigidly fixed. Therefore, we have developed a new facial palsy assessment tool using AI-driven video analysis, in which the three-dimensional (3D) distances traveled by facial keypoints are estimated. The purpose of this study is to clarify the impact of yaw or pitch rotation on our assessment tool.
Methods: The study population consisted of 21 patients with unilateral facial palsy of varying severity. Videos of voluntary grinning were recorded in one frontal view and four oblique views, with the face turned toward the healthy side, affected side, upper side, and lower side at approximately 30 degrees. By analyzing these videos with our assessment tool, the 3D distances traveled by the oral commissure during grinning (commissure excursion, CE) were calculated for the healthy side and the affected side, and the difference between the two was calculated. These values were normalized by the inter-inner canthal distance and defined as negative when the oral commissure moved medially. CE in the four oblique views was compared with CE in the frontal view.
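The abstract does not provide implementation details, but as a rough illustration of the quantities described (3D commissure excursion per side, normalization by the inter-inner canthal distance, and a signed side-to-side difference), a minimal sketch might look like the following. All function and array names are hypothetical assumptions, not the authors' code.

```python
# Minimal sketch (assumed, not the authors' implementation) of normalized
# commissure excursion (CE) from 3D keypoint tracks. Each keypoint is assumed
# to be an (n_frames, 3) array of estimated 3D coordinates.
import numpy as np

def commissure_excursion(commissure_xyz, rest_frame=0, peak_frame=-1):
    """3D distance traveled by the oral commissure between rest and peak grin."""
    displacement = commissure_xyz[peak_frame] - commissure_xyz[rest_frame]
    return np.linalg.norm(displacement)

def normalized_ce_difference(healthy_xyz, affected_xyz,
                             canthus_left_xyz, canthus_right_xyz,
                             healthy_moved_medially=False,
                             affected_moved_medially=False):
    """CE per side normalized by the inter-inner canthal distance.

    Medial movement is assigned a negative sign, as in the abstract. In practice
    the direction of movement would be derived from the keypoint geometry; here
    it is passed in as a flag to keep the sketch short.
    """
    icd = np.linalg.norm(canthus_left_xyz[0] - canthus_right_xyz[0])  # at rest
    ce_healthy = commissure_excursion(healthy_xyz) / icd
    ce_affected = commissure_excursion(affected_xyz) / icd
    if healthy_moved_medially:
        ce_healthy = -ce_healthy
    if affected_moved_medially:
        ce_affected = -ce_affected
    return ce_healthy, ce_affected, ce_healthy - ce_affected
```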
Results: The differences of CE (mean ± SD) in the frontal, healthy-oblique, affected-oblique, upper-oblique, and lower-oblique views were 0.22 ± 0.17, 0.16 ± 0.12, 0.27 ± 0.19, 0.18 ± 0.16, and 0.27 ± 0.14, respectively. Compared with the frontal view, there were significant increases in the affected-oblique view (p=0.028) and the lower-oblique view (p=0.032) and significant decreases in the healthy-oblique view (p=0.006) and the upper-oblique view (p=0.016). The difference of CE in each oblique view showed a strong correlation with that in the frontal view (r = 0.80 to 0.92, p<0.0001). CE on the healthy or affected side considered separately showed similar results.
Conclusions: This study suggests that face angle introduces a margin of error into the evaluation of CE with our method, but that this error can be corrected by simple linear regression. AI-driven 3D video analysis is expected to provide a solution for quantitative assessment of movements during spontaneous smile.
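The correction step is not specified beyond "simple linear regression"; one plausible reading, shown only as a sketch with illustrative names, is to fit a linear mapping from oblique-view CE to frontal-view CE on paired measurements and then apply it to new oblique-view values.

```python
# Sketch (assumption, not the authors' code) of a simple linear regression
# correction mapping oblique-view CE to its frontal-view equivalent.
import numpy as np

def fit_angle_correction(ce_oblique, ce_frontal):
    """Least-squares fit of ce_frontal ~ slope * ce_oblique + intercept."""
    slope, intercept = np.polyfit(np.asarray(ce_oblique),
                                  np.asarray(ce_frontal), deg=1)
    return slope, intercept

def correct_ce(ce_oblique_new, slope, intercept):
    """Estimate the frontal-view-equivalent CE from an oblique-view measurement."""
    return slope * np.asarray(ce_oblique_new) + intercept
```

A separate fit per view direction (healthy-, affected-, upper-, and lower-oblique) would match the view-specific biases reported in the results.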

Abstract Presenter

Keigo Narita MD

Abstract Co-Author

Akihiko Takushima

Tracks

Migraine and Peripheral Nerve
PSTM 2024