Researchers have built a machine learning model that uses a smartphone to detect abnormalities in speech and facial muscle movements, aiding the diagnosis of stroke.
The researchers, at Penn State and Houston Methodist Hospital, are trying to emulate what a physician does when assessing a possible stroke victim in a clinical setting, before deciding whether to order a CT scan, according to an article in Artificial Intelligence Research.
“When it comes to diagnosing a stroke, emergency room physicians have limited options: send the patient for often expensive and time-consuming radioactivity-based scans or call a neurologist who may not be available to perform clinical diagnostic tests,” said James Wang, professor of information sciences and technology at Penn State.
The researchers are using facial motion analysis and natural language processing to detect abnormalities such as a drooping cheek or slurred speech. Their hope is that caregivers or patients can use the app to make self-assessments before going to a hospital.
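To make the facial-motion idea concrete, here is a minimal sketch of one asymmetry cue such a system might compute. The function, landmark names, and coordinates are all illustrative assumptions, not details from the study, which did not publish its feature set here.

```python
# Hypothetical sketch: one simple facial-asymmetry cue.
# Landmark coordinates are illustrative, not from the study.

def mouth_droop_score(left_corner, right_corner):
    """Return the vertical offset between mouth corners.

    Each corner is an (x, y) pixel coordinate; a large absolute
    difference in y suggests one side of the mouth is drooping.
    """
    return abs(left_corner[1] - right_corner[1])

# Symmetric face: corners at the same height -> score 0
print(mouth_droop_score((100, 220), (180, 220)))  # 0

# Drooping corner: a 15-pixel vertical offset
print(mouth_droop_score((100, 220), (180, 235)))  # 15
```

In practice such a score would be normalized (for example by inter-ocular distance) and tracked over a video sequence rather than a single frame, then combined with speech features before classification.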
The researchers relied on a dataset of 80 patients who experienced stroke symptoms at Houston Methodist. The resulting model achieved 79% accuracy. A central advantage is the time it saves in assessing a possible stroke.
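With only 80 patients, how accuracy is estimated matters as much as the number itself. One common approach for small clinical datasets is leave-one-out cross-validation, which wastes no samples. The sketch below is an assumption for illustration: the toy data and the 1-nearest-neighbor rule are not the researchers' actual model or evaluation protocol.

```python
# Illustrative only: leave-one-out evaluation on a small dataset.
# The classifier (1-nearest-neighbor) and data are hypothetical.

def loo_accuracy(features, labels):
    """Leave-one-out accuracy of a 1-nearest-neighbor classifier."""
    correct = 0
    for i, x in enumerate(features):
        # Predict sample i from its nearest *other* sample,
        # by squared Euclidean distance.
        best_j = min(
            (j for j in range(len(features)) if j != i),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(x, features[j])),
        )
        correct += labels[best_j] == labels[i]
    return correct / len(features)

# Toy 1-D data: two well-separated clusters classify cleanly.
feats = [(0.1,), (0.2,), (0.3,), (2.1,), (2.2,), (2.3,)]
labs = [0, 0, 0, 1, 1, 1]
print(loo_accuracy(feats, labs))  # 1.0
```

Each of the 80 patients would play the held-out role once, so a figure like 79% reflects performance on patients the model never saw during training.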
Millions of neurons die every minute during a stroke, yet studies suggest that many patients with moderate symptoms have their diagnosis delayed by hours, according to Houston Methodist vascular neurologist John Volpi. “The earlier you can identify a stroke, the better the options for patients,” added Stephen T.C. Wong of Houston Methodist.