Nvidia AI research boasts savvy telemed

Telemed calls could be improved with better language processing based on a huge data set of clinical terms. (Getty Images)

 

Nvidia presented new research on Sunday that describes more advanced linguistic AI for use in telemedicine applications in hopes of making them more medically savvy and offering help to both patients and physicians.

The approach is designed to automatically capture conversations between doctors and patients for use in the medical record.  Patients and their caregivers can in theory be more informed about their diseases and medications while doctors can more easily process volumes of patient consultations for medical records and insurance purposes.


Such capabilities already exist, of course, but Nvidia said it can augment state-of-the-art conversational AI models with clinical ontologies such as SNOMED CT or ICD-10 to enrich the AI output with disease conditions, symptoms and treatments. The research started in early 2019 and was presented at the Conference for Machine Intelligence in Medical Imaging 2020, held virtually.
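To illustrate the idea of concept mapping, the sketch below links free-text mentions from a transcript to codes in a clinical ontology such as ICD-10. This is not Nvidia's pipeline: the tiny lookup table stands in for a full ontology, and the function names and matching logic are simplified assumptions (the ICD-10 codes shown are real).

```python
# Illustrative only: a miniature "ontology" mapping surface forms to codes.
# A production system would query a full SNOMED CT or ICD-10 terminology.
CONCEPT_MAP = {
    # surface form -> (ontology, code, preferred term)
    "high blood pressure": ("ICD-10", "I10", "Essential (primary) hypertension"),
    "hypertension":        ("ICD-10", "I10", "Essential (primary) hypertension"),
    "type 2 diabetes":     ("ICD-10", "E11", "Type 2 diabetes mellitus"),
    "asthma":              ("ICD-10", "J45", "Asthma"),
}

def map_concepts(mentions):
    """Link each recognized mention to an ontology code, skipping unknowns."""
    linked = []
    for mention in mentions:
        entry = CONCEPT_MAP.get(mention.lower().strip())
        if entry is not None:
            ontology, code, term = entry
            linked.append({"mention": mention, "ontology": ontology,
                           "code": code, "term": term})
    return linked
```

For example, `map_concepts(["High blood pressure"])` links the mention to ICD-10 code `I10`, while unrecognized mentions are dropped.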

A team of Nvidia researchers presented its findings in a paper called “Fast and Accurate Clinical Named Entity Recognition and Concept Mapping from Conversations.” The COVID-19 pandemic led to a drastic increase in healthcare provider and call center volumes, making the automatic speech recognition model more valuable, they noted.

For AI training as part of the research, Nvidia started with a well-known natural language processing pre-training technique known as BERT, then fine-tuned it on a clinical natural language processing dataset using publicly available data from the National Center for Biomedical Computing. The clinical corpus comprised 4.5 billion words, and the resulting model performed better than the BERT model trained on words from Wikipedia and BooksCorpus. The team then also used a larger BERT-style model named BioMegatron, trained on a PubMed biomedical text corpus of 6.1 billion words, and achieved 94.79% accuracy.
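Models fine-tuned for clinical named entity recognition typically tag each token with BIO labels (B- begins an entity, I- continues it, O is outside). As a minimal sketch, assuming a generic BIO tagging scheme rather than Nvidia's exact label set, a decoder that regroups tagged tokens into entity spans might look like:

```python
def decode_bio(tokens, tags):
    """Group BIO-tagged tokens into (entity_text, label) spans.

    tokens: list of word strings
    tags:   parallel list of tags such as "B-PROBLEM", "I-PROBLEM", "O"
    """
    entities, current, label = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:                       # close any open entity
                entities.append((" ".join(current), label))
            current, label = [tok], tag[2:]   # start a new entity
        elif tag.startswith("I-") and current and tag[2:] == label:
            current.append(tok)               # continue the open entity
        else:
            if current:                       # "O" or mismatched "I-" ends it
                entities.append((" ".join(current), label))
            current, label = [], None
    if current:                               # flush a trailing entity
        entities.append((" ".join(current), label))
    return entities
```

Running it on a tagged sentence such as `decode_bio(["She", "has", "high", "blood", "pressure"], ["O", "O", "B-PROBLEM", "I-PROBLEM", "I-PROBLEM"])` recovers the span `("high blood pressure", "PROBLEM")`, which a concept-mapping step could then link to an ontology code.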

Nvidia said it was able to reduce AI inference work on a single medical sentence from up to three minutes down to 1 second by using an Nvidia Jarvis service on a T4 or V100 GPU. Jarvis is an application framework for building conversational AI services that Nvidia customers can use to create custom applications.

Ordinary doctors might not have multiple GPUs, but medical organizations could turn to cloud processing for language processing work. Nvidia said its low-profile, lower-power Jetson platforms could serve as an interface to the cloud.

Dr. Mona Flores, global head of medical AI at Nvidia, said in an interview with Fierce Electronics that the research work “changes the game for physicians and patients alike.” The research is already being turned into commercial applications and can be fine-tuned for special domain data, such as neurology. “They can take our Megatron model and we are enabling what was much harder before,” she said.

The research shows that gathering a large data set for AI is still important, but so is training with specific data in a specific field of concentration, she added.

Hoo-Chang Shin, senior research scientist at Nvidia, said patients could gain access to doctors’ notes more easily, with insights into the names of diseases and medications. Shin recalled that his young daughter suffered an insect bite this summer that caused her leg to swell up quickly. A doctor diagnosed her problem via a webcam remote consultation, but Shin wasn’t able to take notes while holding the camera, so he missed out on some useful information.

“This application, even though it is in research now, makes it possible to help in such a situation,” he said. “It does speech recognition and transcribes what was said between me and the doctor to recognize the disease and medical-related terms, then maps it into the medical record.  I can look it up and don’t have to write it down and for next time I can take more precautions, look for the right type of spray and avoid the next bug bite.”

RELATED: Nvidia DGX supercomputer at center of $70M AI effort with University of Florida
