Getting computers to understand natural human language is a long-standing challenge for artificial intelligence (AI) researchers, who hope that someday robots will reach the fluency of R2-D2 and C-3PO of Star Wars fame. Researchers with backgrounds in everything from electrical and mechanical engineering to animal behavior are racing to develop natural language models for AI.
This week, Nvidia announced a milestone in training BERT, an advanced AI language model: its AI platform trained the model in 53 minutes.
According to an Nvidia blog, BERT is well suited to language understanding tasks like translation, question answering, sentiment analysis, and sentence classification. A key advantage of BERT is that its pre-training doesn't require labeled data, so it can learn from any plain text. This opens the door to massive datasets, which in turn further improves state-of-the-art accuracy.
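To make the "no labeled data" point concrete, here is a minimal sketch of the idea behind BERT's masked-language-model pre-training: plain, unlabeled text is turned into training pairs by hiding some tokens and asking the model to predict them. The function name and parameters below are illustrative, not Nvidia's or BERT's actual code; real BERT uses WordPiece subword tokenization and a more elaborate masking scheme.

```python
import random

def make_masked_examples(text, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Turn plain, unlabeled text into (masked_tokens, labels) pairs,
    mimicking the masked-language-model objective: the labels come
    from the text itself, so no human annotation is needed."""
    rng = random.Random(seed)  # seeded for reproducibility
    tokens = text.split()      # toy whitespace tokenizer (BERT uses WordPiece)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            labels.append(tok)    # the model must recover the hidden token
        else:
            masked.append(tok)
            labels.append(None)   # no prediction target at this position
    return masked, labels

masked, labels = make_masked_examples("the cat sat on the mat", mask_prob=0.5)
```

Because the targets are just the original tokens, any raw text corpus becomes training data, which is what lets BERT scale to the massive datasets the blog describes.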
Nvidia says its Tensor Core graphics processing units (GPUs) set a world record by training BERT in just 53 minutes.
What this means for the user experience, ultimately, is a more streamlined, natural interaction. Instead of having to memorize specific phrases, users will be able to speak to their devices the same way they would to a friend or family member. The high processing speeds also make the interaction flow more naturally, free from awkward pauses.