Small Language Models reach into customer service and AI-driven microphones

Small Language Models (SLMs) are often overshadowed by the scale and visibility of Large Language Models (LLMs). SLMs are lightweight AI models designed for constrained environments and built to perform specific tasks efficiently, which makes them well suited to edge devices. LLMs, by contrast, are based on Transformer neural networks and trained on vast amounts of data; the number of behavior-determining parameters in an LLM can run into the billions.

On the other hand, the “small” in SLM indicates a neural network with fewer parameters, trained on a smaller volume of data. That makes deployment at the edge far more practical. A good example is Synaptics’ integration of Syntiant Corp’s small language model assistant (SLMA) into its new set-top box lineup.

Role of Syntiant’s Small Language Model Assistant in customer service

Syntiant, founded in 2017 and headquartered in Irvine, Calif., is involved in edge AI software and hardware deployments. It provided a hardware-agnostic SLMA for Synaptics’ SL1680 chip series, built to handle AI operations efficiently within tight power constraints. “One of the most notable features of SLMA is its compact size, which enables it to operate on our customers’ existing platforms without necessitating expensive upgrades or additional cloud services. This simplifies deployment and minimizes ongoing costs,” explained Dr. Jonathan Su, VP of ML & software at Syntiant Corp, in an exclusive interview with Fierce Electronics. In addition to enhancing their AI capabilities, Syntiant’s language model lets edge devices perform tasks without constant cloud connectivity.

Syntiant’s Neural Decision Processor (NDP), an always-on sensor and speech recognition processor, is likely integrated into Synaptics’ set-top boxes. The NDP optimizes the setup for tasks such as voice recognition and sensor processing while putting cloud-free conversational AI at the end user’s disposal.

Su explained, “Our innovative approach merges the natural interaction capabilities and language processing strengths of existing models with a tailored, smaller knowledge base that focuses specifically on essential user manuals and typical customer service interactions.”

Combined with the automatic speech recognition (ASR) models running on the NDP, this gives the end user a natural voice interface that guides them through installation, troubleshooting and other customer support tasks. Syntiant’s SLMA makes these set-top boxes smarter while keeping energy consumption to a minimum.
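The flow described above can be sketched roughly: transcribed speech is matched against a small, device-specific knowledge base on the device itself rather than being shipped to the cloud. This is a minimal, hypothetical illustration; the knowledge-base entries, function names and matching logic are assumptions for the sketch, not Syntiant’s actual implementation.

```python
# Hypothetical on-device voice-help flow: ASR output (already transcribed
# by the always-on processor) is matched against a tiny local knowledge
# base of manual/troubleshooting entries. All entries are illustrative.

SUPPORT_KB = {
    "no signal on screen": "Check that the HDMI cable is seated and the TV input matches.",
    "remote not responding": "Replace the remote batteries, then re-pair by holding OK for 5 seconds.",
    "wifi setup": "Open Settings > Network, select your SSID and enter the password.",
}

def answer(transcript: str) -> str:
    """Return the KB entry whose key shares the most words with the transcript."""
    words = set(transcript.lower().split())
    best_key = max(SUPPORT_KB, key=lambda k: len(words & set(k.split())))
    if not words & set(best_key.split()):
        return "Sorry, I don't have help for that yet."
    return SUPPORT_KB[best_key]

print(answer("my remote is not responding"))
```

A production assistant would use a trained intent model rather than word overlap, but the design point is the same: because the knowledge base is small and local, the whole request-to-answer loop runs without a network round trip.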

Why are Small Language Models efficient?

Su explained the underlying advantage of smaller datasets and fewer parameters. “In contrast to many large language models that typically contain around a billion parameters, our model efficiently operates with just 23 million parameters, making it resource efficient,” he said. The SLM eliminates round trips to the cloud, which cuts latency, and unlike LLMs it does not depend on internet connectivity, so it can be used in applications with limited connectivity.
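The 23-million-parameter figure makes the resource argument concrete. A back-of-the-envelope calculation (the only Syntiant number here is the 23M quoted above; the precisions and the 1B comparison point are illustrative assumptions) shows why such a model fits on an edge platform while a billion-parameter model does not:

```python
# Rough weight-memory footprint: parameters x bytes-per-parameter.
# Precisions and the 1B-parameter comparison are illustrative, not vendor data.

def weight_mb(params: int, bytes_per_param: float) -> float:
    """Approximate weight storage in MiB for a given parameter count."""
    return params * bytes_per_param / 1024**2

for name, params in [("SLM (23M)", 23_000_000), ("LLM (1B)", 1_000_000_000)]:
    for prec, nbytes in [("fp16", 2), ("int8", 1)]:
        print(f"{name} @ {prec}: {weight_mb(params, nbytes):.0f} MiB")
```

At int8 precision the 23M-parameter model needs on the order of tens of MiB for weights, versus roughly a gigabyte for a 1B-parameter model, which is the difference between fitting in an embedded device’s memory budget and not.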

Syntiant's NDP processor, running a hardware-agnostic SLMA (Syntiant)

Su also highlighted the extent of the assistant’s capability. “While the language interpretation capabilities of our model are comparable to larger models, we can significantly reduce the knowledge base, which results in faster processing times and lower power consumption. This makes SLMA particularly well-suited for applications that require continuous monitoring,” he said. That efficiency also makes the SLMA suitable for battery-operated systems and situations with limited power. Each of these traits improves the efficiency of SLMs and, in turn, reduces costs at every stage of operation.

Syntiant also focuses on optimizing the hardware side of its technology. “Optimizing these models to ensure they run swiftly and efficiently on our chips is a primary goal, as it directly impacts the overall performance of the devices,” Su added. “For all of our applications, prioritizing power-saving features is essential for optimal performance.” Its NDPs are purpose-built for these workloads, delivering higher inference speeds with a reduced memory footprint. They are also designed for low power consumption, a critical factor since these applications involve continuous monitoring.

Syntiant’s vision with AI microphones

Syntiant is also working to extend the use cases of its edge AI technology. Recently, the company acquired the CMM division of Knowles. “We firmly believe that every microphone will evolve into an AI microphone, and this belief positions us well to capitalize on the growing demand for smart functionality in everyday devices,” said Kurt Busch, CEO of Syntiant, as he shared the company’s vision.

The acquisition could mean Syntiant’s NDPs end up paired with high-performance SiSonic MEMS microphones. Su added, “Whether it's for audio solutions like hearing aids or remote controls, having the microphone and an AI chip closely connected allows faster product development and better customer experiences. Long-term, microphones will evolve into AI-driven sensors, capable of sophisticated event detection.” AI can clean up the voice input that reaches small language model assistants, improving the end user’s overall experience with the voice assistant.

It could also help convert today’s microphones into AI-driven sensors. “These AI microphones will feature advanced noise suppression, beamforming, and intelligent event detection, allowing them to adapt to different environments. As AI integrates deeper into these sensors, it will enable smarter interactions, automating responses to specific sounds and improving security, communication, and user experience across industries,” he said.