AI edge inferencing box to assist factories during COVID-19

A deep learning processor from Israel’s Hailo is being deployed inside BOXiedge, a fanless video analytics box from Foxconn intended for smart medical applications as well as industrial IoT.

The Hailo-8 chip is paired with the SynQuacer, a parallel processor from Japan’s Socionext, inside the Foxconn box, creating a standalone artificial intelligence (AI) inference node, the three companies announced on Tuesday.

Foxconn said the collaboration is an example of how it wants to pave the way for next-generation AI solutions, including tumor detection and robotic navigation.

The companies told FierceElectronics that smarter automation and faster processing on production lines could aid productivity and business continuity while COVID-19 social distancing measures are in place.

“Processing and analyzing over 20 streaming camera input feeds with AI in real-time, all at the edge, will streamline industrial automation and quality assurance, helping keep factories running,” the companies said via email. Foxconn and Socionext have previously worked together on smart medical devices to improve tumor detection and robotic navigation in digital health applications.

“Our edge computing solution combined with Hailo’s deep learning processor will create even better energy efficiency for standalone AI inference nodes,” Gene Liu, vice president of the semiconductor subgroup at Foxconn Technology Group, said in a statement.

Foxconn, based in Taiwan, has previously deployed in-house AI products on its chip production lines, improving the accuracy of its appearance-defect inspection reports. The company said AI has cut the cost of appearance-defect inspection projects by at least 33%.

The BOXiedge AI node is designed to offer low latency and high data rates with high reliability. Hundreds of cameras used for traffic monitoring, inside stores, or on industrial assembly lines generate video streams that need to be processed locally, which is the role the node is built to fill. Processing and inferencing at the edge, such as directly on the production floor rather than in the cloud, will translate into significant cost savings, the three companies said.
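To make that edge-processing pattern concrete, the sketch below shows the general idea of running inference next to the cameras and shipping only compact results off the device. It is a hypothetical illustration using generic OpenCV and ONNX Runtime calls, not the Hailo or Socionext software stack; the stream URLs and model file are placeholders.

```python
# Hypothetical sketch of edge inference over multiple camera streams.
# Not the vendors' SDK: uses generic OpenCV + ONNX Runtime for illustration.
import cv2
import numpy as np
import onnxruntime as ort

STREAM_URLS = ["rtsp://camera-01/stream", "rtsp://camera-02/stream"]  # placeholder feeds
MODEL_PATH = "defect_classifier.onnx"                                 # placeholder model

session = ort.InferenceSession(MODEL_PATH)
input_name = session.get_inputs()[0].name
captures = [cv2.VideoCapture(url) for url in STREAM_URLS]

while True:
    for cam_id, cap in enumerate(captures):
        ok, frame = cap.read()
        if not ok:
            continue  # skip dropped frames instead of blocking the loop
        # Resize and normalize to the model's expected NCHW float32 layout.
        blob = cv2.resize(frame, (224, 224)).astype(np.float32) / 255.0
        blob = np.transpose(blob, (2, 0, 1))[np.newaxis, ...]
        scores = session.run(None, {input_name: blob})[0]
        # Only a small result leaves the device; raw video never goes to the cloud.
        print(f"camera {cam_id}: top class {int(np.argmax(scores))}")
```

The key point of the pattern is the last step: bandwidth and cloud costs drop because only per-frame results, not the video itself, are transmitted upstream.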

The Hailo-8 processor delivers up to 26 tera-operations per second (TOPS), fast enough to run deep learning applications that previously ran in the cloud, the companies said.
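As a rough illustration of how a figure like that relates to the multi-camera workloads described above, the back-of-envelope calculation below divides peak throughput by an assumed per-frame operation count. The 5 GOPs-per-frame figure is a hypothetical stand-in for a mid-sized vision model, not a published Hailo-8 benchmark, and sustained real-world throughput is always below theoretical peak.

```python
# Hypothetical back-of-envelope: stretching a 26 TOPS peak across camera feeds.
PEAK_TOPS = 26            # Hailo-8 peak, tera-operations per second (vendor spec)
OPS_PER_FRAME = 5e9       # assumed ~5 GOPs per frame for a mid-sized vision model
FPS_PER_CAMERA = 30       # typical camera frame rate

frames_per_second_budget = (PEAK_TOPS * 1e12) / OPS_PER_FRAME
camera_budget = frames_per_second_budget / FPS_PER_CAMERA

print(f"Theoretical budget: {frames_per_second_budget:,.0f} frames/s "
      f"≈ {camera_budget:,.0f} cameras at {FPS_PER_CAMERA} fps, before overheads")
```

Even with generous margins for pre-processing and lower sustained utilization, the arithmetic shows why a chip in this class can keep up with the 20-plus real-time feeds the companies cite.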

RELATED: Intel and UPenn head group using AI to find brain tumors