Platform Enables End-to-End Autonomous Vision System Design

Algolux unveils the Ion Platform for end-to-end vision system design and implementation. Based on patented technology that combines machine learning, computer vision, and computational imaging, Ion enables teams to bring scalability and robustness to their perception and imaging systems for applications such as autonomous vehicles, ADAS, and video surveillance.


The Ion Platform addresses complexity and safety challenges by enabling teams to design their vision systems end-to-end, through simplified workflows and optimized implementations that cross traditional sub-system boundaries. Ion modules can be deployed across the different blocks of the system, including the sensors, processors, perception algorithms, and higher-level stack components such as autonomous planning and control. According to Algolux, platform benefits include:

  • Increased vehicle safety: substantially improved vision system accuracy and robustness across all operating scenarios
  • Improved system effectiveness: systems holistically optimized against key performance metrics
  • Reduced program risk: minimized system costs, accelerated time-to-revenue, and more scalable resourcing


The Ion Platform features two main product lines based on the Eos deep neural network (DNN) perception technology (formerly known as CANA). Eos can be deployed either as an embedded perception software stack within the vision system or applied through Atlas, a suite of tools that design teams can use to optimize their camera-based systems.

[Figure: Ion Platform for End-to-End Vision System Design]

Eos perception software is based on research addressing the fundamental requirement to improve perception accuracy across all operating conditions. Through an end-to-end deep learning approach, Eos reportedly delivers accuracy improvements of more than 30% compared with today's most accurate computer vision algorithms, especially in the harshest scenarios. This enables perception teams to develop leaner system architectures, simplifying the design process and even reducing bill of materials (BoM) costs.

Eos benefits include:

  • Delivering industry-leading robustness and accuracy across all conditions, with improvements of over 30% vs. public and commercial algorithms in many cases
  • Support for any sensor configuration and processing environment through an end-to-end deep learning approach
  • Significantly reducing BoM costs and power consumption by enabling designers to use lower-cost lenses and sensors, or even to remove components such as the image signal processor (ISP); a minimal sketch of this ISP-less idea follows the list
  • Flexible implementation of single-camera through multi-sensor fusion perception architectures
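
To make the "ISP-less" idea concrete, here is a minimal, hypothetical sketch of a perception network that consumes raw Bayer mosaic data directly instead of ISP-processed RGB images. It is illustrative only: the actual Eos architecture is proprietary, and every layer choice below is an assumption.

```python
# Illustrative sketch only: a toy perception network that ingests raw
# Bayer data directly, standing in for the idea of removing the ISP.
# The real Eos architecture is proprietary and not described here.
import torch
import torch.nn as nn

class RawPerceptionNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # PixelUnshuffle packs each 2x2 Bayer tile (R, G, G, B) into four
        # channels at half resolution, taking the place of demosaicing.
        self.pack = nn.PixelUnshuffle(2)
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, raw: torch.Tensor) -> torch.Tensor:
        # raw: (N, 1, H, W) single-channel Bayer mosaic, no ISP applied
        x = self.pack(raw)
        x = self.backbone(x).flatten(1)
        return self.head(x)

net = RawPerceptionNet()
logits = net(torch.rand(2, 1, 256, 256))  # two synthetic raw frames
print(logits.shape)  # torch.Size([2, 10])
```

Trained end to end, such a network learns whatever low-level processing it needs from the raw signal, which is what makes dropping the ISP from the BoM thinkable in the first place.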


Eos applications include:

  • Single-camera, such as intelligent front-facing, rear-view, and mirror-replacement use cases
  • Multi-camera for 360-degree use cases, such as self-parking and autopilot
  • Multi-sensor early fusion for autonomous vehicles and robots


The Atlas Camera Tuning Suite automates the tedious, traditionally manual process of camera tuning. Atlas provides modules and workflows that enable end-to-end camera tuning for both visual image quality (IQ) and computer vision. Benefits include:

  • Accelerating time-to-revenue by shrinking tuning time from many months to weeks or even days
  • A scalable and predictable methodology based on automatic, metric-driven camera tuning (a toy sketch of this loop follows the list)
  • Tuning optimized for any camera configuration, specific to the target application
  • Making practical the previously intractable task of tuning a camera for optimal computer vision accuracy
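
As a rough illustration of what "metric-driven camera tuning" means in practice, the sketch below treats the ISP as a black box and searches its parameter space for the best composite score. The parameter names, the scoring model, and the plain random search are all stand-ins; Atlas's patented solvers and metric set are not disclosed in this article.

```python
# Toy sketch of metric-driven camera tuning as black-box optimization.
# The ISP knobs, scoring model, and random search are hypothetical
# stand-ins for Atlas's actual (undisclosed) solvers and metrics.
import random

PARAM_SPACE = {                      # hypothetical ISP parameters
    "denoise_strength": (0.0, 1.0),
    "sharpen_amount":   (0.0, 2.0),
    "gamma":            (1.8, 2.6),
}

def capture_and_measure(params):
    """Stand-in for: configure the ISP, capture a test chart, and score
    it on objective IQ metrics (sharpness, noise, color accuracy).
    Faked here with a smooth function so the sketch runs anywhere."""
    s = params["sharpen_amount"]
    n = params["denoise_strength"]
    g = params["gamma"]
    sharpness = 1.0 - (s - 1.2) ** 2     # peaks at sharpen_amount = 1.2
    noise_penalty = (0.6 - n) ** 2       # best near denoise_strength = 0.6
    color_error = (g - 2.2) ** 2         # best near gamma = 2.2
    return sharpness - noise_penalty - color_error

def tune(iterations=500, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(iterations):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_SPACE.items()}
        score = capture_and_measure(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

params, score = tune()
print(f"best score {score:.3f} with {params}")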


Atlas Camera Tuning Suite provides modules that include:

  • Atlas Objective IQ (formerly known as CRISP-ML) applies patented solvers and optimization technology to automatically tune cameras to objective image quality metrics such as sharpness, noise, and color accuracy. Objective IQ requires only a simple lab setup, and its workflow lets the image quality team quickly optimize for their specific lens, sensor, and ISP combination.
  • Atlas HDR / AWB adds workflows to Objective IQ for tuning high dynamic range (HDR) and automatic white balance (AWB) capabilities. These apply to specific camera configurations across attributes such as brightness, contrast, and color temperature.
  • Atlas Natural IQ automates and shortens the months-long process of subjective camera tuning by applying a deep learning approach to match the customer’s image quality preference. Customers create a small dataset of natural images that represents the desired “look and feel,” and the camera is automatically tuned to match it as closely as possible.
  • Atlas Computer Vision maximizes the accuracy of a computer vision system by leveraging Eos DNN technology and specialized solvers. By letting teams automatically optimize camera image quality for computer vision metrics, rather than manually tuning only for visual image quality, Atlas closes a long-standing gap in the vision system development process (a toy illustration follows the list).
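
To illustrate how tuning "for computer vision" differs from tuning for visual IQ, the toy sketch below scores candidate camera settings by a detector's accuracy on processed frames instead of by an image quality metric. Every component here (the one-knob ISP, the threshold detector, and the synthetic dataset) is a hypothetical stand-in for illustration only.

```python
# Toy sketch: tune a camera parameter for detection accuracy rather
# than visual IQ. The ISP, detector, and data are all fabricated
# stand-ins; nothing here reflects Atlas internals.
import random

def run_isp(raw, gain):
    # One-knob "ISP": apply gain and clip pixel values to [0, 1].
    return [min(1.0, p * gain) for p in raw]

def detector(frame, threshold=0.5):
    # Trivial "detector": declares an object present if the frame's
    # mean brightness clears a threshold.
    return sum(frame) / len(frame) > threshold

def accuracy(gain, dataset):
    hits = sum(detector(run_isp(raw, gain)) == label for raw, label in dataset)
    return hits / len(dataset)

rng = random.Random(0)
# Synthetic raw frames: "object" frames are brighter than background.
dataset = [([rng.uniform(0.3, 0.6) for _ in range(64)], True) for _ in range(50)]
dataset += [([rng.uniform(0.1, 0.4) for _ in range(64)], False) for _ in range(50)]

# Sweep the gain and keep the setting the detector likes best.
best_gain = max((g / 100 for g in range(50, 300)),
                key=lambda g: accuracy(g, dataset))
print(f"gain {best_gain:.2f} -> accuracy {accuracy(best_gain, dataset):.2%}")
```

A setting that looks wrong to a human viewer can still be the one that maximizes detection accuracy, which is why closing this loop automatically matters.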


For more details, visit Algolux or call 877-424-9107.