Virtual world builder: Nvidia positions Omniverse Enterprise as its metaverse creation platform

Facebook may be going Meta, but it is far from the only company that is thinking about the real-world business and operational implications of metaverses and virtual worlds. Nvidia has been putting together the pieces of an underlying platform to enable the creation of these worlds and the “connective tissue” between physical and virtual worlds and applications.

During the company’s GTC Fall event keynote, Nvidia CEO and co-founder Jensen Huang announced the availability of that platform, Omniverse Enterprise, through a $9,000 annual subscription plan (an individual-use version can be downloaded for free).

The general availability announcement comes almost a year after the platform was made available to beta users at more than 700 companies, including BMW Group, CannonDesign, Epigraph, Ericsson, architectural firms HKS and KPF, Lockheed Martin and Sony Pictures Animation. More than 70,000 individual creators have downloaded the platform.

Richard Kerris, vice president of the Omniverse platform at Nvidia, told members of the media prior to Huang’s keynote that many new features have been added to Omniverse Enterprise to position it as an enabler for a multitude of virtual world applications.

“We believe that virtual worlds will enable the next era of innovation,” he said. “Virtual worlds are going to help us visualize things that haven’t been built yet, and help us see how to operate and maintain factories as digital twins.”

Asked about Nvidia’s role in virtual worlds relative to Facebook’s rebranding as Meta and its investment in the metaverse concept, Kerris said, “Facebook and others have announced their intent to create virtual worlds. We think there are going to be many virtual worlds created by many different companies, but what will make them work as the next generation of the world wide web is that they have to have a common foundation.”

Nvidia is positioning Omniverse, and more specifically Universal Scene Description (USD), the framework for describing and exchanging 3D scenes that Pixar created and Nvidia leverages throughout Omniverse, to be that foundation.

Kerris compared USD to HTML. “When the internet started, your experience depended on what kind of browser you had and what kinds of things you had installed. HTML allowed for a common foundation for text, videos and things like that. It solved the problems of the early internet,” he said.
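
To make that comparison concrete, here is a minimal sketch of what a common scene foundation looks like in code, using Pixar’s open-source USD Python bindings (the pxr module). This is an illustration only: the file name and prim paths are invented for the example, and it assumes the usd-core package is installed.

    # A minimal USD scene built with Pixar's open-source Python bindings.
    # Assumes usd-core is installed (pip install usd-core); the file name
    # and prim paths are illustrative.
    from pxr import Usd, UsdGeom

    # Create a new stage: a .usda file any USD-aware tool can open.
    stage = Usd.Stage.CreateNew("hello_world.usda")

    # Define a transform prim with a sphere beneath it.
    UsdGeom.Xform.Define(stage, "/hello")
    sphere = UsdGeom.Sphere.Define(stage, "/hello/world")

    # Set an attribute; any application reading this file sees the
    # same value, which is the point of a common scene format.
    sphere.GetRadiusAttr().Set(2.0)

    stage.GetRootLayer().Save()

Because every USD-aware application reads and writes the same scene description, an asset authored in one tool can be opened and edited in another, much as any browser can render the same HTML page.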

Avatar

Prominent among the newest Omniverse technologies and features outlined by Kerris was Omniverse Avatar, which he described as a platform for creating interactive AI-based avatars that can power robots, intelligent virtual agents and other solutions. Avatar leverages Nvidia technologies in speech AI, computer vision, natural language understanding, recommendation engines and simulation.

Kerris mentioned two current projects Nvidia is involved in that use the Avatar platform. One is Project Tokkio, a reference application for AI-enabled customer service avatars that would be placed in robots or virtual kiosks. Tokkio also employs other Nvidia tools like the Merlin deep learning solution for recommender systems, the Riva conversational AI platform, the Fleet Command tool for AI lifecycle management and the Metropolis SDK.
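
Of those components, Riva handles the listening-and-talking half of such an avatar. As a rough illustration of how a developer drives it, the sketch below transcribes a recorded audio file using Nvidia’s published Python client for Riva; the server address, file name and configuration values are assumptions made for the example, and exact client signatures can vary between Riva releases.

    # Hedged sketch: offline speech recognition against a running Riva
    # server via Nvidia's Python client (pip install nvidia-riva-client).
    # The URI, file name and config values are illustrative assumptions.
    import riva.client

    auth = riva.client.Auth(uri="localhost:50051")  # assumed local Riva server
    asr = riva.client.ASRService(auth)

    config = riva.client.RecognitionConfig(
        language_code="en-US",
        max_alternatives=1,
        enable_automatic_punctuation=True,
    )

    # Send the whole file at once; Riva also supports streaming recognition.
    with open("customer_order.wav", "rb") as f:
        response = asr.offline_recognize(f.read(), config)

    print(response.results[0].alternatives[0].transcript)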

Huang has reportedly used Project Tokkio in a demonstration showing colleagues engaging in a real-time conversation with an avatar crafted as a toy replica of himself. In a second Project Tokkio demo, he highlighted a customer-service avatar in a restaurant kiosk that was able to see, converse with and understand two customers as they ordered veggie burgers, fries and drinks.

Also, in a demo of the DRIVE Concierge AI platform, a digital assistant on the center dashboard screen helped a driver select the best driving mode to reach his destination on time, and then followed his request to set a reminder for when the car’s range drops below 100 miles.

The second project using Avatar is Project Maxine, for AI-enabled video conferencing and virtual collaboration. In a demonstration, Huang showed off the ability to add state-of-the-art video and audio features to virtual collaboration and content creation applications. An English speaker on a video call in a noisy cafe could be heard clearly, without the background noise. As she spoke, her words were transcribed and translated in real time into German, French and Spanish in her own voice and intonation.

Digital twins

A key use for the Omniverse Enterprise platform is the creation of digital twins of factories, office campuses, smart cities and even larger environments, up to and including the entire Earth.

“Digital twins are solving some of the world’s greatest challenges” by letting people collaborate in virtual models to test solutions that could then be applied to real-world situations, Kerris said. He listed some current digital twin use cases, including a factory of the future blueprint, a twin of Earth, a wildfire simulation, a 5G signal propagation model running on a digital twin of a city, robotics, self-driving cars, intelligent virtual agents and other industrial applications.

“Ericsson uses Omniverse as a platform for development to figure out how to propagate 5G throughout cities,” he said. “They built a digital twin of the city of Stockholm to figure out where to put antennas.” The twin also lets the wireless infrastructure company run signal simulations and work out how to address problems with different antenna deployment locations, allowing engineers to teleport from one antenna to another within the model.

In a very current and critical use of the concept, Kerris said Lockheed Martin is using Omniverse to create digital twins of large wildfires to better understand how these fires start and spread, and how they might be better contained and managed.

Other new Omniverse features and capabilities outlined by Kerris included:

  • Replicator, a synthetic data generation and replication engine for training deep neural networks (a brief sketch of how such a pipeline is scripted follows this list).
  • CloudXR, an enterprise-class immersive streaming framework, which has been integrated into Omniverse Kit — a toolkit for building native Omniverse applications and microservices — allowing users to interactively stream Omniverse experiences to their mobile AR and VR devices.
  • Omniverse VR, which the company claims is the world’s first full-image, real-time ray-traced VR — enabling developers to build their own VR-capable tools on the platform, and end users to enjoy VR capabilities directly.
  • Omniverse XR Remote, which provides AR capabilities and virtual cameras, enabling designers to view their assets fully ray traced through iOS and Android devices.
  • Omniverse Farm, which lets teams use multiple workstations or servers together to power jobs like rendering, synthetic data generation or file conversion.
  • Omniverse Showroom, which is available as an app in Omniverse Open Beta, and lets non-technical users play with Omniverse tech demos that showcase the platform’s real-time physics and rendering technologies.
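
To give a flavor of how a pipeline like Replicator is scripted, here is a hedged sketch in the style of its documented Python API (omni.replicator.core). It runs inside an Omniverse application rather than a standalone Python interpreter, and the specific names, signatures and output settings below should be treated as assumptions that can vary across Omniverse releases.

    # Hedged sketch of a Replicator-style synthetic data script.
    # Runs inside Omniverse; API details are assumptions and may
    # differ between releases.
    import omni.replicator.core as rep

    with rep.new_layer():
        # A camera and the render product that captures its view.
        camera = rep.create.camera(position=(0, 0, 1000))
        render_product = rep.create.render_product(camera, (1024, 1024))

        # Scatter labeled spheres; the semantic tag becomes the
        # ground-truth class in the generated annotations.
        spheres = rep.create.sphere(semantics=[("class", "sphere")], count=20)

        # Re-randomize poses on every frame to diversify the dataset.
        with rep.trigger.on_frame(num_frames=100):
            with spheres:
                rep.modify.pose(
                    position=rep.distribution.uniform((-500, -500, 0), (500, 500, 0))
                )

        # Write RGB images plus 2D bounding boxes to disk.
        writer = rep.WriterRegistry.get("BasicWriter")
        writer.initialize(output_dir="_output", rgb=True, bounding_box_2d_tight=True)
        writer.attach([render_product])

The appeal of generating training data this way is that labels come for free: because the renderer knows exactly where every object is, each image arrives with pixel-accurate annotations that would be slow and costly to produce by hand.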
