Nvidia chief urges laws, standards for safe AI

Nvidia CEO Jensen Huang urged new laws and engineering standards for building safe AI systems during comments at a supercomputer event in Stockholm on Tuesday.

“Remember, if you take a step back and think about all the things in life that are either convenient, enabling or wonderful for a society, it also has some potential harm,” he said, according to Reuters.

“What is the social norm for using it? What the legal norms [are] for using it have to be developed,” he added. “Everything is evolving right now. The fact that we’re all talking about it puts us in a much better place to eventually end up in a good place.”

Reuters paraphrased Huang's specific recommendations rather than quoting him directly, reporting that engineering standards bodies would need to establish standards for safe AI, much as medical bodies set rules for the safe practice of medicine. Huang added that laws and social norms would also play a role in AI development.

He was speaking at an event marking the upgrade of Sweden's fastest supercomputer with Nvidia products, which will be used to develop a large language model fluent in Swedish.

Nvidia also provided the 10,000 or more GPUs used to train OpenAI's ChatGPT chatbot. Microsoft invested in OpenAI several years ago and on Monday announced a new multibillion-dollar investment to accelerate AI breakthroughs, which some reports put at $10 billion.

RELATED: ChatGPT runs 10K Nvidia training GPUs with potential for thousands more

US Rep. Ted Lieu, D-Calif., has called for a US agency to regulate AI and has noted how facial recognition systems can misidentify innocent people, particularly members of minority groups. “The time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial to society,” Lieu wrote in a New York Times guest essay.

While Nvidia and many other chip companies espouse the virtues of AI across dozens of industry examples, there are critics of other AI uses, and of ChatGPT specifically, especially in academia. Christian Terwiesch, a professor at the University of Pennsylvania's Wharton School, wrote last week that the chatbot was able to pass the final exam for the school's MBA program, scoring between a B- and B.

“This has important implications for business school education, including the need for exam policies, curriculum design focusing on collaboration between human and AI, opportunities to simulate real world decision making processes, the need to teach creative problem solving, improved teaching productivity, and more,” Terwiesch wrote. 

Aside from academic concerns, industry officials have raised worries. At Amazon.com, workers were warned not to provide ChatGPT with “any Amazon confidential information” after a company attorney found instances in which ChatGPT's responses closely resembled confidential Amazon data. Business Insider reported the Amazon case on Tuesday.

Fierce Electronics asked Nvidia officials whether the company has made specific recommendations for laws or standards governing AI but had not received a response as of publication.