OpenAI CEO Altman again invokes need for oversight of AI

OpenAI CEO Sam Altman told an Intel audience on Wednesday that building out the data centers needed to support the explosive growth anticipated for AI will be expensive, but he denied saying it will require $7 trillion, as recently reported.

“We think the world is going to need a lot of compute,” Altman said during an onstage conversation with Intel CEO Pat Gelsinger. He said the actual figure is not known, and that the $7 trillion number came from a report citing an anonymous source. “You can always find some anonymous source to say something, I guess,” he added.

Aside from the somewhat humorous exchange about $7 trillion, Intel had earlier announced at Direct Connect, the inaugural event for Intel Foundry, that it has already secured $15 billion in billings to manufacture semiconductors for various purposes, many of them related to AI. Gelsinger said he expects billings to reach $100 billion in coming years on Intel's way to becoming the world's second-largest chip foundry.

Altman spent much of the conversation talking about the potential for generative AI and the challenges it can pose, saying he’s most excited about how the tools can be used to help developers write programs.

The challenges for AI have been aired in numerous interviews and appearances Altman has made since ChatGPT first appeared in late 2022. But Altman was insistent about the need for government involvement in setting up regulatory frameworks to guard against AI's perils.

“We do have things to figure out,” Altman said. “There are mega-new discoveries to find in front of us.”

He said different governments will take different approaches to guarding against the worst sci-fi movie scenarios of AI run amok. “There’s a lot of ideas and a huge amount of understandable anxiety. I’m happy the conversation is happening now…while the stakes are relatively low.”

He said what societies don’t want to have happen is for AI to be “built in the basement” and out of view.

One discovery he has made while growing up with the latest technology, he added, is that people adapt to new technology very quickly. “It’s not just that people will do things faster. We’ll be able to do things we just couldn’t do before, that we just weren’t smart enough to do before on our own. These new tools will enable it. I think it will be great.”

He added: “I’m a tremendous optimist in general. We have a lot to manage through and there are good reasons to worry. When we zoom out and look at the future…I think it’s going to be a lot better. It’s hard to imagine how much better it’s going to be. The sci-fi risks are super important and a big part of why we started OpenAI.” Among the risks he cited were threats to elections, computer security, and bioterrorism.

“Great things can cause great harm, but I love technology and I think most of the people in the world do too. For AI, we’ll have some serious decisions to make.”

On whether AI will benefit from industry standards of the kind used widely elsewhere in the tech world, Altman was somewhat equivocal. “You can take a system you have built where you’re writing prompts and getting responses, and you can switch from one AI to another more easily than you might think. We can now program computers in natural language and natural language is pretty portable,” Altman said.
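That portability claim is easy to picture in code. The sketch below is a hypothetical illustration, not an excerpt from any vendor's SDK: the application depends only on a prompt-in, text-out contract, so the model behind it can be swapped by passing a different callable.

```python
# A minimal sketch of the portability Altman describes. The two backend
# functions are hypothetical stand-ins for different providers' APIs,
# not real SDK calls.
from typing import Callable


def backend_a(prompt: str) -> str:
    # Placeholder for one provider's chat-completion call.
    return f"[model A] response to: {prompt}"


def backend_b(prompt: str) -> str:
    # Placeholder for a different provider's call with the same shape.
    return f"[model B] response to: {prompt}"


def ask(prompt: str, model: Callable[[str], str]) -> str:
    """The application only depends on this prompt-in, text-out contract."""
    return model(prompt)


if __name__ == "__main__":
    # Switching providers means passing a different callable; the prompt,
    # written in natural language, carries over unchanged.
    question = "Summarize the announcements from Intel's Direct Connect event."
    print(ask(question, backend_a))
    print(ask(question, backend_b))
```

Either stub would need to be replaced with a real client call in practice, but the point survives the substitution: the natural-language prompt itself is what carries over between systems.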

But noting that the “world is pretty paranoid about anything right now,” including AI, he added: “That’s good. OpenAI is truly just a voice and we try to share our perspective about what we know and what we think might come. This will be a decision our world has to make together. AI’s going to impact all of us and this is what we need our governments and institutions to do. We need governments to play an important role if we are going to get this right. This is going to be such an impactful tech that we should all want that.”