One AI pro reacts to ChatGPT: “remarkable achievement; might be devastating”

Fierce Electronics asked a veteran AI industry professional to comment on ChatGPT, which has provoked wide praise and criticism since the latest version emerged from OpenAI in late November.

Kenneth Wenger, senior director of research and innovation at CoreAVI in Tampa, Fla., and chief technology officer at Squint AI Inc., has penned a forthcoming book, Is the Algorithm Plotting Against Us?: A Layperson’s Guide to the Concepts, Math and Pitfalls of AI. His professional work focuses on AI and determinism and the use of AI in safety-critical systems.

Wenger holds bachelor’s and master’s degrees in computer science from Ryerson University in Toronto. He is also a founding member of the Working Fires Foundation, which is dedicated to communicating complex scientific ideas simply.

In the following responses, emailed to Fierce Electronics, he describes the immense disruptive potential of ChatGPT and the ways it can be misused. He also offers a reaction to “fractured” laws in the US to regulate AI and explains his distrust of any single entity setting AI policy frameworks. The AI industry, he believes, has failed to take the high ground and respond to AI critics. Part of the problem, he says, is that AI researchers are “generally terrible communicators.”

Kenneth Wenger (Working Fires Foundation)

FE: What’s your overall reaction to ChatGPT? 

Wenger: It is a remarkable achievement. Large language models, including previous GPT versions, have been getting steadily better over the past couple of years, but I don't think anyone expected the sort of performance ChatGPT has demonstrated this soon.

The disruptive potential is immense. On the positive side, it can be used to enhance our work. We can use it conversationally to ask questions and get succinct answers rather than having to sift through several hits on Google. Software developers are already using it to generate boilerplate code rather than starting from scratch; a rough sketch of that workflow appears below. A software engineer recently told me that he no longer has to remember syntax. He simply asks ChatGPT to generate a program in whatever language he needs, and all he has to do is make small adjustments. These are all areas that we considered a frontier for AI not too long ago!
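For readers curious what that workflow looks like in practice, here is a minimal sketch using OpenAI's Python client; the model name, prompt, and surrounding code are illustrative assumptions, not part of Wenger's remarks:

```python
# Minimal sketch: asking an OpenAI model to draft boilerplate code.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set
# in the environment; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any available chat model works here
    messages=[{
        "role": "user",
        "content": "Write a Python function that reads a CSV file "
                   "and returns its rows as a list of dictionaries.",
    }],
)

# The generated code still needs human review and small adjustments,
# exactly as the engineer quoted above describes.
print(response.choices[0].message.content)
```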

On the negative side, it can certainly be misused. It provides coherent and confident (sometimes too confident!) answers that might still be entirely wrong. The prospect of bogus scientific papers or news articles that sound authentic outpacing the production of useful research or media is terrifying and might have devastating consequences for society. We will have to see what impact it has on schooling as well. It is all too easy for kids to generate reports and essays that are certain to receive a good mark, at the expense of learning.

But the most important thing to understand is that ChatGPT is not a scientific breakthrough over previous versions. It still functions under the same principles I describe in Is the Algorithm Plotting Against Us? And it is still not aware of what it's producing! It generates text with a high probability of being coherent, given the amount and quality of the data used to train it. It is no more dangerous than E=mc^2. On its own, it can't do anything, but it can certainly cause a lot of damage if we don’t use it carefully.
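To make the "high probability of being coherent" point concrete, here is a toy sketch of the next-token sampling that underlies models like ChatGPT; the vocabulary and scores are invented for illustration and bear no relation to the real model's internals:

```python
import numpy as np

# Toy vocabulary and made-up model scores for the word that follows
# "the cat sat on the". A real LLM produces scores like these over
# tens of thousands of tokens using billions of learned parameters.
vocab = ["mat", "roof", "moon", "banana"]
logits = np.array([3.1, 2.2, 0.4, -1.5])  # illustrative numbers only

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits) / np.sum(np.exp(logits))

# Sample the next token in proportion to those probabilities. The model
# picks what is statistically likely; it has no notion of what is true.
next_token = np.random.choice(vocab, p=probs)
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```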

FE: Are there proper standards or laws in place today to offer checks on AI for safety?

Wenger: In the US, the laws are a bit fractured and piecemeal when it comes to regulating AI. There is nothing comprehensive at the federal level yet; more regulations are expected to emerge this year.

The problem with these laws is that they tend to be use-case specific and miss the bigger picture. For example, a few states recently created regulation around automated employment decision tools (AEDT), requiring annual audits of these tools for bias. The problem is that automated employment is only one of the many cases where bias in the data would be a terrible thing. AI use in courthouses, medical diagnosis, advertising, and credit and mortgage evaluations: these are all cases where biased data can have devastating consequences. Regulation should tackle the problem of bias in general, but for that, regulators need to understand these issues at a more fundamental level.

FE: Would you trust Nvidia or industry generally to offer standards or suggest laws on AI safety?

Wenger: I certainly wouldn't trust any single company to decide the proper use of AI. The industry can lead the creation of standards, but I truly believe that an informed public is the only way to guarantee long-term, responsible use of AI.

FE: Do you support a US agency to regulate AI? How should any governance structure be rolled out?

Wenger: In theory, governments should be involved in regulating AI, but the problem is that they move extremely slowly and are generally reactionary. There is currently no incentive for a politician to see beyond four years. This is why I believe an electorate that demands well-motivated and well-informed policies is the only way forward. In the meantime, we should start these discussions as part of the school curriculum. We teach history, math, and literature; we should also be teaching kids to think about the long-term impact of AI. If we do this, we will eventually have an informed public.

FE: Does the Wharton MBA final exam passed by ChatGPT provoke any particular concerns around plagiarism and cheating?

Wenger: Yes, plagiarism is a concern, but only a minor one compared with the cost to society of graduates who hold degrees but have no useful skills. Not to mention the fact that we are missing a huge point: suppose most people take the route of using generative models such as ChatGPT to do their work. ChatGPT learns by analyzing data produced by people. Overly relying on these models to produce knowledge will reduce the supply of new human-produced knowledge that we use to train the models in the first place! More importantly, we must ask: is our end goal to be uninspired blobs getting answers fed to us by machines, or to push our limits and continue the path of discovery, using our tools to help us rather than replace us?

FE: As an industry official yourself, how can industry best show the values of AI amid so much controversy over bias and safety? Has industry failed to show the value of AI and react to criticism?

Wenger: Yes, the industry has certainly failed in this area. The issue is that today the only people who really understand AI to any useful degree are researchers. And unfortunately, researchers are generally terrible communicators. The reason I wrote my book is that I believe we need the public's help here. If you don’t understand the basics of how these algorithms work, their reach, and their limitations, then you have to take my word for it, or the word of the next expert, scientist, or industry speculator. Once you understand the basics, you can listen to our opinions but reach your own conclusions.

First, we need to change the narrative that AI is too complicated and you need a PhD to understand it. There are different levels to understanding any subject. And we can all understand AI enough to take control of our future. We have to.

RELATED: Nvidia chief urges laws, standards for safe AI