ChatGPT and the security risks of Generative AI

Generative AI platforms and chatbots are set to become increasingly popular technology tools for businesses, but will the movement unfold without proper thought given to security?

A recent Salesforce.com survey found that 67% of 500 senior IT leaders surveyed are prioritizing Generative AI technology for their organizations during the next 18 months, but the same survey found that 71% of those leaders believe the same technology is likely to “introduce new security risks to our data.”

What’s all the fuss about? Well, ChatGPT, the Generative AI chatbot from OpenAI, is the definition of an overnight success story, having drawn more than 100 million unique users in the first two months after its debut in late November 2022, according to Internet analytics firm Similarweb. 

This kind of success story has huge ripple effects, as more competitors try to rush their own AI chatbots into the market, and still more users become curious about trying ChatGPT. On the corporate front, companies are trying to evaluate how they might leverage ChatGPT and similar chatbots, but at the same time they might also be scrambling to find out how many of their employees already are using ChatGPT and exactly what they are using it for.

And therein lies a potential security nightmare, ChatGPT is so new and so popular that in their rush to play with the chatbot users may expose sensitive data to it. At the same time, security attackers are licking their chops at the notion they may have a new weapon in their arsenal, as well as a new venue in which they might find easy prey. 

Meanwhile, many individual companies and security standards groups simplay have not had enough time to put together thoughtful, comprehensive security rules and advisories, according to Gregory Hatcher, founder and CEO of cybersecurity consultancy White Knight Labs.

“ChatGPT is in its infancy, everybody is racing to keep up, that includes organizations that create security guidance. We have seen some companies restrict the use of ChatGPT altogether at the workplace (JP Morgan). Amazon and Walmart have issued weak warnings to their employees to ‘take care’ in using AI services.”

In addition, standards bodies are trying to catch up. The National Institute of Standards and Technology issued an AI Risk Management Framework, but other groups have not had much to say yet.

Meanwhile, ChatGPT creator OpenAI has gone on record saying that generative AI platforms should be regulated, but the federal government and legislators have not exactly jumped in response. “There has been legislation introduced in the House of Representatives that said Congress should focus on AI ‘to ensure that the development and deployment of AI is done in a way that is safe, ethical, and respects the rights and privacy of all Americans,” Hatcher said. “The resolution was written by ChatGPT.”

He added, “The lack of response from the federal government is worrying; the pace at which ChatGPT learns far outstrips the rate at which human beings can learn. I think the government is underestimating the role that AI is going to play in the future, and also its ramifications on national security.”

As government, standards bodies, and corporate policy administrators try to catch up, Hatcher said users of ChatGPT and other AI chatbots need to  adopt some common sense.

“There are inherent risks of feeding a LLM [large language model] sensitive information,” he said. “The data could be retrieved at a later date by a different person (or group of people).”

He advised, “Don’t ever feed ChatGPT HIPAA client data, source code, or proprietary information that, if taken, could destroy your company’s competitive advantage. We’ve seen multiple cases of confidential data being leaked on ChatGPT: an executive pasting their company’s strategy document into it, asking for ChatGPT to make a slide deck out of it. There’s also an account of a doctor inputting his patient’s name and medical condition, asking ChatGPT to create a letter to the patient’s insurance carrier.”

Generative AI platforms can be adept at debugging software code and finding programming errors, which also means that attackers could use this capability to find security vulnerabilities. “Although ChatGPT has been trained not to provide illegal hacking services, researchers have proved that those protections can be thwarted; you simply need to tell the LLM that you’re a researcher and that you are doing a CTF, or a benign activity, and it will help you find bugs and even write code on how to exploit that bug,” Hatcher noted. “Another well-known current bypass is that when a person uses the ChatGTP API directly, many of the safeguards that exist on the front end are absent.”

He added that Generative AI platforms can have their own vulnerabilities. “Microsoft’s new ChatGPT-like AI tech has shown that it’s vulnerable to a new attack called ‘prompt-injection’; which is an attack that circumvents previous instructions and provides new ones in their place. This leads to what is being known as ‘jailbreaking’ ChatGPT.”

Generative AI technology has come at us very quickly, and the months ahead could be even more confusing and complicated for companies trying to understand the technology. Lackadaisical attitudes toward security have had a negative impact on the ability of technologies like IoT to expand and grow in the enterprise market. Could Generative AI adoption face the same struggle?