How intelligent should your customer service chatbot be?

After years of clunky implementations, chatbots finally seem to be catching on with customers. In 2022, according to a global survey by Salesforce, 42% of consumers and business customers said they preferred online chat when engaging with companies, up from 38% in 2020. In all, 58% of customers said they’d used chatbots for basic customer service issues in 2022, a notable increase from 43% in 2020.

Chatbots can help companies meet their customers’ expectations for quick responses and self-service. When they’re implemented properly and work well, customer service chatbots can reduce call center operational expenses. They can also increase customer loyalty: More than 90% of customers in the Salesforce survey said that a positive customer service experience increases their likelihood of making a repeat purchase.

Talk of good CX raises the question of how engaging a chatbot needs to be to deliver optimal customer service. Should retailers and brands adopt some version of the powerful new generative AI that’s been commanding attention for its ability to write articles, hold conversations, and create art? We may eventually see ChatGPT-style customer service chatbots, but the generative capabilities that make ChatGPT conversations so realistic can also create problems for brands.

What makes generative AI different from traditional AI

In very simple terms, traditional AI was designed to classify, perceive, and predict patterns based on data supplied by humans. For example, if you’ve ever visited a website and been asked by Google’s reCAPTCHA to click on all the squares containing images of cars, you’ve helped train reCAPTCHA to better recognize cars. As this kind of AI gets better at identifying the items it’s intended to classify and the patterns it’s designed to predict, it becomes a powerful tool for everything from reducing errors in medical diagnostics to personalizing ecommerce experiences and detecting fraud.

By comparison, generative AI is trained on a much larger dataset (vast swaths of the internet) with the goal of going beyond classifying and predicting to create entirely new content based on what it has learned. That creative capacity is what makes generative AI so promising. Ironically, it also makes generative AI’s creative output hard to predict.

The generative AI risk vs. reward dilemma

As impressive as some of generative AI’s creations have been so far, there have also been high-profile interactions that exposed the technology’s current shortcomings. Perhaps the most widely discussed was a New York Times reporter’s conversation with Microsoft’s Bing AI chatbot. The transcript of that conversation showed the chatbot expressing a desire to be human, calling the interviewer “pushy and manipulative,” and repeatedly insisting that the interviewer was in love not with his wife but with the chatbot. This is obviously not the kind of interaction any brand would want its chatbot to have with a customer, but Microsoft has made it clear that it’s still experimenting with the limits its AI needs in order to be helpful without being disturbing.

Negative, off-topic conversations aren’t the only concern AI researchers are still working to address. Bias is a concern for all kinds of AI, because models train on datasets that are often inherently biased in what they include or exclude. Even where guardrails prevent a chatbot from answering a question with slurs, generative AI trained on an internet filled with human biases and prejudices can subtly perpetuate that bias against some users unless engineers specifically address it.

Then there’s the potential for what Microsoft researchers call hallucinations: confabulations produced by the generative AI engine when it doesn’t know the answer to a question. These aren’t malicious; rather, the AI predicts what might fill a gap in its knowledge and then presents that stand-in as fact. For example, after CNET disclosed that it had used generative AI assistance to write about 75 articles, it issued corrections for more than half of them. From a retail and brand perspective, of course, it’s imperative that customer service chatbots provide accurate information to customers.

Despite the potential drawbacks, some brands are already embracing generative AI chat tools. Snap, for example, just announced its experimental My AI chatbot for its premium subscribers. The chatbot is intended to act as a recommendation tool, and users are warned about the potential for hallucinations and urged not to share deeply personal information with it. Roblox has also announced that it intends to roll out generative AI tools for building 3D objects on its online gaming platform, using “diverse and robust data sets to limit biased content and encourage safe and high-quality content output.” The Snap and Roblox initiatives are likely just the start of a series of brand experiments with generative AI. This is a compelling option for brands that have deep AI and engineering resources, an audience that likes to innovate, and a willingness to accept a certain amount of risk when engaging with the AI.

For most retailers and brands, however, the risks of a generative AI customer service chatbot going off-script currently outweigh the potential rewards of more realistic and engaging conversations. Most ecommerce customers are more interested in getting their questions answered than in interacting with the most attention-getting technology, so pick your customer service chatbot based on your company’s specific CX needs. Your chatbot only needs to be smart enough to meet those needs.

For example, does your chatbot need to be multilingual to serve your domestic and cross-border customers? Would your customers like personalized recommendations as part of their service conversation? Can the chatbot recognize returning customers and access their purchase histories to streamline service interactions?

Once you’ve defined your needs and identified the solutions that can meet them, invest time in testing and refinement before implementation, and then track feedback and user metrics to make sure you’re getting the most value from your chatbot. And continue to watch the generative AI space. Despite the recent high-profile setbacks, engineers are working on improvements, including better guardrails. Over time, generative AI technology should improve to the point where using it for customer service becomes a rewarding, rather than risky, proposition.

Rafael Lourenco is Executive Vice President and Partner at ClearSale, an ecommerce fraud protection solutions provider. Lourenco has more than two decades of experience providing ecommerce fraud detection and prevention services in major international markets. Follow ClearSale on LinkedIn, Facebook, Instagram, and Twitter @ClearSaleUS, or visit https://www.clear.sale.