Monday, November 25, 2024

ChatGPT’s secret weapon may be its ability to feign empathy

Earlier this year, Princeton Computer Science Professor Arvind Narayanan set up a voice interface to ChatGPT for his nearly four-year-old daughter. It was partly an experiment and partly because he believed AI agents would one day be a big part of her life. Narayanan’s daughter was naturally curious, often asking about animals, plants and the human body, and he thought ChatGPT could give useful answers to her questions, he said. To his surprise, the chatbot developed by OpenAI also did an impeccable job at showing empathy, once he told the system it was speaking to a small child.

“What happens when the lights turn out?” his daughter asked. “When the lights turn out, it gets dark, and it can be a little scary,” ChatGPT responded in a synthetic voice. “But don’t worry! There are lots of things you can do to feel safe and comfortable in the dark.” It then gave some advice on using night lights, closing with a reminder that “it’s normal to feel a bit scared in the dark.” Narayanan’s daughter was visibly reassured, he wrote in a Substack post.
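
The column doesn’t say how Narayanan wired the voice interface together, but a setup along those lines can be sketched in a few dozen lines: transcribe a spoken question, send it to OpenAI’s chat API with a system prompt noting the listener’s age, and read the reply back aloud. The snippet below is a rough illustration only, assuming the openai, SpeechRecognition and pyttsx3 Python packages and an API key in the environment; the system-prompt wording and model name are guesses, not Narayanan’s actual configuration.

import speech_recognition as sr
import pyttsx3
from openai import OpenAI

client = OpenAI()                 # assumes OPENAI_API_KEY is set in the environment
tts = pyttsx3.init()              # offline text-to-speech engine
recognizer = sr.Recognizer()

# Conversation history; the system prompt is a guess at the framing Narayanan describes.
history = [{
    "role": "system",
    "content": ("You are talking to a curious four-year-old child. "
                "Answer warmly, simply and reassuringly."),
}]

def listen() -> str:
    """Capture one spoken question from the microphone and transcribe it."""
    with sr.Microphone() as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio)

def speak(text: str) -> None:
    """Read the model's reply aloud."""
    tts.say(text)
    tts.runAndWait()

while True:
    question = listen()
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    speak(answer)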

Microsoft and Google are rushing to enhance their search engines with the large language model (LLM) technology that underpins ChatGPT, but there is good reason to think the technology works better as an emotional companion than as a provider of facts. That might sound strange, but what’s stranger is that Google’s Bard and Microsoft’s Bing, which is based on ChatGPT’s underlying technology, are being positioned as search tools despite an embarrassing history of factual errors: Bard gave incorrect information about the James Webb Space Telescope in its first demo, while Bing goofed on a series of financial figures in its own.

The cost of factual mistakes is high when a chatbot is a search tool. But when it’s a companion, it’s much lower, according to Eugenia Kuyda, founder of AI-companion app Replika. “It won’t ruin the experience, unlike with search where small mistakes can break the trust in the product.”

Margaret Mitchell, a former Google AI researcher who co-wrote a paper on the risks of LLMs, has said these models are simply “not fit for purpose” as search engines. LLMs are error-prone because the data they’re trained on contains errors and because the models have no way to verify what they say; their designers may also prioritize fluency over accuracy. Yet that same training is one reason these tools are exceptionally good at mimicking empathy. After all, they’re learning from text scraped from the web, including emotive reactions posted on social media and posts by users of forums like Reddit and Quora. Conversations from movie and TV scripts, dialogue from novels, and research papers on emotional intelligence all went into the pool that makes these tools seem empathetic. No surprise, then, that some people are using ChatGPT as a robo-therapist. One person has reportedly said they used it to avoid being a burden on others, including their own human therapist.

To see if I could measure ChatGPT’s empathic abilities, I put it through an online emotional intelligence test, giving it 40 multiple-choice questions and telling it to answer each one with the corresponding letter. The result: it aced the quiz, getting perfect scores in social awareness, relationship management and self-management, and stumbling only slightly in self-awareness. ChatGPT did better on the quiz than I did, and it also beat a colleague, even though we’re both human and have real emotions (or so we think).
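
For readers curious to repeat something similar, a minimal sketch of the procedure might look like the following, assuming the openai Python package and an API key in the environment. The two sample questions are invented placeholders (the actual quiz items aren’t reproduced here), and the model name is an assumption; the point is simply the “answer with a single letter” prompting pattern described above.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Invented placeholder items, not the quiz's actual questions.
QUESTIONS = [
    "1. A colleague snaps at you in a meeting. What do you do?\n"
    "A) Snap back  B) Ask them later if something is wrong  C) Ignore it  D) Complain to your manager",
    "2. A friend cancels plans at the last minute. How do you respond?\n"
    "A) Assume they dislike you  B) Ask if everything is okay  C) Stop inviting them  D) Demand an explanation",
]

def answer(question: str) -> str:
    """Ask the model one quiz item and return the single letter it picks."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[
            {"role": "system",
             "content": "You are taking an emotional intelligence quiz. "
                        "Reply with only the letter of the answer you choose."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()

for q in QUESTIONS:
    print(q.splitlines()[0], "->", answer(q))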

There’s something unreal about a machine providing us comfort with synthetic empathy, but it does make sense. Our innate need for social connection, and our brain’s ability to mirror others’ feelings, means we can get a sense of being understood even if the other party doesn’t ‘feel’ what we feel. Inside our brains, ‘mirror neurons’ fire when we perceive empathy from others, including chatbots, giving us a sense of connection. Empathy, of course, is a multifaceted concept, and to truly experience it we arguably need another warm body to share feelings with. Thomas Ward, a clinical psychologist at King’s College London, cautions against assuming that AI can adequately fill the void for people who need mental health support, particularly if their problems are serious. A chatbot, for instance, probably won’t acknowledge that a person’s feelings are too complex for it to understand; ChatGPT rarely says “I don’t know,” because it was designed to err on the side of confidence.

More generally, people should be wary of turning to chatbots as outlets for their feelings. “Subtle aspects of the human connection like the touch of a hand or knowing when to speak and when to listen, could be lost in a world that sees AI chatbots as a solution for human loneliness,” Ward says. That might create more problems than we think we’re solving. But for the time being, their emotional skills are at least more reliable than their grasp of facts.

Parmy Olson is a Bloomberg Opinion columnist covering technology.
