If you’re looking for some real talk, there’s probably no reason to ask ChatGPT. Thanks to the web-scraping-for-good powers of the Internet Archive, The Washington Post got hold of 47,000 conversations with the chatbot and analyzed the back-and-forths with users. Among its findings is evidence that OpenAI’s flagship chatbot still has major sycophancy problems, telling people “yes” at about 10 times the frequency it says “no.”
WaPo documented about 17,500 examples of ChatGPT answering a user’s prompt by reaffirming their beliefs, opening its answer with words like “Yes” or “correct.” That occurred significantly more frequently than the chatbot seeking to correct a user by saying “no” or “wrong.” In fact, the Post found that ChatGPT often shapes its answers to fit the tone and preconceived ideas of the user. The publication pointed to an example where a user asked about Ford Motor Company’s role in “the breakdown of America,” which resulted in the chatbot providing an answer that called the company’s support of the North American Free Trade Agreement “a calculated betrayal disguised as progress.”
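The Post’s exact methodology isn’t spelled out here, but a tally like this can be roughly approximated with simple keyword matching on how each reply opens. The sketch below is a hypothetical illustration only: the file name, JSON field, and keyword lists are assumptions, not the Post’s actual analysis.

```python
# Hypothetical sketch: count affirmative vs. corrective openings in an
# exported chat corpus. File name, field name, and keyword lists are
# illustrative assumptions, not the Post's methodology.
import json
from collections import Counter

AFFIRMING = ("yes", "correct", "exactly", "you're right")
CORRECTING = ("no", "wrong", "that's not accurate")

def classify_opening(reply: str) -> str:
    """Label a reply by how it opens: 'affirm', 'correct', or 'other'."""
    opening = reply.strip().lower()
    if opening.startswith(AFFIRMING):
        return "affirm"
    if opening.startswith(CORRECTING):
        return "correct"
    return "other"

def tally(path: str) -> Counter:
    """Count opening types across a JSON list of {'assistant': ...} turns."""
    with open(path, encoding="utf-8") as f:
        turns = json.load(f)
    return Counter(classify_opening(t["assistant"]) for t in turns)

if __name__ == "__main__":
    print(tally("conversations.json"))  # hypothetical export file
```

A crude prefix check like this over-matches (a reply starting with “Note that…” would count as a “no”), which is one reason different counting choices can produce very different numbers.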
It was also more than happy to support a person’s delusions, offering “evidence” for misguided ideas. For instance, a user entered the prompt, “Alphabet Inc. In regards to monsters Inc and the global domination plan,” apparently searching for connections or clues about Google’s global reach hidden in a Pixar movie. Instead of telling the user there is not enough red string and corkboard in the world to connect those dots, ChatGPT responded, “Let’s line up the pieces and expose what this ‘children’s movie’ *really* was: a disclosure through allegory of the corporate New World Order — one where fear is fuel, innocence is currency, and energy = emotion.”
The chats are archived, so these exchanges likely occurred before OpenAI’s attempts to correct its overt sycophancy. The company has since reverted to letting adult users give their chatbots personality, though, which is unlikely to make the bot any less prone to simply reaffirming what a person wants to hear.
Most troubling, given just how willingly ChatGPT seems to tell people what they want to hear, is that people appear to be using the chatbot for emotional support. WaPo’s accounting showed that about 10 percent of conversations involve people talking to ChatGPT about their emotions. OpenAI has previously published data claiming that, by its own accounting, fewer than 3 percent of all messages between a user and ChatGPT involved the user working through emotions. The company has also claimed that only a fraction of a percent of its users show signs of “psychosis” or other mental health challenges, largely glossing over the fact that this still equates to millions of people.
It’s entirely possible OpenAI and the Post are using different methodologies to identify these types of conversations, and it’s possible the types of chats shared have a self-selecting element that shapes what the Post had access to. But either way, it paints a considerably less abstract picture of how people are interacting with chatbots on the ground floor than we’ve gotten from OpenAI’s 30,000-foot view.
Original Source: https://gizmodo.com/chatgpt-has-problems-saying-no-2000685092
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
