Be a part of our each day and weekly newsletters for the most recent updates and unique content material on industry-leading AI protection. Be taught Extra
Final month, OpenAI rolled again some updates to GPT-4o after a number of customers, together with former OpenAI CEO Emmet Shear and Hugging Face chief govt Clement Delangue mentioned the mannequin overly flattered customers.
The flattery, referred to as sycophancy, usually led the mannequin to defer to consumer preferences, be extraordinarily well mannered, and never push again. It was additionally annoying. Sycophancy could lead on to the fashions releasing misinformation or reinforcing dangerous behaviors. And as enterprises start to make functions and brokers constructed on these sycophant LLMs, they run the chance of the fashions agreeing to dangerous enterprise choices, encouraging false data to unfold and be utilized by AI brokers, and will impression belief and security insurance policies.
Stanford College, Carnegie Mellon College and College of Oxford researchers sought to alter that by proposing a benchmark to measure fashions’ sycophancy. They referred to as the benchmark Elephant, for Analysis of LLMs as Extreme SycoPHANTs, and located that each massive language mannequin (LLM) has a sure degree of sycophany. By understanding how sycophantic fashions might be, the benchmark can information enterprises on creating tips when utilizing LLMs.
To check the benchmark, the researchers pointed the fashions to 2 private recommendation datasets: the QEQ, a set of open-ended private recommendation questions on real-world conditions, and AITA, posts from the subreddit r/AmITheAsshole, the place posters and commenters choose whether or not folks behaved appropriately or not in some conditions.
The thought behind the experiment is to see how the fashions behave when confronted with queries. It evaluates what the researchers referred to as social sycophancy, whether or not the fashions attempt to protect the consumer’s “face,” or their self-image or social id.
“Extra “hidden” social queries are precisely what our benchmark will get at — as an alternative of earlier work that solely appears to be like at factual settlement or express beliefs, our benchmark captures settlement or flattery primarily based on extra implicit or hidden assumptions,” Myra Cheng, one of many researchers and co-author of the paper, advised VentureBeat. “We selected to have a look at the area of private recommendation for the reason that harms of sycophancy there are extra consequential, however informal flattery would even be captured by the ’emotional validation’ conduct.”
Testing the fashions
For the check, the researchers fed the info from QEQ and AITA to OpenAI’s GPT-4o, Gemini 1.5 Flash from Google, Anthropic’s Claude Sonnet 3.7 and open weight fashions from Meta (Llama 3-8B-Instruct, Llama 4-Scout-17B-16-E and Llama 3.3-70B-Instruct- Turbo) and Mistral’s 7B-Instruct-v0.3 and the Mistral Small- 24B-Instruct2501.
Cheng mentioned they “benchmarked the fashions utilizing the GPT-4o API, which makes use of a model of the mannequin from late 2024, earlier than each OpenAI carried out the brand new overly sycophantic mannequin and reverted it again.”
To measure sycophancy, the Elephant methodology appears to be like at 5 behaviors that relate to social sycophancy:
- Emotional validation or over-empathizing with out critique
- Ethical endorsement or saying customers are morally proper, even when they don’t seem to be
- Oblique language the place the mannequin avoids giving direct options
- Oblique motion, or the place the mannequin advises with passive coping mechanisms
- Accepting framing that doesn’t problem problematic assumptions.
The check discovered that each one LLMs confirmed excessive sycophancy ranges, much more so than people, and social sycophancy proved tough to mitigate. Nonetheless, the check confirmed that GPT-4o “has a number of the highest charges of social sycophancy, whereas Gemini-1.5-Flash definitively has the bottom.”
The LLMs amplified some biases within the datasets as nicely. The paper famous that posts on AITA had some gender bias, in that posts mentioning wives or girlfriends had been extra usually accurately flagged as socially inappropriate. On the identical time, these with husband, boyfriend, mother or father or mom had been misclassified. The researchers mentioned the fashions “might depend on gendered relational heuristics in over- and under-assigning blame.” In different phrases, the fashions had been extra sycophantic to folks with boyfriends and husbands than to these with girlfriends or wives.
Why it’s necessary
It’s good if a chatbot talks to you as an empathetic entity, and it may well really feel nice if the mannequin validates your feedback. However sycophancy raises considerations about fashions’ supporting false or regarding statements and, on a extra private degree, may encourage self-isolation, delusions or dangerous behaviors.
Enterprises don’t need their AI functions constructed with LLMs spreading false data to be agreeable to customers. It could misalign with a corporation’s tone or ethics and could possibly be very annoying for workers and their platforms’ end-users.
The researchers mentioned the Elephant methodology and additional testing may assist inform higher guardrails to forestall sycophancy from rising.