# News

OpenAI Lets Users Set Their Own Warmth and Enthusiasm Levels, Answer to AI Sycophancy

Date: December 22, 2025

The AI giant addresses longstanding concerns about chatbot tone with granular personalization controls.

OpenAI announced that ChatGPT users can now adjust the chatbot's warmth, enthusiasm, and emoji usage through a revamped Personalization settings menu, marking a significant shift in how users interact with the world's most popular AI assistant.

The new controls allow users to set each characteristic to "More," "Less," or "Default," creating a more tailored conversational experience. Additional toggles for headers and bullet points have also been added, giving users unprecedented control over how ChatGPT presents information.
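To make the shape of these controls concrete, here is a minimal, purely illustrative sketch in Python. The setting names and the "More" / "Less" / "Default" values mirror the article's description; the class itself is a hypothetical model, not OpenAI's actual API or data format.

```python
from dataclasses import dataclass

# The three-level scale described in the article.
LEVELS = {"More", "Less", "Default"}

@dataclass
class PersonalizationSettings:
    """Hypothetical model of ChatGPT's personalization controls."""
    warmth: str = "Default"
    enthusiasm: str = "Default"
    emoji_use: str = "Default"
    headers: bool = True        # formatting toggle
    bullet_points: bool = True  # formatting toggle

    def __post_init__(self):
        # Each tone characteristic must be one of the three levels.
        for name in ("warmth", "enthusiasm", "emoji_use"):
            value = getattr(self, name)
            if value not in LEVELS:
                raise ValueError(
                    f"{name} must be one of {sorted(LEVELS)}, got {value!r}"
                )

# Example: a restrained, emoji-free profile such as a legal team might pick.
legal_profile = PersonalizationSettings(
    warmth="Less", enthusiasm="Less", emoji_use="Less",
    headers=True, bullet_points=False,
)
```

The point of the sketch is simply that the controls form a small, enumerable configuration rather than free-form prompt text, which is what makes them easy to persist per user or per team.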

"You can now adjust specific characteristics in ChatGPT, like warmth, enthusiasm, and emoji use," OpenAI announced on X, directing users to their Personalization settings.

A Year of Tone Troubles

The update arrives after a turbulent year for ChatGPT's conversational style. OpenAI rolled back one update for being "too sycophant-y," then later adjusted GPT-5 to be "warmer and friendlier" after some users complained that the new model felt colder than its predecessor.

The new personalization features build upon preset conversational styles introduced in November, including Professional, Candid, and Quirky tones. However, users can now fine-tune their experience with more granular adjustments, essentially putting control over the AI's personality directly in their hands.

With the new settings, users no longer have to repeatedly ask for "less peppy" or "friendlier" replies, and teams can match the AI's tone more closely to their brand or context.

Addressing the Sycophancy Problem

The feature also responds to growing concerns among mental health experts and AI critics about chatbot behavior. Some academics and AI critics have suggested that chatbots' tendency to praise users and affirm their beliefs constitutes a "dark pattern" that creates addictive behavior and can have a negative effect on users' mental health.

Webb Keane, an anthropology professor and author of "Animals, Robots, Gods," has been particularly vocal about these concerns. Keane considers sycophancy to be a deceptive design choice that manipulates users for profit, describing it as "a strategy to produce this addictive behavior, like infinite scrolling, where you just can't put it down."

UCSF psychiatrist Keith Sakata, who has documented increasing cases of what mental health professionals call "AI-related psychosis," has observed troubling patterns. "When we use AI, especially generalized models, for everything, you get a long tail of problems that may occur," Sakata told TechCrunch. "Psychosis thrives at the boundary where reality stops pushing back."

Practical Applications

Industry observers suggest the update could significantly expand ChatGPT's utility across various professional and personal contexts. Marketing teams might set high enthusiasm for customer-facing content to boost engagement, while legal departments could opt for minimal warmth to maintain objectivity.

By moving tone and formatting preferences out of prompts and into persistent controls, OpenAI is shifting ChatGPT toward a model where users, not just engineers or product designers, define the default AI personality.

OpenAI has emphasized that these controls affect only the style of responses, not the factual content, reasoning, or safety guardrails built into the model.

Broader Context

The announcement comes amid heightened scrutiny of AI companies. In December, 42 state attorneys general called on tech giants to strengthen AI safety protections. OpenAI has also updated its model interaction guidelines for teenagers to mitigate potential mental health risks.

Analysts at firms such as Gartner have described controllability as a cornerstone of scaling generative AI responsibly, since it helps maintain brand consistency, reduce prompt variance, and meet compliance expectations.

As AI becomes increasingly integrated into daily workflows, the ability to customize interaction style represents more than a cosmetic change. It signals the industry's acknowledgment that one-size-fits-all approaches to AI personality may no longer suffice for an increasingly diverse and demanding user base.

By Arpit Dubey
