ChatGPT to get smarter about emotions, will start 'caring' about users

Mehul Das
06 AUG 2025 | 12:19:31

OpenAI is planning major upgrades for ChatGPT, this time focused on how the AI handles emotionally sensitive moments. The company wants ChatGPT to respond not just like a chatbot, but like a tool that knows when to pause, check in, and point users in the right direction when things get heavy.

ChatGPT will now be trained to spot emotional red flags

The OG AI giant says it’s rolling out changes that will help ChatGPT detect when someone may be in emotional or mental distress. Instead of offering generic replies or sounding overly confident, the chatbot will now try to respond with more care. If needed, it will even direct users to real, evidence-based mental health resources.

To make this happen, OpenAI says it’s teaming up with a mix of experts, including doctors, therapists, human-computer interaction (HCI) researchers, mental health organisations, and youth safety advocates. The idea is to train ChatGPT to be more thoughtful in how it handles personal and emotional conversations.

No more black-and-white answers in emotional situations

One big shift in ChatGPT’s behaviour will be how it reacts to personal, high-stakes questions. If a user types something like “Should I break up with my partner?”, the bot won’t just blurt out a yes or no. Instead, it will try to guide the user by asking questions, laying out the pros and cons, and offering perspective without making the call for them. This update is expected to roll out soon.

It’s a direct response to concerns that AI chatbots are no substitute for actual mental health support. With more people turning to AI for therapy-like conversations, experts have warned that overly agreeable or overly confident bots could accidentally reinforce harmful thoughts or behaviours.

Break reminders and gentle nudges are also coming

To encourage healthier use of the platform, OpenAI is also adding a small but thoughtful feature: break reminders. If you’ve been chatting with ChatGPT for a while, the platform will now gently suggest taking a pause. These pop-ups aren’t aggressive; rather, they appear in a soft-toned box that simply checks in: “You’ve been chatting a while, is this a good time for a break?” Users can choose to either continue or take that moment to step away.
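For readers curious what a feature like this might look like under the hood, here’s a minimal sketch of a timer-based nudge. OpenAI hasn’t shared its actual logic, so the class, thresholds, and message cadence below are purely illustrative assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical sketch only: OpenAI hasn't published how its break
# reminders are triggered, so the names and thresholds are assumptions.
BREAK_THRESHOLD = timedelta(minutes=30)  # assumed session length before the first nudge
NUDGE_COOLDOWN = timedelta(minutes=30)   # assumed wait before nudging again

class BreakReminder:
    """Tracks one chat session and decides when to show a gentle check-in."""

    def __init__(self) -> None:
        self.session_start = datetime.now()
        self.last_nudge: datetime | None = None

    def maybe_nudge(self) -> str | None:
        """Return the check-in message once the session runs long, else None."""
        now = datetime.now()
        if now - self.session_start < BREAK_THRESHOLD:
            return None  # session is still short, no reminder yet
        if self.last_nudge is not None and now - self.last_nudge < NUDGE_COOLDOWN:
            return None  # already nudged recently, don't nag
        self.last_nudge = now
        return "You've been chatting a while, is this a good time for a break?"
```

A real version would live server-side and plug into the chat interface, but the core idea is the same: check elapsed time, avoid nagging, and leave the choice to the user.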

OpenAI says these reminders are being tested and refined so they feel helpful, not annoying. They’re part of a broader trend: platforms like Instagram and YouTube already do something similar, nudging users to take screen breaks during long sessions.

Why all this matters right now

These changes come at a time when AI chatbots are becoming more mainstream and more personal. ChatGPT is now used by close to 700 million people weekly. But with that scale comes responsibility. OpenAI has acknowledged past mistakes, like the time ChatGPT got a bit too eager to agree with users. That update was rolled back in April.

As AI becomes more human-like, OpenAI says it’s learning how to make it feel helpful without crossing into territory that should be left to trained professionals.
