
ChatGPT to get smarter about emotions, will start 'caring' about users

06 AUG 2025 | 12:19:31

OpenAI is planning some major upgrades for ChatGPT, this time focused on how the AI handles emotionally sensitive moments. OpenAI wants to make sure that ChatGPT doesn’t just respond like a chatbot, but like a tool that knows when to pause, check in, and point users in the right direction when things get heavy.

ChatGPT will now be trained to spot emotional red flags

The OG AI giant says it’s rolling out changes that will help ChatGPT detect when someone may be in emotional or mental distress. Instead of offering generic replies or sounding overly confident, the chatbot will now try to respond with more care. If needed, it will even direct users to real, evidence-based mental health resources.

To make this happen, OpenAI says it’s teaming up with a mix of experts, including doctors, therapists, HCI researchers, mental health organisations, and youth safety advocates. The idea is to train ChatGPT to be more thoughtful in how it handles personal and emotional conversations.

No more black-and-white answers in emotional situations

One big shift in ChatGPT’s behaviour will be how it reacts to personal, high-stakes questions. If a user types something like “Should I break up with my partner?”, the bot won’t just blurt out a yes or no. Instead, it will try to guide the user by asking questions, laying out the pros and cons, and offering perspective without making the call for them. This update is expected to roll out soon.

It’s a direct response to concerns that AI chatbots might not be the best substitute for actual mental health support. With more people turning to AI for therapy-like conversations, experts have warned that overly agreeable or overly confident bots might accidentally reinforce harmful thoughts or behaviours.

Break reminders and gentle nudges are also coming

To encourage healthier use of the platform, OpenAI is also adding a small but thoughtful feature: break reminders. If you’ve been chatting with ChatGPT for a while, the platform will now gently suggest taking a pause. These pop-ups aren’t aggressive; rather, they appear in a soft-toned box that simply checks in: “You’ve been chatting a while, is this a good time for a break?” Users can choose to either continue or take that moment to step away.

OpenAI says these reminders are being tested and refined so they feel helpful, not annoying. They’re part of a broader trend. Even platforms like Instagram and YouTube are already doing something similar, nudging users to take screen breaks during long sessions.

Why does all this matter right now?

These changes come at a time when AI chatbots are becoming more mainstream and more personal. ChatGPT is now used by close to 700 million people weekly. But with that scale comes responsibility. OpenAI has acknowledged past mistakes, like the time ChatGPT got a bit too eager to agree with users. That update was rolled back in April.

As AI becomes more human-like, OpenAI says it’s learning how to make it feel helpful without crossing into territory that should be left to trained professionals.
