ChatGPT to get smarter about emotions, will start 'caring' about users

06 AUG 2025 | 12:19:31

OpenAI is planning some major upgrades for ChatGPT, this time focused on how the AI handles emotionally sensitive moments. The company wants ChatGPT to respond less like a generic chatbot and more like a tool that knows when to pause, check in, and point users in the right direction when things get heavy.

ChatGPT will now be trained to spot emotional red flags

The OG AI giant says it’s rolling out changes that will help ChatGPT detect when someone may be in emotional or mental distress. Instead of offering generic replies or sounding overly confident, the chatbot will now try to respond with more care. If needed, it will even direct users to real, evidence-based mental health resources.

To make this happen, OpenAI says it’s teaming up with a mix of experts, including doctors, therapists, human-computer interaction (HCI) researchers, mental health organisations, and youth safety advocates. The idea is to train ChatGPT to be more thoughtful in how it handles personal and emotional conversations.

No more black-and-white answers in emotional situations

One big shift in ChatGPT’s behaviour will be how it reacts to personal, high-stakes questions. If a user types something like “Should I break up with my partner?”, the bot won’t just blurt out a yes or no. Instead, it will try to guide the user by asking questions, laying out the pros and cons, and offering perspective without making the call for them. This update is expected to roll out soon.

It’s a direct response to concerns that AI chatbots are no real substitute for professional mental health support. With more people turning to AI for therapy-like conversations, experts have warned that overly agreeable or overly confident bots might accidentally reinforce harmful thoughts or behaviours.

Break reminders and gentle nudges are also coming

To encourage healthier use of the platform, OpenAI is also adding a small but thoughtful feature: break reminders. If you’ve been chatting with ChatGPT for a while, the platform will now gently suggest taking a pause. These pop-ups aren’t aggressive; rather, they appear in a soft-toned box that simply checks in: “You’ve been chatting a while, is this a good time for a break?” Users can choose to continue or take that moment to step away.

OpenAI says these reminders are being tested and refined so they feel helpful, not annoying. They’re part of a broader trend: platforms like Instagram and YouTube already nudge users to take screen breaks during long sessions.

Why all this matters right now

These changes come at a time when AI chatbots are becoming more mainstream and more personal. ChatGPT is now used by close to 700 million people weekly, and with that scale comes responsibility. OpenAI has acknowledged past mistakes, like an update that made ChatGPT a bit too eager to agree with users; that change was rolled back in April.

As AI becomes more human-like, OpenAI says it’s learning how to make it feel helpful without crossing into territory that should be left to trained professionals.
