OpenAI is planning major upgrades for ChatGPT, this time focused on how the AI handles emotionally sensitive moments. OpenAI wants ChatGPT to do more than respond like a chatbot: it should act like a tool that knows when to pause, check in, and point users in the right direction when things get heavy.
The OG AI giant says it’s rolling out changes that will help ChatGPT detect when someone may be in emotional or mental distress. Instead of offering generic replies or sounding overly confident, the chatbot will now try to respond with more care. If needed, it will even direct users to real, evidence-based mental health resources.
To make this happen, OpenAI says it’s teaming up with a mix of experts, including doctors, therapists, HCI researchers, mental health organisations, and youth safety advocates. The idea is to train ChatGPT to be more thoughtful in how it handles personal and emotional conversations.
One big shift in ChatGPT’s behaviour will be how it reacts to personal, high-stakes questions. If a user types something like “Should I break up with my partner?”, the bot won’t just blurt out a yes or no. Instead, it will try to guide the user by asking questions, laying out the pros and cons, and offering perspective without making the call for them. This update is expected to roll out soon.
It’s a direct response to concerns that AI chatbots are no substitute for actual mental health support. With more people turning to AI for therapy-like conversations, experts have warned that overly agreeable or overly confident bots might accidentally reinforce harmful thoughts or behaviours.
To encourage healthier use of the platform, OpenAI is also adding a small but thoughtful feature: break reminders. If you’ve been chatting with ChatGPT for a while, the platform will now gently suggest taking a pause. These pop-ups aren’t aggressive; rather, they appear in a soft-toned box that simply checks in: “You’ve been chatting a while, is this a good time for a break?” Users can choose to either continue or take that moment to step away.
OpenAI says these reminders are being tested and refined so they feel helpful, not annoying. They’re part of a broader trend: platforms like Instagram and YouTube already do something similar, nudging users to take screen breaks during long sessions.
These changes come at a time when AI chatbots are becoming more mainstream and more personal. ChatGPT is now used by close to 700 million people weekly, and with that scale comes responsibility. OpenAI has acknowledged past mistakes, like the time ChatGPT got a bit too eager to agree with users; that update was rolled back in April.
As AI becomes more human-like, OpenAI says it’s learning how to make it feel helpful without crossing into territory that should be left to trained professionals.