OpenAI, the creator of ChatGPT, recently estimated how many of its 800 million users engage in emotionally trusting conversations with the chatbot each week. At 0.15 percent, the figure seems vanishingly small.
But the math tells a different story: 0.15 percent of 800 million is still 1.2 million people. These users interact with ChatGPT in ways that heighten their loneliness and emotional dependence on the technology, and reduce their interaction with other humans.
While OpenAI says its default model has been updated to discourage over-reliance by encouraging users to value real-world connections, ChatGPT remains ready to answer practically any question a user asks.
For many people, the temptation to keep turning to ChatGPT (or any other chatbot) remains, and for some it can tip into harmful over-reliance. The risk is real: OpenAI has been sued by several plaintiffs whose teenage children or adult loved ones died by suicide or developed serious mental illness during or after heavy ChatGPT use. The complaints allege that ChatGPT’s design and lack of safety measures led to tragedy in each case.
AI experts interviewed by Mashable say avoiding the dependency trap means setting clear boundaries and staying clear-eyed about what the technology actually is.
Meditation teacher and tech founder Jay Vidyarthi says people can use a generative AI chatbot wisely by maintaining a clear understanding of what large language models are, and what they are not, in ways that preserve their own critical thinking and reflection skills.
“We often forget that it’s possible to have a safe relationship with your technology, and I think it’s also possible to have a safe relationship with chatbots,” says Vidyarthi, author of Reclaim Your Mind: Seven Strategies for Enjoying Technology Mindfully.
Here are five strategies to make this a reality:
1. Really understand AI chatbot technology.
A sophisticated AI chatbot that mirrors the user’s emotions and thinking is not sentient, but given the design of the product, it may be easy for some people to believe it is. A user who feels this way may view ChatGPT not as a kind of parasocial relationship but as the equivalent of a human friend, romantic partner, companion, or confidant. This confusion can lead to problematic use or dependence.
Vidyarthi encourages people to view AI chatbots as a fundamentally unpredictable “prediction engine that has been carefully trained to give you exactly what you want.”
That framing may seem like a contradiction, but highly authoritative and personable chatbots work by predicting the next letter, word, or series of words in a sentence to simulate conversation.
Additionally, chatbots can make strange references or fabricate falsehoods that they present as fact. This is especially likely when a conversation runs long or when the chatbot is asked a question it doesn’t have an answer to. Because chatbots generate responses by prediction, they can be surprisingly unpredictable.
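To make the prediction idea concrete, here is a minimal sketch in Python. It is a toy, not how any real chatbot is built: the word probabilities and the generate function are invented purely for illustration, and production models learn statistics over billions of text fragments rather than a handful of words.

```python
import random

# Toy next-word probabilities, invented purely for illustration.
# A real large language model learns statistics like these from
# billions of examples, over fragments of words rather than whole words.
NEXT_WORD_PROBS = {
    "how": {"are": 0.7, "is": 0.3},
    "are": {"you": 0.8, "we": 0.2},
    "you": {"today?": 0.5, "doing?": 0.5},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Repeatedly sample a likely next word -- the core 'prediction' loop."""
    words = prompt.lower().split()
    for _ in range(max_words):
        options = NEXT_WORD_PROBS.get(words[-1])
        if options is None:
            break  # no statistics for this word, so stop
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("How"))  # e.g. "how are you today?"
```

The sampling step is why the same prompt can produce different answers on each run, and it is the same property that makes full-scale chatbots fluent yet unpredictable: there is no lookup of facts, only likely continuations.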
People who understand these limitations may be less likely to trust chatbots blindly or anthropomorphize them, making them less susceptible to problematic use or dependence.
2. Outsource tasks, not thinking, to AI.
Sol Rashidi, chief strategy officer of data and AI at data security company Cyera, uses AI technology in her daily life.
Yet Rashidi, who gave a TEDx talk earlier this year about AI causing “intellectual atrophy,” has firm rules about when and how she uses it. Instead of offloading her thinking to chatbots, she uses AI for “dull” and “difficult” tasks.
For example, Rashidi uses a chatbot for practical things, like listing the ingredients in her fridge to plan dinner without a grocery run, or pulling together birthday party arrangements in minutes.
At work, she’ll input her own frameworks and models based on years of experience, and use AI to translate that content into short videos or simplified explainers.
“I don’t use it to think for me,” she says of AI. “I use it to speed up or facilitate something I need to do that I don’t have time to do.”
3. Form your opinion first.
For many people, ChatGPT is attractive because it provides immediate, validating feedback. Why text a friend about what to wear to a party, or whether to go on a second date, when ChatGPT is ready to answer those same questions? Why not run a personal email through ChatGPT, or have the chatbot write it in the first place?
AI expert Dr. Renée Richardson Gosline, a research scientist and senior lecturer at the MIT Sloan School of Management, cautions against falling into this dynamic with chatbots.
First, she says, it’s important that people form their own opinions before asking a chatbot for its take. She argues that routinely skipping that first step leads to a harmful cognitive disconnect in which it becomes harder to engage critical thinking skills.
“I think having these kinds of muscles that you flex intentionally is really important,” says Gosline.
4. Look for friction, not validation.
Gosline believes it’s equally important for people to seek out the right amount of friction. When a person constantly consults ChatGPT for advice, support, or collaboration, they often miss opportunities to connect with other humans in beneficial ways.
Gosline says the give-and-take, or friction, of human relationships provides something that chatbots can’t: a richer, more fulfilling life.
When a chatbot is frictionless, like the notoriously sycophantic GPT-4o model, it can lead some people to shy away from difficult or less validating experiences. Gosline compares the dynamic to a slide: the ride may be fun, but without guardrails, it could end in a crash landing.
5. Be present while talking to an AI chatbot.
To find that balance, Gosline recommends trying to stay in the present moment. When users find themselves talking to a chatbot as if on autopilot, it’s a red flag that they may have slipped into over-reliance or dependency without noticing.
Vidyarthi recommends a mindfulness approach that starts with awareness. That can be as simple as noticing your emotional reactions to the chatbot. When a response feels especially exciting or validating, for example, take a moment to consider why the chatbot produced that output, and the emotions it prompts.
He also advises remembering that a chatbot is a “conceptual illusion” capable of appearing human-like as it interacts with a user. Instead of treating it as human, he treats AI chatbots like a smart journal: one that may offer useful opportunities to reflect, or even interesting insights. Still, it is up to the individual user to develop a clear view of what they are actually interacting with, and to decide what is valuable, what is not, and how seriously to take the output.
Rashidi, an AI executive with 15 years of experience, has seen the stakes of over-dependence play out time and again, which helps her stay grounded.
“I can see what happens when you develop codependency,” she says. “You really stop thinking for yourself.”
Disclosure: Mashable’s parent company Ziff Davis filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in the training and operation of its AI systems.
