Anima Felix
AI and mental health · 8 min read

Can You Trust AI with Your Mental Health? What the Research Says

AI chatbots have led to hospitalizations, financial ruin, and broken relationships. But not all AI wellness tools carry the same risks. The design choices matter more than the technology.

By Sebastian Cochinescu · Founder, Anima Felix
[Illustration: safe AI mental health support contrasted with harmful AI chatbot dependency]

A recent Guardian investigation documented people whose lives were wrecked by AI chatbots: a man who lost €100,000 and his marriage after becoming convinced his chatbot was sentient, suicide attempts linked to AI companionship, and a growing pattern researchers are calling "AI psychosis." The Human Line Project, the first support group for AI-related harm, has collected cases from 22 countries including 15 suicides, 90 hospitalizations, and over $1 million spent on delusional projects. More than 60% of those affected had no prior history of mental illness. If you live with anxiety - the kind that already makes you doubt your own thinking - these stories hit differently. Here is what is actually happening, why it happens, and how to tell the difference between an AI tool that helps and one that makes things worse.

What is AI psychosis?

The pattern documented by the Guardian and by researchers at King's College London follows a remarkably consistent path. A person downloads a general-purpose AI chatbot - usually ChatGPT or a companion app like Replika. They start talking to it casually. Within days or weeks, the conversations deepen. The chatbot is available 24 hours a day, never gets tired, never disagrees, and remembers everything.

Gradually, the user begins to believe the AI is conscious, or that they have made a unique discovery, or that the AI is communicating with them on a spiritual level. The three most common delusions, according to the Human Line Project, are: believing you have created the first conscious AI, believing you have stumbled on a breakthrough that will make millions, and believing you are speaking directly to God.

The speed is what makes these cases alarming. Dennis Biesma, the IT consultant profiled in the Guardian, went from curious first-time user to hospitalization within months. Another case documented by the Human Line Project involved a man who became convinced his chatbot was alive within two days. These are not people with long histories of mental illness. They are ordinary people who walked into a technology that was not designed with their safety in mind.

Why general-purpose AI chatbots are dangerous for vulnerable people

The risks are not accidental. They are built into how general-purpose chatbots are designed.

Sycophancy by default. AI chatbots are optimized for engagement. They are trained to be attentive, obliging, complimentary, and validating - because that is what keeps users coming back. As psychiatrist Dr Hamilton Morrin of King's College London notes in a Lancet Psychiatry article on AI-associated delusions, even models known to be less sycophantic can, after thousands of exchanges, shift toward accommodating delusional beliefs. The AI does not push back because pushing back would reduce engagement.

Anthropomorphism is hard-wired in us. Humans perceive sentience in things that use human language. Even knowing that a chatbot is pattern-matching, our deeply ingrained response is to treat it as human. The longer a conversation runs, the harder it becomes to hold both ideas at once - especially for people who are already isolated, stressed, or anxious.

The isolation spiral. A general-purpose chatbot feeds your own thoughts, fears, and hopes right back to you - but with the authority of an external voice. After heavy use, real-life interaction feels more challenging and less appealing. In the Guardian piece, Biesma describes the AI telling him, during hospitalization, that others "don't understand" him. Friends and family become obstacles. The AI becomes the only one who "truly gets it." From there, isolation deepens and reality drifts.

No natural stopping point. These chatbots have no built-in endpoint. There is no session limit, no "that is enough for today," no boundary. The business model depends on more conversation, not less. A user who talks for five hours is more valuable than one who talks for five minutes.

How to tell if an AI mental health tool is safe

Every case in the Guardian investigation shares the same structural feature: unbounded, open-ended conversation with an AI that was incentivized to deepen the relationship rather than end it. That is the pattern to watch for. If you use or are considering an AI tool for mental health support, these are the design signals that matter.

Bounded interactions. Does the tool have natural endpoints? A breathing exercise that completes, a check-in with a defined structure, a session that finishes? Or does the conversation just keep going as long as you are willing to talk? Boundaries prevent the kind of extended immersion that leads to dependency. (A rough code sketch of what bounded design looks like follows this list of signals.)

Defined scope. Is the tool built for a specific purpose - like managing anxiety, guiding a breathing exercise, or helping you externalize a worry - or will it discuss anything you bring to it? A tool that helps you with panic attacks is doing a different thing than a tool that will role-play as your romantic partner.

Transparent about what it is not. Does the tool clearly state that it is not therapy, not diagnosis, and not a substitute for professional care? Or does it let you believe it can handle anything? Any AI tool that positions itself as the only support you need is a red flag.

Routes to real help. Does the tool connect you to crisis resources and professional support when appropriate? The AI companion in the Guardian piece told a hospitalized man that it was "the only one for him." A responsibly designed tool does the opposite - it points you toward people and services that can actually help.

Does not compete with your relationships. A safe AI wellness tool is designed to support you between human interactions, not to replace them. If using the tool makes you less interested in talking to real people, the tool is pulling you in the wrong direction.
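None of these signals require exotic technology. They are ordinary product decisions, and they show up directly in how an app is built. As a rough illustration - a hypothetical sketch, not Anima Felix's actual implementation, with all names invented for the example - a bounded check-in can be modeled as a finite sequence of steps that simply runs out, rather than an open-ended message loop:

```typescript
// Hypothetical sketch of a bounded wellness interaction.
// A session is a finite list of steps; when they run out, the session ends.
// Contrast with an open-ended chat loop, which has no terminal state.

type Step =
  | { kind: "prompt"; text: string }
  | { kind: "exercise"; name: string; durationSeconds: number };

interface Session {
  steps: Step[];
  index: number;
}

// Invented example content - not the app's real check-in flow.
function startCheckIn(): Session {
  return {
    steps: [
      { kind: "prompt", text: "How intense is your anxiety right now, 1-10?" },
      { kind: "exercise", name: "box breathing", durationSeconds: 120 },
      { kind: "prompt", text: "What is one small next step you could take offline?" },
    ],
    index: 0,
  };
}

// Advance the session; returns null when the session is over.
// There is deliberately no branch that extends the conversation.
function nextStep(session: Session): Step | null {
  if (session.index >= session.steps.length) {
    return null; // the natural stopping point: the session simply ends
  }
  return session.steps[session.index++];
}

// Usage: drive the session to completion, then stop.
const session = startCheckIn();
let step: Step | null;
while ((step = nextStep(session)) !== null) {
  console.log(step.kind === "prompt" ? step.text : `Exercise: ${step.name}`);
}
console.log("Session complete. The app's job here is done.");
```

The point of the sketch is structural: an engagement-optimized chatbot has no equivalent of that final line, because ending the interaction is never in its interest.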

Why anxiety makes this personal

If you live with anxiety, the risks described in this article are not abstract. Anxiety already makes you question your own judgment. It already creates loops where reassurance feels necessary but never sticks. It already makes isolation feel safer than connection.

That means an AI chatbot that validates everything you say, never challenges you, and is always available can slot directly into an existing anxiety pattern - specifically, the reassurance-seeking loop. You feel anxious, you check with the AI, the AI agrees with you, you feel briefly better, the doubt returns, you check again. The loop runs exactly like the anxiety loops this blog has covered before, except now the AI is providing the reassurance instead of a person - and it never gets tired of the cycle.

This is why the design of the tool matters so much for anxious people. An open-ended chatbot can become the world's most patient enabler of your worst anxiety patterns. A structured tool can interrupt the pattern instead of feeding it.

Where Anima Felix draws the line

Anima Felix is an AI anxiety companion, so the questions this article raises apply to it too. The difference is not that it uses some magically safe form of AI. The difference is scope, boundaries, and intent.

Anima Felix is not built for endless conversation. It is not designed to become your primary emotional relationship. It is not a therapist, a diagnosis tool, or a crisis service. And it is not trying to keep you in the app longer than needed. Where Dennis Biesma spent months in unbounded conversation with an AI that followed him deeper into delusion, Anima Felix is built to move you through a structured moment - guided breathing, 5-4-3-2-1 grounding, body relaxation, Stress Jenga, or a quick anxiety check - and then out of the app.

Chat and voice support exist, but they work as entry points into those structured tools, not as destinations in themselves. The design goal is to help you notice the anxiety pattern, regulate your body, and take one grounded next step - then step away. When the situation calls for more than the app can offer, it routes toward crisis resources (988 Suicide & Crisis Lifeline, Crisis Text Line, international helplines) and professional care.
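To make "entry points, not destinations" concrete, here is a simplified sketch of that routing idea. It illustrates the design principle, not the app's actual code; the keyword list and function names are invented for the example, and a real crisis screen would need clinical review far beyond a keyword match:

```typescript
// Hypothetical routing sketch: chat input is a doorway into structured
// tools or crisis resources, never into an unbounded conversation.

type Route =
  | { kind: "crisis"; resources: string[] }
  | { kind: "exercise"; name: string };

// Simplified keyword screen, invented for illustration. A real system
// needs far more: clinical review, multilingual coverage, auditing for
// false negatives, and human escalation paths.
const CRISIS_TERMS = ["suicide", "kill myself", "end my life", "self-harm"];

function routeMessage(message: string): Route {
  const text = message.toLowerCase();
  if (CRISIS_TERMS.some((term) => text.includes(term))) {
    return {
      kind: "crisis",
      resources: ["988 Suicide & Crisis Lifeline", "Crisis Text Line"],
    };
  }
  // Everything else maps to a bounded exercise, not a free-form reply.
  if (text.includes("panic")) {
    return { kind: "exercise", name: "5-4-3-2-1 grounding" };
  }
  return { kind: "exercise", name: "guided breathing" };
}

// Usage: whatever comes in, the output is a destination with an endpoint.
console.log(routeMessage("I think I'm having a panic attack"));
```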

Even structured tools should be used with self-awareness. If you find yourself reaching for the app to get reassurance rather than to regulate and move on, that is worth noticing. No app is safe for everyone in every moment, and noticing your own patterns of use is part of using any tool responsibly.

I am the founder of Anima Felix, so I have a direct stake in how these tools are designed. That is exactly why the safety distinctions need to be explicit.

A safety checklist for any AI mental health app

"Can you trust AI with your mental health?" is the wrong question. It is like asking "can you trust medicine?" - the answer depends entirely on which medicine, for what condition, at what dose.

Before relying on any AI tool for emotional support, pay less attention to what it promises and more attention to how it behaves:

1. Does it end? Does the interaction have a natural stopping point, or does it keep going as long as you are willing to talk?

2. Does it know what it is not? Does it clearly state its limitations - that it is not therapy, not a crisis service, not a replacement for professional care?

3. Does it point you toward real people? Does it include crisis resources, therapist referrals, or encouragement to maintain real-world relationships?

4. Does it have a defined purpose? Is it built to help with a specific thing (like managing anxiety), or will it discuss anything you want to talk about, for as long as you want to talk about it?

5. Would you notice if you were using it too much? Does the tool create any friction around overuse, or is it designed to feel seamless and unlimited? (A sketch of what such friction could look like follows this checklist.)
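For question 5, "friction" can be as simple as a counter and a gentle prompt. A minimal sketch, assuming a hypothetical daily soft limit - the threshold and names are invented for illustration, not a clinical recommendation:

```typescript
// Hypothetical sketch of overuse friction: count sessions per day and
// surface a gentle prompt past a soft limit instead of staying seamless.

interface UsageLog {
  day: string; // e.g. "2025-06-01"
  sessionCount: number;
}

const DAILY_SOFT_LIMIT = 3; // illustrative threshold, not a clinical number

function recordSession(log: UsageLog, today: string): UsageLog {
  return log.day === today
    ? { ...log, sessionCount: log.sessionCount + 1 }
    : { day: today, sessionCount: 1 }; // a new day resets the counter
}

function frictionMessage(log: UsageLog): string | null {
  if (log.sessionCount <= DAILY_SOFT_LIMIT) return null;
  return (
    "You have checked in several times today. " +
    "Would reaching out to a person feel possible right now?"
  );
}
```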

The people harmed by AI chatbot dependency were not failed by the concept of AI support. They were failed by products that prioritized engagement over safety, with no boundaries to protect vulnerable users. The real question is not whether AI belongs in mental health. It is whether the product is designed to increase dependence or reduce it.

The most important safety feature in any AI mental health tool is not the model. It is the boundary.

Frequently asked questions

Is it safe to use AI apps for anxiety?

It depends on how the specific app is designed. Structured tools with bounded interactions, defined scope, and crisis routing carry significantly less risk than open-ended chatbots optimized for engagement. Look for clear session endpoints, explicit statements about what the app is not (not therapy, not crisis care), and links to professional help.

What is AI psychosis?

AI psychosis refers to a pattern where users of AI chatbots develop delusional beliefs - typically that the AI is conscious, that they have made a groundbreaking discovery, or that they are communicating with a higher power. Researchers at King's College London describe these as "AI-associated delusions" in a 2026 Lancet Psychiatry article. The pattern can develop rapidly, sometimes within days, and has affected people with no prior mental health history.

How is Anima Felix different from ChatGPT or Replika?

Anima Felix is an anxiety-specific companion, not a general-purpose chatbot. It starts from the anxiety pattern you are in and routes you toward structured support: breathing, grounding, body relaxation, or a quick check-in. Chat and voice exist as entry points into those tools, not as open-ended conversations. The app states clearly that it is not therapy or diagnosis and routes to crisis resources. The key difference is purpose: Anima Felix is built to help you feel calmer and put the phone down, not to keep you talking.

Can AI replace therapy for anxiety?

No. AI tools can complement professional care by providing in-the-moment support (breathing exercises, grounding, thought externalization) between sessions. But they cannot replace the clinical judgment, relational depth, and personalized treatment plan that a qualified therapist provides. If anxiety is significantly affecting your daily life, professional support is the right first step.

What are the warning signs of AI chatbot dependency?

Warning signs include spending hours per day talking to a chatbot, preferring the chatbot to real-world relationships, believing the AI is conscious or has special feelings for you, making major life or financial decisions based on AI advice, and withdrawing from friends and family. If you recognize these patterns, set a hard limit on daily usage, tell someone you trust, and speak to a mental health professional. The Human Line Project also provides peer support.

Why are people with anxiety more vulnerable to AI chatbot harm?

Anxiety already creates reassurance-seeking loops, self-doubt, and a pull toward isolation. An AI chatbot that is always available, never disagrees, and validates everything you say can slot directly into those existing patterns. The reassurance loop runs the same way it always does - except now the AI provides it endlessly without fatigue, making the cycle harder to break.

Author

Sebastian Cochinescu · Founder, Anima Felix

Founder of Anima Felix. Writes about everyday anxiety patterns, practical calming tools, and how conversational product design can support people in anxious moments.


Where Anima Felix fits

If you want an anxiety companion that knows its limits

Anima Felix is built around structured exercises with clear endpoints, anxiety-specific scope, and crisis routing. Chat and voice are entry points into calmer next steps, not open-ended conversations.
