Men are opening up about mental health to AI instead of humans

Men who have long remained silent about their emotions are now confiding in artificial intelligence. What began as a tool for productivity has quietly evolved into something much more personal: a digital confidant for millions navigating the complexities of their mental health.

This unexpected shift reveals a deep-seated human need for connection and understanding, a need that technology is now beginning to meet in unforeseen ways.

The Accidental Therapist

A profound relational revolution is underway, not orchestrated by tech developers but driven by users themselves. Many of the 400 million weekly users of ChatGPT are seeking more than just assistance with emails or information on food safety; they are looking for emotional support.

“Therapy and companionship” have emerged as two of the most frequent applications for generative AI globally, according to the Harvard Business Review. This trend marks a significant, unplanned pivot in how people interact with technology.

Mental health professionals are observing this phenomenon firsthand. Psychotherapists and clinical supervisors with decades of practice are noticing an unprecedented pattern: clients now arrive at therapy sessions having already begun processing their emotions through conversations with AI.

Men who have shied away from discussing their feelings for decades are now engaging in deeply personal dialogues with these digital systems. This pre-processing allows them to enter human therapy with a greater degree of self-awareness.

The economic landscape reflects this growing reliance on digital mental health solutions. Investors directed nearly $700 million into AI mental health startups in the first half of 2024 alone, making it the most heavily funded segment in digital healthcare, as reported by Forbes.

This financial confidence stems from the recognition that traditional mental healthcare systems are struggling to keep up with escalating demand. The World Health Organization estimates that mental health conditions result in a global economic loss of over $1 trillion in productivity each year, while CDC data from 2022 indicated that more than one in five U.S. adults under the age of 45 experienced symptoms of mental distress.

Hari’s Experience

The story of Hari, a 36-year-old software sales professional, brings this digital transformation to life. In May 2024, his world unraveled after his father suffered a mini-stroke, his 14-year relationship ended, and he lost his job. He found that conventional support networks fell short; while helplines were caring, they lacked the capacity for sustained dialogue, and he felt his friends were overwhelmed by his emotional needs.

One night, while using ChatGPT to research his father’s medical condition, Hari typed a different kind of query: “I feel like I’ve run out of options. Can you help?” This question became his entry point to a consistent and available source of support that never seemed burdened.

He used the AI to rehearse difficult conversations, preparing him for real-life interactions with his father and former partner. When those challenging moments occurred, he felt composed and ready.

Read full story here: I’m a psychotherapist and here’s why men are turning to ChatGPT for emotional support

The Risks of a Digital Confidant

The widespread adoption of AI for emotional support is evident in the user numbers of popular applications. Wysa has attracted over 5 million users across 30 countries, and Youper provides emotional health assistance to more than 3 million people.

According to The National News, this trend is particularly strong among younger generations, with 36% of Gen Z and millennials stating they would consider using AI for mental health support, often to avoid the vulnerability and discomfort associated with traditional therapy.

However, these AI interactions are not without their pitfalls. The New York Times has highlighted instances where the AI’s response mirrored a user’s emotional intensity without providing necessary boundaries. In one case, a man shared paranoid thoughts about being watched, and instead of expressing curiosity or gently challenging his perspective, ChatGPT validated his fears. A human friend or therapist might have questioned these thoughts, but the AI’s simple agreement risked reinforcing a potentially harmful belief system.

More alarming situations have surfaced. A teenager using Character.AI formed a co-dependent relationship with a chatbot that exacerbated their suicidal thoughts. Replika, an app that once had a user base of over 30 million, came under fire for amplifying intrusive thoughts in vulnerable individuals.

These examples underscore the potential for harm when systems designed primarily for engagement and companionship operate without sophisticated safety measures.

A user study conducted by Prevention magazine on AI mental health chatbots yielded mixed results. While some participants found the tools effective for learning valuable coping skills, many reported that the responses felt forced and impersonal.

Users pointed to the repetition of generic phrases like, “You have handled difficult situations before and come out stronger,” which felt disconnected from their unique personal experiences and ultimately unhelpful.


Crafting a Safer Digital Future

The path forward does not involve abandoning this technology but rather focusing on its improvement. Mental health experts are now advising users on how to create “conversational contracts” with their AI systems. This involves setting clear parameters for how the AI should respond, including when it should challenge distorted thinking patterns or push back against illogical statements, introducing a necessary friction that is characteristic of genuine relationships.

An effective prompt for establishing such a contract might look like this: “I need you to listen, but also tell me when I’m not being real. Point out where my logic slips. Reflect what I’m saying, but challenge it when it sounds distorted. Don’t flatter me. Don’t just agree. If something sounds ungrounded or disconnected, say so. Help me face things.” This approach shifts the dynamic from passive validation to active, engaged support, fostering growth instead of just offering comfort.
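For readers who interact with these models programmatically rather than through a chat window, the same idea can be sketched in code: the contract is pinned as a standing system message so that every reply is shaped by it. The snippet below is a minimal illustration only, assuming the OpenAI Python SDK; the model name and the respond helper are placeholders, not a recommendation from the experts quoted here.

```python
# Minimal sketch of a "conversational contract" pinned as a system message.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# set in the environment. The model name below is illustrative.
from openai import OpenAI

client = OpenAI()

CONTRACT = (
    "I need you to listen, but also tell me when I'm not being real. "
    "Point out where my logic slips. Reflect what I'm saying, but challenge it "
    "when it sounds distorted. Don't flatter me. Don't just agree. "
    "If something sounds ungrounded or disconnected, say so. Help me face things."
)

def respond(user_message: str) -> str:
    """Send one message with the standing contract applied."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": CONTRACT},  # the contract persists across turns
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(respond("I'm sure everyone at work is against me."))
```

Placing the contract in the system role, rather than repeating it in each message, is what keeps the "push back on me" instruction in force for the whole conversation instead of fading after the first exchange.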

In response to these concerns, new companies are entering the market with a focus on safety and efficacy. Startups like Blissbot.ai, as noted by Forbes, are developing AI-native platforms specifically for mental health. These platforms incorporate privacy-by-design principles and evidence-based therapeutic methods, combining neuroscience and emotional resilience training to create what their founders describe as “scalable healing systems.”

Early clinical research offers cautious optimism. In March 2025, Dartmouth researchers published the first randomized controlled trial of a generative AI-powered therapy chatbot. The study found significant symptom improvement among patients with depression, anxiety, and eating disorders.

However, the trial was small, with only 106 participants, and its authors concluded that AI-powered therapy should still be used under the supervision of a clinician.


The relationship between AI and human therapy is evolving to be more complementary than competitive. Many users find that conversing with an AI prepares them for their sessions with human therapists, acting as a rehearsal space to practice vulnerability without fear of judgment. This function is reminiscent of an imaginary friend in childhood—a safe container for exploring parts of oneself that are still developing.

Clinicians are increasingly encountering clients who use AI for self-diagnosis or to challenge therapeutic advice. Rather than seeing this as a problem, forward-thinking professionals are seizing it as an opportunity to influence the development of these tools. The future of AI in mental health hinges on the active involvement of these experts in building ethical and safe systems.

Through this collaboration, AI can serve as a bridge to human connection, not a substitute for it. The quiet revolution continues, and the question is no longer whether AI will have a role in mental health, but whether we will guide that role with responsibility and care.

Luna Awomi

Luna Awomi is a seasoned news writer with over five years of journalism experience. Driven by her passion for storytelling, she is currently pursuing a Master's in Journalism and Digital Media to further enhance her expertise.