India’s Therapy Apps Are Failing Gen Z When It Matters Most
- Esha Mariya

- Jul 30

In a society where mental health is still often whispered about, many young Indians have turned to their smartphones for solace. Therapy apps — accessible, anonymous, and always on — promise to ease anxiety, burnout, and depression. From AI chatbots that talk you down from a mental breakdown to online sessions with “certified experts,” they offer what feels like instant relief. But beneath the smooth interfaces lies a messy reality.
Young people's mental health struggles, worsened by the pandemic, have collided with a booming digital wellness market. Rapidly expanding startups like Wysa, Lissun, and MindPeers are riding a wave of investor funding and social media marketing. Their pitch is simple: affordable, stigma-free mental health support at your fingertips. For Gen Z, dealing with academic pressure, family and social expectations, climate anxiety, and an endlessly online life, these apps seem like lifelines: cheap, private, and always available.
But when it comes to actual care, the cracks begin to show.
One major concern is the qualifications of those offering support. While some apps do connect users to licensed therapists, many also rely on “listeners” or “well-being coaches” with minimal training. In some cases, users receive guidance that veers into clinical territory — such as advice on trauma, suicidal thoughts, or medication — from individuals with no formal credentials. This creates a blurry boundary between casual support and legitimate therapy. Psychiatrists and mental health professionals have warned that psychological help offered by untrained individuals, even online, may do more harm than good, especially in serious cases.
The danger isn’t just that bad advice might be unhelpful — it can be actively harmful. In serious cases like trauma, self-harm, or suicidal ideation, a missed red flag or a poorly worded chatbot response could escalate a crisis. Some users may delay or avoid seeing a real therapist, believing they’re already “getting help” through the app. Others might be misled into thinking their struggles aren’t “serious enough” for professional care. In extreme cases, relying on unqualified guidance could worsen symptoms, reinforce shame, or lead to medical emergencies that go unaddressed. When mental health support is delivered without accountability, the consequences aren’t just technical — they’re human.
Then there’s the role of AI. Chatbots like Wysa’s promise what they describe as “emotionally intelligent” conversations. While they may provide low-risk emotional support, they often oversimplify complex mental health issues and lack the depth of real clinical care. A 2023 study published in Frontiers in Digital Health warned that AI lacks the emotional understanding and contextual awareness required for genuine therapeutic work. It cannot replace the trust, empathy, and insight that human therapists provide.
Even more worrying is what’s happening with user data. Most mental health apps collect deeply personal information — moods, thoughts, even audio recordings — and not all are transparent about how that data is stored or shared. Mozilla’s Privacy Not Included report flagged several mental health apps globally for vague privacy policies and weak data protection. With no dedicated health-tech regulation in India, users have little recourse if their most vulnerable moments are mined for profit.
Another overlooked issue is the increasing pressure on users to self-diagnose. Many apps offer mood trackers or symptom quizzes that, while useful for reflection, can blur the line between awareness and misdiagnosis. Without clinical guidance, users might label themselves with disorders based on generalized content or app feedback. This can reinforce anxiety, increase dependency on the platform, or discourage people from seeking professional help altogether.
Part of the problem is that the industry operates in a regulatory gray area. The Mental Healthcare Act (2017) guarantees the right to access mental health care but doesn’t specify standards for digital platforms. The Telemedicine Guidelines of 2020 were a step forward, but they remain silent on app-based mental health startups, AI tools, or data ethics. As a result, many apps continue to function without medical oversight.
So why are these apps still growing? Because people genuinely need help. They’re meeting a real demand, even if imperfectly. India’s mental health infrastructure is under-resourced, especially outside major cities. In such a vacuum, even flawed apps can offer a measure of comfort. But the danger lies in mistaking access for adequacy.
Mental health care must be affordable and stigma-free, but it must also be ethical, competently delivered, and safe. A better future might lie in hybrid models: platforms that offer both digital convenience and clinical accountability. Until then, Gen Z users should be cautious about who, or what, they’re confiding in. A good app can be a tool for wellness; a bad one can be a digital mirage.
Mental health is not a UX problem you can fix with good design and a friendly tone. People need context, continuity, and qualified care — not just digital empathy.
SOURCES
– “Artificial Intelligence and the Therapeutic Alliance,” Frontiers in Digital Health (2023)
– “Privacy Not Included: Mental Health Apps,” Mozilla Foundation (2022)
– The Mental Healthcare Act, 2017, Government of India
– Telemedicine Practice Guidelines, 2020, Ministry of Health and Family Welfare, Government of India






