MENTALHEALTH.INFOLABMED.COM - A recent study has found that people who use chatbots for emotional support or other personal reasons are more likely to report symptoms of depression or anxiety.

The research was led by a team from Mass General Brigham.

They surveyed 20,847 adults across the United States, most of whom were white.

The survey aimed to understand patterns of AI usage and their correlation with mental health indicators.

Published in JAMA Network Open, the findings show that 10.3% of respondents reported using artificial intelligence 'at least daily.'

Furthermore, 5% of participants indicated they used AI 'multiple times per day.'

Among those who used an AI program daily, nearly half used it for work.

About 11% used it for school.

Notably, 87.1% of these daily users said they used AI for personal reasons.

Those personal uses could include seeking recommendations, advice, or emotional support.

Dr. Roy Perlis, a lead author of the study, highlighted that for most people, their primary exposure to artificial intelligence comes through chatbot interactions.

The average age of the participants in this extensive study was 47 years.

A key finding was that people who used chatbots daily for personal reasons were more likely to experience at least moderate depression.

They also reported more anxiety and irritability than those who did not use AI this way.

Participants were asked how often, over the previous two weeks, they had had trouble concentrating, sleeping, or eating, or had experienced thoughts of self-harm.

These questions screen for common symptoms of depression, such as persistent sadness, low self-esteem, lack of energy, and reduced motivation.

The study also found that users ages 45 to 64 were especially likely to report depressive symptoms alongside their AI use.

The Broader Context of AI and Mental Health

Earlier research has shown that some people turn to AI for emotional support.

Some even pursue romantic relationships through AI platforms.

Early-stage studies have suggested that chatbots specifically designed for mental health treatment might serve as useful adjuncts to traditional therapy.

However, other analyses of general-purpose chatbots, such as OpenAI's ChatGPT, have raised concerns that they could pose risks for people already living with mental health conditions.

The American Psychological Association has advised against using AI as a substitute for professional therapy and psychological treatment.

Dr. Roy Perlis, a lead author of the study, acknowledged that the average difference in depression severity between chatbot users and non-users was relatively small.

Still, he cautioned that some individuals may struggle far more severely.

He elaborated, saying, 'There’s probably a subset of people where AI use is associated with no change in their mood, or even benefit in their mood.'

Dr. Perlis, who holds the position of vice chair for research in the department of psychiatry at Mass General Brigham, added, 'But that also means there are a subset where AI use is probably associated with worsening of their mood, and for some people, that can be substantially greater levels of depression.'

The researchers also observed what they described as a 'dose response': the more frequently a person used AI, the more pronounced their symptoms tended to be.

Notably, using AI for work or school was not associated with symptoms of depression.

For those who use AI for personal reasons, Perlis remarked that the nature of their interactions can 'run the gamut.'

He suggested that AI chatbots often provide a means for individuals to have 'a social interaction that otherwise would be difficult for them.'

He clarified, 'It’s not the case at all that all AI is harmful and chatbots are harmful.'

Dr. Perlis, who also serves as an associate editor of JAMA Network Open, expressed a specific concern: 'I think my concern is particularly for these sort of general-purpose chatbots. They’re really not designed to take up people’s social support or mental health support, and so when we use them that way, I think there’s some risk.'

Important Limitations and Nuances

The survey has several limitations.

Crucially, it establishes an association between AI use and negative mental health symptoms, not a cause-and-effect relationship.

The study also did not specify which AI programs participants were using.

Nor did it define what counted as 'personal use.'

The 'Vicious Cycle' Hypothesis

An alternative explanation is that people already experiencing higher levels of depression may be more inclined to seek out AI programs for companionship.

Dr. Jodi Halpern, co-director of the Kavli Center for Ethics, Science and the Public at UC Berkeley, emphasized that the study does not prove that AI causes depression.

She posited, 'It could go in either direction.'

Dr. Halpern added, 'It could be a vicious cycle, we just have no idea.'

She concluded, 'So the idea that when people are more depressed, they may use AI more for personal uses is very plausible.'

Nicholas Jacobson, an associate professor of biomedical data science, psychiatry, and computer science at Dartmouth College, suggested that people might turn to AI for therapeutic support out of dissatisfaction with conventional care.

He also highlighted the easier accessibility of AI solutions.

Jacobson stated, 'There’s nowhere near enough providers to go around.'

He continued, 'And folks are looking for greater support than they can access otherwise.'

The study also found demographic trends: men, younger adults, higher earners, people with more education, and urban residents reported more frequent AI use.

Jacobson said it remains unclear why some people are more inclined to use AI, or are more negatively affected by it.

He admitted, 'We don’t know enough about this.'

Jacobson underscored the need for further investigation: 'I think we need more studies to really understand why it is those groups in particular are more likely to use this, certainly.'

Dr. Halpern advocated for future research on AI to specifically concentrate on its impacts on individuals' mental health.

She praised the current study for 'stretching our attention to look at the people we might not have been paying attention to.'

Mindful Engagement: A Path Forward

Dr. Perlis said his study should not be read as an alarm bell.

Instead, he encouraged people to critically assess their AI use and whether it genuinely benefits them.

He advised that people 'should be mindful when they’re interacting with a chatbot about how often they’re doing it, what they’re doing it instead of, and if they feel better or worse after they’ve had an extended interaction.'