Stephen Hammes, Executive Vice Chair, Department of Medicine | University of Rochester Medical Center
Access to mental healthcare remains a considerable challenge in the United States: insurance coverage is patchy and mental health professionals are in short supply, resulting in long wait times and high costs. The advent of artificial intelligence (AI) offers a possible solution. AI mental health apps, ranging from mood trackers to chatbots that simulate human therapists, are becoming more common and could offer an affordable, accessible way to fill the gap. These advances, however, raise ethical concerns, particularly when the users are children.
Most AI mental health applications remain unregulated and are designed primarily for adults. Nonetheless, there is growing discussion about using them with younger people. Bryanna Moore, PhD, an assistant professor of Health Humanities and Bioethics at the University of Rochester Medical Center, emphasizes that ethical considerations must be part of these conversations.
Moore points to the potential impact of AI mental health chatbots on children's social development. "Evidence shows that children believe robots have 'moral standing and mental life,' which raises concerns that children, especially young ones, could become attached to chatbots at the expense of building healthy relationships with people," she said. Children's mental well-being is deeply tied to their social environments, and pediatric therapists deliberately involve family and social relationships in treatment so that care does not happen in isolation. AI chatbots lack access to this vital context and could miss essential opportunities to intervene when a child is at risk.
AI systems can also exacerbate existing health disparities. Jonathan Herington, PhD, an assistant professor in the departments of Philosophy and Health Humanities and Bioethics, noted, "AI is only as good as the data it's trained on. To build a system that works for everyone, you need to use data that represents everyone." He also pointed to economic inequities: "Children from lower-income families may be unable to afford human-to-human therapy and thus come to rely on these AI chatbots in place of human-to-human therapy. AI chatbots may become valuable tools, but should never replace human therapy."
Most AI therapy chatbots currently operate without regulatory oversight. The U.S. Food and Drug Administration has approved only one AI-based mental health app, for treating major depression in adults, leaving a regulatory gap that could allow misuse, inadequate reporting, and inequities in training data and user access. Moore stated, "There are so many open questions that haven't been answered or clearly articulated. We're not advocating for this technology to be nixed. We're not saying get rid of AI or therapy bots. We're saying we need to be thoughtful in how we use them, particularly when it comes to a population like children and their mental health."
The commentary by Moore and Herington was written in collaboration with Şerife Tekin, PhD, an associate professor in the Center for Bioethics and Humanities at SUNY Upstate Medical, whose research examines the intersection of psychiatry, cognitive science, and the bioethics of using AI in medicine.