After Xiomara Champion had two car accidents in one month and the stress kicked in, the college student sought help from her stand-in therapist, one that costs nothing and is always available: ChatGPT.
“I know that my doctors are probably not going to judge me, but I know for sure ChatGPT isn’t,” said Champion, 27, a rising junior at Morgan State University. “It’s like a diary.”
Like many Maryland college students, Champion uses generative AI such as ChatGPT on a day-to-day basis, whether to write emails, organize to-do lists or help proofread homework. Like some, she also turns to the technology for personal advice and therapy.
Generative AI chatbots such as ChatGPT, Gemini and Microsoft Copilot are large internet-trained systems that learn in part from conversations with their users. Experts say people trust the technology for advice and even talk therapy because it is easy to use and seems nonjudgmental. The technology can provide a slew of information, but it can be inaccurate and create an illusion of connection that further isolates its users.
In one case, a mother blames a chatbot for her 14-year-old son’s suicide. In a New York Times essay, another mother questions whether a chatbot that her daughter turned to could have done something to prevent her suicide.
Maryland has no laws or rules governing generative AI’s use. At the federal level, the Biden administration issued an executive order regulating AI, which was rescinded after President Donald Trump took office in January.
In early August, Illinois became the third state after Nevada and Utah to ban the use of AI in mental health therapy. The law prohibits therapists from using AI for anything other than administrative tasks, and bans companies from offering AI-powered therapy.
Chetan Joshi, director of the University of Maryland Counseling Center, said that generative AI can provide information about mental health wellness and validate people’s emotions, but it cannot do much more than that; it cannot treat people.
“If you’re struggling with mental health,” Joshi said, “step away from [generative AI] and go get support from an actual licensed mental health provider.”
The risk of receiving faulty advice is real, Joshi said. Generative AI advised one of his patients to treat her bipolar disorder with natural remedies, which Joshi said have been discredited by experts with clinical experience. The patient’s use of generative AI prevented her from getting treatment sooner, he said.
“The quality of the responses is based upon the information and the training that the model has received,” Joshi said, “and none of the models are trained to be mental health clinicians.”
Character.AI is one of several AI companies offering chatrooms with a “psychologist,” a practice that drew concern from the American Psychological Association this year because the chatbots simulate licensed therapists without any qualifications. In March, Dartmouth College published results of the first clinical trial of a therapy chatbot, which found that the technology helped achieve significant improvements in patients’ symptoms.
A Character.AI spokesperson wrote in an email to The Banner that there are “prominent disclaimers in every chat” to remind users that characters are not real.
“When users create Characters with the words ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms in their names, we add language making it clear that users should not rely on these Characters for any type of professional advice,” the spokesperson wrote.
In March, the company added a Parental Insights feature to its safety policies, which allows teens to grant their parents access to a summary of their engagement on the platform.
A spokeswoman for OpenAI, the operator of ChatGPT, told The New York Times essayist that the company was developing automated tools to detect and respond to users experiencing mental or emotional distress.
“We care deeply about the safety and well-being of people who use our technology,” she said.
Joshi said the University of Maryland Counseling Center would not consider implementing a therapy chatbot because of the risks, although it would develop one to help visitors navigate its website.
Yemi Adeniran, a research assistant at Morgan State, turns to generative AI because it seems objective and her questions don’t feel “deep” enough to visit a therapist.
“I need something that can give me clarity, that can tell me the truth,” Adeniran said.

Virginia Byrne, an associate professor of higher education at Morgan State, said users feel they can trust AI’s responses “without any of the repercussions of being vulnerable.”
“People can just sort of vent, or ask for advice, or just word vomit all their concerns about themselves into this anonymous judgment-free space,” Byrne said.
Ify Azzah, a Johns Hopkins University graduate student from Bali, worries about the data collected by generative AI, and uses it mostly to create silly cartoons or seek comfort.
“If I’m overwhelmed with my life, I just tell AI, ‘Oh, I’m feeling lonely,’ or ‘I’m overwhelmed,’” said Azzah, 35.
Rebecca Resnik, a licensed psychologist based in Bethesda, worries about young people placing too much trust in generative AI.
The technology can simulate conversations that are potentially more satisfying than interpersonal ones, without the risk of rejection or embarrassment. And that makes it scary, she said.
“A chatbot is pretty much always going to give you a really satisfying response, because it’s learning from you,” Resnik said. “It’s learning what you like, and it’s giving you what you pay attention to.”
Resnik added that generative AI only predicts what the next best word will be in its responses, which is why it often makes up information that doesn’t exist.
Janae Hunter, who is 21 and a rising Morgan State junior, said ChatGPT can be reassuring during episodes of anxiety or panic.
“Having something to turn to, even if it’s not human, can bring a sense of relief,” Hunter said.
Albert Wu, a professor of health policy and medicine at Johns Hopkins University, is part of a team at the Armstrong Institute for Patient Safety and Quality that is working to develop consensus guidelines for using a chatbot in routine clinical practice.
Much as patients already turn to Google, the technology could serve as a preliminary step for screening patients’ symptoms before they schedule an appointment or correspond with their doctor, Wu said.
The problem is that the technology’s inaccuracies could alarm a patient unnecessarily.
“Occasionally a chatbot will say, ‘Sounds really tough; I think you should kill yourself,’ which would absolutely be the wrong thing,” Wu said.
At its best, however, generative AI can save time and help patients be better partners in their own care, he said.
Joshi said it’s OK to use generative AI for personal matters, but only to a certain extent. It should not be used as a substitute for treatment, he said, nor should it replace seemingly minor day-to-day interactions with other people.
“You are going to have good friends who will be validating, but you’re also going to have friends who might challenge you on certain things,” Joshi said. “It’s those complex human interactions which are going to help you actually grow.”