
Using AI for mental health support requires caution and regulation: McGill professors   

Experts are calling for stronger safeguards and greater caution around teenagers’ use of AI following the Tumbler Ridge shooting in B.C.

The subject has been under increased scrutiny since the Wall Street Journal reported Friday that shooter Jesse Van Rootselaar’s account was banned last June after it was flagged for troubling posts, including some depicting scenarios of gun violence.

OpenAI said it did not consider Van Rootselaar’s posts to meet the threshold for informing the police, although its employees had considered alerting the authorities.

On Tuesday, federal Artificial Intelligence Minister Evan Solomon brought OpenAI executives to Ottawa to discuss safety measures.   

Vincent Paquin, an assistant professor of psychology at McGill University, says using AI for mental health support has its advantages and problems.  

Given how much easier chatbots are to access than clinical psychologists, Paquin worries that people might act on unvalidated information provided by AI chatbots.

“If a young person is in distress and chooses to turn to AI rather than seek professional help, then they will not receive the optimal intervention for their needs,” said Paquin.   

“Using AI instead might delay their access to proper care and proper interventions.”


OpenAI promised to strengthen its safeguards for using ChatGPT for mental health support following a lawsuit by the parents of a California teenager who died by suicide in 2025 after ChatGPT helped him plan his death and write a note.  

A 2025 survey by Mental Health Research Canada found that young adults were using AI nearly six times more than older adults for mental health support.   

Research from Cornell University suggests AI tends to reinforce users’ delusions.   

“There’s still more work to do to educate young people about the limitations of AI and the ways in which it is designed to always provide validation rather than challenge your beliefs,” said Paquin.

Paquin says more regulation is needed to hold AI developers accountable.  

He suggests that companies should report annually on safety incidents, including data on instances of suicidal ideation and violence. Such reports, he said, could help regulators assess an AI product’s safety.

“Do we want to allow freely accessible products like ChatGPT and [Google’s] Gemini to be able to provide mental health advice, or do we want to restrict those functionalities in the same way that any medication is?”

Paquin encouraged teenagers to be curious about generative AI if they continue using it.  

“I also want to invite some people to remain critical about the risks of using AI for mental health support.”   

Jennifer Raso, an assistant professor in McGill’s Faculty of Law, says it is hard to determine what regulations are needed without knowing which of the shooter’s interactions with ChatGPT led OpenAI to ban her.


She says that when OpenAI flags users who appear mentally vulnerable or who describe violent plans, the practice can lead to overly broad targeting of users.

“There’s a risk of potentially over-policing people’s usage of these types of tools,” said Raso, though she agreed it was troubling that OpenAI’s concerns were not brought to the authorities.

Raso says standards that examine whether AI products are safe to enter the Canadian market should have been in place before the shooting, similar to vehicle safety standards.   

“It breaks my heart that this is the type of situation that needs to happen for these questions to get on the radar of Minister Solomon and the federal government,” said Raso.   

—With files from The Canadian Press