OTTAWA–OpenAI’s pledges to strengthen security protocols are missing key details, including how they will be implemented, Canada’s minister in charge of artificial intelligence said.
Minister for Artificial Intelligence Evan Solomon is also demanding greater clarity on OpenAI’s operations, including how troubling interactions with the ChatGPT chatbot are escalated, and how privacy considerations are balanced with public safety.
Solomon issued a statement Friday, roughly 24 hours after OpenAI wrote to the minister and pledged to bolster safety protocols. OpenAI said that with the changes, it would have referred the account belonging to Jesse Van Rootselaar to police if it were discovered today. Police have identified Van Rootselaar as the suspect in a deadly school shooting in Tumbler Ridge, British Columbia, that left eight dead and dozens injured.
The Wall Street Journal has reported that OpenAI considered alerting Canadian law-enforcement authorities about interactions between Van Rootselaar and ChatGPT. OpenAI shut down Van Rootselaar’s ChatGPT account after detecting a violation of its policy but didn’t notify police.
Solomon said he’s also scheduled to speak to OpenAI Chief Executive Sam Altman next week. A spokesman for Solomon said the timing and location have yet to be determined.
Solomon is the second senior Canadian politician Altman has agreed to speak with. British Columbia Premier David Eby said he’s also set to speak to Altman, and Eby added he wants the CEO to be cognizant of the pain that families in Tumbler Ridge are feeling.
A spokesperson for OpenAI confirmed that Altman has scheduled meetings with both Messrs. Solomon and Eby.
Online platforms have long debated how to balance questions of privacy with public safety in their decisions to alert law enforcement about certain users. That debate has now pulled in the AI companies that power the chatbots to which people are confiding the most intimate details of their private thoughts and lives.
In its letter to Solomon, OpenAI divulged that Van Rootselaar had a second ChatGPT account. Among OpenAI’s pledges is a commitment to strengthen detection systems to prevent efforts to evade safeguards.
Taylor Owen, a public-policy professor at Montreal’s McGill University who specializes in media and ethics, said OpenAI has now publicly acknowledged its previous safety protocols were inadequate, and that Van Rootselaar’s ability to create a second account revealed a previously unknown flaw in the company’s systems.
“It means the threshold that governed the original decision, the one that resulted in Canadian police not being contacted about violent content flagged by the company’s own systems, was one the company itself now considers inadequate,” Owen said. OpenAI’s pledges, he added, follow a pattern among social-media platforms “where product safety changes come only after an incident forces them.”
Solomon said he will press Altman about ensuring the pledges are fully implemented and enforced.
“We will be seeking further clarity on how human review is conducted and whether Canadian context and best practices are appropriately embedded in those decisions,” the minister said.
Solomon added that he intends to meet with other major digital platforms in the coming weeks to advocate for a consistent approach to protecting youths and the public.
Canada Says OpenAI's Safety Pledges Lack Details — 2nd Update
By Paul Vieira
Write to Paul Vieira at paul.vieira@wsj.com
(END) Dow Jones Newswires
February 27, 2026 18:58 ET (23:58 GMT)
Copyright © 2026 Dow Jones & Company, Inc.