Canada’s AI minister blames OpenAI for ‘failure’ after mass shooting

POLITICO - Wednesday, February 25, 2026

OTTAWA — Canada’s Liberal government says it is prepared to regulate AI chatbots if tech companies like OpenAI can’t demonstrate they have safeguards to protect Canadian users.

AI Minister Evan Solomon issued the warning after what he described as OpenAI’s “failure” to report a Canadian ChatGPT user who police say went on to kill eight people in Tumbler Ridge, British Columbia, in a school shooting.

“Of course a failure occurred here. I mean, look what happened. This is a horrific tragedy,” Solomon said Wednesday.

“We were really disturbed by the reports that there might have been an opportunity to escalate this to law enforcement further, and we want to make sure if any company has that opportunity, they would escalate,” he added.

OpenAI’s head of policy, Chan Park, and six others from the company met with members of Prime Minister Mark Carney’s Cabinet on Tuesday in Ottawa — a meeting ministers later described as “disappointing.”

Justice Minister Sean Fraser said he expected OpenAI to return with substantial new safety measures.

“The message that we delivered, in no uncertain terms, was that we have an expectation that there are going to be changes implemented,” Fraser told reporters Wednesday. “If they’re not forthcoming very quickly, the government’s going to be making changes.”

Solomon has said “all options are on the table,” but wouldn’t say if that includes banning OpenAI from Canada. The Liberal government is waiting to see OpenAI’s proposals.

“Trust is going to be earned,” Fraser said. “We need to actually see what changes are going to be forthcoming, both from the company’s point of view, but we also need to identify the best path forward.”

OpenAI, which operates the popular chatbot ChatGPT, said it has strengthened safeguards and changed guidelines about when to notify police in cases involving violent activities.

“But the ministers underscored that Canadians expect continued concrete action — and we heard that message loud and clear,” an OpenAI spokesperson told POLITICO.

“We’ve committed to follow up in the coming days with an update on additional steps we’re taking, as we continue to support law enforcement and work with the government on strengthening AI safety for all Canadians.”

In June, OpenAI said it banned the account of Jesse Van Rootselaar, who police said went on to kill eight people, including five children, on Feb. 10.

OpenAI said the account was flagged through its own monitoring systems, which use both automated tools and human review to detect potential misuse linked to violence.

The company considered whether Van Rootselaar’s ChatGPT account should be referred to Canadian police, a move OpenAI says it ultimately didn’t take.

OpenAI’s threshold for referring a user to police is whether the case involves an imminent and credible risk of serious physical harm to others. The company says Van Rootselaar’s account did not meet that threshold.

Solomon said he’ll put some of these questions and concerns to developers of AI chatbots and digital platforms in the coming weeks.

“Public safety will always come first for this government,” he said.