Canadian officials grill OpenAI after it bans, but doesn’t report BC mass shooter 


Revelations that an artificial intelligence company knew months in advance about plans for a mass shooting in Canada and chose not to report them to authorities have created a conflict between safety and privacy in a burgeoning technology.

Following a mass shooting in Canada that left eight people dead and dozens wounded, Canadian officials want to know what AI companies knew about the shooter. Jesse Van Rootselaar, 18, killed two members of her family before killing six people at a school, including five children.

Last year, the shooter was banned from using OpenAI’s ChatGPT after the chatbot received concerning messages. OpenAI did not specify what those messages were and chose not to relay its findings to law enforcement.

Canada probe

Minister of AI Evan Solomon and other Canadian officials met with OpenAI leaders earlier this week — a meeting the Canadians would later call “disappointing,” according to Politico.

Members of Canadian Prime Minister Mark Carney’s cabinet hoped to learn more about why law enforcement wasn’t notified about the shooter’s account.

“I’m not surprised by that at all,” Laura Huey, professor of sociology at the University of Western Ontario, told Straight Arrow News. “That would make perfect sense. The reality, though, is where we’re at with artificial intelligence, we needed to be having discussions with tech companies like years ago. We are almost always behind the curve when it comes to new technologies, and AI is just one classic example of that.”

Canadian leaders also reportedly wanted better explanations on safety protocols and when information is shared with law enforcement.

“There’s only so much that can ever come out of a meeting like that,” Emily Laidlaw, Canada Research Chair in cybersecurity law and associate law professor at the University of Calgary, told SAN.

Solomon met with OpenAI’s head of policy and six others from the company, but the Canadian officials were reportedly not happy with what they heard.

When asked by a CBC News reporter if he heard anything troubling in the meeting, one federal minister simply replied, “Yes.”

Mandatory reporting

“The government has an important role to play,” Laidlaw said. “They have decisions to make about what laws they want to pass and if they want to implement any sort of mandatory reporting requirements, and what that would look like.”

In Canada, mandatory reporting means certain professionals, like psychiatrists and clergy members, must alert law enforcement to child abuse and neglect.

It does not apply to AI companies.

“They can create their own internal policies about how to handle things and decide when and where they want to share information with law enforcement,” Huey said.

Without specific guidelines in place, Laidlaw said the decision is up to the companies.

“It’s really left to them to figure out,” she said.

Huey added that law enforcement typically needs to approach AI or social media companies first, rather than the other way around. The shooter also had a history of concerning behavior on social media.

“Police get notification that there’s some interesting information that’s out on social media platforms, on an AI platform, etc., and they would then approach the company and ask for access,” Huey said. “And the company might turn around and require them to go before a judge to get a production order to get access to that information. That happens a lot.”

In recent years, some Canadian lawmakers have tried to alter those laws, including a 2024 measure that would have changed who is subject to mandatory reporting requirements regarding child pornography.

The bill would have reformed mandatory reporting, extending the laws to all types of internet services and expanding reporting obligations. Those changes, however, still focused mostly on forms of child abuse.

“Canada is just as polarized in many ways as the U.S. right now,” Laidlaw said. “And so, when a bill was introduced, there were talks of censorship, there were important criticisms of the bill itself, it just kind of got swept up into that polarized debate.”

When the Canadian parliament recessed last month, the bill officially died.

“There’s plans to reintroduce the bill in some form,” Laidlaw said. “So, I would expect that, both, chatbots will be scoped in which they were not before and that mandatory reporting of some sort might be added.”

AI policy and regulation

With OpenAI able to set its own policy on the matter, the company told Politico it considered reporting the shooter’s behavior to police but ultimately opted not to.

“We’ve committed to follow up in the coming days with an update on additional steps we’re taking, as we continue to support law enforcement and work with the government on strengthening AI safety for all Canadians,” the company told Politico.

Right now, there are few laws on the books regulating AI companies in Canada compared to the U.S.

“We haven’t really tackled this issue, and we don’t have the same litigious culture here,” Huey said. “So, there isn’t going to be as much case law.”

If Canada does pass stricter regulations, they could be hard to enforce, especially because OpenAI is an American company.

“There was a Supreme Court of Canada judgment that required Google to delist worldwide some search results, and a lower court in California held that it was unenforceable based on Section 230, so that challenge about enforceability is very real,” Laidlaw said.

Social media companies have used Section 230 for decades, including in an ongoing trial against Meta, to skirt liability for content posted on their platforms. So far, there is no real precedent for how it applies to AI providers.

Canada experiences significantly fewer mass shootings per year than the U.S. This latest one shook much of the country, and Laidlaw said the new information about the shooter’s AI history could be a driver behind new regulations.

“This might be a bit of a sea change with what happened,” she said. “Because it’s such a horrific tragedy that it just sheds this light on the role of these companies and made the public acutely aware of their decision-making power and that so much rests on their shoulders.”

Huey isn’t so sure.

“Who’s going to stand behind and champion increased police access to information about your personal stuff on the internet if there aren’t as many groups that are willing to speak up, and the public is not as sympathetic to that, even in situations where you have a significant mass shooting?” she asked.

Ella Rae Greene, Editor In Chief
