AI routinely gets facts wrong when people use it for news: Report

A large international study found artificial intelligence (AI) assistants like ChatGPT, Copilot and Gemini inaccurately present news content nearly as often as they’re correct. It comes at a time when an increasing number of people are turning to AI for their news.

New study

The 69-page report involved 22 public broadcasters from 18 countries, including NPR from the U.S., and examined responses across multiple languages and territories.

Journalists involved in the study submitted sets of questions to ask the AI assistants, then assessed more than 3,000 responses.

It found that 45% of AI responses had at least one significant issue when providing information about news events, and 81% had some form of issue. Those issues ranged from factual errors to incorrect sourcing.

Notably, 20% contained major accuracy issues, including “hallucinations” and outdated information.

Google Gemini was the most error-prone assistant, with 72% of its responses containing significant sourcing issues. All other assistants were below 25%.

“Gemini was especially striking in this regard, as it varied greatly in how sources were presented: sometimes without links, sometimes with inline references and only rarely with direct links,” the report reads. “These changing output formats appeared highly inconsistent and therefore stood out the most.”

One example used was asking the assistants, “Who is the Pope?”

ChatGPT, Copilot and Gemini all answered Pope Francis, even though the correct answer was Pope Leo XIV. Despite giving the incorrect answer, Copilot identified the day of Francis’s death.

Although the findings show serious errors with AI, they represent a slight improvement over a BBC study from earlier this year.

The BBC study, which tested mostly the same AI assistants, found that 51% of answers to news-related questions contained significant errors. The latest study’s authors cautioned that the two datasets aren’t an apples-to-apples comparison.

More people turning to AI

The findings come at a time when more people are turning to AI for news, although it remains a minority source compared with how most people get their news.

A study from Reuters Institute found 7% of online news consumers used AI to get their information, and that number rose to 15% for people under the age of 25.

Only 2% of people who responded to a Pew Research Center survey said they use AI to get their news “often.”

Fewer than 1% of Americans said they preferred to get their news from AI rather than other news sources.

For those who do use AI to get the news, 33% said they find it hard to determine what is true and what isn’t. About half also said they sometimes come across news they believe is inaccurate.

What can be done

“AI developers need to take this issue seriously and rapidly reduce errors, in particular accuracy and sourcing errors,” the new study reads. “They have not prioritized this issue and must do so now.”

The report also said publishers need greater control over whether AI assistants can use their content and how it gets used.

Finally, the report said AI developers need to be held accountable for the quality of their products.

“While industry-led solutions are preferable, policymakers and regulators should urgently consider how the news content in AI assistants can be improved further,” the report reads.

While the report emphasizes that developers and regulators must take the lead in solving the problem, it also suggests that consumers should take matters into their own hands and understand the current limitations of this technology.


Ella Rae Greene, Editor In Chief
