Gemini 3 and other chatbots scrutinized as unreliable news gatherers

Artificial intelligence tools like Google’s Gemini, Microsoft’s Copilot, and OpenAI’s new AI-powered browser, Atlas, are quickly becoming the primary way people search for information. But Jeffrey Blevins, a professor at the University of Cincinnati and a faculty fellow of the Center for Cyber Strategy and Policy, says consumers should be cautious when using AI to gather news. The warning comes as Google rolls out Gemini 3, the latest major update to its AI model.

Blevins told Straight Arrow News that even calling these systems “intelligent” can be misleading. 

“I’ve never been comfortable with the term ‘intelligence,’” he said. “‘Artificial’ I’m good with, and to me, ‘algorithm’ is just a much better moniker for that.”

First step, not final source

He said AI can be a helpful starting point when researching news or politics, but it is far from reliable as a sole source of information. 

“We should absolutely not be relying on it. At best, it’s a first step,” Blevins said. “Then, I need to be willing to take the next action steps and go to different sources to verify that.”

Blevins pointed to recent high-profile AI errors in newsrooms, including an AI-generated summer reading list published by the Chicago Sun-Times that included mismatched authors and several nonexistent books. 

“Sometimes the titles weren’t perfect, and sometimes they were books or authors that didn’t exist at all,” he said. “So, that’s a pretty big miss.”

Beyond factual mistakes, Blevins warns that AI platforms are designed to maximize engagement. 

“There’s a commercial interest here, and that is to keep you engaged,” he said.

AI use expands

Recent data shows just how widespread AI use has become. A Pew Research Center survey found that 62% of Americans interact with artificial intelligence at least several times a week, highlighting how these tools are quickly becoming part of daily life. 

Meanwhile, an international study examining AI-generated responses to news prompts found that 45% contained at least one significant issue, and 81% had some form of inaccuracy. Researchers said the findings underscore concerns about relying on AI systems for timely or factual news. 

As AI tools continue to expand, Blevins says the responsibility falls on consumers to verify information through trusted, established sources rather than the fastest or most convenient ones.


Ella Rae Greene, Editor In Chief
