Machines make mistakes, but with authority: a generation learns to trust oracles that lie half the time
The mask has fallen. The new harbingers of truth, the artificial intelligence assistants sold to the public as revolutionary information tools, are fundamentally “untrustworthy.” This is not Luddite suspicion or fear; it is the unequivocal conclusion of a massive study conducted by the European Broadcasting Union (EBU), the world’s largest alliance of public service media. The report, a robust collaboration that included public broadcasting giants such as the BBC, Radio France and Deutsche Welle, exposes the systemic negligence of the corporations that control the flow of digital information.
The study was not superficial. It involved 22 public service media outlets from 18 countries, which methodically put the same 30 questions about news and current affairs to the free versions of Silicon Valley’s most ubiquitous tools: OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity.
The result is a certificate of informational bankruptcy. Overall, 45% of all responses generated by these AIs had “at least one significant problem.” Almost half the time, the information provided by these technology monopolies is significantly flawed.
What is even more alarming is the nature of these errors. The study found that one in five responses “contained serious accuracy issues.” We’re not talking about minor typos, but “hallucinatory details and outdated information.” In short, machines are lying, inventing events, confusing parodies with facts, and getting crucial dates wrong.
The anatomy of this failure reveals a crisis of corporate responsibility. The largest category of problems, accounting for 31% of cases, was sourcing: missing, misleading, or simply incorrect attributions. In a post-truth world, where the provenance of information is the only anchor we have, Big Tech has decided to let go of it. Next come accuracy problems (20%) and missing context (14%).
These tools are not just getting it wrong; they are actively polluting the information ecosystem with fabricated data, presented with unquestionable algorithmic authority.
The performance of Google’s Gemini, the product of one of the richest and most powerful companies in human history, was particularly disastrous. The study notes that Gemini had the “worst performance, with significant problems in 76% of responses.” Three-quarters of the time, the tool failed, “largely due to its poor sourcing performance.”
One prominent example is as absurd as it is dangerous. Radio France asked Gemini about an alleged Nazi salute by Elon Musk. The chatbot responded that the billionaire had “an erection on his right arm.” Google’s AI apparently consumed a satirical radio show and regurgitated it as literal fact.
Worse still was what came next: Gemini cited Radio France itself and Wikipedia as sources for this grotesquely false information, without providing any link. As the Radio France reviewer wrote, “the chatbot transmits false information using the name of Radio France, without mentioning that this information comes from a humorous source.” Here the corporate machine is not just getting it wrong; it is actively defaming a public media institution and undermining its credibility by using it as a shield for its own hallucination.
The incompetence does not stop there. Multiple media outlets, including Finland’s YLE and the Dutch NOS and NPO, asked ChatGPT, Copilot and Gemini “Who is the Pope?” The answers said “Francis.” Yet at the time of the study, Pope Francis had already died and been succeeded by Leo XIV. These platforms, with access to an unprecedented volume of data, could not even keep up with one of the most heavily reported global events.
Fast-moving news stories and direct quotes have also proved insurmountable obstacles, with the AIs often inventing or modifying statements. A BBC evaluator summed up the central ethical problem perfectly: “Like all summaries, the AI fails to answer the question with a simple and precise ‘we don’t know.’ It tries to fill the gap with explanations rather than doing what a good journalist would do, which is explain the limits of what we know to be true.”
Public journalism is built on checking and admitting limits. Corporate AI is built on the presumption of authority and filling in gaps with falsehoods.
This is not a technical problem; it is a democratic problem.
According to a Reuters Institute report, 15% of people under the age of 25 already use these faulty tools weekly to obtain news summaries. An entire generation is being taught to trust oracles that lie half the time.
Jean Philip De Tender, deputy director general of the EBU, got straight to the point: “AI assistants are not yet a reliable way to access and consume news.” He emphasizes that the failures are not “isolated incidents” but rather “systemic, cross-border and multilingual.”
De Tender’s conclusion should serve as a fire alarm for all free societies: “We believe this puts public trust at risk. When people don’t know who to trust, they end up trusting nothing, and this can inhibit democratic participation.”
The EBU study is not just a technical report; it is a formal indictment. Tech giants, in their unbridled quest for market dominance, have released unfinished and dangerous products that are actively eroding the foundation of shared reality. While public service media struggle to maintain standards of accuracy and context, Silicon Valley monopolies flood the world with convenient, hallucinated, unsourced misinformation. Public trust is being sacrificed on the altar of corporate innovation, and democratic participation itself is the collateral victim.
Source: https://www.ocafezinho.com/2025/11/09/um-estudo-europeu-desmonta-o-mito-da-precisao-artificial/