Europe – A recent study conducted by the European Broadcasting Union (EBU), in collaboration with the BBC, found that generative AI programs such as ChatGPT, Copilot, Gemini, and Perplexity distort news content in 45% of the cases in which they generate news information, regardless of language or geography.
According to the report, described as the largest of its kind in the field of media and artificial intelligence, the study included thousands of news samples generated by AI systems in more than 20 European countries and in 15 different languages.
Inaccurate results
The results indicated that paraphrasing, changing context, and adding inaccurate or fabricated information were the most common forms of distortion, raising concerns among media organizations about the growing spread of misinformation online.
The report explained that some models tended to exaggerate or offer analysis without a factual basis, undermining editorial accuracy, while only a small percentage adhered to journalistic neutrality and relied on credible sources.
Demands for legislative frameworks
The joint EBU-BBC research committee said these findings underscore the need for stricter legal and ethical frameworks to ensure the responsible use of AI in the media, particularly as its adoption expands in European and global newsrooms.
The report confirmed that the distortion is not language-specific: the same errors were observed in English, French, Arabic, and Spanish, indicating that the flaw is rooted in the models' operating mechanisms rather than in translation or linguistic output.
Experts warn that continued uncontrolled reliance on these technologies could lead to a decline in trust in both traditional journalism and digital content.