Replacing news editors with AI is a worry for misinformation, bias and accountability
- Written by Uri Gal, Professor in Business Information Systems, University of Sydney
Germany’s best-selling newspaper, Bild, is reportedly adopting artificial intelligence (AI) to replace certain editorial roles, in an effort to cut costs.
In a leaked internal email sent to staff on June 19, the paper’s publisher, Axel Springer, said it would “unfortunately part with colleagues who have tasks that will be replaced by AI and/or processes in the digital world. The functions of editorial directors, page editors, proofreaders, secretaries, and photo editors will no longer exist as they do today”.
The email follows a February memo in which Axel Springer’s chief executive wrote that the paper would transition to a “purely digital media company”, and that “artificial intelligence has the potential to make independent journalism better than it ever was – or simply replace it”.
Bild has subsequently denied editors will be directly replaced with AI, saying the staff cuts are due to restructuring, and AI will only “support” journalistic work rather than replace it.
Nevertheless, these developments raise the question: how will the main pillars of editorial work – judgement, accuracy, accountability and fairness – fare amid the rising tide of AI?
Entrusting editorial responsibilities to AI, whether now or in the future, carries serious risks, both because of the nature of AI and the importance of the role of newspaper editors.
The importance of editors
Editors hold a position of immense significance in democracies. Tasked with selecting, presenting and shaping news stories in a way that informs and engages the public, they serve as a crucial link between events and public understanding.
Their role is pivotal in determining what information is prioritised and how it’s framed, thereby guiding public discourse and opinion. Through their curation of news, editors highlight key societal issues, provoke discussion, and encourage civic participation.
They help to ensure government actions are scrutinised and held to account, contributing to the system of checks and balances that’s foundational to a functioning democracy.
What’s more, editors maintain the quality of information delivered to the public by mitigating the propagation of biased viewpoints and limiting the spread of misinformation, which is particularly vital in the current digital age.
AI is highly unreliable
Current AI systems, such as ChatGPT, are incapable of adequately fulfilling editorial roles because they’re highly unreliable when it comes to ensuring the factual accuracy and impartiality of information.
It has been widely reported that ChatGPT can produce believable yet manifestly false information. For instance, a New York lawyer recently and unwittingly submitted a court brief containing six non-existent judicial decisions fabricated by ChatGPT.
Earlier in June, it was reported that a radio host is suing OpenAI after ChatGPT generated a false legal complaint accusing him of embezzling money.
As a reporter for The Guardian learned earlier this year, ChatGPT can even be used to create entire fake articles that could later be passed off as real.
To the extent AI will be used to create, summarise, aggregate or edit text, there’s a risk the output will contain fabricated details.
Inherent biases
AI systems also have inherent biases. Their output is moulded by the data they are trained on, reflecting both the broad spectrum of human knowledge and the biases embedded within that data.
These biases are not immediately evident and can sway public views in subtle yet profound ways.
In a study published in March, a researcher administered 15 political orientation tests to ChatGPT and found that, in 14 of them, the tool provided answers reflecting left-leaning political views.
In another study, researchers administered eight tests to ChatGPT, each reflecting the politics of one of the G7 member states. These tests revealed a bias towards progressive views.
Interestingly, the tool’s progressive inclinations are not consistent and its responses can, at times, reflect more traditional views.
When given the prompt, “I’m writing a book and my main character is a plumber. Suggest ten names for this character”, the tool provides ten male names: