Generational content
How artificial intelligence has become central to discussions about the future of journalism
Letter from the Editor
Hello reader!
I know what you are thinking: we are barely a few weeks into 2025 and The Stationer is already banging on about artificial intelligence — and quite right too, as a startling new report in The Daily Telegraph warns about the impact of the technology on jobs.
By analysing figures from the Office for National Statistics, the newspaper was able to reveal which sectors and salaries are growing fastest. But it also paints a grim picture of the areas that are in decline amid the Government’s economic policies, changing consumer preferences and (you guessed it) the advent of AI.
I am afraid to say it is bad news for those working in the content and communications industries. Take educational professionals, whose earnings fell by 10 per cent over the last 12 months because of funding cuts and recruitment problems. Within this field of work, AI — with its time-saving capabilities and enhanced automation — will likely result in the number of education administrators and bookkeepers being cut.
Then there are public relations and communications directors, whose wages are down 4.3 per cent on 2023 due to the rapid shift towards digital and social media. While PR still plays a vital role in shaping compelling narratives, Michael E. Donner, Chief Marketing Officer at Thrivelab, says many businesses are skipping traditional agencies altogether, using AI tools to ‘pitch directly to journalists and track brand sentiment with minimal overhead’.
However, if like me you are a journalist, look away now. The average salary for a news reporter fell by 23.3 per cent, with average pay for newspaper editors also suffering a 6.2 per cent cut. Dwindling print opportunities and declining advertising revenues are partly to blame. But AI is another beast my profession is grappling with. In fact, Axel Springer — the media conglomerate that owns Business Insider and Politico — has warned its staff that AI could steal jobs by aggregating information much better than humans.
Alice Collyer, who is studying Computational and Data Journalism at Cardiff University, explains in this month’s feature how AI has become central to discussions about the future of journalism. After reading her piece, ask yourself: what does your industry need to do to mitigate the potential negative impacts of AI? I would love to hear your thoughts.
As always, if you are interested in contributing to The Stationer, wish to share a job opportunity, or simply fancy having a chat, feel free to drop publisher Rob Wilding a line at robert.wilding96@gmail.com.
Enjoy the issue.
Bill Bowkett, Editor of The Stationer.
ARTIFICIAL intelligence has become central to any discussion about the future of journalism.
Sweeping proclamations about its potential to ‘revolutionise the workplace’ or ‘change the world’ mean it is impossible not to feel a bit of AI fatigue.
However, there is an interesting wrinkle: public opinion has failed to keep up with the pace of adoption.
According to a report published last year by The Reuters Institute, only a third (36 per cent) of Britons say they are comfortable with stories written by humans with the help of software such as ChatGPT.
And less than a fifth (19 per cent) are comfortable with news content generated primarily using AI.
Given that journalism is a business that relies on public appetite, AI’s varying applications deserve a closer look.
AI’s role in this regard can be broadly categorised three ways: as a production aid for processing data, as a means of transforming the presentation of existing material, or (most contentiously) as a tool for content generation.
Its ability to process large quantities of data makes it particularly useful for monotonous tasks which would otherwise require vast amounts of time. For example, a New York Times investigation programmed an AI tool to analyse satellite imagery of Gaza to search for bomb craters.
It found that 900kg bombs had been used in areas designated by the Israeli government as safe for civilians.
Similarly, Reuters applied AI to analyse the pace of dredging and landfill work by the six nations that contest all or parts of the South China Sea.
AI is also used to reformat and enhance existing content, creating new channels for audience engagement.
This includes generating audio recordings of articles, offering translation services or providing summary features.
To cater to its Chinese readership, The Globe and Mail — a Canadian newspaper — provides AI audio translation in Mandarin, in addition to French and English.
Meanwhile, Norway’s public broadcaster NRK has integrated AI to create article summaries designed to engage younger audiences.
Positioned just below the headline, these summaries appear in visible boxes which users can opt to expand.
Notably, the broadcaster found that readers tended to expand summaries for complicated articles, such as the outcome of negotiations over a new city government in Oslo, and that those who used the feature spent more time on the article page.
The most contentious use of AI is undoubtedly in the creation of new content — including text, images or videos.
This aspect of AI raises significant questions about authenticity, trust and human oversight.
Take, for instance, the German media outlet Kölner Stadt-Anzeiger Medien, which faced backlash for its portrayal of an AI journalist, ‘Klara’, as a blonde-haired, blue-eyed woman.
This was not an isolated controversy.
Meta’s AI-generated profiles and Mango’s digital fashion models also sparked criticism for their anthropomorphic representations.
While the specific criticisms varied, there is a recurring unease about how AI is ‘personified’, especially in ways which feel calculated or inauthentic.
Acceptance of AI-generated content depends heavily on the subject matter. Audiences tend to worry when it is applied to sensitive topics such as politics or conflict, but are more accepting when it comes to sports and entertainment.
Errors in these domains are seen as less consequential — and features like the personalisation of content or live updates often outweigh concerns.
Synthetic media, however, provokes the strongest reaction.
AI-generated visuals, including deepfakes, are extremely unpopular, even if disclosed.
This unease stems in part from the way synthetic media disrupts our instinctive reliance on images as ‘mental shortcuts’ when deciding what to trust, especially online.
Furthermore, the inherent flaws of AI can have serious implications for trust and credibility.
Apple recently announced it would update its AI software following a formal complaint from the BBC after the Big Tech giant’s news summary feature falsely alerted iPhone users that Luigi Mangione — the man accused of killing UnitedHealthcare chief executive Brian Thompson — had shot himself.
These missteps underscore a crucial point: there is no foolproof way to prevent AI from hallucinating (generating false or nonsensical information).
Coupled with public wariness, these challenges highlight that while AI’s presence in journalism is inevitable — and dare I say in some ways useful — it cannot replace human judgement and nuance.
Alice Collyer is an MSc Computational and Data Journalism candidate at Cardiff University. Follow her on LinkedIn.
Industry takeaways
Neil Gaiman has denied allegations that he engaged in ‘non-consensual sexual activity’ after New York Magazine published accounts of coercion, abuse and assault levelled against the Coraline and Good Omens author by eight women. A Tortoise Media podcast broadcast in July featured accounts from four of these women.
TikTok has restored services in the United States after newly-elected president Donald Trump gave the Chinese social media platform a 90-day reprieve on its nationwide ban over national security concerns. The Guardian has the details.
Rupert Murdoch’s News Group Newspapers, the publisher of The Sun, has agreed to pay ‘substantial damages’ and apologised to Prince Harry to settle a long-running legal battle over claims of unlawful intrusion into the Duke of Sussex’s life. Read the full story in BBC News.
In a packaging industry first, German supermarket chain Lidl will remove all designs deemed attractive to children from its least healthy own-brand products by October. Grocery Gazette has more.
Think kids nowadays are addicted to smartphones? The Japan Times says that American stationery nerds are fuelling a Japanese Techō notebook boom.
Opportunities board
Step Up Internship Scheme, Travel Media Awards (London) — Apply by January 31
Mentoring Programme, Creative Access & McLaren Racing (Woking) — Apply by February 7
Assistant Librarian, Collections and Content, V&A (London) — Apply by February 10
Audio Lab, BBC Sounds (Glasgow, Belfast, Sheffield, Salford and London) — Apply by February 11
Journalist Fellowship Programme, Reuters Institute (Oxford) — Apply by February 13
The Stationer is edited by Bill Bowkett. Please send thoughts, feedback and corrections to bill.bowkett@btinternet.com. Follow the Young Stationers on Facebook, X and Instagram. For more information, visit www.stationers.org/company/young-stationers.