AI has made waves across a range of industries in recent years, and journalism has been no exception. But how are newsrooms planning to incorporate generative AI tools in 2025 and beyond?
To find out, AI transcription software platform Trint surveyed producers, editors and correspondents from 29 newsrooms worldwide about how they plan to integrate AI into their workflows. Here are five key takeaways and predictions:
Newsrooms want to improve efficiency, not find new capabilities
According to the survey, newsroom executives want to leverage AI to help them do more with less, rather than radically change how they work or seek ways to monetise AI directly. Making efficiency gains was a key driver behind adopting generative AI for 69% of respondents. Over half (55%) also reported that staying ahead of their competitors had been a factor in adopting Gen AI.
While there is strong interest in speeding up workflows, there is little indication that newsrooms expect Gen AI to directly improve audience engagement or generate revenue in 2025. For all the excitement about what Gen AI can deliver, newsrooms are focused on the bottom line, not the top line.
Gen AI will take care of menial tasks, freeing up time for creativity
The survey revealed that newsrooms are more interested in using Gen AI for mundane, menial tasks than for creative work. Key activities such as transcription, translation, and data gathering and analysis will be automated, enabling journalists to focus on higher-value tasks. AI transcription will be especially popular, with 89% of respondents reporting that they plan to let AI take care of the heavy lifting in this area over the next 1–3 years.
However, newsrooms will remain cautious when it comes to using AI for creative endeavours, such as writing articles or video production. Concerns over potential inaccuracies, hallucinations and a lack of originality are expected to keep such tasks in skilled human hands.
For better or worse, shadow AI is on the rise
One of the more surprising findings from the study is the extent to which shadow AI, where staff use AI tools they've purchased personally without company approval, has become a pervasive issue. More than two in five respondents (42.3%) admitted to using AI tools not sanctioned by their organisation in their everyday work. With potential risks around security, data privacy and regulatory compliance, newsrooms will need to collaborate with staff to find safe AI solutions.
Responsible AI use will be promoted through employee education and company-wide policies
While AI presents plenty of opportunities, it is not without risks, and newsrooms will have to move quickly to mitigate them in 2025. Inaccurate outputs were the top concern around deploying generative AI, cited by 72% of the newsrooms surveyed, followed by reputational damage to their journalism (55%). Data privacy was also a serious concern for 45% of respondents.
To mitigate the risks associated with Gen AI, 64% of newsrooms said they plan to prioritise employee education on AI usage, and 57% want to implement company-wide policies on responsible AI use. Clearly, keeping humans in the loop for verification and training will be a major part of newsrooms' mitigation tactics over the next year.
Despite this focus on proactive security measures, the survey did reveal one notable weak point: newsrooms are less inclined to scrutinise their AI vendors, with just 18% saying vendor vetting would form part of their strategy.
Newsrooms are split on building vs buying AI solutions
When it comes to integrating Gen AI, newsrooms are evenly split on whether to build their own AI solutions in-house or buy off-the-shelf products from vendors. Newsrooms with more bespoke needs and sufficient technical resources are likely to build their own capabilities, ensuring greater control over data and customisation.
However, for many organisations lacking in-house technical expertise, purchasing ready-made AI solutions will be a more viable option. The key for newsrooms in 2025 will be understanding their internal resources, budgets and technical capabilities before making a decision.
Commenting on the survey findings, Tessa Kaday, Director of Product at Trint, said:
“One surprising insight from our survey was just how widespread shadow AI has become. Many newsroom employees are experimenting with AI tools on their own, even going so far as paying for access themselves. While it’s great to see that the workforce is eager to innovate, this kind of unsupervised AI use raises significant security, data privacy and regulatory concerns. Moving forward, organisations will need to move quickly to ensure AI is rolled out safely, preventing employees from resorting to potentially unsafe, unsanctioned tools.
“While AI is shaking things up across the industry, it’s reassuring to see that core values like trustworthy reporting and journalistic integrity remain unchanged in many newsrooms. The newsrooms we surveyed want to leverage AI to handle more mundane tasks — like transcription, translation, and data crunching — so their talented teams can focus on more creative and impactful work.
“The survey also uncovered some security blind spots newsrooms might want to address. Interestingly, only 18% of the newsrooms surveyed said they would thoroughly vet potential tech vendors. While it can be tough for decision-makers to get to grips with complex algorithms and data privacy protocols, it’s crucial to properly investigate the security implications before adopting a new tool. We strongly encourage newsrooms to dig deep into these considerations when evaluating new products.”