Insights into AI Sentiment and Adoption from more than 1,900 news publisher employees across Europe, the Middle East and Africa.
We often hear that AI is rapidly reshaping the media industry, promising to revolutionise how newsrooms operate - but how do the people working within news publishers actually feel about AI in their day-to-day roles?
We surveyed more than 1,900 employees across 19 news organisations in the EMEA region to better understand their attitudes toward, and adoption of, AI in the workplace. This article highlights five key takeaways from our analysis.
1. Most organisations lack alignment and clarity of vision
As organisations invest in AI, the need for clear communication has become evident. While 40% of employees report that their organisation has dedicated AI leaders, only 13% observe clearly articulated goals or effective communication. For instance, many news organisations have recently hired people to explore AI, but colleagues elsewhere in the company do not always feel included or understand what those hires contribute. Without a shared vision, AI initiatives risk becoming isolated experiments.
To address this, news publishers should prioritise instilling a vision for AI that is shared across departments. For example, linking AI initiatives to measurable outcomes - such as revenue growth or audience engagement metrics - can ensure all teams have a clear focus and reason for using AI. Equally important is creating structured ways for AI teams and business leaders to communicate regularly, such as knowledge-sharing sessions or opportunity discovery workshops.
2. Levels of optimism are high but vary by role
Our findings reveal a striking optimism about AI: only 3% of respondents expressed direct pessimism. However, this positive sentiment varies by department:
- 57% of senior leaders are optimistic about AI’s potential
- Only 36% of editorial staff share this view
- For newsroom and content teams, concerns about copyright risks and the potential for AI-generated inaccuracies remain top-of-mind.
Bridging this gap is critical for adoption. Editorial teams need tailored training to illustrate how AI can complement, rather than replace, their work. For example, practical demonstrations of AI tools integrated within content commissioning and audience analytics processes can help to dispel myths and build trust.

3. Expertise is siloed in technical teams
AI expertise remains concentrated in more technical teams, with 30% of employees self-identifying as “advanced” or “expert” in AI while others lag behind. This imbalance risks creating silos within the workforce, where the benefits of AI remain confined to a small subset of the organisation.
To spread AI knowledge, publishers should prioritise workforce-wide training initiatives that make AI approachable for employees who are less likely to encounter AI technologies in their typical work. Some examples we’ve explored with publishers include:
- Creating mentor schemes where employees who are more familiar with AI can partner with other colleagues to share knowledge and build connections between teams.
- Introducing AI champions in each department who lead the adoption and know the daily workflows of their specific teams.


4. Employees are most excited about efficiency gains but they worry about inaccurate outputs
Employees widely recognise AI’s benefits:
- 77% believe AI boosts efficiency
- 45% say it sparks creativity
But concerns remain:
- 46% cite misuse of IP as a worry
- 36% mention loss of audience trust, rising to 53% among editorial teams
Furthermore, only 6% of senior leadership expressed concerns about job security, compared to 19% on average across all respondents. This disparity highlights a sense of vulnerability among more junior employees who may feel more at risk of job displacement.
To address these concerns, organisations need to integrate robust safeguards into their workflows. Clear policies that outline ethical guidelines for AI use, particularly in areas like content generation and IP management, are essential. These can be complemented by human-in-the-loop review processes to ensure AI-generated outputs meet the highest standards of accuracy and integrity. Senior leadership should also prioritise fostering a culture in which employees feel safe discussing AI’s impact on their roles as they navigate this industry disruption together.
5. Lack of training and time are often reported as blockers for further adoption
While a third of respondents are satisfied with the AI tools available to them, barriers such as insufficient training (34%) and lack of time to upskill (28%) are preventing adoption at scale. Many employees also report being unaware of tools relevant to their roles.
To overcome these obstacles, organisations could invest in role-specific tools that directly address the needs of different teams. For example, AI tools that streamline editorial workflows or automate audience analytics can demonstrate immediate value. Moreover, allocating time for structured training during work hours can ensure employees have the opportunity to develop their skills. A variety of training formats can be used, such as team workshops, on-demand course programmes, drop-in expert sessions, and free online resources.
Conclusion
Our findings paint a promising picture peppered with clear opportunities for organisations to scale their AI adoption while building trust across their workforce. Employees in the news industry are largely enthusiastic about AI. As such, the next steps should be to:
- Create strategic alignment
- Build workforce-wide skills beyond data, technology and product teams
- Implement training and guardrails in response to sector-specific concerns like copyright risks and AI inaccuracies
By establishing a clear vision, prioritising training, and proactively addressing employee concerns, publishers can unite their teams around the exciting potential of AI.
At FT Strategies, we have a deep knowledge of AI, technology & data, and what you need to future-proof your business. If you would like to learn more about our expertise and how we can help your company grow, please get in touch.
About the authors
Aliya Itzkowitz, Manager

Aliya Itzkowitz is a Manager at FT Strategies, where she has worked with over 20 news companies worldwide. She previously worked at Dataminr, bringing AI technology to newsrooms, and at Bloomberg as a journalist. She has a BA from Harvard University and an MBA from Saïd Business School, University of Oxford. She is currently a member of the FT's Next Generation Board.

Tim Goudswaard, Associate Consultant
Tim is an Associate Consultant at FT Strategies who previously worked for an intelligence consultancy and the United Nations. He has experience working in data, artificial intelligence and product/content strategy. He holds an MSc in Global Governance from University College London.
This article was originally published on INMA.org.