At FT Strategies, we work with organisations globally to understand and respond to emerging trends - including Artificial Intelligence. In this interview with Media Makers Meet (Mx3), our AI experts share the methodologies they use to understand the opportunities and risks presented by this fast-moving technology, and the practical ways in which publishers (including the FT) are finding success by applying AI.

Q1. What are the key current trends you are seeing with AI in Media?

Aliya: The big change this year has been the use of Generative AI in the journalistic process. The capability of the technology caught many people off guard, and now we are seeing some newsrooms embrace it as part of their process - whether that’s smaller parts of the workflow, such as headline generation, or even the creation of automated news reports on topics such as sports results or stock market performance.

Sam: We’re currently somewhere near the peak of the Generative AI hype cycle and, as with hype cycles around previous forms of AI or in other industries, the conversation started around education, use cases and business strategy. But this time we’re seeing it move more quickly into implementation and experimentation, possibly because of how technically accessible some of these new tools are.

Q2. To date, which applications of AI have inspired you the most within media?

Aliya: I am really fascinated by the next frontier of automatic language translation. I feel this has the potential to be truly disruptive in the context of news. AI makes it possible for a publisher that produces content in a less widely spoken language, such as Hungarian, to reach audiences beyond its home market by publishing directly in other languages. On the other hand, this could also create a world in which all publishers compete globally for an audience, where an English-speaking TV news anchor can have their reporting translated in real time into other leading world languages such as Spanish, Mandarin and Hindi.

Sam: Like Aliya, the ‘language’ part of ‘large language models’ is the bit that fascinates me. I think the reason Generative AI exploded onto the media scene is primarily the uncanny levels of language ‘understanding’ that these models are now exhibiting. Some of the most practical and powerful ways of deploying Generative AI are as a ‘natural language wrapper’ around more established (and controllable) data and analytics technologies - for example, using AI so that a data scientist can talk to their database, or so that a reader can ask questions directly of the content they are consuming. But we’re also starting to see the emergence of multimodal AI that seamlessly converts between text and other formats, which is a relatively unexplored area, especially when viewed as an ‘image or video interpretation tool’ rather than just a means of content generation.
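To make the ‘natural language wrapper’ idea concrete, here is a minimal sketch of a reader asking questions of the article they are reading. It assumes the OpenAI Python client as one possible backend; the model name, prompts and function name are illustrative assumptions for the sake of the example, not anything FT Strategies or the FT uses.

```python
# Minimal sketch: a "natural language wrapper" that lets a reader ask
# questions about the article in front of them. Any LLM backend could sit
# behind the same interface; OpenAI is used here only as an illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask_the_article(article_text: str, question: str) -> str:
    """Answer a reader's question using only the supplied article text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You answer readers' questions using only the article "
                    "provided. If the article does not contain the answer, "
                    "say so rather than guessing."
                ),
            },
            {
                "role": "user",
                "content": f"Article:\n{article_text}\n\nQuestion: {question}",
            },
        ],
    )
    return response.choices[0].message.content


# Example usage:
# print(ask_the_article(open("story.txt").read(), "What did the regulator decide?"))
```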

Q3. What are the chief concerns/fears media groups have with AI in general?

Sam: We’re seeing fears of ‘being left behind’ - organisations know that there is commercial value to be gained from AI, but it can be difficult to know where to start or how to move quickly enough. That’s part of the reason why we love helping media companies to cut through the noise and get started on their AI journeys. For media organisations, doing AI responsibly should be a central tenet. Reliability of information is especially important, given that it is the currency of most news organisations, and AI algorithms could magnify bias and amplify unreliable information. IP ownership also needs to be carefully managed: if users increasingly go to Gen AI models and alternative products to answer their questions, where does that leave the publisher?

Organisations know that there is commercial value to be gained from AI, but it can be difficult to know where to start... That’s part of the reason why we love helping media companies to cut through the noise and get started on their AI journeys.

Q4. What are the key challenges media groups face when trying to implement AI across their operations?

Aliya: Trying to create systems and approval processes without stifling innovation. Also, the technology itself is changing so rapidly that it is difficult to keep pace. As with previous forms of digital transformation, innovation can sometimes occur in pockets, and because of organisational silos teams might not necessarily get to hear about it - this can lead to duplication of work, or to teams not being kept in the loop about potential changes to their workflows.

Q5. The general consensus amongst many publishers is that AI won’t replace jobs, but will augment existing roles. Charlie Beckett of LSE feels AI will ultimately become a force for good in the media industry: “AI will make it possible for the journalist to be this multifarious thing. I call it the jetpack journalist. The robot won’t replace you, but you will have all these little robots working for you. When an editor says ‘do me a long piece, a short piece, a snippet for Twitter, a piece for TikTok,’ etc, you’ll press an F key and the AI will reformat it all for you.” Do you agree?

Aliya: I saw someone say recently, “Your job will not be replaced by AI. Your job will be replaced by someone who understands AI.” Using AI creates a whole new set of challenges that we need people to solve. That said, a concern I have is the education process for people who are new in their careers. Many of the ‘easier’ tasks that are typically now being given to Gen AI would have been given to an intern or trainee in the past. So the question for me is, how will we make sure that junior talent is able to come up the learning curve in an AI world?

Sam: It’s difficult to predict what will happen to jobs and corporate structures, but what I can say is that the research area of ‘autonomous agents’ (tools which can receive a complex task, attempt to break it down, and work through the smaller chunks) is a very exciting field! An example of this would be a tool like ChatDev, which is given a coding task, like "make me a website", and works through it step by step.
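As a rough illustration of the general pattern Sam describes - and not of how ChatDev itself is built - the sketch below asks a model to plan sub-steps and then works through them one at a time. The OpenAI client, model name and prompts are assumptions made purely for the example.

```python
# Simplified "autonomous agent" loop: plan the sub-tasks, then execute them
# in order, carrying forward the results so far. Illustration only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def run_agent(task: str) -> list[str]:
    # 1. Ask the model to break the task into smaller steps.
    plan = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": f"Break this task into a short numbered list of steps: {task}",
        }],
    ).choices[0].message.content
    steps = [line for line in plan.splitlines() if line.strip()]

    # 2. Work through each step, passing along what has been done already.
    results = []
    for step in steps:
        answer = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    f"Overall task: {task}\n"
                    f"Work completed so far: {results}\n"
                    f"Now complete this step: {step}"
                ),
            }],
        ).choices[0].message.content
        results.append(answer)
    return results


# Example usage:
# run_agent("make me a one-page website about fencing")
```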


“Your job will not be replaced by AI. Your job will be replaced by someone who understands AI.”

Q6. Publishers like The Washington Post and Trusted Media Brands (Reader's Digest etc) have set up task forces to harness AI and better understand its value. What advice would you give to publishers, both large and small, about this - should they employ a new Director (Head of AI) or simply set up a task force? Or is there another way?

Sam: It’s great to have experimental teams embedded at the department level - this is often where the best ideas come from - as well as a higher-level board setting a clear vision and strategy. But I wouldn’t necessarily reinvent the wheel for AI - lots of publishers will already have teams who are working with, or at least thinking about, AI! Generative AI (and future, as-yet-unseen iterations of AI) presents new opportunities and risks, so it’s important to have people responsible for looking at these, such as a cross-functional discussion and experimentation group and a dedicated risk and governance panel.


Q7. We’ve seen a lot of controversy with AI-generated content - such as CNET earlier this year or this tweet from a staffer at Gizmodo. However, AI for real estate, financial or factual sports stories seems to be an appropriate use of the technology (e.g. United Robots). What are your views on how AI should or could be used within editorial content?

Aliya: You’re correct that some types of stories, typically those following a relatively structured template, lend themselves more to AI and automation - but that has been happening for years already. I used to be a journalist, so I’m passionate about this question! I tend to agree with the FT’s editor-in-chief that quality journalism will still be created by humans. I think AI can be used to augment both the creative process and the experience of the reader, but it’s essential for a human to remain involved in the process.

What I mean by that is that if you want to truly add something to the news agenda, it usually comes in the form of a scoop or some investigative work. There are AI tools that can help with that too - for example, I used to work at Dataminr, a tool that helps journalists discover breaking news faster. But it still comes back to the same point: reporting on what happened is no longer enough. You need a unique angle for a story, and that is usually something a human needs to come up with.

Beyond high-quality reporting, publishers should figure out what brings readers to their platforms, aim to meet those user needs, and build direct relationships with them - for example by using first-party data and the individual brands of their journalists.


Q8. The News/Media Alliance - representing over 2,000 publishers - and 26 other trade bodies including FIPP, have issued a set of global AI principles which amongst other things “outline the need for GAI developers to obtain explicit permission for use of publishers’ intellectual property, and publishers should have the right to negotiate for fair compensation for use of their IP by these developers.” What are your thoughts on these legislative initiatives?

Sam: Logically, it’s probably a good thing for the industry that trade bodies like FIPP are coordinating on the risks and opportunities around AI. These global AI principles are a good articulation of the AI ethics that had already become relatively standardised in other industries before the focus turned to Generative AI in media, and which are important for doing AI responsibly; they also now include the additional IP considerations that have become relevant to our industry. Most of the (ongoing) discussions on IP do not seem to propose anything outside of the norm - copyright law already exists and, as the FT’s Chief Commercial Officer Jon Slade says, if our content is used then there should be a payment. It’s worth noting that the inclusion of quality journalism in these models could reduce the risk of misinformation, but this doesn’t mean there shouldn’t be compensation or licensing.


About the authors

Aliya Itzkowitz, Manager at FT Strategies

Aliya Itzkowitz is a Manager at FT Strategies. She has over five years of experience across media, finance and technology. Previously, she worked at Dataminr, bringing AI technology to newsrooms across EMEA. Prior to that, she worked as a journalist at Bloomberg and was a member of the British National Fencing Team. She has a BA from Harvard University and an MBA from Saïd Business School, University of Oxford.



Sam Gould, Senior Consultant at FT Strategies

Sam is a Senior Consultant at FT Strategies with a wealth of experience helping clients to solve strategic business challenges using data. He has helped organisations in both the public and private sectors to define strategic roadmaps and processes for using AI. He has also designed and built innovative data solutions, working with senior stakeholders as part of critical delivery-focused teams.