“Agents” is the latest trend in AI - each day we see headlines announcing new “agentic” products which promise to revolutionise the way we work. For example, OpenAI’s latest release (11 March 2025) is a set of tools “to help developers and enterprises build useful and reliable agents”. Yet despite the excitement, the term remains loosely defined, and many tools are still early-stage experiments rather than fully functional systems being used in production.

This article takes a pragmatic look at why AI agents are gaining attention in news publishing, explores real-world examples of agent-driven tools in and around the news industry, and provides a clear checklist to assess whether your organisation is ready to invest in AI agents. By separating hype from practical application, we aim to help publishers make informed, strategic decisions about this emerging technology.

 

Hype Versus Reality

The word “agent” has become a bit of a buzzword in the AI space. You might see “agents” described as:

  • Autonomous goal-oriented systems
  • LLMs using tools
  • Programs acting on behalf of a user

The overall idea is that agents are systems which direct their own processes, typically using an LLM to decide the next step and how to accomplish it. For example, you might instruct a ‘deep research’ agent to look into a certain research question, and without being explicitly instructed to do so, it could decide to run a web search. This is how Anthropic defines it: “Agents are systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks”. This is in contrast to traditional software, including other forms of AI or automation, which generally follows a pre-defined set of steps.
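To make that distinction concrete, the sketch below shows the basic shape of an agent loop in Python. It is purely illustrative: `call_llm`, `web_search` and `summarise` are hypothetical placeholders for a real chat-completion API and real tools, but the structure highlights the key difference from a pre-defined pipeline - the model’s own output decides which step runs next and when to stop.

```python
# A minimal sketch of the agent-loop idea: the LLM, not the programmer,
# chooses the next step. All functions here are illustrative stand-ins.

def web_search(query: str) -> str:
    return f"(stub) top results for: {query}"      # swap in a real search call

def summarise(text: str) -> str:
    return f"(stub) summary of: {text[:60]}"       # swap in a real summariser

TOOLS = {"web_search": web_search, "summarise": summarise}

def call_llm(history: list[dict]) -> dict:
    """Placeholder for an LLM call that returns either a tool request,
    e.g. {"tool": "web_search", "input": "..."}, or a final answer,
    e.g. {"answer": "..."}. Here we fake a simple two-step run."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "web_search", "input": history[0]["content"]}
    return {"answer": "Draft answer based on the search results."}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):                     # a loop, not a fixed pipeline
        decision = call_llm(history)
        if "answer" in decision:                   # the model decides when it is done
            return decision["answer"]
        tool = TOOLS[decision["tool"]]             # the model decides which tool to use
        result = tool(decision["input"])
        history.append({"role": "tool", "content": result})
    return "Stopped after reaching the step limit."

print(run_agent("What changed in EU AI regulation this quarter?"))
```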

In reality, this subfield of AI is nascent, with few AI agent products being ready for major adoption. Nevertheless, there is real potential behind the hype.

 

Demand for More Useful and Convenient Products

Through our research at FT Strategies, we see news and media audiences demonstrating new behaviours (e.g. favouring new formats, looking for content that offers utility and not just facts, seeking personalised experiences aligned with their topic or distribution preferences). Audiences are increasingly seeking products that can proactively deliver content and help them perform tasks (e.g. research, making reservations, purchasing online products, managing emails). Tech companies have clearly noticed this demand, which is reflected in a range of product developments - from OpenAI’s Operator, which can browse websites on behalf of a human user, to Google’s AI Overviews in Search, which similarly draw on traditional search results to provide a more immediate response to certain queries.

For the news industry, these emerging products may represent a shift away from traditional news channels. The possibility that audiences could be served by external AI-driven experiences raises strategic questions about how publishers can maintain direct relationships with readers, cultivate loyalty and offer unique value. It is therefore important that news publishers develop an understanding of what they, or their competitors, might be able to offer customers using AI agent technology.

Primary users vs secondary users vs tertiary users

 

Emerging Agent Examples and Lessons

The truth is that no supplier (whether a news publisher or an AI company) is completely “doing AI agents” - yet. Having helped nearly 50 publishers across EMEA to develop and test different AI applications, and 200+ with use case discovery and design, FT Strategies has seen the news industry shift its focus from generative AI CMS plugins (e.g. headline generators) a year ago to more external-facing experiments (e.g. LLM chatbots and other UX innovations) towards the end of 2024 - experiments which lay a solid foundation on which to develop more complex tools. But the only sector where we really see agentic tools being adopted at pace is software engineering, as we discuss below.

Even if no one has yet “cracked” AI agents in the news sector, we can examine a few promising initiatives to understand what makes them successful, and to draw lessons on ‘why’ and ‘how’ news companies might approach AI agents in the future.

While these can provide early inspiration, it remains an open question where AI agents will prove most valuable in news. Their potential lies in workflows that require iteration, decision-making, and integration across multiple tools, databases or formats. For example, an investigative research agent wouldn’t just retrieve documents - it would extract key points, refine subsequent search queries, and regularly surface summaries to a human researcher. An editorial assistant agent could transcribe interviews, summarise key insights, and then autonomously decide when it has enough material to generate social content, rather than requiring separate tools for each step. And for news consumers, agentic products could learn their preferences and automatically gather and surface key news stories in the right format, even attempting to answer highly specific questions and solve specific problems on their behalf. As Markus Franz, CTO of Ippen Digital, puts it: “Will content and ‘voice’ be separated in the future? Are some media companies increasingly focusing on curating information, and are we, as readers, choosing different ‘author characters’ depending on the genre, our mood, and our preferences?”


 

1. Ask FT (The Financial Times)

Ask FT is a retrieval-augmented generation (RAG) chatbot that provides article-based answers to queries from B2B research users. RAG is a workflow which receives a question from a user, searches a database (in this case the FT content archive), and retrieves relevant results which a large language model (LLM) chatbot then uses to generate a response to the question.

RAG is a step-by-step, linear workflow, so it is not considered agentic. However, most use cases for agents would likely involve searching databases and generating summaries or answers, so it is useful to consider how Ask FT works if we want to develop more complex tools in the future. The key enabler of RAG tools like Ask FT is a readily available database - in particular, an up-to-date content archive exposed via a vector search API so that other tools can retrieve relevant entries. Developing Ask FT, and agentic tools which use LLMs to generate responses, also depends on having a data science team who are comfortable working with prompts and testing different LLMs.
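As a rough illustration of the pattern described above (and not of the FT’s actual implementation), a minimal RAG pipeline might look like the sketch below. The `search_archive` and `call_llm` functions are hypothetical placeholders for a vector search API over a content archive and an LLM client respectively.

```python
# A minimal, illustrative RAG pipeline: retrieve relevant articles from an
# archive, then ask an LLM to answer using only those articles.
# `search_archive` and `call_llm` are placeholders, not Ask FT's actual stack.
from dataclasses import dataclass

@dataclass
class Article:
    headline: str
    body: str

def search_archive(question: str, top_k: int = 3) -> list[Article]:
    """Stand-in for a vector search API over an up-to-date content archive."""
    return [Article("Example headline", "Example body relevant to the question.")][:top_k]

def call_llm(prompt: str) -> str:
    """Stand-in for a chat-completion call to any LLM provider."""
    return "(stub) answer grounded in the retrieved articles"

def answer_question(question: str) -> str:
    articles = search_archive(question)                              # 1. retrieve
    context = "\n\n".join(f"{a.headline}\n{a.body}" for a in articles)
    prompt = (
        "Answer the question using only the articles below, "
        "and cite the headlines you relied on.\n\n"
        f"ARTICLES:\n{context}\n\nQUESTION: {question}"
    )
    return call_llm(prompt)                                          # 2. generate

print(answer_question("What is driving interest in AI agents in publishing?"))
```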


 

2. Broadcast Control Room Automation (consortium)

This second example is a tool that helps broadcast control room operators automate a range of real-time controls, driven by their voice commands. It was showcased in the 2024 IBC Accelerator programme (winning Project of the Year) and has many renowned institutional users.

In essence, the tool works as follows:

  1. Speech-to-text (STT) transcribes the operator’s voice commands.
  2. An LLM interprets those instructions.
  3. The system triggers automation to move cameras, adjust running orders, and delegate tasks to sub-agents which can access specific databases (e.g. broadcast assets, show plans) or other tools.

The key to this success is integration: LLM-based agents are used to connect existing broadcast automation systems, data sources and an STT input layer, in a way which multiplies their effectiveness and allows the human operator to control them more efficiently. Viewing agentic tools as an orchestration of building blocks, and being able to experiment in this way, is the lesson to be taken from this example.
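The orchestration pattern itself is simple to sketch. The code below is illustrative only - it is not the consortium’s implementation - and every function is a hypothetical stand-in: a transcribed voice command is interpreted by an LLM into a structured instruction, which is then routed to whichever automation building block handles it.

```python
# Illustrative orchestration pattern only - not the consortium's system.
# A voice command is transcribed, interpreted into a structured instruction,
# then routed to the relevant automation "building block".

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for a speech-to-text (STT) service."""
    return "move camera two to the wide shot"

def interpret(command_text: str) -> dict:
    """Stand-in for an LLM that maps free text to a structured instruction."""
    return {"system": "cameras", "action": "move", "target": "camera 2", "preset": "wide"}

def move_camera(instruction: dict) -> str:
    return f"Camera control: {instruction['target']} -> preset '{instruction['preset']}'"

def update_running_order(instruction: dict) -> str:
    return f"Running order updated: {instruction}"

# Each entry represents an existing automation system the agent can delegate to.
ROUTES = {"cameras": move_camera, "running_order": update_running_order}

def handle_voice_command(audio_chunk: bytes) -> str:
    text = transcribe(audio_chunk)             # 1. speech-to-text
    instruction = interpret(text)              # 2. LLM interprets the command
    handler = ROUTES[instruction["system"]]    # 3. route to the right sub-system
    return handler(instruction)

print(handle_voice_command(b"raw audio bytes"))
```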


 

3. Coding Assistant (Replit)

The last example is Replit’s AI Agent, a coding assistant that writes, runs and tests code, automatically suggesting fixes for errors.

As mentioned above, software engineering is a domain that lends itself to agentic tools due to a tight feedback loop - meaning a small gap between ‘generating output’ and ‘assessing how good that output is’. In contrast to the previous examples, in software engineering it is easier to determine whether the agent is generating a ‘good’ output: we can (essentially) just run the code and see if it works. If the generated code fails, the system (and the user) immediately sees this and can adjust. Users can provide feedback to steer the direction as the code evolves, and can ask questions to understand the code as needed.
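That feedback loop can be expressed very simply. The sketch below is illustrative rather than Replit’s implementation: `generate_code` is a hypothetical stand-in for an LLM call, and the error output of each attempt is fed back into the next one.

```python
# Illustrative generate-run-fix loop (not Replit's implementation).
# The error output of each attempt becomes the feedback for the next,
# which is what makes coding such a natural fit for agents.
import subprocess
import sys
import tempfile

def generate_code(task: str, previous_error: str | None = None) -> str:
    """Stand-in for an LLM call; a real system would put previous_error in the prompt."""
    if previous_error:
        return "print('fixed version')"
    return "print('first attempt'"             # deliberately broken first attempt

def run_code(code: str) -> str | None:
    """Run the snippet; return stderr on failure, None on success."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True)
    return result.stderr or None

def code_agent(task: str, max_attempts: int = 3) -> str:
    error = None
    for _ in range(max_attempts):
        code = generate_code(task, error)
        error = run_code(code)                 # the feedback signal
        if error is None:
            return code                        # objective success: the code runs
    raise RuntimeError(f"Gave up after {max_attempts} attempts:\n{error}")

print(code_agent("print a greeting"))
```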

Ultimately this is just another efficiency tool; it is not unlocking significant product innovation, although it is lowering the barrier to entry for non-coders. But it shows us that being able to evaluate agents’ outputs is extremely important for developing these tools, for justifying investment in them, and for driving adoption with end users.


 

Are You Ready for AI Agents?

It's clear that there are a variety of ways to approach agents and lots of potential benefits. Based on our work helping publishers to establish their foundational capabilities for doing AI effectively, FT Strategies has developed this list of key prerequisites to maximise the success of AI agent projects: 

  1. You have identified a real problem to solve and can measure success (with user-driven feedback loops, editorial accuracy metrics and business value metrics).
  2. Your (content) data is available to agents (e.g. you have a vector search API, high-quality metadata, and accessible databases).
  3. Your teams (e.g. journalists) are using LLMs regularly - including maintaining a shared bank of high-quality prompts.
  4. You use in-house benchmarks to evaluate new LLMs (a minimal example is sketched after this list).
  5. You have the capability to experiment with custom integrations and/or open-source frameworks such as Hugging Face models, LlamaIndex, smolagents, etc.
  6. You have an adoption strategy which includes staff training.
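As a small illustration of points 1 and 4, an in-house benchmark does not need to be elaborate to be useful. The sketch below is hypothetical: a fixed set of prompts with expected properties, scored the same way for every model you trial. In practice the keyword checks would be replaced by editorial review or more robust scoring, and `call_model` stands in for whichever LLM APIs you are comparing.

```python
# A minimal, illustrative in-house benchmark: a fixed set of test cases
# scored identically for every candidate model. `call_model` is a placeholder.
CASES = [
    {"prompt": "Summarise: 'Rates held at 5%.'", "must_include": ["5%"]},
    {"prompt": "List three renewable energy trends as bullet points.", "must_include": ["-"]},
]

def call_model(model_name: str, prompt: str) -> str:
    """Stand-in for a real API call to the model being evaluated."""
    return f"(stub) {model_name}: rates held at 5% - trend one - trend two - trend three"

def score(model_name: str) -> float:
    """Fraction of cases whose output contains every required token."""
    passed = sum(
        all(token in call_model(model_name, case["prompt"]) for token in case["must_include"])
        for case in CASES
    )
    return passed / len(CASES)

for model in ["model-a", "model-b"]:
    print(f"{model}: {score(model):.0%} of benchmark cases passed")
```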

If you cannot confidently check most of these boxes, the best next step is likely to invest in foundational data capabilities, infrastructure, and governance rather than diving headlong into building an AI agent. We are happy to discuss how you might get there.

 

Looking Ahead

There is still ample time for news publishers to develop robust AI foundations before fully autonomous agents become mainstream. The essential ingredients - trusted brands, quality content, and engaged audiences - remain the media industry’s key assets.

It has become very clear to us that the companies which truly realise internal efficiencies or innovative new forms of audience engagement are the ones where teams and audiences are brought along on the journey with clear training - which sometimes includes mentorship programmes or communication from other parts of the business.

Without a plan for adoption and iteration, an agent will remain an experiment - not a product.


At FT Strategies, we have a deep knowledge of AI, technology & data, and what you need to future-proof your business. If you would like to learn more about our expertise and how we can help your company grow, please get in touch.


About the authors

 

Sam Gould, Manager, AI Lead

Sam is the AI Lead at FT Strategies with over 7 years of experience helping clients solve strategic business challenges using data. He has helped organisations in both the public and private sectors to define strategic roadmaps and processes for using AI. He has also designed and built innovative data solutions, working with senior stakeholders as part of critical delivery-focused teams.

Azymberdi Taganov, Consultant

Azymberdi (Azym) is a Consultant at FT Strategies, specialising in media strategy, process optimisation, and AI-driven operational excellence. With three years of experience advising media organisations, he helps businesses navigate digital transformation, streamline workflows, and leverage AI to drive sustainable growth.

Jhanein Geronimo, Consulting Project Associate

George Adelman, Principal