Last month I was honoured to be elected onto America’s Newspapers Board of Directors at their Senior Leadership Conference in Chicago. Having lived in the States twice and worked in news for most of my career, it’s amazing to be part of such an incredible organisation that is empowering local news providers to thrive against the headwinds of volatile ad revenues and the rise of news avoidance.

During the conference, AI was a particularly hot topic that publishers were keen to hear more about - partly out of excitement at how it could make their daily work more efficient, but also out of fear of the threat that AI-generated content poses to traditional publications and jobs. These fears and opportunities were addressed during the conference day dedicated to AI, which I hosted with the support of industry experts.

If I could sum up the sentiment of the entire conference, it would be this: AI isn’t going anywhere, and we can’t remove it completely from the media ecosystem, so ignoring or running from it isn’t an option - nor should it be, given the benefits it can bring. However, media organisations should be selective and cautious in their use of AI, ensuring they maintain their unique journalistic style and reduce the possibility of AI-generated errors. Below are the main insights from the day:

Don’t let machine learning algorithms trawl your site

Currently, AI companies will train their models on any material their crawlers can get hold of. If you are a publisher, there is no advantage to you in letting this happen until the industry can figure out a way to ensure AI companies compensate you for the benefit of training their models on your intellectual property. Such an agreement is more likely to arrive through collective bargaining, making organisations such as America’s Newspapers even more important.
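
In practice, the usual first step is to update your site’s robots.txt file to disallow the best-known AI training crawlers. The lines below are a minimal sketch, assuming you want to block GPTBot (OpenAI), CCBot (Common Crawl) and Google-Extended (Google’s AI training token); the list of user agents changes over time and compliance is voluntary, so check each vendor’s documentation and consider blocking at the server or CDN level as well.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /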

Remember that AI can make mistakes

For all the brilliance that AI is capable of, it is worth remembering that it can make mistakes: although generative AI models are highly literate, they can hallucinate. Speaker and panellist J. Stephen Poor, Partner at Seyfarth Shaw, gave the audience the example of a lawyer who used ChatGPT to draft a brief; when the lawyer asked ChatGPT to confirm the quotes and sources, they could not be verified and therefore had to be treated as untrue. We’ve seen this at the FT too, when ChatGPT stated that a copy of the Financial Times had been sent into space on one of the Apollo launches, yet when asked to back this up with sources, it had to concede that it had likely hallucinated the event. The key takeaway is that we cannot trust generative AI to be accurate, and it therefore cannot be a shortcut around traditional journalistic rigour. If AI has been used in any way for a story, make sure you have verified the original sources.

Getting 80% there

Given the possibility of errors and the fact that AI cannot yet replace deeper forms of journalism such as investigative reporting, it raises the question of what we can safely use AI for. One important idea is that AI can get you 80% of the way there, but that the final (and most important) 20% will always need to be carried out by trained journalists. Examples of areas where AI can help include SEO optimisation, headline writing, photo editing and populating data that was previously entered manually, such as sports results. If AI can get you 80% of the way there with tasks such as these, it frees up more time for journalists to focus on creating valuable and differentiated content. In this way, AI would be enabling journalism, rather than eroding it, as is often feared.

In summary, whilst we must take account of AI’s limitations and never use it without human oversight, media organisations should be looking at the ways in which it can make manual tasks more efficient, freeing up time to create journalism that matters.

If you would like to learn more about how to navigate the use of AI, please get in touch here.


About the author

Daisy Donald, Principal

Daisy joined FT Strategies from Reuters where she was Director of Global Customer Experience overseeing their website and reader revenue. Before that, she spent over eight years leading customer research teams in London and New York at the Financial Times and beyond, working closely with the B2B sales and product teams to define key strategic research initiatives. She has an EMBA from the IE Business School, Madrid.