WAN-IFRA - World Association of Newspapers and News Publishers


Meet the AI leaders: Jane Barrett, Head of AI Strategy, Reuters



by WAN-IFRA External Contributor | October 31, 2024

By Anabelle Nicoud

Two years into the generative AI boom, many newsrooms and publishers are rethinking their editorial roles with an AI-focused mindset.

As rapidly evolving tech creates new roles and processes in global newsrooms, freelance journalist Anabelle Nicoud leads a 3-part Q&A series on AI pioneers in newsrooms.

In this final part of the series, she talks with Jane Barrett, Head of AI Strategy for Reuters, about how they are building the news of tomorrow today.

See also the two other articles in this series: Cynthia Tu, data reporter and AI specialist, and Tyler Dukes, Lead Editor for AI innovation in journalism.

Jane Barrett has worked for Reuters for more than 20 years, first as a correspondent and then as an editor. Most recently, she was Head of Global News before becoming Head of AI Strategy in July.

The new title doesn't begin to capture the energy and creativity she brings to it. Here, she shares how AI is being deployed in the Reuters newsroom.

How did this new position happen, and how does it fit into the broader AI vision at Reuters?

I seem to fall into interesting jobs. I'm very lucky. Before this role, I was helping Reuters build new businesses, essentially expanding the news into different areas. So, when ChatGPT launched in November 2022, I thought, 'Well, there's my next job!' I started experimenting with generative AI.

We're fortunate at Thomson Reuters to have a large AI team, including the Thomson Reuters Labs. I started collaborating with a colleague of mine on a few proof-of-concept projects. One was very successful, while another didn't work out - but that was a valuable learning experience, too.

As we were learning, more people became interested in what we were doing. We eventually formalised some of our work and, as we started to achieve real results, I was given a new role. The task was to prove there's a need for this, and that a job could come out of the work we were doing.

That's interesting, because when it comes to AI and media, we've seen adoption happen from the top down, or the other way around.

Yeah, we've taken both approaches. There's a bottom-up approach to using AI in the newsroom, and also a top-down approach: deciding which use cases to prioritise, how to staff them, and how to develop them. You need both. You need ideas coming from the newsroom to identify the problems we need to solve and how AI can help, but you also need the broader perspective to see where AI will have the greatest impact, which tools will be useful, and how they can help us and our clients around the world.

We have 2,500 journalists globally, plus the backing of Thomson Reuters, which is investing $100 million annually into using generative AI in business. We have access to Thomson Reuters Labs, data scientists, and AI experts, so we're in a strong position.

But, of course, you never have enough resources to do everything. So, the challenge is deciding how to get the biggest impact with the time and money we have.

How do you do that?

We have our own safe version of large language models (LLMs) within the Thomson Reuters AI platform. Anyone in the company can access it to build their own tools. If someone has a particular need, they can experiment with AI to see if it can solve their problem.

That's the mass, self-service approach, where anyone can jump in and use the platform. Then there's a more formalised approach. If we know we want to build something specific, we form a cross-functional team to tackle the problem and see if AI can provide a solution. If the answer is yes, we then figure out how to integrate it into our workflow to ensure it has the necessary impact.

After that, we consider how to extend those solutions to our clients. We ask, 'What problems do our clients have, and how can the tools we're building internally help them use AI in their work as well?'

How do you identify projects in the newsroom?

If you walk into the newsroom and ask, 'What's wrong?' you'll get 1,000 answers. Journalists are often frustrated with the tools they have and the tasks they need to complete. So, identifying use cases for improvement wasn't the issue. The real challenge was filtering those ideas and figuring out where generative AI would be the solution and where something else would be a better fit.

As we tried to solve problems with generative AI, we often realised, 'You don't need AI for this - you just need a software solution.' So, we'd bring in data science and engineering experts to help those of us in editorial, who don't always have that technical background, to figure out which challenges are truly suited for generative AI.

We have this incredibly powerful tool, but we need to direct it to specific problems with full control and oversight. We're not going to let it run unchecked in the newsroom. We had to carefully consider: What can we do with it? How will we do it? And how will we maintain trust in the output, ensuring that the ethics, accuracy, and standards are upheld both in the technology and the workflow? So, identifying use cases wasn't hard - we had thousands. The hard part was filtering out which were really relevant for generative AI, and then doing the work to figure out how to integrate this new technology into our existing tools and workflows, while maintaining human oversight to ensure that the output remains trusted.

That brings us to AI literacy. To understand what AI can (and can't) do, you need to understand the technology behind it. How do you approach that?

We've launched our own training courses in the newsroom. There are two online courses: one is mandatory and is essentially an introduction to Gen AI, and a second course goes deeper into AI. On top of that, we run other training sessions. One that's been really popular is a prompting workshop: you come with a problem you want to solve, we put people together in teams, and they work out how to write the prompts to get things to work - which data you need, whether to use a one-shot or a few-shot prompt, whether you want chain-of-thought reasoning. I think everyone needs basic literacy. Some people will then naturally get more deeply into it.
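To make those workshop choices concrete, here is a minimal sketch of how a one-shot, a few-shot, and a chain-of-thought prompt differ, using a hypothetical headline-writing task. The task, wording, and helper functions are illustrative assumptions for this article, not Reuters' actual prompts or internal platform.

```python
# Sketch of the prompt variants discussed above (one-shot, few-shot,
# chain-of-thought). The task and examples are hypothetical.

TASK = "Write a neutral, 10-word-max headline for the story below.\n\nStory: {story}"

EXAMPLE = (
    "Story: The central bank raised interest rates by 0.5 percentage points, "
    "citing persistent inflation.\n"
    "Headline: Central bank raises rates half a point on inflation fears"
)

def one_shot(story: str) -> str:
    # One worked example, then the new story.
    return f"{EXAMPLE}\n\n{TASK.format(story=story)}"

def few_shot(story: str, examples: list[str]) -> str:
    # Several worked examples help the model infer tone and length.
    return "\n\n".join(examples) + "\n\n" + TASK.format(story=story)

def chain_of_thought(story: str) -> str:
    # Ask the model to reason step by step before answering.
    return (
        TASK.format(story=story)
        + "\n\nFirst, list the key facts of the story, then draft the headline."
    )

if __name__ == "__main__":
    story = "Flooding closed three major highways overnight; no injuries were reported."
    print(one_shot(story))
    print("---")
    print(chain_of_thought(story))
```

In a workshop setting, teams would send each variant to whichever model they have access to and compare the outputs, which is how the trade-off between extra examples and prompt length tends to become clear.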

AI is transforming the way we produce and distribute information. How do you see its impact on journalism and the broader media landscape?

My feeling, just from what we've actually seen - the impact of building tools in our own newsroom, how much time it saves people, how much better it makes our output - is that the impact can be huge. When I look at the impact on journalism, I see it in the three buckets that I use internally.

The first is productivity gains. We're perennially short-staffed, so if you can reduce the rote work using generative AI while still maintaining control, that's going to be transformational, because it will free up newsrooms to do more. Then there is the second bucket, which is augment: how can you create new things to reach new audiences, new markets? And then finally there's the transform bucket (…). I'm looking at teenagers at the moment - I'm trying to get as many teenagers in my life as possible to see how they're interacting with AI - because I think that will show us what future news experiences might need to look like.

We've seen a lot of interest around spatial computing, with Apple's Vision Pro, and more recently, the Meta glasses. It's not too big a stretch to think that in just a few years, information will be consumed very differently.

I often catch myself being that person walking down the street with my phone in hand, reading while I should be paying attention. It's dangerous, of course. But if I had an earpiece, I could just ask, 'What's happening in Israel or Lebanon today?' and have the news delivered straight to my ear. Then I might think, 'Oh, that's interesting, tell me more about that person.' It would be so much more interactive. But this makes me wonder - what does that mean for how we produce the news? Should we still approach it the same way?

The world is evolving rapidly due to AI, and we shouldn't focus on solving problems that may not even exist in the future. Instead, we need to train ourselves to be adaptable, curious, and open to experimentation. We need to prototype, pivot, and take control of our future, rather than letting it be dictated to us - as has often been the case with past waves of digital transformation.

Is this why you believe many publishers have been very proactive when it comes to gen AI - because of the mistakes we've made in the past with digital transformation, as an industry?

I think so. I'm sure it's not a blanket rule, but I get to speak to a lot of brilliant people who are working on AI in their publishing houses. I'm impressed with how many people are out there really trying things out and also trying to work out what makes sense for them. I feel that the relationship (with tech) has shifted.

It's interesting, because I don't think we've found the perfect use case yet.

But use cases already exist, and that's how we learn, right? There are real examples today that we can measure, assessing which ones work and which don't. It's about knowing how to blend Gen AI with other engineering techniques to get the best outcomes. That's why I always say, jump in now - solve today's use case. By doing that, you'll learn the techniques, capabilities, and skills you'll need to tackle the next challenge or use case that arises. As the world evolves, we won't be caught off guard - we'll be prepared and well-equipped to handle whatever comes next.

About the author, Anabelle Nicoud

A freelance journalist and consultant based in San Francisco, Nicoud currently collaborates with The Audiencers newsletter and the Canadian monthly L'actualité.

She worked with Apple News+ (2022-2024) and helped the editorial teams at La Presse (2015-2019) and Le Devoir (2019-2022) with their digital transformation, while leading ambitious editorial projects that have won prestigious journalism awards in Canada and Quebec.

A former journalist for La Presse and correspondent for Libération in Canada, Nicoud is passionate about the impact of technology on the media and closely follows issues related to the use of artificial intelligence.
