The New York Times has named an Editorial Director of A.I. Initiatives, in yet another sign of how pervasive generative artificial intelligence has become in the information ecosystem. A staff memo announced last week that the new position would be filled by Zach Seward, a founding editor of the Quartz business news site and a former journalist at The Wall Street Journal. The memo acknowledged that the appointment raises some thorny questions.
“How should The Times’s journalism benefit from generative A.I. technologies? Can these new tools help us work faster? Where should we draw the red lines around where we won’t use it?” the memo said. “These are important questions, and we’re thrilled to announce that Zach Seward, an accomplished and creative journalistic leader, will be taking them head-on as our first editorial director of Artificial Intelligence Initiatives.”
Seward will take over A.I. projects that The Times began over the past six months. “[He] shares our firm belief that Times journalism will always be reported, written and edited by our expert journalists,” the memo said. “But Zach will also help guide how these new tools can assist our journalists in their work, and help us broaden our reach and expand our report.”
Seward will work closely with departments spanning the newsroom and Opinion, as well as the product development organization.
AI’s influence on journalism has been anticipated for years. In 2019, JournalismAI was launched as a project of Polis, the journalism think tank at the London School of Economics and Political Science. Supported by the Google News Initiative, JournalismAI has developed a range of programs to help journalists use AI in their newsrooms.
“Journalists around the world are under enormous pressure, but they’ve also got incredible new technologies that enable them to do their work in a way that I couldn’t even imagine 15 years ago,” said Prof. Charlie Beckett, Director of JournalismAI. “We have a very strong sense of the power of these technologies and their potential … and we are trying to help journalists use AI in a way that’s going to support their editorial mission.”
The Center for News, Technology and Innovation released an Issue Primer in October warning of the pitfalls of AI in the newsroom.
“The use of AI in the production and distribution of news, as well as how AI systems use news content to learn, will introduce novel legal and ethical challenges for journalists, creators, policymakers and social media platforms,” CNTI said. The use of AI risks inaccuracies, ethical lapses and the erosion of public trust, the group added, while also opening the door to copyright abuse of journalists’ original work.
CNTI called for legislation that clearly defines categories of AI and requires specific disclosures for each, and that establishes repercussions when AI-generated content infringes copyright, breaches terms-of-service agreements or violates people’s civil liberties.