New Horizons in Journalism – Between Human and Artificial Intelligence

Artificial intelligence (AI), an emerging technology, is transforming sectors around the world, including news and journalism, with both remarkable potential and significant risks.

AI is expected to reshape journalism, enabling the processing of massive amounts of data for better reporting, fact-checking, and audience engagement while boosting organizational efficiency. However, these “otherworldly” innovations also bring serious ethical concerns, the risk of deepfake content, and potential job displacement in journalism.

Amidst these crucial changes, it is vital for professionals to exchange expertise and strategies. This is why the World Press Institute (WPI), in collaboration with the America for Bulgaria Foundation and the Association of European Journalists in Bulgaria, organized the conference New Horizons in Journalism – Between Human and Artificial Intelligence. Leading journalists from around the world took part in the international event at the Astoria Grand Hotel in Sofia on November 8, 2023, where the fusion of journalism and AI took center stage.

Among the lecturers were Kenneth Cukier (deputy executive editor at The Economist), Abby Bertics (science correspondent at The Economist), Marina Tsekova (journalist and TV host at Nova TV), Christopher Brennan (journalist, AI model builder, and co-founder of Overtone), Philip Hatcher-Moore (journalist at BBC News Labs), Venetia Menzies (journalist at The Times and Sunday Times), Rune Ytreberg (editor at iTromsø), Ademola Bello (editor at The New York Times), and Ivan Georgiev (news anchor and reporter at bTV).

Also taking part in the conference were Kristina Stoitsova (head of Data Science at the Financial Times); Dr. Laurence Dierickx (researcher at the University of Bergen and lecturer at the Free University of Brussels); Dr. Nikola Konstantinov (lecturer at the Institute of Computer Science, Artificial Intelligence, and Technology at Sofia University); Dr. Maya Koleva (director of Research and Analysis at Commetric); Alessandro Alviani (product manager for Natural Language Processing at Ippen Digital); Ville Juutilainen (head of the Data-Driven Research Department at Yle News Labs); Dr. Darina Sarelska (lecturer in Journalism and Communications at the American University in Bulgaria); and Valentin Porcellini (software engineer at AFP Medialab).

All these journalism professionals, prominent thinkers, and innovators in the vast field of media and communications engaged in thought-provoking dialogues on the transformative impact of AI on news and storytelling, as well as on the present and potential risks it brings.

Opening the Dialogue on Journalism and AI

The conference started with a series of cordial greetings and thanks from representatives of the organizers of the event. 

In her salutation, Nancy Schiller (President and CEO of America for Bulgaria Foundation) highlighted the pivotal role of media in upholding democracy and heartily extended her thanks to those contributing to this cause. 

Irina Nedeva (Chairperson of the Association of European Journalists – Bulgaria and a distinguished journalist at the Bulgarian National Radio) remarked on the challenges posed by the era of misinformation and the rapid emergence of artificial intelligence, noting that AI is often considered to surpass human capacity. Nedeva also raised the question of AI’s limited capacity for contextual comprehension, emphasizing that this remains a prerogative of human intelligence.

David McDonald, the executive director of the World Press Institute, also delivered opening remarks on behalf of the organization, which was founded in 1961 and has more than sixty years of history.

The tone of the conference was carried forward by the insightful introduction of Milena Kirova (journalist at the Bulgarian National Television). She drew attention to the spread of bot-operated Twitter (X) accounts and to the limitations of AI tools in conducting journalistic investigations, imperfections of the rapidly emerging technology that keep journalists indispensable in this role.

Kirova also cautioned about the misuse of AI in fabricating deceptive videos of public figures, which could have serious implications, citing the recent fake advertising videos of popular Bulgarian TV hosts that were created with the help of AI. She also voiced the worry, shared by many people around the world, that someone could maliciously create such a fake video of a politician, which could be quite dangerous. In her closing remarks, Kirova said she looked forward to hearing the opinions of the experts in the forthcoming panels of the conference.

AI from the Perspectives of Science and Society

Kenneth Cukier (deputy executive editor of The Economist) connected with the audience via video link, offering his perspectives on the evolving role of AI in journalism and the global society. This was complemented by insights from Abby Bertics, Science Correspondent of The Economist, who shared her experiences on the front lines of science journalism.

Cukier started his presentation by showing a screenshot of a job posting by a media outlet looking for an AI generator, with an annual remuneration of $180,000-200,000, which is quite a generous salary. To highlight the impact of innovations on younger generations, he also presented statistics revealing that people under 30 in the US trust social media as much as traditional media.

AI will be a net positive, even though there will also be plenty of negatives, stated Kenneth Cukier. With AI, he said, there could be 1,000 times more media content. Today, journalistic content is 99% mediocre and 1% exceptional, he claimed, while with AI it could be 99.99% mediocre and only 0.01% exceptional. Yet, considering that there would be 1,000 times more content overall, the volume of high-quality journalistic content would still be 10 times greater than before the implementation of AI in media.
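
Cukier’s back-of-the-envelope arithmetic can be checked directly. The short sketch below simply restates his hypothetical figures; the variable names and the normalization of today’s output to 1 are illustrative assumptions, not part of his talk.

```python
# Cukier's hypothetical figures: total content grows 1,000-fold,
# while the share of exceptional work falls from 1% to 0.01%.
content_today = 1.0                      # today's content volume, normalized to 1
content_with_ai = 1_000 * content_today  # "1,000 times more media content"

exceptional_today = 0.01 * content_today        # 1% exceptional now
exceptional_with_ai = 0.0001 * content_with_ai  # 0.01% exceptional in the AI era

print(exceptional_with_ai / exceptional_today)  # -> 10.0, i.e. ten times more high-quality content
```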

Abby Bertics gave an introduction to what AI is. She emphasized that the goal of AI is to surpass human intelligence and, citing examples such as recommendations, transcriptions, and subtitles, reminded the audience that AI is not a new thing. What makes the situation different now, she added, is that for the first time AI can generate human-like speech. Bertics gave examples of how AI can predict our next words (for instance, when we write emails) and how the technology has been constantly improving and developing. She also surveyed the most prominent AI text generators available today, mentioning ChatGPT, Bing, and others.
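
As a rough illustration of the next-word prediction Bertics described, the toy sketch below counts which word tends to follow which in a tiny invented sample text and suggests the most likely continuation. It is a deliberately minimal bigram model, not a description of how production systems such as ChatGPT actually work.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: for every word, count which word most often follows it.
corpus = "thank you for your email i will get back to you soon thank you for your patience"
words = corpus.split()

following = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the sample, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("thank"))  # -> 'you'
print(predict_next("for"))    # -> 'your'
```

Modern large language models rest on the same basic idea of predicting the next token, but they use neural networks trained on vastly larger corpora rather than simple counts.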

Could artificial intelligence serve as a handy tool for journalists to enhance their productivity, or might it completely transform their occupations? Abby Bertics believes that AI can indeed be utilized to streamline work, similar to how Google search aids us, especially as a tool for language-related tasks. Yet, she cautions that relying on AI for in-depth analysis and interpretation could pose significant risks for journalists.

Bertics highlighted that as AI improves, the quality of media becomes increasingly crucial. She also set the stage for ongoing debates about a significant new issue: how will different media companies stand out from one another using AI? How can a media outlet set itself apart from other outlets that will also be using the same tools?

AI in Journalism: How Is It Being Used in Practice?

The first panel was moderated by the Bulgarian TV host Marina Tsekova (NOVA). In it, the professionals explored practical ways in which AI is being integrated into journalistic work. Quoting the Israeli intellectual Yuval Noah Harari, Tsekova reminded the audience in the conference hall that AI is the first tool that can make decisions by itself.

The session featured Venetia Menzies (The Times), Ademola Bello (The New York Times), Alessandro Alviani (Ippen Digital), and Christopher Brennan (Overtone), who shared their experiences of leveraging AI for data journalism, editorial development, and product management.

Venetia Menzies discussed the impact of AI on disinformation and verification of information. She observed that spreading false information today is effortless, costs nothing, and is widespread. The language and appearance of this information often seem trustworthy, which intensifies the problem. Menzies also highlighted the issue of customized, individually-tailored false information. Those who create it understand our passions and interests very well, which allows them to craft content that manipulates us by aligning with our personal preferences, she explained. 

Menzies also clarified the distinction between unintentional misinformation and deliberate disinformation, noting that AI can sometimes be the source of incorrect information. On a positive note, she mentioned that AI has the potential to assist in validating information by identifying language patterns typical of propaganda, detecting bots, and spotting fabricated images created by AI. Among these tasks, she noted, exposing deepfakes is exceptionally difficult.

Ademola Bello, another participant in the first panel, gave an example of the practical use of AI in the newsroom by presenting a real investigation of visual depictions of skin conditions in a medical textbook. In this case, AI helped the journalists scale up the investigation, making it easier to approach a problem that initially seemed hard to quantify, and allowed them to visualize the phenomenon properly, making it clearer for the audience. Bello observed that machines, much like human beings, can exhibit biases, potentially leading to substantial issues.

Christopher Brennan observed that recommender systems have become the main method through which individuals discover articles. While current tools excel at capturing initial impressions, they frequently fail to retain readers, and outlets thus lose part of their audience. Despite appearing similar on the surface, articles can vary significantly in content and perspective, Brennan noted. He highlighted the distinct audiences that different genres of articles attract, suggesting that AI holds potential for enhancing journalism through more nuanced and effective content curation.
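
To make the idea of content-aware curation concrete, here is a minimal content-based recommendation sketch: it ranks articles by the cosine similarity of their TF-IDF vectors to the one a reader has just finished. The sample articles and the scikit-learn approach are purely illustrative assumptions, not a description of Overtone’s tools.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A handful of toy article texts (invented for illustration).
articles = [
    "Parliament debates new climate legislation and emission targets",
    "Star striker transferred for a record fee ahead of the new season",
    "EU ministers negotiate emission limits for heavy industry",
    "New AI model writes headlines for local newsrooms",
]

# Represent each article as a TF-IDF vector and compare them by cosine similarity.
vectors = TfidfVectorizer().fit_transform(articles)
just_read = 0  # index of the article the reader has just finished

scores = cosine_similarity(vectors[just_read], vectors).flatten()
ranked = scores.argsort()[::-1][1:]      # exclude the article itself
print([articles[i] for i in ranked][0])  # the other emissions story ranks first
```

Production recommenders combine many more signals (reading history, recency, collaborative filtering), but the underlying idea of matching readers with related content is the same.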

Alessandro Alviani discussed the role of generative AI in journalism at Ippen Digital, emphasizing the significance of transparency as outlined in his organization’s AI guidelines. According to these guidelines, they should never use photorealistic images that could deceive the audience into believing they are authentic. He said that large language models could help in testing the factuality of information, and he revealed that they have implemented a standardized framework to help editors double-check information. Since AI, unlike a journalist, can never conduct an interview, he stated that he is not afraid of these new technologies replacing journalists.

The Ethics and Research of Generative AI

In the second panel, Dr. Darina Sarelska (AUBG) guided a conversation on the ethical considerations and research surrounding generative AI. Panelists included Nikola Konstantinov (INSAIT), Maya Koleva (Commetric), and Laurence Dierickx (from the University of Bergen and the Free University of Brussels).

Laurence Dierickx noted that AI tools are not a recent arrival in newsrooms: journalists have been experimenting with them for a decade. The ethical challenges they pose, however, are significant and varied. A pressing concern is maintaining the human perspective in journalism (ensuring that people receive information that reflects human judgment and empathy). Large language models, such as ChatGPT, bring issues of transparency to the forefront, particularly from an ethical point of view. Presently, Dierickx noted, an estimated 50% of newsrooms worldwide are using technologies like ChatGPT.

Additionally, as Dierickx pointed out, the use of AI in crafting disinformation is a serious concern. The ethical considerations surrounding AI in journalism also extend to the working conditions of journalists, indicating that the implications are not only about compliance with guidelines but also about the broader impact on the profession. This is also prompting important questions about how journalists interact with copyrighted materials.

Nikola Konstantinov provided detailed information about the hazards associated with generative AI. He acknowledged the imperfection of current AI algorithms and highlighted the continuous efforts by researchers to enhance AI’s robustness, fairness, and respect for privacy. Konstantinov talked about the malicious use of AI and the ongoing discussions on regulations. He pointed out prevalent misconceptions about AI’s capabilities, clarifying: “It’s not like a human but a technology”. 

Konstantinov emphasized the importance of not allowing such an imperfect technology to dictate and control our lives. He also addressed the ongoing discussions about how AI can be regulated, warning of the danger that overregulation might prevail. In doing so, Konstantinov helped the conference audience understand that regulating AI is a process that requires careful consideration.

“Trust is the key component of reputation capital”, Maya Koleva said. She outlined how the growing number of online news outlets and social media platforms increases the volume of content, and explained how corporate communication strategies evolve within organizations to meet these demands. Koleva pointed to the need for predictive analytics and automation to stay ahead of these challenges. She also raised concerns about AI’s potential to personalize marketing to an even higher degree, signaling a possible shift toward even more tailored advertising in the future.

During the panel questions, Prof. Maria Neykova from Sofia University’s Faculty of Journalism asked the panelists about the absence of a European counterpart to dominant social media platforms like Facebook and Twitter (X). Responding, Koleva acknowledged earlier efforts to create globally popular European social media and applications, citing Skype’s creation and its subsequent acquisition by Microsoft as a notable example. However, she redirected the question toward the invasive nature of the current social platforms: the more pertinent problem, Koleva said, is why the currently popular social media platforms are invasive to such an enormous degree.

Innovations in the Newsroom

The final panel, moderated by Ivan Georgiev (bTV), looked at the opportunities arising in the information era. In his introduction, Georgiev reminded the conference attendees that trust is the biggest capital a journalist could ever have. 

Kristina Stoitsova (Financial Times), Ville Juutilainen (Yle News Labs), Philip Hatcher-Moore (BBC News Labs), and Rune Ytreberg (iTromsø) discussed the transformative innovations reshaping global and local newsrooms today.

Philip Hatcher-Moore highlighted the evolving synergy between journalism and technology, emphasizing the BBC’s use of automated image cropping and its work on building secure platforms tailored for experimentation and innovation.

Hatcher-Moore also discussed the potential of AI in offering diverse storytelling methods, including the use of ChatGPT for summarizing complex subjects. Noting that the BBC works in 43 different languages, he revealed they are developing a multilingual article tracker that could surface original content published in other languages.

Hatcher-Moore also stated that trust in news is declining, and he acknowledged the significant shifts currently underway in how journalism is produced and consumed.

Kristina Stoitsova shed light on the evolution of journalism towards sustainability. She reminded the audience of the transition journalists have made from traditional print media to digital formats, and she also emphasized the innovative use of artificial intelligence at the Financial Times.

Furthermore, Stoitsova noted that the impact of AI in journalism extends beyond content creation to various marketing-related applications. She pointed out that the Financial Times uses a suite of AI tools primarily for audience segmentation, which allows the outlet to target readers with specific marketing campaigns. This approach often culminates in readers signing up for a one-month trial, after which they may subscribe for a longer period. AI thus helps the globally renowned outlet segment its audience precisely and tailor marketing campaigns to engage readers more effectively.

Rune Ytreberg highlighted that within his organization, journalists leverage artificial intelligence to streamline repetitive daily tasks, which are efficiently managed using AI technologies. Additionally, the organization employs advanced language models, as well as a virtual assistant similar to Alexa named Layla, to further assist in its day-to-day operations.

Ville Juutilainen, with an exceptional sense of humor, provided insights into the practices of his relatively small organization, which is keen on organizing data into structured forms. He revealed that whenever he has a story assignment, he first begins with a thorough search on Google to ensure the accuracy of his work.

Juutilainen maintains an optimistic view regarding efficiency within the newsroom. When the moderator Ivan Georgiev asked him what kind of young journalist he would hire, offering him several options to choose from, Juutilainen ignored the options and answered without hesitation: “Someone who is curious about things”.

This pioneering event succeeded in setting the tone for the important role of AI in journalism today and in the future. Held in the Bulgarian capital of Sofia, an enchanting, millennia-old city shaped by various civilizations and epochs, the “New Horizons in Journalism” conference laid the groundwork for a future where AI and human intelligence intertwine to redefine journalism for the better. But it also prompted professionals to think deeply about the current and potential risks of implementing AI in journalism and how these risks could be prevented, significantly reduced, or at least mitigated.

The agenda of the conference New Horizons in Journalism – Between Human and Artificial Intelligence was also enriched with three complementary workshops, dedicated to ChatGPT for news, the hidden world of large language models, and vera.ai, a valuable instrument for journalists and fact-checkers. All of them were held on the day after the conference.

Photos: Zdravko Yonchev

More Details about the Lecturers

KENNETH CUKIER (Deputy Executive Editor at The Economist)

Kenneth Cukier is the Deputy Executive Editor and “Babbage” podcast host at The Economist. He is also a co-author of “Big Data” – a New York Times bestseller in over 20 languages. Formerly with Wall Street Journal Asia and the International Herald Tribune, he’s also been a Harvard research fellow. Cukier is a Council on Foreign Relations member, serves on the Chatham House board, and is an associate at Oxford’s Saïd Business School. His new book “Framers” explores mental models and AI.

ABBY BERTICS (Science Correspondent at The Economist) 

Abby Bertics is a science correspondent at The Economist, covering AI and tech since 2022. With a background as a professional volleyball player, Fulbright researcher, and Google engineer, she holds advanced degrees in computer science from the Massachusetts Institute of Technology. 

 

MARINA TSEKOVA (Journalist and TV host at NOVA)

Marina Tsekova is a famous Bulgarian TV journalist and “Wake Up” host on NOVA, with a decade in reporting and producing. Her prior experience includes media outlets such as Deutsche Welle, BNR, and L’Europeo, with her start at BNT’s “Panorama”. She has degrees from FU Berlin and is a World Press Institute fellow in the United States, with multiple journalism awards.

CHRISTOPHER BRENNAN (Co-founder and Chief Product Officer at Overtone)

Christopher transitioned from international reporting for outlets like the BBC and France 24 to tech development with Overtone. He now leads the creation of language models for news data and tools for its use in newsrooms and analysis.

 

DR. LAURENCE DIERICKX (Researcher at the University of Bergen and Lecturer at the Free University of Brussels)

Dr. Laurence Dierickx researches fact-checking tech at the University of Bergen and teaches journalism at the Free University of Brussels. With a background in journalism and a Master’s in Science and Information Communication, her Ph.D. thesis examines news automation. She consults and trains in digital journalism in Belgium and Africa.

DR. NIKOLA KONSTANTINOV (Lecturer at the Institute of Computer Science, Artificial Intelligence, and Technology at Sofia University)

Dr. Nikola Konstantinov, heading the machine learning group at Sofia University, specializes in reliable AI, emphasizing mathematical assurances for algorithms. He was a postdoc at Zurich’s AI Center, mentored by Profs. Vechev and Yang, after a Ph.D. under Prof. Lampert at the Institute of Science and Technology, Austria.

DR. MAYA KOLEVA (Director of Research and Analysis at Commetric)

With more than 15 years in media research, Dr. Koleva leads research at Commetric and has served on the board of the International Association for Measurement and Evaluation of Communication for five years. She has an editing background, holds a master’s in public policy from Central European University, and a PhD from Sofia University. Maya employs advanced AI and analytics to address complex PR and communications research queries.

PHILIP HATCHER-MOORE (Journalist at BBC News Labs)

Mr. Hatcher-Moore, a journalist with a computer science background, joined BBC News Labs in 2022 after freelancing globally in reporting and photography, including a five-year stay in Kenya. Previously, he researched knowledge management at Sheffield University and worked in the French tech industry. At BBC News Labs, he’s working on innovations to enhance newsroom efficiency and audience engagement.

ALESSANDRO ALVIANI (Product Manager for Natural Language Processing at Ippen Digital)

Alessandro Alviani manages Natural Language Processing products at Ippen Digital, leading a team on AI editorial tools. As a 2022 JournalismAI fellow, he developed a tool with The Times to track manipulated news. Formerly, he coordinated at Microsoft News Hub and reported for the Italian newspaper La Stampa in Germany, focusing on the topic of AI in journalism since 2020.

VILLE JUUTILAINEN (Head of Data-Driven Research Department at Yle News Labs)

Ville Juutilainen leads a team that works with databases and computational methods used for journalistic investigations at the Finnish Broadcasting Company (Yle).

 

VENETIA MENZIES (Journalist at The Times and Sunday Times)

Venetia Menzies is a senior journalist focusing on data work at The Times and Sunday Times, specializing in visual storytelling. She was a finalist for the Amnesty Media Awards with her report on domestic servitude in the UK. Last year she developed a prototype tool for detecting journalistic materials that have characteristics close to propaganda and are predominantly disseminated by state media.

RUNE YTREBERG (Data Editor for Data Journalism Lab at iTromsø)

Rune Ytreberg is a data editor at iTromsø’s Data Journalism Lab, creating AI-driven investigative content and tools for Polaris Media. A former award-winning reporter at NRK’s “Brennpunkt” and a pioneer of Dagens Næringsliv’s data journalism unit, Rune also teaches data-centric investigative journalism at Norwegian universities and advises Polaris Media on AI.

ADEMOLA BELLO (Editor at The New York Times)

Ademola Bello is an editor focused on newsroom development and support at The New York Times in London, training staff and fostering newsroom-product team collaboration. Previously a data journalist with The Times/ The Sunday Times and a UK public data specialist, he contributed to a JournalismAI Fellowship in 2022, developing tools to trace manipulated media.

IVAN GEORGIEV (Journalist at bTV)

Ivan Georgiev boasts an impressive tenure of eighteen years as a distinguished news anchor and reporter at bTV, where he has covered pivotal international and regional events. His commitment to journalism was honored with the prestigious World Press Institute Fellowship in 2012. In addition, he has created a series of insightful TV documentaries that delve deeply into their subjects.

DR. DARINA SARELSKA (Lecturer in Journalism and Communications at the American University in Bulgaria)

Dr. Darina Sarelska teaches journalism and communications at the American University in Bulgaria, focusing on digital storytelling, media literacy, and mobile journalism. With a Ph.D. from the University of Tennessee and teaching experience in the US, she has a 20-year background in Bulgarian TV as a producer for bTV and news director for NOVA. Her research explores media literacy, misinformation, journalistic ethics, and free speech.

KRISTINA STOITSOVA (Head of Data Science at Financial Times)

Kristina Stoitsova is a tech expert with 20+ years in product development and data. She has a computer science background, an e-business master’s, and an MBA from Vienna University. Recently, she’s been pivotal in developing the Financial Times‘ subscription model and now heads their Data Science team.

 

DAVID DJAMBAZOV (Senior Data Scientist at Financial Times)

David Djambazov is a Senior Data Scientist at the FT. He is a graduate of the California Institute of Technology (Caltech) and UC Berkeley, with degrees in Physics and Data Science. His career has taken him from the trading desks of Wall Street, as a bond trader, through the streets of New York, as a film production manager, to the Donbas region of Ukraine, as a journalist and filmmaker. Since embarking on a career in data science, he has focused on NLP, remote sensing, and computer vision machine learning projects.

VALENTIN PORCELLINI (Software Engineer, Agence France-Presse Medialab)

Valentin Porcellini is a software engineer at AFP Medialab, developing and maintaining fact-checking tools like the InVID-WeVerify plugin and contributing to the EU-funded vera.ai project.

 

 

 
