(From left to right) Moderator Dylan Carter and panellists Ike Picone, Anja Wyrobek, and Dafydd ab Iago at the Press Club Brussels Europe. Photo: ©Heleen van Geest
By Jan Rehaag
As Artificial Intelligence continues to spread across industries, the technology is also transforming the media sector, placing journalism at a crossroads.
To better understand the pitfalls and potential of the technology, AEJ Belgium recently hosted a panel discussion titled “AI & Media: Risk or (R)evolution?” at the Press Club Brussels Europe, featuring VUB professor Ike Picone, Argus Media EU correspondent Dafydd ab Iago, European Parliament legal advisor Anja Wyrobek, and MEP Brando Benifei, who shared his views in a video message.
The media industry has undergone significant changes in the past few decades, with the transition from print to online journalism, and, more recently, the emergence of social media journalism, leaving many newspapers struggling to adapt.
Yet Artificial Intelligence may pose the biggest challenge the media sector has faced to date. The technology has changed the game, with implications that reach the very roots of democracy.
The problem
Anja Wyrobek (Right), legal advisor at the European Parliament. Photo: ©Heleen van Geest
“At the moment, what we are fighting most with is actually the amplification of disinformation,” said Anja Wyrobek, legal advisor at the European Parliament. “How can we then guarantee that the people who want to be informed get a legitimate resource and legitimate information?”
AI’s capability to generate text, video, and audio in a matter of seconds has made it an effective propaganda tool for malicious actors in the online space.
Wyrobek mentioned the example of the Ukraine war, where AI-generated propaganda is used as a powerful weapon on the virtual battlefield.
Dylan Carter, freelance investigative journalist and moderator of the debate, agreed: “It’s never been easier to flood a comment section with hateful comments. It’s never been easier to generate a fake article, and it’s never been easier to generate a fake image.”
EU AI Act: The solution?
To counter the surge of AI-generated content, not all of which is uploaded with malicious intent, the EU passed the AI Act in 2024. The legislation aims to “address the risks of AI and position Europe to play a leading role globally,” as stated on the European Commission’s website.
MEP Brando Benifei © European Union 2020 – Source: EP
MEP Brando Benifei, who was one of the architects of the EU’s AI Act, stressed the importance of the legislation to combat harmful content:
“We need to protect the digital environment, […] from the use of AI to promote disinformation and manipulation. We need to work together so that we can enact, for example, full transparency, making AI-generated content recognisable.”
Transparency is one of the key promises of the new European AI legislation, which requires companies behind AI tools to clearly label machine-generated content, such as deepfakes.
The transparency principle also addresses copyright issues surrounding large language models (LLMs). Modern AI systems require ever-growing datasets to improve their capabilities, which often leads companies to cut corners, for instance by violating copyright law, in the pursuit of the best AI model.
In March 2025, an investigation by The Atlantic revealed that Meta’s flagship model, Llama 3, had been trained on large amounts of pirated content.
And Meta isn’t the only company that has used copyrighted content to build its AI models.
OpenAI’s ChatGPT was trained on articles written by The New York Times, leading to the newspaper filing a lawsuit against OpenAI for copyright infringement in the United States.
But in the EU, training an AI model on undisclosed copyrighted content is not possible under the new AI Act, according to Wyrobek.
“If you are setting up an AI system [in the EU], your sources, everything that you have trained your AI system on, have to be transparent, and this is something that the authorities can also request from you, and you have to be able to be held accountable,” she explained. “So, in case you are going to train your AI on copyrighted work, on journalistic work, on authors, on any copyrighted work, then you must have the permission of the author, or you will not be in compliance with the regulation.”
AI-ducation
Ike Picone, Professor at VUB (Right). Photo: ©Aagje van Raemdonck
Education is emerging as another frontier for Artificial Intelligence. Ike Picone, a Professor at Vrije Universiteit Brussel (VUB), stated that he has observed a significant shift in the university’s attitude towards the use of AI.
“The educators of journalists are not prepared. We do our best, but it’s also quite a dramatic change. […] In the last two years, we shifted from a kind of ‘no, you cannot use AI because it’s close to plagiarism and we would not allow it’ to ‘under certain conditions, you can use AI, but then you have to be transparent about it and document it’, to the point that from next academic year on, we’re gonna probably leave it be.”
Picone pointed out that teachers themselves are “in the midst of discovering how to use AI” and that they, too, need to adapt to the new technology.
“We’re not that prepared yet, but I think now, at least we’re ready in terms of accepting that we need to shift gears,” he added.
AI = Aiding Investigations?
Source: Image generated by ChatGPT
On the flip side, Artificial Intelligence is already aiding journalists in their work today, with capabilities set to grow as the technology improves.
“In my job as an investigative journalist, AI turns me from a humble B-level mathematician to a superb wizard crunching through data at speed, finding sources and ideas that I never thought I could possibly find,” shared moderator Dylan Carter.
Dafydd ab Iago, EU correspondent for Argus Media, said he and his colleagues often use AI for transcription and translation. He also stressed that maintaining human control is imperative:
“You have to check all those facts, the name, the spelling, the quote, everything. I mean, but these are standard things that you should have learned to do as a journalist.”
Picone argued that AI today is “enhancing journalists” and making tasks faster and easier. But he also sees potential in employing AI systems to tailor content to news consumers:
“I think the next step will be to devise forms of use of AI that can really add value to your customers and, in the end, also to you and your organisation.”
Picone imagines an AI system that can sweep the archives of a newspaper for specific topics. He also thinks that AI could enable a new form of “conversational journalism”.
“When you get an answer back from the AI saying the war in Gaza, then you can just ask, I don’t know about the war in Gaza, could you please explain? And so you get personalisation, you get summaries, you get adapted language, and I think these are interesting ways in which we will see the AI add an extra layer of value to your audiences.”
As the AI revolution unfolds before our eyes, it is all the more crucial to monitor its evolution and take action when necessary.
“There was a time when we did not have a calculator, and math also did not die when it was introduced. Everything is adapting,” said Wyrobek.
“Now it’s our job to democratise this and to make it accessible, to make it understandable, and to hold those accountable if they are misusing it. And this is also the journalistic job, because here you can expose when it is being deployed in an unlawful way.”
This article was written with the help of Artificial Intelligence in transcription, image generation, and spell checking.