Media councils are developing guidelines on AI and ethics

The shadow of Uruguayan developer Tammara Leites poses in front of a text generated by (digital Simon) thanks to artificial intelligence ahead of the (dSimon) performance at the Avignon fringe festival, in Avignon on July 14, 2022. (Photo by Clement MAHOUDEAU / AFP)


With the rapid advance of generative AI, newsrooms face new challenges and benchmarks in adhering to deontological standards. While there is no single ethical standard for the use of AI in journalism, several media councils have established guidelines, or even added dedicated chapters on the subject to their ethical codes.

The use of AI needs to be regulated so that it empowers journalists to be quicker, more efficient and more innovative, rather than replacing them. That is why media and press councils have introduced codes and guidelines on AI.

“The Dutch-speaking press council in Belgium was the first press council in Europe to include a guideline on the use of artificial intelligence in its Code. The guideline contains two essential parts: the principle that the editorial office is always responsible for what is published, and the importance of transparency towards the public about the use of AI,” says Pieter Knapen, former Secretary General of the Dutch-speaking press council in Belgium (Raad voor de Journalistiek).

Other media councils have also prioritised transparency and AI in their ethical codes. “Impress developed a Code on AI and transparency as part of our Code Review in 2022-23,” explained Lexie Kirkconnell-Kawana, CEO of Impress UK. “We are also in the process of developing specific guidance.”

For some media councils, there has been no need to create a new guideline at all. The Managing Director of the German Press Council (Deutscher Presserat) explained that one of its guidelines, on symbolic photographs and illustrations, was applied to AI-generated pictures in some recent cases.

“The Council only makes such statements if there is new territory for which our Code does not provide sufficient support,” said Frits Van Exter, Chairman of the Dutch Raad voor de Journalistiek, in a blog post. “Journalistic responsibility is paramount, and then it does not matter whether a publication is the work of the editor-in-chief, an intern or a chatbot. And whoever bears responsibility must, in our opinion, also be prepared to account to the public. The use of AI is therefore subject to the same principles as any other journalistic conduct.”

Many other media and press councils have new clauses or guidelines on the use of AI in newsrooms in the pipeline.

Media freedom organisations led by Nobel Peace Prize laureate Maria Ressa developed the Paris Charter on AI and Journalism, a charter initiated by Reporters Without Borders (RSF) that defines ethics and principles for journalists and newsrooms to apply in their work with AI. One regret, however, is the Charter's absence of any reference to journalistic self-regulation (press councils, ethics councils, etc.), particularly in the face of new forms of disinformation. “It is a pity that the Charter does not explicitly mention the role of these bodies in assessing the ethical nature of the use of AI in the media,” said European Federation of Journalists (EFJ) General Secretary Ricardo Gutiérrez.


  • Raad voor de Journalistiek (BE) guideline on the use of AI here
  • Impress UK code on AI here
  • Deutscher Presserat Press Code here
  • Paris Charter on AI and Journalism here