In this format from the OeAD Center for Citizen Science, two experts each give an insight into the current state of research and practice. In 2025, the thematic arc ranged from an overview of trends and tools for AI in science communication (follow-up report) to texts and research (follow-up report), and now to images and visualizations.
Possible applications and skills
This time, Dr. Erwin Feyersinger kicked things off. He conducts research on audiovisual media, AI, and science communication at the University of Tübingen, including at the Center for Rhetorical Science Communication Research on Artificial Intelligence (RHET AI).
In his presentation, he pointed out that great care is needed when creating and using AI-generated images and videos, both in scientific publications and in formats aimed at audiences outside the scientific community, in order to maintain high standards of content quality and aesthetics. He emphasized that although the makers of such tools promise greater efficiency and a lighter workload, their usefulness must be critically examined in each individual case, and working with them does not necessarily mean less effort. Given the rapid developments in this field, communicators and researchers are encouraged to train their image analysis skills independently of individual tools so that they can better judge whether AI visualizations are suitable.
Ethical and epistemic challenges
The second lecture was given by Prof. Dr. Elke Grittmann. She is a professor of media and society at the Institute of Journalism at Magdeburg-Stendal University of Applied Sciences and conducts research on visual communication with a focus on journalism and media, as well as on generative image AI.
She gave participants an insight into the ethical and epistemic challenges that arise when AI-generated images are used in science communication. Even the imagery used in the discourse about AI draws on stereotypical notions of technology and robots. A similar problem, however, exists across all subject areas: generative AI models are trained on data that does not represent a neutral picture of the world but contains, and thus reproduces, prejudices and stereotypes. Because science's high standards of truthfulness and accuracy also apply to visual media, critical scrutiny is always required. A current gap is that, while guidelines for the use of AI tools in university contexts are becoming more common, there are still hardly any ethical guidelines for the creation and use of AI-generated media.
Outlook
In 2026, the OeAD Center for Citizen Science will once again organize a series of training courses on topics related to science communication and citizen science. To make sure you don't miss any dates, subscribe to our newsletter.
Further links
During the discussion, participants and speakers were able to explore the challenges and opportunities in greater depth. The following resources were shared for further engagement with the topic.