Generative AI Technologies as a Tool for Counter-Narratives to Russian Propaganda
DOI: https://doi.org/10.31861/mediaforum.2025.16.48-58

Keywords: generative AI, Ukraine, EU, political communication, international relations, disinformation, russian propaganda, counter-narrative, LLM, deepfake

Abstract
This article explores the potential of generative artificial intelligence (AI) technologies as a tool for constructing counter-narratives against russian propaganda in the context of the full-scale war in Ukraine. It examines methodological approaches to AI in the field of international communication, identifies key characteristics of the information warfare environment – such as deepfakes, large language model (LLM) poisoning, and automated disinformation – and focuses on how these technologies are exploited by hostile actors. Special attention is given to Ukrainian governmental and civil society initiatives that utilize generative AI to produce fact-based, emotionally resonant, and multimodal counter-narratives. The article discusses the ethical and legal boundaries of such use, including the risks of eroding public trust, the opacity of algorithmic outputs, and the challenge of distinguishing legitimate information defense from manipulation. Finally, it outlines future research perspectives regarding regulatory frameworks, strategic communication, and the development of algorithmic trust in democratic societies. The analysis is based on 20 domestic and international sources.
References
1. Avdieieva, T. (2024). Yak Rosiia vykorystovuie ShI dlia stvorennia feikiv, ta yak yikh rozpiznaty [How Russia uses AI to create fakes, and how to recognize them]. Hromadske radio. URL: https://hromadske.radio/news/2024/09/09/yak-rosiia-vykorystovuie-shi-dlia-stvorennia-feykiv-ta-iak-ikh-rozpiznaty (in Ukrainian).
2. Hembik, O. (2025). Dezinformatsiia y ShI: yak rosiiska propahanda distalas chatbotiv [Disinformation and AI: how Russian propaganda reached chatbots]. URL: https://www.sestry.eu/statti/manipulyaciyi-u-merezhi-yak-rosiyska-dezinformaciya-distalasya-chatbotiv (in Ukrainian).
3. Kabinet Ministriv Ukrainy (2025). Bezpechnyi ShI dlia milioniv ukraintsiv [Safe AI for millions of Ukrainians]. URL: https://www.kmu.gov.ua/news/bezpechnyi-shi-dlia-milioniv-ukraintsiv-ukraina-pidpysala-ramkovu-konventsiiu-pro-shtuchnyi-intelekt-ta-prava-liudyny (in Ukrainian).
4. Lytvynov, V. (2024). Yak Ukraina vykorystovuie ShI u viini z Rosiieiu [How Ukraine uses AI in the war with Russia]. URL: https://texty.org.ua/fragments/112210/yak-ukrayina-vykorystovuye-shtuchnyj-intelekt-u-vijni-z-rosiyeyu-the-economist (in Ukrainian).
5. MediaMaker (2025). Yak rosiiska propahanda manipuliuie shtuchnym intelektom [How Russian propaganda manipulates artificial intelligence]. URL: https://mediamaker.me/shi-zhertva-dezinformacziyi-yak-rosijska-propaganda-navchylasya-manipulyuvaty-shtuchnym-intelektom-16206 (in Ukrainian).
6. Ornatskyi, A. (2024). Yak pratsiuie War of Words – novyi ShI-instrument dlia analizu rosiiskoi propahandy [How War of Words works: a new AI tool for analyzing Russian propaganda]. Detector Media. URL: https://detector.media/infospace/article/228955/2024-06-30-yak-pratsyuie-war-of-words-novyy-shi-instrument-dlya-analizu-rosiyskoi-propagandy (in Ukrainian).
7. RBK Ukraina (2024). Rosiia vykorystovuie shtuchnyi intelekt dlia dezinformatsii [Russia uses artificial intelligence for disinformation]. URL: https://www.rbc.ua/rus/news/rosiya-vikoristovue-shtuchniy-intelekt-dezinformatsiyi-1729204517.html (in Ukrainian).
8. Sydorskyi, V. (2023). Yak za dopomohoiu AI protydiiaty rosiiskii dezinformatsii [How to counter Russian disinformation with AI]. URL: https://dou.ua/forums/topic/44993 (in Ukrainian).
9. Ukrinform (2024). Rosiia vykorystovuie ShI-tekhnolohiiu OpenAI dlia poshyrennia propahandy proty Ukrainy [Russia uses OpenAI's AI technology to spread propaganda against Ukraine]. URL: https://www.ukrinform.ua/rubric-world/3869959-rosia-vikoristovue-sitehnologiu-openai-dla-posirenna-propagandi-proti-ukraini.html (in Ukrainian).
10. Brandt, J. (2023). Propaganda, foreign interference, and generative AI. Brookings Institution. URL: https://www.brookings.edu/articles/propaganda-foreign-interference-and-generative-ai
11. Ezzeddine, F., Luceri, L., Ayoub, O., Sbeity, I., Nogara, G., Ferrara, E., Giordano, S. (2022). Exposing Influence Campaigns in the Age of LLMs: A Behavioral-Based AI Approach to Detecting State-Sponsored Trolls. arXiv preprint. URL: https://arxiv.org/abs/2210.08786
12. Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., Sedova, K. (2023). Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations. arXiv preprint. URL: https://arxiv.org/abs/2301.04246
13. Kuznetsova, E., Makhortykh, M., Vziatysheva, V., Stolze, M., Baghumyan, A., Urman, A. (2023). In Generative AI We Trust: Can Chatbots Effectively Verify Political Information? arXiv preprint. URL: https://arxiv.org/abs/2312.13096
14. Majchrzak, A. (2023). Russian disinformation and the use of images generated by artificial intelligence (deepfake) in the first year of the invasion of Ukraine. Media Biznes Kultura, 1(14). URL: https://ejournals.eu/pliki_artykulu_czasopisma/pelny_tekst/01936778-4cef-7088-b5be-d63c9c99d02b/pobierz
15. Marushchak, A., Petrov, S., Khoperiya, A. (2025). Countering AI-powered disinformation through national regulation: learning from the case of Ukraine. Frontiers in Artificial Intelligence, 7:1474034. https://doi.org/10.3389/frai.2024.1474034
16. Menn, J., Zakrzewski, C. (2025). Russia seeds chatbots with lies. Washington Post. URL: https://www.washingtonpost.com/technology/2025/04/17/llm-poisoning-grooming-chatbots-russia
17. Sadeghi, M. S. (2025). AI and Data Voids: How Propaganda Exploits Gaps in Online Information. Lawfare. URL: https://www.lawfaremedia.org/article/ai-and-data-voids--how-propaganda-exploits-gaps-in-online-information
18. Sprenkamp, K., Jones, D. G., Zavolokina, L. (2023). Large Language Models for Propaganda Detection. arXiv preprint. URL: https://arxiv.org/abs/2310.06422
19. Wack, M., Ehrett, C., Linvill, D., Warren, P. (2025). Generative propaganda: Evidence of AI’s impact from a state-backed disinformation campaign. PNAS Nexus, 4(4). https://doi.org/10.1093/pnasnexus/pgaf083