Using ChatGPT to Detect Propaganda and Disinformation in the Context of the Russian-Ukrainian War

Author(s)

  • Svitlana Borysivna Fiialka, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Ukraine, http://orcid.org/0000-0002-1855-7574
  • Vadym Oleksiiovych Kamenchuk, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”, Ukraine, https://orcid.org/0009-0004-4368-4366

DOI:

https://doi.org/10.20535/2077-7264.1(87).2025.318549

Keywords:

artificial intelligence, ChatGPT, propaganda, russian-Ukrainian war, manipulation, emotional markers, critical thinking

Abstract

The article examines the potential of using ChatGPT as a tool for detecting, analyzing, and countering propaganda and disinformation in the context of the russia-Ukraine war. The authors emphasize that the modern information space has become a critical battleground where manipulations, emotionally charged messages, distorted data, and propagandist techniques are widely employed to influence public opinion. The research is based on the analysis of over 1,500 messages from Telegram channels with diverse topics and audiences, including a public figure’s personal channel, local news channels, a national-level news channel, and a channel focusing on military events. Of these, 154 messages were selected that contained manipulative elements, including emotional appeals (fear, anger, compassion, hope), distortion of facts, selective presentation of information, and appeals to authority. The AI model identifies emotional triggers, reveals biased generalizations, and highlights logical fallacies often used to manipulate audiences. The results demonstrate that using ChatGPT enhances critical thinking by providing users with recommendations for content analysis. At the same time, the authors highlight the risks of utilizing artificial intelligence to generate propaganda materials, underscoring the importance of ethical use of these technologies. The conclusions emphasize the prospects for integrating ChatGPT into Ukraine’s information security systems to enable the early detection of disinformation campaigns. Additionally, the development of media literacy training programs is highlighted as a crucial step. Such initiatives will strengthen civil society’s resilience to manipulative influences, raising the level of media literacy and fostering a critical approach to media consumption. Leveraging localized datasets for further model training could significantly enhance its efficiency in detecting propaganda and manipulation.
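The two-stage workflow the abstract describes — pre-selecting messages that contain manipulative elements, then passing them to ChatGPT for deeper analysis — can be sketched in code. This is an illustrative sketch only, not the authors' implementation: the marker lexicon and the prompt wording below are hypothetical examples, and the real study worked with Ukrainian-language Telegram messages.

```python
# Illustrative sketch (not the study's code): a keyword pre-filter that
# flags candidate manipulative messages by emotional-marker category
# before they are sent to an LLM such as ChatGPT for full analysis.
# The marker lists are hypothetical stand-ins for a real lexicon.
EMOTIONAL_MARKERS = {
    "fear": ["panic", "threat", "catastrophe"],
    "anger": ["traitors", "outrage", "betrayal"],
    "compassion": ["victims", "suffering", "children"],
    "hope": ["victory", "liberation", "rebuild"],
}

def flag_markers(text: str) -> dict:
    """Return the emotional-marker categories found in a message."""
    lowered = text.lower()
    hits = {}
    for category, words in EMOTIONAL_MARKERS.items():
        found = [w for w in words if w in lowered]
        if found:
            hits[category] = found
    return hits

def build_prompt(text: str) -> str:
    """Compose an analysis prompt for the LLM (wording is hypothetical)."""
    return (
        "Analyze the following message for propaganda techniques "
        "(emotional appeals, distortion of facts, selective presentation "
        "of information, appeals to authority) and explain each finding:\n\n"
        + text
    )

if __name__ == "__main__":
    msg = "Panic in the city: the traitors have abandoned our children!"
    print(flag_markers(msg))
```

Messages that trigger one or more categories would then be sent to the model with `build_prompt`; the keyword stage merely narrows 1,500 messages down to a candidate set, mirroring the manual selection of 154 messages described above.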

Author Biographies

Svitlana Borysivna Fiialka, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Candidate of Sciences in Social Communications, Associate Professor, Department of Publishing and Editing

Vadym Oleksiiovych Kamenchuk, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”

Postgraduate student

References

Voropaieva, T. S., & Aver’ianova, N. M. (2024). Shtuchnyi intelekt v systemi informatsiinoi bezpeky Ukrainy v umovakh rosiisko-ukrainskoi viiny [Artificial Intelligence in the Information Security System of Ukraine During the Russo-Ukrainian War]. Scientific Forum: Theory and Practice of Research. Valencia, Kingdom of Spain: Collection of Scientific Papers ‘SCIENTIA’, 62–67. Retrieved on the 20th of December 2024 from http://previous.scientia.report/index.php/archive/article/view/2026/2042 [in Ukrainian].

Jones, D. G. (2024). Detecting Propaganda in News Articles Using Large Language Models. Eng OA, 2(1), 01–12. Retrieved on the 20th of December 2024 from https://www.opastpublishers.com/open-access-articles/detecting-propaganda-in-news-articles-using-large-language-models.pdf [in English].

Li, L., Fan, L., Atreja, S., & Hemphill, L. (2023). ‘HOT’ ChatGPT: The promise of ChatGPT in detecting and discriminating hateful, offensive, and toxic comments on social media. arXiv:2304.10619 [cs.CL]. http://doi.org/10.48550/arXiv.2304.10619 [in English].

Melnyk, T. (2023, May 31). SHI proty rosiiskykh IPSO. Ukrainskyi startap Osavul navchyv neiromerezhi voiuvaty z propahandoiu. Yak prodaty taku tekhnolohiiu [AI against Russian IPSO. Ukrainian startup Osavul taught neural networks to fight propaganda. How to sell such technology]. Forbes Ukraine. Retrieved on the 20th of December 2024 from http://forbes.ua/innovations/ai-proti-rosiyskikh-ipso-ukrainskiy-startap-osavul-navchiv-neyromerezhi-voyuvati-z-propagandoyu-yak-prodati-taku-tekhnologiyu-30052023-13928 [in Ukrainian].

Costello, T., Pennycook, G., & Rand, D. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714). http://doi.org/10.1126/science.adq1814 [in English].

Lorian, R. (2023, January 30). Shtuchnyi intelekt yak superinstrument dlia dezinformatsii ta propahandy [Artificial Intelligence as a Super Tool for Disinformation and Propaganda]. Retrieved on the 20th of December 2024 from http://www.oporaua.org/polit_ad/shtuchnii-intelekt-iaksuperinstrument-dlia-dezinformatsiyi-ta-propagandi-24507 [in Ukrainian].

Stokel-Walker, C. (2023, March 27). We Spoke To The Guy Who Created The Viral AI Image Of The Pope That Fooled The World. BuzzFeedNews. Retrieved on the 20th of December 2024 from http://www.buzzfeednews.com/article/chrisstokelwalker/pope-puffy-jacket-ai-midjourney-image-creator-interview [in English].

Jose, J., & Greenstadt, R. (2024). Large language models fall short in detecting propaganda. Proc. of the 18th International AAAI Conference on Web and Social Media, Workshop: CySoc 2024: 5th International Workshop on Cyber Social Threats. http://doi.org/10.36190/2024.06 [in English].

Klepper, D. (2023, January 24). It turns out that ChatGPT is really good at creating online propaganda: ‘I think what’s clear is that in the wrong hands there’s going to be a lot of trouble’. Fortune. Retrieved on the 20th of December 2024 from http://fortune.com/2023/01/24/chatgpt-open-ai-online-propaganda/ [in English].

De Vynck, G. (2024, May 30). OpenAI finds Russian and Chinese groups used its tech for propaganda campaigns. The Washington Post. Retrieved on the 20th of December 2024 from http://www.washingtonpost.com/technology/2024/05/30/openai-disinfo-influence-operations-china-russia/ [in English].

Heppell, F., Bakir, M., & Bontcheva, K. (2024). Lying blindly: Bypassing ChatGPT’s safeguards to generate hard-to-detect disinformation claims. arXiv. http://doi.org/10.48550/arXiv.2402.08467 [in English].

The criminal use of ChatGPT — A cautionary tale about large language models. (2023, March 27). Europol. Retrieved on the 20th of December 2024 from http://www.europol.europa.eu/media-press/newsroom/news/criminal-use-of-chatgpt-cautionary-tale-about-large-language-models [in English].

Krytychne myslennia dlia osvitian [Critical Thinking for Educators]. (2020). Prometheus. Retrieved on the 20th of December 2024 from https://prometheus.org.ua/prometheus-free/krytychne-myslennya-dlya-osvityan/ [in Ukrainian].

Da San Martino, G., Cresci, S., Barrón-Cedeño, A., Yu, S., Di Pietro, R., & Nakov, P. (2020). A survey on computational propaganda detection. arXiv. http://doi.org/10.48550/arXiv.2007.08024 [in English].


Published

2025-04-15

How to Cite

Фіялка, С. Б., & Каменчук, В. О. (2025). Використання ChatGPT для виявлення пропаганди та дезінформації в умовах російсько-української війни. Технологія і техніка друкарства, 1(87), 142–152. https://doi.org/10.20535/2077-7264.1(87).2025.318549

Issue

Section

Social Communications