When Western artificial intelligence becomes a vector of Russian disinformation
Generative artificial intelligences, such as OpenAI's ChatGPT and Mistral AI's Le Chat, are increasingly present in our daily lives. Although they are meant to help us by answering our questions, doubts persist about the accuracy of the information they relay. A NewsGuard report highlights that some of these AIs have been infiltrated by pro-Russian narratives, amplifying disinformation on the international scene. This phenomenon raises questions about the influence of AI and the mechanisms by which Russian propaganda infiltrates these technologies.

NewsGuard analyzed several artificial intelligence systems, including ChatGPT and models from Google and Meta, and found that they were capable of relaying fake news originating from disinformation networks. For example, some AIs repeated rumors about the Ukrainian president that contradicted verifiable facts. This highlights the risks posed by the algorithms that shape how information is found on the internet.

The Disinformation Mechanisms of AI

These artificial intelligence systems rely on massive amounts of data available on the internet. When a piece of information is repeated frequently across multiple sites, the algorithms treat it as more credible. Disinformation campaigns orchestrated by actors such as Russia can therefore exploit AI to shape public perception in alarming ways.
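To make the repetition mechanism concrete, here is a minimal, hypothetical sketch in Python. It does not represent how ChatGPT or any named system actually ranks sources; it only illustrates why a naive frequency-based credibility signal can be gamed by a coordinated network of sites. All domain names and claims below are invented.

```python
def naive_credibility(claim: str, documents: list[dict]) -> int:
    """Toy heuristic: count how many distinct domains repeat a claim verbatim.

    An assumption for illustration only, not the ranking logic of any real AI
    system. It shows why mass republication can make a false story look
    better-supported than its correction.
    """
    domains = {doc["domain"] for doc in documents if claim in doc["text"]}
    return len(domains)


# Invented corpus: a coordinated network republishes one false claim on 200
# domains, while the correction appears on only two outlets.
corpus = [
    {"domain": f"network-site-{i}.example", "text": "false claim about a leader"}
    for i in range(200)
] + [
    {"domain": "factcheck.example", "text": "the claim about a leader is false"},
    {"domain": "news.example", "text": "the claim about a leader is false"},
]

print(naive_credibility("false claim about a leader", corpus))         # 200
print(naive_credibility("the claim about a leader is false", corpus))  # 2
```

Under this toy signal, the coordinated falsehood looks a hundred times "better supported" than the correction, which is the dynamic the NewsGuard report describes.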
The tactics used by propagandists

Disinformation actors rely on a vast network of sites operating under various names, which disseminate fake news in a coordinated manner. The "Pravda" network, for example, brings together hundreds of domains, each feeding biased stories in multiple languages. These stories are often relayed by influential figures, amplifying their reach while manipulating search engine algorithms and AI.

Impact on platforms and their content

Platforms like Facebook are also affected by this dynamic.

Companies developing artificial intelligence, such as Google AI, must assume greater responsibility for verifying information. As the technology evolves, implementing safeguards becomes essential to counter the spread of Russian propaganda via digital tools.
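As one hedged illustration of what such a safeguard might look like, the Python sketch below flags stories that appear near-verbatim across an unusually large number of distinct domains so they can be reviewed or downweighted. The threshold, the normalization, and the function name are assumptions made for this example; this is not a documented pipeline of Google AI or any other vendor.

```python
import hashlib
from collections import defaultdict


def flag_coordinated_content(documents: list[dict], min_domains: int = 20) -> dict:
    """Group documents by a fingerprint of their normalized text and flag any
    story that appears near-verbatim on at least `min_domains` distinct domains.

    Illustrative sketch only: both the duplication signal and the threshold
    are assumptions, not a real vendor safeguard.
    """
    groups: dict[str, set] = defaultdict(set)
    for doc in documents:
        normalized = " ".join(doc["text"].lower().split())
        fingerprint = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
        groups[fingerprint].add(doc["domain"])
    # Keep only fingerprints shared by suspiciously many domains.
    return {fp: domains for fp, domains in groups.items() if len(domains) >= min_domains}
```

Documents matching a flagged fingerprint could then be excluded or downweighted before being used for training data or for grounding answers.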
Summary table of the impacts of AI-mediated disinformation

Type of disinformation | Source | Impact on public opinion
Fake political news | Pravda network | Destabilization of opinions about political leaders
Fake health news | Disinformation websites | Creation of panic or distrust of public health measures
Biased conflict data | Russian propaganda articles | Influence on perceptions of international conflicts
For further information, it is worth consulting analyses such as this report on the geopolitical implications of AI. Research on disinformation spread through platforms like Google and OpenAI is also illuminating. The ramifications of these technologies for our societies are considerable, and it is becoming crucial to remain vigilant in the face of the rapid evolution of both Russian disinformation and the technology itself.
