This Disinformation Is Customized for You

The Rise of Targeted and Convincing Disinformation in the Age of Generative AI


In today’s digital era, it is widely acknowledged that generative AI will play a crucial role in the proliferation of disinformation on the internet. From deepfakes to fake news articles to bots, AI will not only generate more disinformation but also make it more convincing. However, what many people are just starting to grasp is how disinformation will become increasingly targeted and adept at engaging with individuals, potentially swaying their opinions.

When Russia attempted to influence the 2016 US presidential election through the now-defunct Internet Research Agency, the operation was run by humans who often lacked cultural fluency and proficiency in the English language, hindering their ability to relate to the target groups. With generative AI tools, those orchestrating disinformation campaigns will have the ability to finely tune their approach by profiling individuals and specific communities. These operatives can create content that appears authentic and relatable to the recipients, effectively targeting people with personalized disinformation based on the collected data. The rise of generative AI will also make it much easier to produce disinformation, resulting in a significant increase in the amount of misleading information circulating freely on the internet.

According to Kate Starbird, an associate professor at the University of Washington, generative AI lowers the financial barrier for creating tailored content for specific audiences. The narrative can be crafted to align with the values and beliefs of these audiences, resulting in a more strategic impact. Starbird adds that instead of writing a handful of articles a day, one article can be instantly adjusted to cater to twelve different audiences, with each adaptation taking just five minutes.

The profusion of content posted on social media and other platforms makes it effortless to collect data for building a disinformation campaign. Once operatives have successfully profiled different groups across a country, they can train a generative AI system to produce highly sophisticated content that manipulates those specific targets. As Renée DiResta of the Stanford Internet Observatory explains, the precision and relevance of disinformation campaigns will increase alongside this capacity for fine-tuning.

Professor Hany Farid of the University of California, Berkeley, suggests that this kind of customized disinformation will become ubiquitous. While bad actors may initially target people in groups during large-scale disinformation campaigns, they could also leverage generative AI to target individuals directly. For instance, they could feed a user’s tweet history into the AI and instruct it to generate engaging, tailored content for that person. According to Farid, such advancements are not speculative but imminent.

Purveyors of disinformation will experiment with various tactics until they find the most effective ones, and much of the impact of these campaigns will likely become apparent only after the fact. It is important to note that disinformation campaigns do not require a high success rate to achieve their objectives. As Farid points out, even if 99% of these campaigns fail, the remaining 1% can wreak significant havoc.

The 2016 election cycle demonstrated how recommendation algorithms, such as those utilized by Facebook, played a role in the radicalization of individuals and the spread of disinformation and conspiracy theories. As the 2024 US election approaches, Facebook’s own algorithm, powered by AI, will likely recommend AI-generated posts alongside content created entirely by humans. Thus, we have reached a point where AI will generate disinformation that will then be recommended to users by another AI.

Starbird stresses that we have been easily deceived by low-quality content in the past, but we are now entering a period where higher-quality disinformation and propaganda will become more prevalent. It will be significantly easier to produce content tailored to specific audiences than ever before, and we must be cognizant of this new reality.

Addressing this mounting problem does not come with a simple solution. Diresta emphasizes the importance of raising awareness about these potential threats and encouraging people to be cautious about the content they engage with. For example, individuals should scrutinize the credibility of a website or social media profile by considering factors such as its creation date. Furthermore, pressure should be placed on AI companies to implement safeguards that limit the production of disinformation overall.

Recent efforts by the Biden administration to collaborate with major AI companies, including OpenAI, Google, Amazon, Microsoft, and Meta, have resulted in a promising initiative. Under the agreement, these companies commit to specific guardrails for their AI tools, such as external testing and watermarking of AI-generated content. A collaborative group has also been established to develop safety standards for AI tools, and Congress is currently deliberating potential AI regulations.

Despite these initiatives, AI continues to advance rapidly, often outpacing efforts to regulate it effectively. Silicon Valley, notorious for frequently failing to deliver on promises of safe and tested products, must be held accountable. Moreover, not all players in this space may act responsibly, emphasizing the need for comprehensive and diligent efforts to address the issue.

Professor Farid underscores a recurring pattern: technological advancements cause harm before any effective intervention takes hold. This cycle, he argues, has only intensified now that such advancements launch into an ecosystem already saturated with mobile devices and social media.

In conclusion, the rise of generative AI has both positive and negative implications. While AI has the potential to greatly benefit society, its misuse in the form of targeted and convincing disinformation poses a significant threat. Awareness, careful consumption of content, and effective regulation of AI are crucial components of mitigating this increasingly prevalent issue. It is imperative to remain vigilant and critically assess the information presented to us in this new era of sophisticated disinformation.