Crypto bot network using ChatGPT discovered on X

X’s Bot Problem: Elon Musk-Owned Social Media Platform Battling Fake Accounts

Social media platforms have long been plagued by the issue of fake accounts and bots, and X (formerly known as Twitter) is no exception. In fact, the prevalence of fake accounts on X was one of the main concerns for Elon Musk when he considered acquiring the company. This problem has recently been brought into focus by new research conducted by the Observatory on Social Media at Indiana University, Bloomington.

Led by Professor Filippo Menczer and student Kai-Cheng Yang, the research team uncovered a botnet named Fox8 that had infiltrated X’s network. This botnet consisted of at least 1,140 fake Twitter accounts, which were discovered in May of this year. These accounts were constantly posting tweets that directed users to spammy online “news” websites, which simply scraped content from legitimate sources.

Interestingly, the majority of posts made by these bot accounts were related to cryptocurrency. They often included hashtags like #bitcoin, #crypto, and #web3, and would frequently engage with popular crypto users on the platform. By using these tactics, the botnet aimed to reach as many legitimate users as possible, hoping they would click on the URLs included in the tweets.

What is particularly intriguing about this botnet is that it utilized artificial intelligence (AI) technology to automate its content creation process. Specifically, it leveraged an AI language model called ChatGPT to generate and publish tweets. The use of AI not only enabled the botnet to produce a massive amount of content, but it also proved to be its ultimate downfall.

In July, after the research was published, X suspended the accounts associated with the botnet. Professor Menczer’s research group had previously informed X of such botnets, but they stopped doing so after Elon Musk’s acquisition of the platform, as they felt the company was no longer responsive to their reports.

The researchers made a telling observation while studying the bot accounts: many of the tweets began with the phrase "as an AI language model." ChatGPT frequently opens its responses with this phrase when it declines a request or adds a disclaimer, and the bot operators were posting these responses verbatim without filtering them out. The repeated phrase became a fingerprint that ultimately helped identify the botnet.
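The self-revealing-phrase heuristic described above can be sketched in a few lines. This is an illustrative example only, not the researchers' actual pipeline; the phrase list and sample tweet data are assumptions for demonstration.

```python
# Sketch of the detection heuristic: flag posts containing tell-tale
# ChatGPT boilerplate that bot operators forgot to filter out.
# The phrase list and tweet records below are illustrative, not from the study.

TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
]

def flag_suspicious(tweets):
    """Return the tweets whose text contains a known self-revealing phrase."""
    flagged = []
    for tweet in tweets:
        text = tweet["text"].lower()
        if any(phrase in text for phrase in TELLTALE_PHRASES):
            flagged.append(tweet)
    return flagged

sample = [
    {"user": "bot123", "text": "As an AI language model, I cannot browse the internet."},
    {"user": "human42", "text": "Just bought some #bitcoin!"},
]
print(flag_suspicious(sample))  # only the first tweet is flagged
```

In practice a heuristic this simple only catches the sloppiest operators, which is exactly the point the researchers make: careful filtering of the model's output would have hidden the botnet.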

Had it not been for this “sloppy” mistake, the botnet may have continued operating undetected. It serves as a reminder that while AI tools like ChatGPT can be powerful for automated content creation, they can also inadvertently reveal the presence of nefarious activities.

This study sheds light on the challenges social media platforms face in combating fake accounts and bot networks. It showcases the need for ongoing efforts and collaboration between researchers, platform owners, and AI developers to stay one step ahead of those who seek to exploit these platforms for malicious purposes.

In conclusion, while X has made efforts to address its fake account problem, this study highlights the ongoing battle that social media platforms face in maintaining the integrity of their networks. The proliferation of bots and AI-generated content remains a persistent issue, but with continued research and collaboration, steps can be taken to create a safer and more trustworthy online environment.