Scammers deployed a Crypto Botnet on X using ChatGPT.

The Rise of ChatGPT: From Revolutionizing Web Search to Crypto Scamming

ChatGPT, the popular and smooth-talking chatbot developed by OpenAI, has gained attention for its potential to revolutionize web search, streamline office tasks, and reshape education. However, the charismatic AI isn't limited to these positive applications. Recent research from Indiana University Bloomington has uncovered a botnet operating on X, the social network formerly known as Twitter, powered by ChatGPT. The botnet, dubbed Fox8, consisted of 1,140 accounts and used ChatGPT to generate posts designed to lure unsuspecting humans into clicking links that led to cryptocurrency websites.

Micah Musser, a researcher studying the potential for AI-driven disinformation, believes that the Fox8 botnet may just be the tip of the iceberg. Large language models and chatbots have gained immense popularity, making it likely that there are countless other botnets operating in a more sophisticated manner. The Fox8 botnet was large in scale but wasn’t particularly sophisticated in its use of ChatGPT. The researchers discovered it by searching for the phrase “As an AI language model …”—a common response used by ChatGPT for sensitive prompts. Their manual analysis allowed them to identify accounts operated by bots.
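The detection heuristic the researchers describe can be sketched in a few lines: scan each post's text for boilerplate phrases that ChatGPT emits when it refuses a prompt. The data structure, field names, and phrase list below are illustrative assumptions for this sketch, not the researchers' actual pipeline.

```python
# Sketch of the self-revealing-phrase heuristic described above.
# The post format and phrase list are hypothetical; the study searched
# X for ChatGPT's stock refusal text, then verified accounts manually.

SELF_REVEALING_PHRASES = [
    "as an ai language model",
    "i'm sorry, but i cannot",
]

def flag_suspect_posts(posts):
    """Return posts whose text contains a known chatbot boilerplate phrase."""
    flagged = []
    for post in posts:
        text = post["text"].lower()
        if any(phrase in text for phrase in SELF_REVEALING_PHRASES):
            flagged.append(post)
    return flagged

sample = [
    {"id": 1, "text": "Check out this new coin! moon soon"},
    {"id": 2, "text": "As an AI language model, I cannot promote cryptocurrency."},
]
print([p["id"] for p in flag_suspect_posts(sample)])  # → [2]
```

A phrase match is only a starting point: as the researchers note, it catches sloppy operators who paste ChatGPT's refusals verbatim, which is why the flagged accounts still had to be reviewed by hand.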

Despite its lack of sophistication, the botnet managed to post convincing messages promoting cryptocurrency sites. The ease with which OpenAI's artificial intelligence was put to work on scams suggests that more advanced, still-undetected botnets may already be running. Filippo Menczer, a professor at Indiana University Bloomington involved in the research, notes that any sophisticated bad actors would not make such sloppy mistakes. OpenAI has yet to respond to requests for comment on the botnet, though the company's usage policy prohibits using its AI models for scams or disinformation.

ChatGPT, along with other cutting-edge chatbots, relies on large language models to generate text based on prompts. With extensive training data obtained from various online sources and input from human testers, bots like ChatGPT can respond in remarkably sophisticated ways. However, they also have the potential to generate hateful messages, exhibit social biases, and disseminate fake information.

A well-configured ChatGPT-based botnet would be difficult to detect, more adept at deceiving users, and better at manipulating social media algorithms that prioritize content. Menczer explains that this type of botnet tricks both the platform and its users. Social media algorithms prioritize posts with high engagement, even if that engagement comes from other bots. Governments seeking to wage disinformation campaigns are likely already developing or utilizing similar tools, creating a formidable challenge for social media platforms and users.

Researchers have long been concerned about the potential for large language models like ChatGPT to be misused for disinformation purposes. OpenAI had previously delayed the release of a predecessor to ChatGPT due to similar concerns. However, concrete examples of large language models being extensively misused have been scarce. Some political campaigns have already employed AI techniques, such as sharing deepfake videos to disparage opponents.

William Wang, a professor at the University of California, Santa Barbara, finds the research on criminal usage of ChatGPT to be fascinating. He believes that many spam webpages are now automatically generated, making it increasingly challenging for humans to distinguish between genuine and AI-generated content. As AI technology continues to advance, it will become even more difficult to combat this issue, creating a never-ending cat-and-mouse game between researchers and malicious actors.

X, the social network formerly known as Twitter, appears to be a fertile testing ground for such tools. Malicious bots have become more prevalent on the platform despite the efforts of Elon Musk, who once promised to eradicate them. Meanwhile, the steep price hike X has imposed on access to its API has made it harder for researchers to study the problem. Menczer's group used to alert Twitter to new findings, but it no longer does so because X's apparently understaffed team stopped responding.

Fortunately, after Menczer and his collaborator Kai-Cheng Yang published their paper in July, someone at X appears to have taken the Fox8 botnet down. That is a positive development, but it highlights the ongoing struggle to combat the misuse of AI-powered tools on social media. Improved safeguards and greater vigilance in identifying and mitigating the risks posed by chatbot-driven botnets have become paramount.

It’s crucial for developers, policymakers, and tech platforms to work together to ensure that AI models like ChatGPT are not exploited for malicious purposes. Striking a balance between enabling innovation and mitigating potential risks is essential to harness the full potential of AI technology for the betterment of society. Through collective efforts, we can continue to explore the positive applications of AI while diligently addressing the challenges it presents.