Letter sparked AI doomsday talk, but many signatories weren’t AI doomers.

The Myth of AI Apocalypse: Perspectives and Concerns

This past March, an open letter from the nonprofit Future of Life Institute made headlines by urging a six-month “pause” on the development of AI systems more powerful than GPT-4, citing the risks the technology posed to humanity. The letter, signed by nearly 35,000 AI researchers, technologists, entrepreneurs, and concerned citizens, raised important questions about the potential consequences of creating nonhuman minds that could eventually outsmart, outnumber, and replace us.

Six months down the line, with AI development moving at an unprecedented pace, it’s safe to say that civilization hasn’t crumbled. Microsoft’s ChatGPT-infused search oracle, Bing Chat, hasn’t displaced Google as the leader in search either. So, what should we make of the letter and the sci-fi warnings about AI’s risks?

To delve deeper into the motivations and concerns of the signatories, two enterprising students at MIT, Isabella Struckman and Sofie Kupiec, reached out to the first hundred people who signed the letter. Their findings revealed a diverse range of perspectives among those who backed the call for a pause on AI development. Surprisingly, only a small fraction of signatories expressed genuine fears of AI posing a looming threat to humanity itself.

Most of those contacted by Struckman and Kupiec doubted that a six-month pause would actually happen or have much effect, and many dismissed the “apocalyptic scenario” alluded to in the letter outright. Instead, a majority of signatories were chiefly concerned with the intense competition among tech giants like Google, OpenAI, and Microsoft. With the hype around AI tools like ChatGPT at a fever pitch, the worry was not that the technology would obliterate humanity, but that it would spread disinformation, dispense biased advice, and exacerbate existing biases.

Another concern expressed by some signatories was the rapid displacement of workers by AI, an issue with serious societal implications. Others backed the letter because they hoped it would draw public attention to the astonishing pace of advances in AI models and prompt regulators to address the immediate risks those advances carry.

When a few of the letter’s signatories were interviewed in May, it became evident that they did not agree with everything it said; they signed anyway, believing the momentum building behind the letter would sufficiently highlight the risks that worried them. In retrospect, trying to address so many issues raised by existing and recent AI developments in a single letter may have been a mistake: the letter came to be defined by its most sensational and terrifying claim, which overshadowed the more tangible concerns AI researchers had raised about the societal problems posed by large language models and their inherent biases.

Furthermore, a follow-up statement in May, signed by numerous high-profile AI researchers, likened the threat of AI to that of nuclear weapons and pandemics, further amplifying the apocalyptic narrative. Nirit Weiss-Blatt, author of “The Techlash and Tech Crisis Communication,” who reviewed the MIT paper before its publication, believes the letter and the statement inadvertently served the interests of the tech companies building cutting-edge AI: by focusing on far-off worst-case scenarios, they encourage regulators to perceive the technology as both highly valuable and difficult to manage.

Weiss-Blatt emphasizes that most of the professors who signed weren’t thinking about existential risk at all. By signing, however, they inadvertently aligned themselves with the extreme AI doomers, effectively perpetuating misinformation.

In the end, the letter calling for a pause on AI development may have had unintended consequences. By foregrounding doomsday scenarios, it made it harder for concerns about less-than-superintelligent machines and their societal implications to gain traction or inspire action. Redirecting the conversation toward addressing immediate risks, ensuring ethical development, and mitigating AI’s impact on society is all the more crucial.

Updated on August 17, 2023, at 1:50 pm EDT: This story has been updated to clarify that Weiss-Blatt believes most, not all, of the professors who signed weren’t thinking about existential risk.