Being a Woman Online: The Alarming Rise of Deepfake Porn

These cases highlight the alarming growth of AI-generated pornography and the urgent need for legal and cultural reform.

Taylor Swift deepfakes continue to go viral. How is this even possible?

Being a woman online is scary. In the digital world, as in the real world, harassment and abuse affect women and people of marginalized genders every day. One prime example is what happened to Taylor Swift recently, and it's nothing short of terrifying. AI-generated, pornographic images of Swift went viral on X (formerly Twitter), with one post accumulating over 45 million views. Outraged fans quickly flooded the platform with positive posts and images of the singer under search terms like "Taylor Swift AI" and "Taylor Swift deepfake". The incident involving Swift is not an isolated one: deepfakes have been weaponized against K-pop stars, TikTok creators, journalists, and even high school girls.

But what exactly is deepfake porn? 🤔

Deepfake porn refers to synthetic images or videos generated with deep learning, the machine-learning technique that gives deepfakes their name. These visuals superimpose a person's likeness into sexual acts without their consent. Sharing such content is a form of image-based sexual abuse and has been criminalized in some countries. Like revenge porn, deepfake porn has caused significant harm to the individuals it portrays, whether psychologically, personally, or professionally.

According to a 2023 study by Home Security Heroes, a research firm focusing on identity theft and digital harm, deepfake porn makes up a staggering 98 percent of all deepfake videos online, and 99 percent of deepfake targets are women. These numbers are deeply concerning and highlight the urgent need for action.

So, where does the law stand on deepfake porn? 📚

While some countries have implemented laws to protect survivors of the nonconsensual sharing of explicit images, many governments are still failing to address this issue adequately. England and Wales, for instance, criminalized the sharing of deepfake porn in June 2023. In the U.S., 48 states and the District of Columbia currently have anti-revenge porn laws, with some states working to update their language to include deepfake porn. However, regulations vary, and certain laws fail to address the pressing issue of technology’s role in creating and proliferating these harmful images and videos.

Given the increasing prevalence of deepfake porn, it’s crucial to examine how Big Tech companies have approached this problem. 🌐

X, formerly Twitter, explicitly forbids sharing “synthetic, manipulated, or out-of-context media.” The company has a zero-tolerance policy for such content, aiming to prevent deliberate deception or false claims about depictions of reality. Other platforms, like Reddit, also have policies against the sharing of intimate or sexually-explicit media without consent.

However, social media platforms and Big Tech companies still face challenges in detecting and preventing deepfakes. Meta, for instance, partnered with the UK Revenge Porn Helpline's platform, StopNCII.org, to combat the spread of nonconsensual intimate images. Additionally, Meta's platforms, Facebook and Instagram, implemented a policy requiring the labeling of digitally-altered images related to social, electoral, and political issues. Nevertheless, these efforts are not foolproof, and many platforms struggle to contain such content.

So, how does this keep happening? The answer lies in the alarming rise of AI-generated images, combined with murky legal ground and inadequate safeguards by social media platforms. 📈

International women’s rights organization Equality Now has detailed these factors in a January 2024 report, calling for urgent and comprehensive responses to combat the rise of deepfake porn. A multifaceted approach that includes technological innovation, legal reform, and societal awareness is necessary to tackle this issue effectively.

The circumstances surrounding Taylor Swift’s case serve as a stark reminder that deepfake porn can happen to anyone. It’s high time for change before this digital crisis worsens. If you have experienced sexual abuse or had intimate images shared without your consent, there are resources available to help you. In the U.S., you can call the National Sexual Assault Hotline at 1-800-656-HOPE (4673) or access 24/7 online support at online.rainn.org. The Cyber Civil Rights Initiative’s hotline is also available for confidential support at 844-878-2274. In the UK, the Revenge Porn Helpline can be reached at 0345 6000 459.

Let’s work together to create a safer and more inclusive online environment. Share this article to spread awareness and protect others from the dangers of deepfake porn. 💪💻




Q&A

Q: How can deepfake porn be criminalized?
A: Laws criminalizing deepfake porn differ from country to country. Some countries have specific legislation that addresses the issue directly, while others classify it under existing laws on revenge porn or image-based sexual abuse. However, one of the main challenges is enforcement, as deepfake technology continues to evolve rapidly.

Q: Are there any technological measures in place to detect deepfake porn?
A: Big Tech companies are making efforts to detect and prevent deepfakes. Meta, for example, partners with the UK Revenge Porn Helpline's StopNCII.org, which generates digital fingerprints (hashes) of intimate images so that participating platforms can detect and block re-uploads. However, the technology is still evolving, and there are limits to its effectiveness.
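To make the hash-matching idea concrete, here is a minimal, purely illustrative sketch in Python. It uses a toy "average hash" rather than the robust perceptual hashes production systems rely on, and all function names here are our own, not StopNCII's actual API:

```python
# Illustrative sketch of hash-based image matching, in the spirit of
# systems like StopNCII.org. Real deployments use robust perceptual
# hashes; this toy "average hash" is for explanation only.

def average_hash(pixels):
    """Hash a grayscale image (a list of rows of 0-255 ints):
    each bit records whether a pixel is above the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def is_match(h1, h2, threshold=3):
    """Treat images as the same if their hashes differ in only a few
    bits, so small edits (compression, minor retouching) still match."""
    return hamming_distance(h1, h2) <= threshold

# A victim's device hashes an image locally; only the hash is shared...
original = [[10, 200], [220, 15]]
reported_hash = average_hash(original)

# ...and a platform later checks an upload against that shared hash.
slightly_edited = [[12, 198], [221, 14]]  # minor pixel changes
print(is_match(reported_hash, average_hash(slightly_edited)))  # True
```

The key design point is privacy: the image itself never leaves the victim's device, only its fingerprint, which platforms can compare against new uploads.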

Q: How can individuals protect themselves from becoming victims of deepfake porn?
A: While it's difficult to prevent someone from creating deepfakes, there are some precautions individuals can take. Safeguarding personal information and being cautious about sharing intimate images can reduce the risk. It's also essential to document and report any instances of deepfake porn to the relevant authorities and to seek support from organizations that specialize in combating image-based sexual abuse.

Q: What are the long-term implications of deepfake porn on society?
A: Deepfake porn poses significant risks to individuals' privacy, security, and mental well-being. It can have devastating consequences for victims, including psychological trauma and damage to personal and professional reputations. Moreover, its widespread distribution perpetuates harmful attitudes toward women and reinforces objectification and harassment. Addressing deepfake porn is crucial to protecting individuals' rights and fostering a safer online environment.
