Microsoft Takes Action Against Developers of Celebrity Deepfakes
In a bid to demonstrate its commitment to AI safety, Microsoft has amended a lawsuit it originally filed in December, naming four developers who allegedly bypassed guardrails on its AI tools to produce celebrity deepfakes. A court-ordered seizure of a website linked to the operation helped the company identify the individuals.
Identities of the Developers Involved in the Cybercrime
The developers implicated in this global cybercrime network, dubbed Storm-2139, include:
- Arian Yadegarnia, known as “Fiz,” from Iran
- Alan Krysiak, known as “Drago,” from the United Kingdom
- Ricky Yuen, known as “cg-dot,” from Hong Kong
- Phát Phùng Tấn, known as “Asakuri,” from Vietnam
While Microsoft has identified additional individuals linked to the operation, the company is withholding their names to avoid jeopardizing an ongoing investigation. According to Microsoft, the developers gained unauthorized access to its generative AI tools and "jailbroke" them, stripping away the safeguards meant to block harmful outputs. They then sold that access to others, who used it to generate deepfake pornography featuring celebrities.
The Aftermath of the Lawsuit and Website Seizure
Following the filing of the lawsuit and the seizure of the group's website, Microsoft reported that the defendants reacted swiftly and defensively. As the company detailed in a blog post, "The seizure of this website and subsequent unsealing of the legal filings in January generated an immediate reaction from actors, in some cases causing group members to turn on and point fingers at one another."
Deepfake Pornography: A Growing Concern
Celebrities, including prominent figures like Taylor Swift, have increasingly fallen victim to deepfake pornography, in which a person's face is superimposed onto a nude body with alarming realism. In January 2024, Microsoft had to update the safeguards on its text-to-image models after explicit deepfakes of Swift circulated online, demonstrating how easily those protections can be circumvented.
The Impact of Generative AI on Society and Recent Scandals
Generative AI makes creating such images simple, often requiring minimal technical skill, and deepfake incidents have consequently spread across U.S. high schools. Accounts from victims make clear that these abuses are not merely theoretical; they inflict genuine emotional distress. Victims report feeling anxious, afraid, and violated, knowing that someone has used technology to exploit their likeness.
Debate Surrounding AI Safety: Open Source vs. Closed Source Models
AI safety has sparked significant debate within the AI community, including questions about whether the concerns voiced by major firms like OpenAI are genuine or driven by profit motives. One camp argues that AI models should remain closed source so that users cannot disable their safety mechanisms. Proponents of open-source models counter that releasing these tools for public improvement is crucial for innovation, and that misuse can still be combated effectively. Whichever side prevails, the volume of that debate may be drowning out more immediate problems, such as the spread of inaccurate information and low-quality content online.
Legal Frameworks for Addressing AI Misuse
Both proprietary and open-source AI models typically ship with licensing agreements that restrict certain uses; enforcing those terms, however, is another matter. And even if some concerns about AI are overstated, its misuse to generate deepfakes is a concrete, present harm. Legal action is one of the tools available to combat such abuse, whether the underlying model is open or closed.
In the U.S., there have already been numerous arrests of individuals accused of creating deepfakes of minors, and the NO FAKES Act, introduced in Congress last year, would make it illegal to generate images based on someone's likeness without consent. The UK already penalizes the distribution of deepfake pornography and will soon extend the law to cover its creation as well. Australia, meanwhile, has recently criminalized both the creation and the sharing of non-consensual deepfakes.
FAQs about Deepfakes and AI Safety
What are deepfakes, and how are they created?
Deepfakes are synthetic media in which a person's likeness is convincingly altered to depict them doing or saying things they never did. They are typically created with deep learning techniques, most commonly neural networks trained to reconstruct and swap faces.
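To make that concrete, the sketch below shows the classic "shared encoder, two decoders" autoencoder idea behind early face-swap deepfakes, written in PyTorch. It is a minimal illustration only: the class names and layer sizes are illustrative assumptions, and real systems add face detection and alignment, far larger networks, and adversarial training.

```python
# Minimal, illustrative sketch of the "shared encoder, two decoders"
# deepfake architecture. This only demonstrates the core idea; it is not
# a working face-swap system.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a latent vector shared by both identities."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent space."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per identity. Because both identities pass
# through the same encoder, the latent space captures pose and expression;
# routing identity A's latent through identity B's decoder produces the "swap".
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
face_a = torch.rand(1, 3, 64, 64)      # stand-in for a preprocessed face crop
swapped = decoder_b(encoder(face_a))   # A's expression rendered as identity B
```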
What legal actions are being taken against deepfake creators?
Various legal measures are emerging globally. In the U.S., bills like the NO FAKES Act would prohibit generating images of someone without their consent. Other countries are enacting similar laws against the production and distribution of deepfakes.
How can one protect themselves from deepfake exploitation?
Individuals can protect themselves by using strict privacy settings on social media, being cautious about the personal information and photos they share online, and staying informed about legal protections against misuse of their likeness.
What is Microsoft’s role in tackling deepfake technology?
Microsoft is actively pursuing legal action against those exploiting its AI tools for malicious deepfake creation and is committed to enhancing safety measures in its technologies to prevent misuse.