
In a bombshell lawsuit that could redefine the boundaries of AI ethics and copyright law, adult content producer Strike 3 Holdings is suing Meta for allegedly torrenting thousands of its copyrighted videos to train artificial intelligence models. Filed in a California federal court and recently unsealed, the complaint accuses Meta of using BitTorrent—a peer-to-peer file-sharing protocol—to download and redistribute 2,396 explicit videos since 2018, purportedly to gain an edge in developing what CEO Mark Zuckerberg has termed AI "superintelligence."

According to the lawsuit, Meta targeted Strike 3's content specifically for its "high quality," "feminist," and "ethical" adult videos, which offer "rare visual angles, parts of the human body, and extended, uninterrupted scenes" absent in mainstream media. This data, Strike 3 claims, was crucial for enhancing the "fluidity and humanity" of Meta's AI systems, such as its V-JEPA 2 "world model" released in June. Christian Waugh, an attorney for Strike 3, emphasized the competitive motive: "They have an interest in getting our content because it can give them a competitive advantage."

The allegations extend beyond adult material. Exhibits list torrented content from mainstream shows like Yellowstone and South Park, alongside concerning titles such as ExploitedTeens and Anal Teens—raising red flags about the safety of minors, since BitTorrent has no age verification. Meta allegedly used this pirated content as "currency" to facilitate downloads of other training data, and the complaint asserts the practice made explicit videos accessible to underage users. With statutory penalties in play, Strike 3 is demanding $350 million in damages, citing its proprietary infringement detection systems that traced the activity to 47 Meta-affiliated IP addresses.

Meta has denied the claims, with spokesperson Christopher Sgro stating, "We’re reviewing the complaint, but we don’t believe Strike’s claims are accurate." Yet the timing is notable: Meta recently revealed its V-JEPA 2 model was trained on one million hours of unspecified "internet video," and Zuckerberg doubled down on AI ambitions at Meta Connect, framing products like smart glasses as tools for "personal superintelligence." The lawsuit alleges that Meta executives, including Zuckerberg, directly approved using pirated data—a tactic allegedly common among AI firms battling similar copyright suits.

Legally, the case probes the contentious "fair use" defense often invoked by AI companies. While a June ruling in Kadrey v. Meta dismissed claims over book-based training, Judge Vince Chhabria left the door open, noting plaintiffs had simply made "the wrong arguments." Matthew Sag, an Emory University law professor specializing in AI, warns of broader risks: "This is a public relations disaster waiting to happen. Imagine a middle school student asks a Meta AI model for a video about pizza delivery, and before you know it, it’s porn." He adds that Strike 3's strongest argument may center on how piracy undermines legitimate markets for content access.

For developers and AI ethicists, this lawsuit underscores a volatile intersection: the hunger for diverse, real-world training data versus the legal and moral imperatives of copyright compliance. As AI models grow more sophisticated, reliance on unvetted sources could expose companies to massive liability and public backlash. Waugh frames it as a foundational issue: "It doesn’t matter if it’s a four-sentence poem or adult entertainment... There is no appetite in this country for what AI companies appear to be doing, which is making money off the backs of rights holders." The outcome could force a reckoning in how the industry sources data—potentially accelerating calls for transparent, ethical datasets that don’t trade innovation for exploitation.

Source: WIRED