TikTok’s New ‘See Less’ Toggle Signals a Shift in AI‑Generated Content Governance


In a high‑stakes bid to tame the flood of synthetic media, TikTok announced at its European Trust & Safety Forum that it will soon give users a toggle to “see less” AI‑generated content in their feeds. The feature, described as a dial‑down rather than a blanket block, will live in the app’s “Manage Topics” section and is slated for rollout in the coming weeks.

The company has already flagged more than 1.3 billion AI‑generated videos — a large absolute number, yet still only a fraction of the total stream on a platform that sees roughly 100 million uploads a day. TikTok’s detection pipeline relies on metadata embedded via the C2PA (Coalition for Content Provenance and Authenticity) standard, which marks videos that are produced or altered by AI.
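C2PA provenance travels as a signed manifest of assertions attached to the media file. As an illustration only, here is a minimal Python sketch of how a platform might decide a clip is AI‑generated from a parsed manifest. The dictionary layout is a simplified assumption rather than the full C2PA schema; the `c2pa.actions` assertion and the IPTC `trainedAlgorithmicMedia` source type are real concepts from the standard, but the helper function and sample data are hypothetical.

```python
# Illustrative sketch: classifying a parsed C2PA-style manifest.
# The manifest structure below is a simplified assumption, not the
# full C2PA specification (which also covers signing and hashing).
AI_SOURCE_TYPES = {
    "trainedAlgorithmicMedia",               # fully model-generated
    "compositeWithTrainedAlgorithmicMedia",  # partially AI-edited
}

def is_ai_generated(manifest: dict) -> bool:
    """Return True if any action assertion reports an AI source type."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            source = action.get("digitalSourceType", "")
            # IPTC source types are URIs; compare on the final segment
            if source.rsplit("/", 1)[-1] in AI_SOURCE_TYPES:
                return True
    return False

sample = {
    "assertions": [
        {"label": "c2pa.actions",
         "data": {"actions": [
             {"action": "c2pa.created",
              "digitalSourceType": "http://cv.iptc.org/newscodes/"
                                   "digitalsourcetype/trainedAlgorithmicMedia"}]}}
    ]
}
```

In practice a platform would also verify the manifest’s cryptographic signature before trusting any assertion, which is exactly why stripped metadata is such a weak point.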

“We want to safeguard and empower positive experiences with AI,” the company said, a statement that underscores its intent to avoid stifling the burgeoning AI ecosystem while addressing user fatigue with synthetic content.

Invisible Watermarking: A Technical Countermeasure

To combat the common problem of metadata stripping when content is re‑uploaded or edited elsewhere, TikTok is adding an invisible watermarking tool to its own AI‑infused editing suite. The watermark is designed to survive downstream transformations, making it harder for creators to masquerade AI‑generated clips as purely human‑made.
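To make the idea concrete, here is a deliberately simple sketch of invisible watermarking via least‑significant‑bit (LSB) embedding in pixel values. This toy version is fragile and would not survive re‑encoding; robust watermarks of the kind TikTok describes typically operate in transform domains so the mark persists through compression and editing. All names here are illustrative, not TikTok’s actual implementation.

```python
# Toy sketch of invisible watermarking: hide a bit string in the
# least-significant bits of pixel values. Changing only the LSB
# shifts each pixel by at most 1, which is imperceptible.

def embed(pixels: list[int], bits: str) -> list[int]:
    """Overwrite the LSB of the first len(bits) pixels with the payload."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | int(b)
    return out

def extract(pixels: list[int], n_bits: int) -> str:
    """Read the payload back out of the LSBs."""
    return "".join(str(p & 1) for p in pixels[:n_bits])

frame = [120, 37, 201, 88, 14, 250, 63, 99]
marked = embed(frame, "1011")
```

Production systems add error correction and spread the payload across frequency coefficients precisely so that cropping, re‑compression, and filters cannot erase it.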

This dual‑layer approach—C2PA credentials plus watermarking—mirrors industry best practices in content provenance. It also dovetails with TikTok’s broader strategy to reduce the platform’s “AI slop” without alienating users who enjoy AI‑powered creativity.

The Human Cost of Automation

TikTok’s push for automation has not been without controversy. The company recently announced plans to cut 439 trust‑and‑safety roles, a move that trade unions and online‑safety advocates condemned. TikTok framed the layoffs as a “reorganisation” aimed at leveraging technological advances, but critics argue that human oversight remains essential for nuanced moderation.

Investing in AI Literacy

In an effort to offset the risks of a rapidly evolving AI landscape, TikTok is launching a $2 million AI literacy fund for organisations such as Girls Who Code. The initiative will support “For You” feed content that teaches users about AI safety and responsible use.

The fund’s launch signals a recognition that user education is as critical as technical safeguards. By equipping viewers with the knowledge to spot synthetic media, TikTok hopes to reduce the need for aggressive moderation.

What This Means for Developers and Creators

For developers building AI‑driven tools, TikTok’s move highlights the importance of robust provenance metadata and watermarking. The platform’s reliance on C2PA suggests that compliance with open standards will become a prerequisite for any content‑generation service that wishes to distribute at scale.

Creators, meanwhile, face a new reality: AI‑generated videos will be more clearly labelled, and users can now exercise granular control over their exposure. The toggle may influence how creators market their content — those who lean heavily on AI may need to balance authenticity with audience expectations.

A Balancing Act

TikTok’s new toggle is a pragmatic compromise. It acknowledges the creative benefits of generative AI while addressing the platform’s growing concerns about content quality and user trust. By combining technical measures with user‑centric controls and educational outreach, TikTok is charting a path that could serve as a blueprint for other social media giants grappling with the same dilemma.

The success of this strategy will hinge on how well the platform can enforce the embedded credentials, how users respond to the new toggle, and whether the AI literacy fund translates into measurable improvements in media literacy. For now, TikTok’s experiment offers a fascinating glimpse into the future of content moderation in an AI‑rich world.