Moltbook's AI Social Network Exposed 1.5M API Keys Due to Misconfigured Supabase Database
#Vulnerabilities

Startups Reporter
5 min read

Security researchers discovered a misconfigured Supabase database in Moltbook, an AI social network, exposing 1.5M API authentication tokens, 35K email addresses, and private messages between agents.

Moltbook, the AI social network that's been making waves in the tech community, recently faced a significant security incident that exposed sensitive user data. The platform, which positions itself as the "front page of the agent internet," allows AI agents to post, comment, and build reputation through a karma system. However, a misconfigured Supabase database left 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents vulnerable to unauthorized access.

The Discovery

The security team at Wiz discovered the exposed database while conducting a routine security review. What made this discovery particularly concerning was how quickly it was found - within minutes of browsing the platform like a normal user, researchers identified a Supabase API key exposed in client-side JavaScript code.

This API key granted unauthenticated access to the entire production database, including read and write operations on all tables. The exposed data painted a different picture than Moltbook's public image suggested. While the platform boasted 1.5 million registered agents, the database revealed only 17,000 human owners behind them - an 88:1 ratio of agents to humans.
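The headline numbers can be sanity-checked directly; a quick back-of-the-envelope calculation using the figures reported above:

```python
# Figures reported in the article.
registered_agents = 1_500_000
human_owners = 17_000

# Roughly 88 agents per human owner, matching the 88:1 ratio cited.
ratio = registered_agents / human_owners
print(round(ratio))  # 88
```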

How the Database Was Exposed

The vulnerability stemmed from hardcoded Supabase connection details in the production JavaScript file. The exposed credentials included:

  • Supabase Project: ehxbxtjliybbloantpwq.supabase.co
  • API Key: sb_publishable_4ZaiilhgPir-2ns8Hxg5Tw_JqZU_G6-

Supabase publishable (anon) keys are designed to be exposed to the client; security then depends entirely on Row Level Security (RLS) policies enforced in the database. Because no RLS policies were configured, this key granted full read and write access to anyone who had it.
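As an illustration of the missing control, a Supabase (Postgres) table is typically locked down with DDL along these lines. The `agents` table name comes from the disclosure timeline below, but the `owner_id` column and the policy itself are generic examples, not Moltbook's actual schema:

```sql
-- Enable Row Level Security: once enabled, the publishable (anon)
-- key can only do what explicit policies allow.
alter table public.agents enable row level security;

-- Example policy: a row is readable only by the authenticated
-- user who owns it (owner_id is an assumed column name).
create policy "agents_select_own"
  on public.agents
  for select
  using (auth.uid() = owner_id);
```

With RLS enabled and no matching policy, the same anon-key request that previously dumped the table simply returns zero rows.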

Researchers tested the API directly and confirmed unauthenticated access to sensitive authentication tokens, including API keys of the platform's top AI agents. This meant anyone could fully impersonate any agent on the platform with a single API call.
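Supabase exposes tables through an auto-generated REST API (PostgREST), so "testing the API directly" amounts to a single HTTP GET with the key in two headers. A minimal sketch of how such a request is assembled; the project ref and key are placeholders, the `agents` table and `api_key` column are assumptions based on the disclosure notes, and nothing is actually sent here:

```python
import urllib.request

# Placeholders standing in for the exposed values shown above.
project_ref = "PROJECT_REF"
publishable_key = "sb_publishable_EXAMPLE"

# PostgREST URL: select the api_key column from the agents table.
url = f"https://{project_ref}.supabase.co/rest/v1/agents?select=api_key&limit=5"

# The same key goes in both headers; no user login is involved.
req = urllib.request.Request(url, headers={
    "apikey": publishable_key,
    "Authorization": f"Bearer {publishable_key}",
})

# With RLS disabled, urllib.request.urlopen(req) would return rows
# to any caller holding this client-side key.
```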

Sensitive Data Exposed

The database contained several categories of sensitive information:

API Keys and Authentication Tokens

  • Full authentication tokens allowing complete account takeover
  • Claim tokens for agent ownership
  • Verification codes used during registration

User Email Addresses and Identity Data

  • Personal information for 17,000+ users
  • An additional 29,631 email addresses from early-access signups for Moltbook's upcoming developer product

Private Messages

  • 4,060 private DM conversations between agents
  • Some messages contained third-party API credentials, including plaintext OpenAI API keys
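The third-party credential leak is the kind of thing a simple scan over message bodies can surface. A minimal sketch, using a deliberately simplified pattern for OpenAI-style keys (real secret scanners use stricter, versioned detectors):

```python
import re

# Simplified pattern: "sk-" followed by a run of token characters.
# Real OpenAI key formats vary; this is illustrative, not exhaustive.
OPENAI_KEY_RE = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

def find_leaked_keys(messages):
    """Return every OpenAI-style key found in a list of message bodies."""
    hits = []
    for body in messages:
        hits.extend(OPENAI_KEY_RE.findall(body))
    return hits

dms = [
    "here's my key: sk-abc123abc123abc123abc123",  # leaked credential
    "meet at 9am tomorrow",                        # benign message
]
print(find_leaked_keys(dms))  # one match, from the first message
```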

Write Access Vulnerabilities

  • Full ability to modify existing posts on the platform
  • Capability to inject malicious content or prompt injection payloads
  • Potential to deface the entire website
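Because PostgREST maps HTTP verbs onto table operations, write access with the same key is just a change of method and body. A hedged sketch of the request shape (the `posts` table, `id` filter, and `content` column are assumptions), again without sending anything:

```python
import json
import urllib.request

publishable_key = "sb_publishable_EXAMPLE"

# A PATCH on a PostgREST endpoint updates every row matching the filter.
url = "https://PROJECT_REF.supabase.co/rest/v1/posts?id=eq.42"
payload = json.dumps({"content": "injected text"}).encode()

req = urllib.request.Request(
    url,
    data=payload,
    method="PATCH",
    headers={
        "apikey": publishable_key,
        "Authorization": f"Bearer {publishable_key}",
        "Content-Type": "application/json",
    },
)

# Without RLS, urlopen(req) would rewrite the post's content --
# the same primitive behind defacement and prompt-injection payloads.
```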

The Fix and Response

Upon discovery, the Wiz team immediately disclosed the issue to Moltbook, which secured the database within hours with Wiz's assistance. The disclosure timeline shows multiple rounds of remediation:

  • Initial contact and reporting of Supabase RLS misconfiguration
  • First fix securing agents, owners, and site_admins tables
  • Second fix addressing agent_messages, notifications, votes, and follows
  • Third fix blocking write access to modify posts
  • Final fix securing all remaining exposed tables

Key Security Lessons

This incident highlights several important lessons for AI-built applications:

Speed Without Secure Defaults Creates Systemic Risk

While vibe coding enables remarkable speed and creativity, today's AI tools don't yet reason about security posture or access controls. The issue traced back to a single Supabase configuration setting, demonstrating how small details can matter at scale.

Participation Metrics Need Verification

The 88:1 agent-to-human ratio shows how "agent internet" metrics can be easily inflated without guardrails like rate limits or identity verification. This likely reflects how early the "agent internet" category still is, with builders actively exploring what agent identity and participation should look like.

Privacy Breakdowns Can Cascade

Users shared OpenAI API keys and other credentials in direct messages under the assumption of privacy, but a configuration issue made those messages publicly accessible. A single platform misconfiguration was enough to expose credentials for entirely unrelated services.

Write Access Introduces Greater Risk

Beyond data exposure, the ability to modify content and inject prompts into an AI ecosystem introduces deeper integrity risks, including content manipulation, narrative control, and prompt injection that can propagate downstream to other AI agents.

The Future of AI-Native Applications

Moltbook illustrates both the excitement and growing pains of a brand-new category. The enthusiasm around AI-native social networks is well-founded, but the underlying systems are still catching up. As AI continues to lower the barrier to building software, more builders with bold ideas but limited security experience will ship applications that handle real users and real data.

The opportunity is not to slow down vibe coding but to elevate it. Security needs to become a first-class, built-in part of AI-powered development. AI assistants that generate Supabase backends can enable RLS by default. Deployment platforms can proactively scan for exposed credentials and unsafe configurations.
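The "scan for exposed credentials" idea is straightforward to prototype. A minimal sketch that flags Supabase-style publishable keys and project URLs in a JavaScript bundle; the patterns are illustrative guesses modeled on the key format shown earlier, not an official specification:

```python
import re

# Illustrative patterns modeled on the key format seen in this incident.
PATTERNS = {
    "supabase_publishable_key": re.compile(r"sb_publishable_[A-Za-z0-9_-]{10,}"),
    "supabase_project_url": re.compile(r"https://[a-z0-9]+\.supabase\.co"),
}

def scan_bundle(source: str) -> dict:
    """Map each pattern name to the matches found in a JS bundle."""
    findings = {}
    for name, rx in PATTERNS.items():
        matches = rx.findall(source)
        if matches:
            findings[name] = matches
    return findings

bundle = ('const client = createClient("https://abc123xyz.supabase.co", '
          '"sb_publishable_AAAA1111bbbb2222");')
print(scan_bundle(bundle))
```

A deployment platform could run a check like this against every production build and block the deploy, or at least warn, when a hit coincides with RLS being disabled.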

If we get this right, vibe coding does not just make software easier to build - it makes secure software the natural outcome and unlocks the full potential of AI-driven innovation.

Disclosure Timeline:

  • January 31, 2026 21:48 UTC - Initial contact with Moltbook maintainer via X DM
  • January 31, 2026 22:06 UTC - Reported Supabase RLS misconfiguration exposing agents table
  • January 31, 2026 23:29 UTC - First fix: agents, owners, site_admins tables secured
  • February 1, 2026 00:13 UTC - Second fix: agent_messages, notifications, votes, follows secured
  • February 1, 2026 00:31 UTC - Discovered POST write access vulnerability
  • February 1, 2026 00:44 UTC - Third fix: Write access blocked
  • February 1, 2026 00:50 UTC - Discovered additional exposed tables
  • February 1, 2026 01:00 UTC - Final fix: All tables secured

This incident serves as a reminder that as we build the next generation of AI-native applications, security must evolve alongside innovation. The most important outcome here is not what went wrong, but what the ecosystem can learn as builders, researchers, and platforms collectively define the next phase of AI-native applications.
