A Lovable-hosted app built with AI coding tools contained 16 vulnerabilities that exposed 18,000 users' data, highlighting the security risks of 'vibe coding' platforms that prioritize speed over safety.
A Lovable-hosted app built using AI coding tools was found to contain 16 vulnerabilities, including six critical flaws that exposed the personal data of more than 18,000 users, according to security researcher Taimur Khan. The incident raises serious questions about the responsibility of AI-powered development platforms when their generated code contains fundamental security flaws.
The app in question, which Khan declined to name during the disclosure process, was hosted on Lovable's platform and featured on its Discover page. At the time of Khan's investigation, the app had accumulated over 100,000 views and approximately 400 upvotes. The platform served teachers and students from institutions including UC Berkeley, UC Davis, and various K-12 schools.
The Core Security Issues
The fundamental problem stemmed from how Lovable's platform handles backend development. Every app created on the platform uses Supabase, built on PostgreSQL, for authentication, file storage, and real-time updates. But when developers (or, in this case, the AI) fail to explicitly enable safeguards such as Supabase's row-level security and role-based access controls, the generated code appears functional while leaving the database open to anyone.
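To illustrate what that missing safeguard looks like in practice, here is a minimal sketch of the kind of Supabase row-level-security setup the article describes as absent. The table and column names are hypothetical; the point is that without statements like these, any client holding the project's public anon key can read every row.

```sql
-- Hypothetical "users" table; names are illustrative, not from the affected app.
-- Step 1: turn RLS on. Until this runs, every row is readable via the anon key.
alter table public.users enable row level security;

-- Step 2: allow each authenticated user to read only their own record.
create policy "users_select_own"
  on public.users
  for select
  using (auth.uid() = id);
```

With RLS enabled and no matching policy, queries simply return no rows, which is why the protection has to be opted into explicitly rather than assumed.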
Khan discovered a particularly egregious example: an authentication guard whose access-control check was inverted. The AI-generated code blocked authenticated users while letting unauthenticated users through, the exact opposite of the intended behavior. "This is backwards," Khan explained. "The guard blocks the people it should allow and allows the people it should block. A classic logic inversion that a human security reviewer would catch in seconds—but an AI code generator, optimizing for 'code that works,' produced and deployed to production."
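The article does not publish the vulnerable function, but the inversion Khan describes can be sketched in a few lines. All names here are assumptions for illustration; the buggy guard returns true exactly when no user is present.

```typescript
// Hypothetical user session: an object when logged in, null otherwise.
type User = { id: string } | null;

// Buggy guard (as described): the condition is inverted, so it admits
// requests with NO authenticated user and rejects everyone who logged in.
function canAccessBuggy(user: User): boolean {
  return user === null;
}

// Corrected guard: only authenticated users pass.
function canAccessFixed(user: User): boolean {
  return user !== null;
}

console.log(canAccessBuggy(null));            // true: unauthenticated attacker gets in
console.log(canAccessBuggy({ id: "alice" })); // false: legitimate user is blocked
console.log(canAccessFixed({ id: "alice" })); // true: correct behavior
```

As Khan notes, a human reviewer would spot the flipped comparison immediately; the failure mode is that generated code which compiles and "works" in a happy-path demo ships without that review.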
What Attackers Could Access
With these security flaws in place, an unauthenticated attacker could trivially access every user record, send bulk emails through the platform, delete any user account, grade student test submissions, and access organizations' admin emails. The exposed dataset included:
- 18,697 total user records
- 14,928 unique email addresses
- 4,538 student accounts with email addresses
- 10,505 enterprise users
- 870 users with full PII exposure
The Broader Vibe Coding Problem
This incident highlights the growing security concerns surrounding "vibe coding," Collins Dictionary's Word of the Year for 2025. The approach promises to democratize software development by allowing anyone to create apps through simple prompts, but it comes with significant risks. Veracode recently found that 45 percent of AI-generated code contains security flaws, and similar incidents have been reported across the industry.
Lovable's response to the disclosure has drawn criticism. After Khan reported his findings through company support, his ticket was reportedly closed without response. "If Lovable is going to market itself as a platform that generates production-ready apps with authentication 'included,' it bears some responsibility for the security posture of the apps it generates and promotes," Khan said. "You can't showcase an app to 100,000 people, host it on your own infrastructure, and then close the ticket when someone tells you it's leaking user data."
Platform's Defense
Lovable CISO Igor Andriushchenko countered that the company only received "a proper disclosure report" on the evening of February 26 and acted on the findings "within minutes." He emphasized that every project built with Lovable includes a free security scan before publishing, which checks for vulnerabilities and provides recommendations.
"Ultimately, it is at the discretion of the user to implement these recommendations. In this case, that implementation did not happen," Andriushchenko said. He also noted that the project included code not generated by Lovable and that the vulnerable database is not hosted by Lovable.
The company stated it has contacted the app owner, who is now addressing the issue.
The incident underscores a critical tension in the AI development space: as platforms make app creation more accessible, they must also grapple with the security implications of code they help generate. With educational institutions and minors among the affected users, the stakes for getting this right have never been higher.