AI coding platform Lovable faces backlash after a researcher exposed a BOLA vulnerability that allowed free accounts to access sensitive user data, with the company's shifting explanations raising more questions than answers.
Lovable, the AI-powered coding platform valued at $6.6 billion, has found itself in hot water after a security researcher exposed a critical vulnerability that allowed anyone with a free account to access other users' sensitive data. The incident has become a case study in how not to handle security disclosures, with the company's explanations evolving from "intentional behavior" to blaming its bug bounty partner, HackerOne.
The vulnerability that shouldn't have existed
The security flaw, identified as a Broken Object Level Authorization (BOLA) vulnerability, allowed free-tier users to access other accounts' source code, database credentials, AI chat histories, and customer data with just five API calls. BOLA vulnerabilities occur when APIs fail to properly validate whether a user has permission to access specific resources, essentially leaving the digital equivalent of a master key under the doormat.
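To make the failure mode concrete, here is a minimal Python/Flask sketch of what a BOLA bug typically looks like server-side. The route paths, datastore, and authentication stub are illustrative assumptions, not Lovable's actual code:

```python
# Minimal BOLA sketch (all names are hypothetical; not Lovable's real API).
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Toy datastore: chat transcripts keyed by project ID, each with an owner.
CHATS = {
    1: {"owner_id": "alice", "messages": ["build me a CRM"]},
    2: {"owner_id": "bob", "messages": ["here are my DB credentials"]},
}

def current_user_id():
    # Stand-in for real session/token authentication.
    return "alice"

# VULNERABLE: authenticates the caller but never checks object ownership,
# so any logged-in user can read any project's chat by guessing an ID.
@app.get("/api/projects/<int:project_id>/chat")
def get_chat_vulnerable(project_id):
    chat = CHATS.get(project_id)
    if chat is None:
        abort(404)
    return jsonify(chat["messages"])

# FIXED: object-level authorization. Returning 404 instead of 403 also
# avoids confirming that a project with that ID exists.
@app.get("/api/v2/projects/<int:project_id>/chat")
def get_chat_fixed(project_id):
    chat = CHATS.get(project_id)
    if chat is None or chat["owner_id"] != current_user_id():
        abort(404)
    return jsonify(chat["messages"])
```

The key point is that authentication alone is not authorization: the fixed handler ties the specific record being requested back to the caller before returning it.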
Researcher @weezerOSINT demonstrated the vulnerability on X, showing how they could extract sensitive information, including database credentials embedded in source code, and read private chat histories. The researcher had initially reported the issue through Lovable's official channels 48 days earlier, only to have it dismissed as a "duplicate submission" while the original report remained open and unresolved.
A masterclass in contradictory messaging
Lovable's response to the disclosure was, to put it mildly, chaotic. The company first claimed it "did not suffer a data breach," then attributed the exposed information to "intentional behavior" and "unclear documentation." The messaging became even more confusing when Lovable stated that making code visible on public projects was "by design," while simultaneously noting that enterprise customers had been prevented from setting projects to public since May 2025.
This contradictory stance—claiming the behavior was both intentional and a documentation failure—did little to inspire confidence. The company seemed to be trying to have it both ways: defending the design while acknowledging it was confusing and problematic.
The real story emerges
In a later statement, Lovable finally provided a more coherent timeline of events. The company explained that public projects were initially meant to be fully public, including both code and chat. However, as the platform evolved, this became confusing for users who expected "public" to mean only their published apps were visible, not unpublished project chats.
Key changes in Lovable's approach:
- May 2025: Free-tier users gained the ability to create private projects; public setting disabled for enterprise customers
- December 2025: Platform switched to private by default across all tiers
- February 2026: During backend permission unification, access to chats on public projects was accidentally re-enabled
The critical failure occurred when this accidental re-enabling wasn't properly addressed. According to Lovable, HackerOne partners handling the bug report "thought that seeing public projects' chats was the intended behaviour" and closed the reports without escalation.
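It is easy to see how a permission unification can produce exactly this regression. The sketch below is a hypothetical reconstruction, not Lovable's real code: collapsing per-resource checks into one "unified" helper that only models visibility at the project level silently broadens access to chats.

```python
# Hypothetical illustration of a permission-unification regression.
from dataclasses import dataclass

@dataclass
class Project:
    owner: str
    is_public: bool

# Before: chats were owner-only regardless of project visibility.
def can_view_chat_old(project, user):
    return project.owner == user

# After "unification": one helper for every resource type. The resource
# argument is ignored, so the public/private flag now also gates chats,
# and chats on public projects leak.
def can_view_unified(resource, project, user):
    return project.owner == user or project.is_public

public_project = Project(owner="alice", is_public=True)
assert can_view_chat_old(public_project, "mallory") is False
assert can_view_unified("chat", public_project, "mallory") is True  # regression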
The blame game begins
In what can only be described as a deflection maneuver, Lovable threw HackerOne under the bus, claiming the bug bounty service's partners misunderstood the intended behavior and failed to escalate the critical security issue. While HackerOne declined to comment pending further review, this move raises questions about Lovable's internal security processes and whether the company properly communicated the severity of the issue to its partners.
Broader implications for AI security
This incident highlights a troubling pattern in the AI industry where companies with massive valuations appear to treat security vulnerabilities as PR problems rather than critical issues requiring immediate attention. The fact that Lovable dismissed the researcher's initial report as a duplicate and left it open for 48 days suggests a concerning lack of urgency around user security.
For the thousands of companies using Lovable's platform—including major names like Uber, Zendesk, and Deutsche Telekom—this incident should serve as a wake-up call. When an AI coding tool has access to your source code and database credentials, any security lapse could have catastrophic consequences.
Lessons learned (hopefully)
Lovable's final statement acknowledged that "pointing to documentation issues alone was not enough here" and promised to "do better." However, the damage to user trust may already be done. The company's initial attempts to minimize the issue, followed by contradictory explanations and ultimately blaming its security partners, created a perfect storm of bad PR.
For other AI companies, this serves as a cautionary tale: when a security researcher reports a vulnerability, the appropriate response is to treat it seriously, investigate promptly, and communicate transparently. Trying to reframe a security failure as "intentional behavior" or blaming third-party partners only compounds the problem and erodes user confidence.
As AI coding tools become increasingly central to software development workflows, the security of these platforms becomes paramount. Users are entrusting these services with their intellectual property, trade secrets, and customer data. Companies like Lovable need to recognize that with great power (and valuation) comes great responsibility, above all for protecting user data.
The incident also underscores the importance of proper security testing and clear communication channels between companies and their bug bounty partners. When livelihoods and sensitive data are on the line, there's no room for miscommunication or buck-passing.
For now, Lovable users should review their project settings, ensure sensitive information isn't stored in publicly accessible locations, and consider whether the convenience of AI coding tools outweighs the potential security risks. And for the rest of the AI industry, the message is clear: security can't be an afterthought, and when things go wrong, honesty and transparency are the only viable paths forward.
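As a starting point for that review, a rough heuristic scan like the one below can flag credentials sitting in project files. The patterns and output format are illustrative; purpose-built tools such as gitleaks or truffleHog go much further.

```python
# Quick, heuristic secret scan for a project directory (illustrative only).
import re
from pathlib import Path

PATTERNS = {
    "database URL": re.compile(r"(postgres|mysql|mongodb)(\+\w+)?://\S+:\S+@"),
    "generic key": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "AWS key id": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan(root: str) -> None:
    """Print file:line locations of likely hard-coded secrets under root."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                line_no = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line_no}: possible {name}")

if __name__ == "__main__":
    scan(".")  # run from the project root
```

Anything the scan surfaces belongs in environment variables or a secrets manager, not in source files that a platform bug could expose.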

