Shielding AI Endpoints: How Firebase Secures Virtual Try-On Features from Abuse

In the fast-evolving world of e-commerce, AI-driven features are transforming how shoppers interact with products. Imagine a website where customers can virtually try on outfits using nothing but a webcam snapshot and a product SKU. This isn't science fiction: it's a reality powered by models like nano banana (Google's Gemini 2.5 Flash Image), as demonstrated in a recent Firebase blog post. But with great power comes great responsibility: these high-value AI endpoints can rack up prohibitive compute bills if abused by unauthorized users or bots. Firebase offers a multi-layered security approach to mitigate these risks, ensuring that innovative features remain both accessible and secure.

At the heart of this virtual try-on application lies a seamless integration of Firebase services with AI inference. Shoppers upload a profile image, select a product, and the system generates a personalized visualization. However, without proper safeguards, malicious actors could flood the endpoint with requests, driving up compute costs and potentially compromising user data. The Firebase team outlines several key strategies to lock down such endpoints, starting with verifying the legitimacy of incoming requests.

Verifying Legitimate Users with App Check

The first line of defense is Firebase App Check, a service designed to attest that requests originate from genuine users on real devices. This prevents unauthorized access via tools like cURL or third-party scrapers. In the client-side implementation, App Check runs in the background as users browse the site. Only if attestation passes does the virtual try-on button become visible, ensuring a frictionless experience for legitimate users while blocking others.
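
Before any of this runs, App Check has to be initialized on the client, which the post doesn't show. A minimal sketch, assuming a reCAPTCHA Enterprise provider (the provider choice and the firebaseConfig and RECAPTCHA_SITE_KEY values are assumptions, not from the post):

import { initializeApp } from 'firebase/app';
import { initializeAppCheck, ReCaptchaEnterpriseProvider } from 'firebase/app-check';

// Standard web config from the Firebase console (assumed).
const app = initializeApp(firebaseConfig);

// Attestation runs transparently in the background from here on; any
// supported provider (reCAPTCHA v3, Enterprise, etc.) works the same way.
const appCheck = initializeAppCheck(app, {
  provider: new ReCaptchaEnterpriseProvider(RECAPTCHA_SITE_KEY),
});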

Here's how it's coded in the client:

import { getLimitedUseToken } from 'firebase/app-check';

// Only show the try-on button if attestation passes.
getLimitedUseToken(appCheck).then(() => {
  tryOnBtn.style.display = 'block';
}).catch((error) => {
  console.error('Failed to get limited use token:', error);
});

async function handleVirtualTryOnClick() {
  // idToken is the signed-in user's Firebase Auth ID token;
  // MY_DOMAIN and productSku come from app state.
  const response = await fetch(`${MY_DOMAIN}/tryItOn`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${idToken}`,
      // Send a fresh limited-use token with every request.
      'X-Firebase-AppCheck': (await getLimitedUseToken(appCheck)).token
    },
    body: JSON.stringify({ data: { productSku } }),
  });
}

On the server side, a custom context provider decodes the Firebase authentication and App Check tokens, extracting the user's UID and validating the request's authenticity. This setup not only filters out unauthorized clients but also sets the stage for more granular controls.
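
The provider itself isn't shown in the post; a minimal sketch of what it might look like, assuming Express-style headers and the Firebase Admin SDK (the function name and the returned shape are assumptions):

import { getAuth } from 'firebase-admin/auth';

// Hypothetical context provider: decodes the Auth ID token and passes
// the raw App Check token through for the flow to verify and consume.
async function tryOnContextProvider(req: { headers: Record<string, string | undefined> }) {
  const idToken = req.headers['authorization']?.replace('Bearer ', '');
  if (!idToken) {
    throw new Error('missing Authorization header');
  }
  // Checks the ID token's signature and expiry, yielding the user's UID.
  const decoded = await getAuth().verifyIdToken(idToken);
  return {
    auth: { uid: decoded.uid, emailVerified: decoded.email_verified },
    // Left as a raw string; the flow consumes it with verifyToken().
    appCheck: req.headers['x-firebase-appcheck'],
  };
}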

Preventing Replay Attacks with Limited-Use Tokens

Even with App Check in place, savvy attackers might attempt to replay a valid token multiple times. To counter this, Firebase offers limited-use tokens, which are intended for a single request and can be consumed server-side. Each request to the /tryItOn endpoint includes a fresh token obtained via getLimitedUseToken(appCheck). The server verifies and consumes it with verifyToken(token, { consume: true }); a token that comes back with alreadyConsumed set is a replay and gets rejected.

This mechanism is elegantly implemented in the Genkit flow:

import { getAppCheck } from 'firebase-admin/app-check';
import { UserFacingError } from 'genkit';

export const virtualTryOn = ai.defineFlow({
  //... omitted for brevity
},
async ({ productSku }, { context }) => {
  // Reject requests that arrive without an App Check token.
  if (!context?.appCheck) {
    throw new UserFacingError("UNAUTHENTICATED", "no app check");
  }
  // Verify the token and mark it as consumed in a single step.
  const appCheckToken = await getAppCheck(app).verifyToken(
    context.appCheck,
    { consume: true }
  );
  // A replayed token arrives already consumed; reject it.
  if (appCheckToken.alreadyConsumed) {
    throw new UserFacingError("UNAUTHENTICATED", "already consumed request");
  }
  //... omitted for brevity
});

By consuming the token upon verification, developers can enforce a one-request-per-token policy, significantly reducing the risk of automated abuse.

Enforcing Authentication and Rate Limiting

Authentication needs to be more than a presence check. The example leverages Firebase Authentication to verify the caller's identity, including an email-verification check where appropriate. Extracting the UID from the ID token enables per-user controls, such as restricting the feature to verified accounts.
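
Inside the flow, the decoded context can gate the feature directly. A short sketch, assuming the context shape produced by the provider above:

// Hypothetical guards at the top of the flow body, reusing the
// context.auth fields populated by the context provider.
if (!context?.auth?.uid) {
  throw new UserFacingError("UNAUTHENTICATED", "sign-in required");
}
if (!context.auth.emailVerified) {
  throw new UserFacingError("PERMISSION_DENIED", "please verify your email first");
}
const uid = context.auth.uid; // used below for per-user rate limiting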

To further curb excessive usage, the implementation includes a custom rate limiter. Using Firestore, it tracks requests per user within a one-hour window, capping them at five. Users who exceed the cap receive a clear error message: "Quota exceeded. Please wait 1 hour before making additional requests." The window resets after an hour, so legitimate users regain access quickly while sustained abuse stays capped.

The gate in front of the flow is simple; the counting and recording behind it are handled through Firestore transactions:

// Returns true if the user is under quota, recording the request if so.
const canMakeRequest = async (userId: string): Promise<boolean> => {
  // Count this user's requests inside the current one-hour window.
  const countOfReq = await countOfRequestsTimeFrame(userId);
  if (countOfReq >= MAX_REQUEST_PER_HOUR) {
    return false;
  }
  // Record the new request against the user's quota.
  await updateTokens(userId);
  return true;
};

This per-user throttling is particularly valuable in cloud environments where AI inference costs scale with usage.
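
The two helper functions aren't shown in the post. A minimal sketch of a transactional variant that merges them, so counting and recording stay atomic under concurrent requests (the collection name, document layout, and combined helper are assumptions):

import { getFirestore, Timestamp } from 'firebase-admin/firestore';

const MAX_REQUEST_PER_HOUR = 5;

// Hypothetical transactional quota check: two concurrent requests
// cannot both read a count of four and slip under the cap.
async function tryConsumeQuota(userId: string): Promise<boolean> {
  const db = getFirestore();
  const docRef = db.collection('rateLimits').doc(userId);
  return db.runTransaction(async (tx) => {
    const snap = await tx.get(docRef);
    const cutoff = Date.now() - 60 * 60 * 1000; // one-hour window
    const recent: Timestamp[] = (snap.data()?.requests ?? [])
        .filter((t: Timestamp) => t.toMillis() > cutoff);
    if (recent.length >= MAX_REQUEST_PER_HOUR) {
      return false; // quota exceeded; the caller returns the error message
    }
    tx.set(docRef, { requests: [...recent, Timestamp.now()] });
    return true;
  });
}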

Mitigating Prompt Injection with Input Sanitization

Finally, securing the endpoint means controlling what enters the AI model. Rather than accepting raw inputs like custom images or prompts—which could enable prompt injection attacks—the system restricts requests to product SKUs only. The server then fetches predefined assets: the user's profile image from Cloud Storage, the product image, and a curated prompt from Firestore. This design ensures that the AI generates only intended outputs, preventing misuse as a general-purpose image generator.
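
In Genkit this restriction can be enforced at the schema level, before the flow body ever runs. A short sketch, assuming a zod input schema on the flow definition (the schema itself isn't shown in the post):

import { z } from 'genkit';

// Hypothetical input schema: the only client-controlled field is the
// SKU, so free-form prompts and image URLs are rejected at validation.
const virtualTryOnInput = z.object({
  productSku: z.string().min(1).max(64),
});

Passing this as the flow's inputSchema means a malformed request fails validation before any model call, or cost, is incurred.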

For instance, loading a product image involves secure retrieval from Cloud Storage:

import { getStorage } from 'firebase-admin/storage';

async function loadProductImg(sku: string) {
  const bucket = getStorage().bucket();
  // Look up the product image by its exact storage path.
  const [files] = await bucket.getFiles({ prefix: `products/imgs/${sku}.png` });
  if (files.length === 0) {
    throw new UserFacingError(
        'NOT_FOUND', `could not find product image for sku ${sku}`);
  }
  const img = files[0];
  const [data] = await img.download();
  const [metadata] = await img.getMetadata();
  const contentType = metadata.contentType || 'image/png';
  // Return the image as a data URL for the model request.
  return {
    url: `data:${contentType};base64,${data.toString('base64')}`
  };
}

By curating inputs server-side, developers minimize exposure to adversarial inputs that could manipulate the AI's behavior.
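
Putting the pieces together, the assembly step might look something like the sketch below, reusing getFirestore and loadProductImg from earlier. This is an illustration only: the post omits the generation step, and the Firestore document path and the loadProfileImg helper are assumptions.

// Hypothetical assembly: every input the model sees is fetched
// server-side, keyed off the validated SKU and the authenticated UID.
async function generateTryOn(uid: string, productSku: string) {
  const db = getFirestore();
  // The curated prompt lives in Firestore, not in the request body.
  const promptDoc = await db.doc('prompts/virtualTryOn').get();
  const promptText = promptDoc.get('text');

  const profileImg = await loadProfileImg(uid); // user's stored photo
  const productImg = await loadProductImg(productSku);

  const { media } = await ai.generate({
    prompt: [
      { text: promptText },
      { media: { url: profileImg.url } },
      { media: { url: productImg.url } },
    ],
  });
  return media;
}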

As e-commerce platforms increasingly lean on AI to enhance personalization, securing these endpoints isn't just a technical necessity—it's a business imperative. Firebase's toolkit, from App Check's attestation to Genkit's flow definitions, empowers developers to deploy innovative features confidently. The virtual try-on example illustrates how layered defenses can transform potential vulnerabilities into robust, user-friendly experiences. For developers building the next wave of AI-driven apps, these practices offer a blueprint for innovation that doesn't compromise on security.

Source: Firebase Blog - Securing AI Endpoints from Abuse