Audarma: LLM‑Powered Dynamic Translation for Next.js with Smart Caching
The Growing Need for Dynamic Translation
In a global marketplace, product catalogs, user reviews, and real‑time listings churn constantly. Traditional i18n libraries such as next‑intl or react‑i18next assume static UI strings and a build‑time translation pipeline. That model breaks down when the content you need to translate changes by the minute.
Audarma tackles this head‑on by treating the entire view as a single translation unit, caching results, and only re‑translating what has changed.
What Audarma Brings to the Table
- LLM‑powered – Supports OpenAI, Claude, Gemini, Nebius, and any future model via a simple provider interface.
- Smart caching – SHA‑256 hashes of source text mean you pay the LLM only once per content item per language.
- Dual‑mode operation – Lazy (on‑demand) mode for an instant user experience, CLI mode for pre‑translation and SEO.
- Adapter pattern – Plug in your own database, LLM, and i18n library.
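To make the smart-caching bullet concrete, here is a minimal sketch of how a per-item cache key could be derived from a SHA‑256 digest of the source text. The `cacheKey` helper and the key format are illustrative assumptions, not Audarma's actual internals:

```typescript
import { createHash } from 'node:crypto';

// Illustrative: derive a deterministic cache key from the source text.
// If the text changes, the digest changes and the item is re-translated;
// otherwise the cached translation for that locale is reused.
function cacheKey(contentType: string, contentId: string, text: string, locale: string): string {
  const digest = createHash('sha256').update(text, 'utf8').digest('hex');
  return `${contentType}:${contentId}:${locale}:${digest}`;
}

const key = cacheKey('product_title', '42', 'Wireless Mouse', 'es');
console.log(key); // stable across runs until the title text changes
```

Because the digest depends only on the text, renaming a product invalidates exactly that item's cache entry and nothing else.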
Architecture Overview
graph TD
A[ViewMount] --> B[HashCalc]
B --> C[CacheCheck]
C -->|Hit| D[Render]
C -->|Miss| E[DBQuery]
E -->|Found| D
E -->|Missing| F[LLMTranslate]
F --> G[CacheUpdate]
G --> D
The flow begins when a view mounts. Audarma calculates a hash of all items, checks local storage for a cached view hash, and queries the database for any missing translations. Only the missing items are sent to the LLM, after which the new translations are persisted and the view re‑renders.
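That flow can be sketched end to end. Everything below is a simplified stand-in (in-memory cache and database, a fake LLM) for the real adapters, not Audarma's source code:

```typescript
type Item = { contentType: string; contentId: string; text: string };

// In-memory stand-ins for local storage, the database, and the LLM.
const localCache = new Map<string, string>(); // key -> translated text
const db = new Map<string, string>();         // key -> translated text
let llmCalls = 0;

const keyOf = (i: Item, locale: string) => `${i.contentType}:${i.contentId}:${locale}`;

async function fakeLlmTranslate(items: Item[], locale: string): Promise<string[]> {
  llmCalls += items.length;
  return items.map(i => `[${locale}] ${i.text}`);
}

// 1. cache check  2. DB lookup for misses  3. LLM only for what's still missing
async function resolveView(items: Item[], locale: string): Promise<string[]> {
  const out = new Array<string>(items.length);
  const missing: number[] = [];
  items.forEach((item, idx) => {
    const hit = localCache.get(keyOf(item, locale)) ?? db.get(keyOf(item, locale));
    if (hit !== undefined) out[idx] = hit;
    else missing.push(idx);
  });
  if (missing.length > 0) {
    const translated = await fakeLlmTranslate(missing.map(i => items[i]), locale);
    missing.forEach((idx, j) => {
      out[idx] = translated[j];
      db.set(keyOf(items[idx], locale), translated[j]); // persist new translations
      localCache.set(keyOf(items[idx], locale), translated[j]);
    });
  }
  return out;
}
```

The key property is visible in the last step: the LLM is invoked only for items that miss both caches, so a second render of the same view costs nothing.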
Adapter Interfaces
interface DatabaseAdapter {
getCachedTranslations(items: TranslationItem[], targetLocale: string): Promise<TranslationResult[]>;
saveTranslations(translations: TranslationResult[]): Promise<void>;
}
interface LLMProvider {
translateBatch(items: TranslationItem[], sourceLocale: string, targetLocale: string): Promise<string[]>;
}
These abstractions let you swap in Supabase, Prisma, Redis, or a self‑hosted Llama model without touching the core logic.
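As a concrete illustration, here is a toy in-memory DatabaseAdapter, useful for tests or local development. The TranslationItem and TranslationResult shapes are assumptions inferred from the article's examples; check the library's real type definitions before relying on them:

```typescript
// Shapes assumed from the article's examples; the real types may differ.
interface TranslationItem { contentType: string; contentId: string; text: string }
interface TranslationResult extends TranslationItem { targetLocale: string; translatedText: string }

interface DatabaseAdapter {
  getCachedTranslations(items: TranslationItem[], targetLocale: string): Promise<TranslationResult[]>;
  saveTranslations(translations: TranslationResult[]): Promise<void>;
}

// Toy adapter backed by a Map: fine for tests, not for production.
function createMemoryAdapter(): DatabaseAdapter {
  const store = new Map<string, TranslationResult>();
  const keyOf = (i: TranslationItem, locale: string) => `${i.contentType}:${i.contentId}:${locale}`;
  return {
    async getCachedTranslations(items, targetLocale) {
      // Return only the items we have; missing ones fall through to the LLM.
      return items
        .map(i => store.get(keyOf(i, targetLocale)))
        .filter((r): r is TranslationResult => r !== undefined);
    },
    async saveTranslations(translations) {
      for (const t of translations) store.set(keyOf(t, t.targetLocale), t);
    },
  };
}
```

Swapping this for a Supabase or Prisma adapter is a matter of replacing the Map with queries against your translations table.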
Getting Started in a Next.js App
- Install
npm install audarma
- Configure Adapters
// lib/audarma-config.ts
import { AudarConfig } from 'audarma';
import { createSupabaseAdapter } from './adapters/supabase';
import { supabaseClient } from './supabase-client'; // assumed path — import your own initialized Supabase client
import { createNebiusProvider } from './adapters/nebius';
import { useLocale } from 'next-intl';
export function useAudarmaConfig(): AudarConfig {
const locale = useLocale();
return {
database: createSupabaseAdapter(supabaseClient),
llm: createNebiusProvider({
apiKey: process.env.NEBIUS_API_KEY!, // caution: reachable from client code here — proxy LLM calls through a server route in production
model: 'meta-llama/Llama-3.3-70B-Instruct',
}),
i18n: {
getCurrentLocale: () => locale,
getDefaultLocale: () => 'en',
getSupportedLocales: () => ['en', 'es', 'fr', 'de', 'ru', 'ja'],
},
defaultLocale: 'en',
debug: true,
};
}
- Wrap Your App
// app/layout.tsx
'use client'; // Audarma is client-side only (see Known Limitations), so this layout must be a client component
import { AudarProvider } from 'audarma';
import { useAudarmaConfig } from '@/lib/audarma-config';
export default function RootLayout({ children }) {
const config = useAudarmaConfig();
return (
<AudarProvider config={config}>
{children}
</AudarProvider>
);
}
- Translate Views
// app/products/page.tsx
import { ViewTranslationProvider, useViewTranslation } from 'audarma';
function ProductCard({ product }) {
const { text: title, isTranslating } = useViewTranslation(
'product_title',
product.id,
product.title
);
const { text: description } = useViewTranslation(
'product_description',
product.id,
product.description
);
return (
<div>
<h3>{title}</h3>
<p>{description}</p>
{isTranslating && <span>Translating…</span>}
</div>
);
}
export default function ProductsPage({ products }) {
const translationItems = products.flatMap(p => [
{ contentType: 'product_title', contentId: p.id, text: p.title },
{ contentType: 'product_description', contentId: p.id, text: p.description },
]);
return (
<ViewTranslationProvider viewName="products-feed" items={translationItems}>
{products.map(product => (
<ProductCard key={product.id} product={product} />
))}
</ViewTranslationProvider>
);
}
Cost & Performance
Because Audarma only sends unseen items to the LLM, cost scales with the number of new content items, not the total catalog size. A rough estimate: 1,000 products × 5 languages × $0.001 per translation ≈ $5, paid once. Subsequent views hit the database cache instantly.
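That back-of-the-envelope math is easy to sanity-check; the per-translation price is of course whatever your provider actually charges:

```typescript
// One-time cost: every item is translated once per language, then cached.
function estimateOneTimeCost(items: number, languages: number, pricePerTranslation: number): number {
  return items * languages * pricePerTranslation;
}

const cost = estimateOneTimeCost(1000, 5, 0.001);
console.log(`$${cost.toFixed(2)}`); // ≈ $5.00 one-time
```

Doubling the catalog doubles the bill exactly once; re-rendering the catalog a thousand times adds nothing.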
The ViewTranslationProvider also supports progressive loading: the original English text flashes first, then the translated text replaces it once the LLM returns, keeping the UX snappy.
Known Limitations
| Limitation | Impact | Roadmap |
|---|---|---|
| Hard‑coded English source | Only works for English → X translations | Multi‑source support |
| No cache invalidation API | Must delete rows manually | Add cache API |
| No error boundaries | Translation errors crash views | Add fallback UI |
| No streaming | Full translation required | Streaming partial results |
| Client‑side only | Server components unsupported | RSC support |
Community & Contributions
The project is in alpha, and the roadmap is very community‑driven. If you’re interested in:
- Writing adapter examples for Prisma, MongoDB, or local Llama
- Adding retry logic or cost estimation helpers
- Implementing server‑side rendering or streaming
open an issue or submit a pull request. The core team welcomes fresh ideas.
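If you want to prototype one of those contributions, retry logic is a small, self-contained starting point. A hedged sketch of a generic wrapper you could put around an LLMProvider's translateBatch call (`withRetries` is not part of Audarma today):

```typescript
// Generic retry with exponential backoff, usable around any async call
// such as llm.translateBatch(...). Attempts and delays are illustrative defaults.
async function withRetries<T>(fn: () => Promise<T>, attempts = 3, baseDelayMs = 100): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Back off 100ms, 200ms, 400ms, … before the next attempt.
      if (i < attempts - 1) await new Promise(r => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

A real contribution would also distinguish retryable errors (rate limits, timeouts) from permanent ones (invalid API key), but the shape above is enough to experiment with.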
Final Thoughts
Audarma represents a pragmatic shift in how we think about translation in dynamic web apps. By treating an entire view as a single translation unit, it sidesteps the pitfalls of per‑string i18n and leverages LLMs only where they add value. For teams building marketplaces, forums, or any content‑heavy application, Audarma could cut translation costs, reduce latency, and keep the user experience fluid—all while staying fully compatible with existing i18n libraries.
Its flexible adapter pattern and dual‑mode operation make it a drop‑in replacement for static translation pipelines, and its roadmap promises even richer features like streaming, multi‑source languages, and a translation memory.
If you’re tired of paying for every product title in every language, or you need a quick way to localize user‑generated content, Audarma is worth a look.