When AI Replaces Your Rule Engine: A Solo Developer's Architecture Choice
Backend Reporter

A solo developer explains why he chose AI over building a complex recommendation engine for his workout app, detailing the architecture, trade-offs, and lessons learned about prompt engineering and system design.

Hello! I'm Jairo Jr. 👋 I hope you're doing great. I'm a backend software engineer and I'm currently building an app to generate gym workouts for beginners. And at some point, I hit a very real problem. I needed the app to create something personalized for each user. And that's where things got complicated.

Then the questions started piling up: Should I create another backend just for personalization? Should I build a microservice to handle user profiles and workout rules? Should I design a full recommendation engine with categories, difficulty levels, and decision trees?

Technically, all of that is possible. But then reality kicked in. This is a solo project. I already have one backend running. Creating another service just to manage complex workout logic would mean:

  • more infrastructure cost
  • more maintenance
  • more mental overhead
  • slower development

So I stopped and asked a different question: Do I really need to build all this logic manually?

The "backend-first" mindset (and why it didn't make sense)

My first idea was very classic backend thinking:

  • collect user data (goal, level, duration)
  • map everything into a workout category tree
  • choose exercises based on rules
  • define sets and repetitions programmatically
  • store and version all of this logic

Basically, build a recommendation engine from scratch. For a big team? Maybe. For a solo app? Overkill.

So instead of asking how to build it, I asked: How can AI help me make this decision — without losing control?

The savior: LangChain + OpenAI

After evaluating complexity and cost, I chose a simpler approach:

  • a lightweight service that calls OpenAI's API
  • structured prompts
  • strict schema validation
  • controlled domain data

Instead of building a heavy rule engine, I built an AI-powered decision layer. And yes… I used TypeScript 😅 (I'm a Java fan, but LangChain is much more mature with TS right now.)

How the AI flow works

The architecture is simple and intentional.

PromptTemplate → ChatOpenAI → StructuredOutputParser

What this means in practice:

  • The prompt defines the role, business rules, and output format
  • OpenAI generates the response
  • StructuredOutputParser forces the output to match a strict Zod schema

No random text. No broken JSON. No guessing.
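As a rough sketch of what that last step does (this is a hand-rolled stand-in, not LangChain's actual `StructuredOutputParser` API, and the `WorkoutPlan` fields are illustrative), the parse step refuses anything that doesn't match the expected shape:

```typescript
// Simplified stand-in for the parsing end of the
// PromptTemplate → ChatOpenAI → StructuredOutputParser chain.
// The field names here are made up for illustration.

interface WorkoutPlan {
  exerciseIds: string[];
  sets: number;
  reps: number;
}

// Strict parse: throw on broken JSON or on any shape mismatch,
// instead of letting a malformed response flow downstream.
function parseWorkoutPlan(raw: string): WorkoutPlan {
  const data = JSON.parse(raw); // throws on invalid JSON
  if (
    !Array.isArray(data.exerciseIds) ||
    !data.exerciseIds.every((id: unknown) => typeof id === "string")
  ) {
    throw new Error("exerciseIds must be a string array");
  }
  if (typeof data.sets !== "number" || typeof data.reps !== "number") {
    throw new Error("sets and reps must be numbers");
  }
  return { exerciseIds: data.exerciseIds, sets: data.sets, reps: data.reps };
}
```

In the real chain, Zod plays this role and also generates the format instructions injected into the prompt.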

What the model actually sees

The AI receives real context, not just a generic prompt:

  • user profile (goal, level, duration, etc.)
  • an explicit catalog of allowed exercises
  • strict rules: valid exercise IDs, required YouTube links, ISO date format, difficulty and category mapping

The model doesn't invent workouts. It chooses from a controlled domain. That was a key decision.
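To make the idea concrete, here's a minimal sketch of how such a context could be assembled (the catalog shape, field names, and `buildContext` function are hypothetical, not the app's real code): the prompt enumerates the allowed exercises explicitly, so the model picks from a closed set.

```typescript
// Hypothetical catalog shape; names are illustrative.
interface Exercise {
  id: string;
  name: string;
  category: string;
  difficulty: string;
  youtubeUrl: string;
}

interface Profile {
  goal: string;
  level: string;
  durationMin: number;
}

// List every allowed exercise in the prompt so the model chooses
// from a controlled domain instead of inventing IDs.
function buildContext(profile: Profile, catalog: Exercise[]): string {
  const allowed = catalog
    .map((e) => `- ${e.id}: ${e.name} (${e.category}, ${e.difficulty})`)
    .join("\n");
  return [
    `User goal: ${profile.goal}, level: ${profile.level}, duration: ${profile.durationMin} min.`,
    "Choose ONLY from these exercises:",
    allowed,
    "Return valid exercise IDs, the YouTube link for each exercise, and dates in ISO format.",
  ].join("\n");
}
```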

Why this was cheaper than another backend

Instead of:

  • building complex domain logic
  • maintaining rule trees
  • evolving workout selection algorithms

I delegated the decision-making to AI. And here's the interesting part: As more workouts are generated, I reuse previous context to reduce unnecessary calls to OpenAI. So the system improves over time:

more workouts → better context
better context → fewer calls
fewer calls → lower cost

In the end, this solution was much cheaper than building and maintaining a full recommendation backend.
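The reuse idea can be sketched as a cache keyed on the user profile (the key function and the `callModel` placeholder are illustrative assumptions, not the app's actual implementation): the model is only called when no comparable workout has been generated before.

```typescript
// Minimal sketch of context reuse: identical profiles hit the cache
// instead of triggering another OpenAI call.

type Profile = { goal: string; level: string; durationMin: number };

const cache = new Map<string, string>();
let apiCalls = 0; // exposed only so the savings are visible

function profileKey(p: Profile): string {
  return `${p.goal}|${p.level}|${p.durationMin}`;
}

function generateWorkout(p: Profile, callModel: (p: Profile) => string): string {
  const key = profileKey(p);
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // reuse, no API call
  apiCalls++;
  const plan = callModel(p);
  cache.set(key, plan);
  return plan;
}
```

A real version would likely match on similar (not only identical) profiles, but the cost curve is the same: the cache hit rate grows with usage.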

What I learned about prompt structure (the hard way)

This only worked because the prompt was treated like code, not text.

What worked well

Low-ambiguity output: StructuredOutputParser + format instructions keep responses clean.

Domain control: allowed exercise catalogs prevent hallucinated IDs.

Functional safety: constraints and realistic rules are explicit in natural language.

Higher determinism: temperature: 0 makes responses more predictable.

This isn't "ask AI and hope". It's "ask AI with constraints".
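For reference, the determinism knob is just model configuration. A sketch (the import path depends on your LangChain version, and the model name here is a placeholder):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// temperature: 0 trades creativity for repeatability, which is exactly
// what a constrained decision layer wants.
const model = new ChatOpenAI({
  model: "gpt-4o-mini", // placeholder model name
  temperature: 0,
});
```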

Problems I found along the way

Not everything was perfect. Some issues I identified:

  • some fields were requested in the prompt but overwritten in the backend
  • difficulty and category were guided, but not strictly validated as enums
  • the prompt was too large and mixed responsibilities

AI works best with structure — and so does code.

Improvements that made a big difference

  • split system and user messages using ChatPromptTemplate
  • validate business rules in code, not only in prompts
  • remove redundant fields from the prompt
  • strengthen exercise ID validation
  • version the prompt (PROMPT_VERSION=v1)

AI doesn't replace architecture. It becomes part of it.
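"Validate business rules in code, not only in prompts" can be sketched like this (the function names and error messages are illustrative, not the app's real code): enum and ID checks run after parsing, no matter what the prompt promised.

```typescript
// Sketch of post-parse validation: even a well-behaved model response
// must pass enum and ID checks before it is accepted.

const PROMPT_VERSION = "v1"; // version the prompt like any other artifact

const DIFFICULTIES = ["beginner", "intermediate", "advanced"] as const;
type Difficulty = (typeof DIFFICULTIES)[number];

function isDifficulty(value: string): value is Difficulty {
  return (DIFFICULTIES as readonly string[]).includes(value);
}

function validateWorkout(
  workout: { difficulty: string; exerciseIds: string[] },
  allowedIds: Set<string>,
): string[] {
  const errors: string[] = [];
  if (!isDifficulty(workout.difficulty)) {
    errors.push(`unknown difficulty: ${workout.difficulty}`);
  }
  for (const id of workout.exerciseIds) {
    if (!allowedIds.has(id)) {
      errors.push(`exercise ID not in catalog: ${id}`);
    }
  }
  return errors;
}
```

Rejecting (or regenerating) on a non-empty error list keeps hallucinated IDs and free-form difficulty labels out of the database.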

Final thoughts

This article isn't about saying "AI is magic". It's about realizing something important: Sometimes you don't need to build a massive system to solve a complex problem. Sometimes it's better to:

  • delegate the decision layer
  • control the input
  • validate the output
  • optimize for cost and simplicity

AI is powerful, but only when you define the boundaries. You design the contract. You validate the result. AI generates the answer.

And that's the difference between building a product and just writing prompts.

