# Infrastructure

Former Atlassian Engineer Releases 38-Minute Deep Dive on Company's Infrastructure After Termination

AI & ML Reporter
4 min read

A former Atlassian engineer who was let go from the company posted a comprehensive breakdown of the infrastructure systems he built, covering Envoy proxy architecture, sidecar patterns for cross-cutting concerns, DynamoDB with SQS for async workloads, and Packer/SaltStack for VM automation. The video, viewed millions of times, provides a rare look at enterprise-scale infrastructure decisions at a company serving 350,000 customers with $1.79 billion in quarterly revenue.

When Employee-Shareware Meets Enterprise Infrastructure

The old saying goes: don't bite the hand that feeds you. Unless, apparently, that hand just handed you a pink slip.

A former Atlassian engineer did something unusual after being terminated from the company. Instead of signing a non-disparagement agreement, doing some consulting, or just moving on, he posted a 38-minute video breakdown of every significant system he built during his tenure. All of it. Free. For anyone to copy.

The video has been viewed millions of times, and for good reason. What he revealed is a masterclass in building infrastructure at scale.

The Architecture

Here's what Atlassian was running, according to the breakdown:

Envoy proxy instead of enterprise load balancers

This is a deliberate architectural choice that tells you something about how they thought about traffic management. Instead of spending money on expensive enterprise load balancers, they went with Envoy - an open-source edge and service proxy originally developed at Lyft. This gives them fine-grained control over traffic routing, canary deployments, circuit breaking, and retries without being locked into a vendor's appliance. The trade-off is operational complexity - Envoy requires people who understand proxy configuration, observability into proxy behavior, and the infrastructure to run it. But at Atlassian's scale, that's a worthwhile investment.
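To make that concrete, here is a minimal Envoy v3 configuration sketch showing the kind of knobs you get for free - per-route retries and per-cluster circuit breakers. This is illustrative, not Atlassian's actual config; the listener, cluster, and hostname (`app.internal`) are placeholders.

```yaml
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route:
                  cluster: app
                  retry_policy:        # automatic retries on server errors
                    retry_on: "5xx"
                    num_retries: 2
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: app
    connect_timeout: 1s
    type: STRICT_DNS
    circuit_breakers:                  # shed load before the backend drowns
      thresholds:
      - max_connections: 1024
        max_pending_requests: 256
    load_assignment:
      cluster_name: app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: app.internal, port_value: 80 }
```

A hardware load balancer gives you a fraction of this tunability, and none of it in version control.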

Sidecar architecture for auth, logging, and rate limits

This is the modern service-mesh pattern popularized by Istio and Linkerd. Instead of baking authentication, logging, and rate limiting into each service, you deploy a sidecar proxy alongside every service instance that handles these cross-cutting concerns. The benefit is consistency - every request gets the same auth checks, logging, and rate limiting regardless of which team wrote the service. The cost is complexity in the deployment pipeline and network debugging. This pattern only makes sense once you have enough services that inconsistent implementations become a real problem.
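The sidecar's job is easy to sketch: intercept each request, apply the same checks every time, then forward. Below is a toy Python version of that pre-processing stage - a token-bucket rate limiter plus an auth check. All names and the request shape are illustrative assumptions, not anything from the video.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter of the kind a sidecar applies
    uniformly in front of every service instance (illustrative)."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, then try to spend one token.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def handle(request: dict, bucket: TokenBucket, valid_tokens: set) -> dict:
    """Sidecar-style pre-processing: auth, then rate limit, then a
    uniform log line, then hand off to the wrapped service."""
    if request.get("auth") not in valid_tokens:
        return {"status": 401}            # same auth check for every service
    if not bucket.allow():
        return {"status": 429}            # same rate limiting for every service
    print(f"request path={request['path']} ok")  # same structured logging
    return {"status": 200, "body": "forwarded to service"}
```

The point is that the service behind this code never sees an unauthenticated or over-limit request, and no team has to reimplement (or forget to implement) these checks.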

DynamoDB + SQS for async provisioning

DynamoDB as the database and Amazon SQS for message queuing. This is a textbook AWS-native architecture. DynamoDB gives them consistent single-digit millisecond latency at any scale, which matters when you're provisioning resources for hundreds of thousands of customers. SQS handles the async work - provisioning isn't synchronous, it's a workflow. Someone requests a new environment; that request goes into a queue; workers pick it up and process it. This decouples the user-facing API from the actual provisioning work and lets them scale workers independently from the API.
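The shape of that workflow fits in a few lines. This sketch uses in-memory stand-ins - `queue.Queue` for SQS and a dict for the DynamoDB table - to show the decoupling, not the AWS API calls; everything here is an illustrative assumption.

```python
import queue
import threading

requests = queue.Queue()  # stand-in for the SQS queue
table = {}                # stand-in for the DynamoDB state table

def submit(env_id: str) -> dict:
    """User-facing API: record the request and return immediately.
    The caller never waits on the slow provisioning work."""
    table[env_id] = "PENDING"
    requests.put(env_id)
    return {"env": env_id, "status": "PENDING"}

def worker():
    """Background worker: drain the queue and do the slow work.
    Scale by running more of these, independently of the API."""
    while True:
        env_id = requests.get()
        if env_id is None:        # shutdown sentinel
            break
        # ... actual provisioning (VMs, DNS, storage) happens here ...
        table[env_id] = "READY"
        requests.task_done()

t = threading.Thread(target=worker)
t.start()
submit("env-1")
submit("env-2")
requests.put(None)
t.join()
print(table)
```

Swap the stand-ins for `boto3` clients and the structure is the same: the API writes a row and enqueues a message, and a fleet of workers consumes the queue at whatever rate the backend can sustain.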

Packer + SaltStack for automated VM deployments

Packer to build machine images and SaltStack to configure them. This is infrastructure-as-code before it was a buzzword. Build a standardized image with Packer (with all dependencies baked in), then use SaltStack to handle configuration management on top. The benefit is reproducibility - every VM starts from a known good state. The cost is rebuilding images whenever something needs patching. At Atlassian's scale, that rebuild pipeline is almost certainly automated.
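The division of labor looks roughly like this: Packer bakes the image with dependencies preinstalled, and a Salt state file declares what must be true on every VM booted from it. Here is a minimal illustrative state file - the package and service names are placeholders, not anything from the video.

```yaml
# app.sls - Salt state applied on top of the Packer-built image.
app-package:
  pkg.installed:
    - name: app-service        # placeholder package name

app-service:
  service.running:
    - name: app-service        # keep the daemon running and enabled
    - enable: True
    - require:
      - pkg: app-package       # don't start it before it's installed
```

Because the state is declarative, running it twice is a no-op - which is exactly the property you want when every VM must converge to the same known-good configuration.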

The Business Angle

Here's what's interesting about this from a business perspective.

Atlassian charges per seat across 350,000 customers. Their infrastructure is a competitive advantage - the ability to provision resources quickly, reliably, and at scale. That's what lets them serve all those customers without crumbling under the weight.

And now someone just handed that playbook to anyone who wants to watch a 38-minute video.

Is this a problem for Atlassian? Probably not as much as you'd think. First, architecture is only a small part of what makes infrastructure work. The operational knowledge - how to debug Envoy when things break at 3am, how to tune DynamoDB throughput, how to handle the edge cases - that's not in the video. Second, most companies don't have the engineering team size to run this kind of infrastructure anyway. Third, the specific technologies are well-documented elsewhere.

But it's still a fascinating moment in the relationship between employees and the companies they work for.

The Pattern

This isn't entirely new. There's a growing category of what you might call "employee-shareware" - where people build things at companies, get terminated or leave, and then share what they built with the world. The theory is that the knowledge inside a company is often more valuable than the company's products, and individuals have increasing power to share that knowledge directly.

The counter-argument is that this is a breach of trust, that proprietary information should stay proprietary, that these engineers are burning bridges for short-term attention.

The reality is probably somewhere in between. Companies have always had proprietary knowledge - the question is just whether keeping it secret is actually sustainable in an era where individual engineers can reach millions of people directly. Atlassian could have kept all of this internal. Instead, an engineer decided the world should know how they built it.

Whether that's a betrayal or a public service depends on who you ask.
