From Lightmass to Snow Occlusion: Inside the Performance Engine of a VR Thriller
A recent post from the developers behind Mannequin details the technical journey that transformed a VR title from a performance nightmare into a polished Quest 2 experience. The narrative is a masterclass in incremental optimization, illustrating how a deep understanding of Unreal Engine 5’s rendering pipeline can unlock significant gains.
Indirect Light Memory Bloat
The team’s first headache was the sheer size of the lighting data. On the earlier Vampire: The Masquerade – Justice project, baking a single level’s lighting with CPU Lightmass consumed an entire day. Even after moving to GPU‑based bakes, the problem persisted.
Root cause: An oversized MapBuildData file dominated disk and VRAM usage, driven by a dense field of volumetric lightmap (VLM) samples.
The culprit was a single Lightmass Importance Volume that wrapped the whole level, combined with a very small Volumetric Lightmap Detail Cell Size. The result was a sea of samples scattered across empty space.
The solution was deceptively simple: replace the single volume with several smaller Lightmass Importance Volumes that hug the playable areas, and raise the cell size where fine detail was not critical. This tweak cut bake times by hours, shrank VLM memory usage by 60–70%, and preserved visual quality.
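The cell-size half of that fix is a single property on the level's World Settings. The helper below is a hypothetical editor-side sketch rather than code from the post; RelaxVolumetricLightmapDensity and the value of 400 units are illustrative assumptions (the engine default is 200).

#include "Engine/World.h"
#include "GameFramework/WorldSettings.h"

// Hypothetical editor helper: enlarge volumetric lightmap cells so empty,
// non-playable space receives far fewer VLM samples. Tighter importance
// volumes around playable areas then restore detail where it matters.
void RelaxVolumetricLightmapDensity(UWorld* World)
{
    AWorldSettings* Settings = World->GetWorldSettings();
    // Engine default is 200 Unreal units; 400 is an assumed example value.
    Settings->LightmassSettings.VolumetricLightmapDetailCellSize = 400.0f;
}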
Custom Primitive Data and Color Atlases
To avoid creating dozens of material instances, the team used Custom Primitive Data (CPD) to drive color variations on small props, with a dedicated tool for batch-editing CPD values across large selections. For larger props, a **color atlas** packed the RGB channels for tint and the alpha channel for specular strength, further reducing material count.
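Setting CPD from code is a one-liner per primitive. The snippet below is a minimal sketch of the approach, not the team's batch tool; the assumption that the material reads a float4 tint at data index 0 is mine, and must match the material's CustomPrimitiveData node.

#include "Components/StaticMeshComponent.h"

// Minimal sketch: tint a prop through Custom Primitive Data so many props
// share one material with no per-color material instances.
void ApplyPropTint(UStaticMeshComponent* Mesh, const FLinearColor& Tint)
{
    // Assumes the material samples a float4 at data index 0 via a
    // CustomPrimitiveData node; the actual layout is project-specific.
    Mesh->SetCustomPrimitiveDataVector4(0, FVector4(Tint.R, Tint.G, Tint.B, Tint.A));
}

Because the data lives on the primitive rather than in a material instance, the material count stays flat no matter how many tints are in play.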
LOD Bias and Editor Simulation
Per‑platform LODs were tuned to cut character triangle counts by roughly 75% on Quest 2. Because an HMD renders at a higher per‑eye resolution than the editor viewport, the team added a GameState function that adjusted the editor's render resolution and applied console variables (cvars) to match the HMD's LOD distances. This let artists preview LOD transitions without putting on a headset.
// Sketch of the LOD distance adjustment (reconstructed; the exact values
// that mirror Quest 2 per-eye resolution are project-specific).
#include "HAL/IConsoleManager.h"

void SetEditorLODScale(float Scale)
{
    // Raise the editor's render resolution toward the HMD's per-eye
    // resolution; r.ScreenPercentage is expressed in percent.
    IConsoleManager::Get().FindConsoleVariable(TEXT("r.ScreenPercentage"))->Set(Scale * 100.0f);

    // Push static mesh LOD transitions out so the editor selects the same
    // LODs the headset would at its higher resolution.
    IConsoleManager::Get().FindConsoleVariable(TEXT("r.StaticMeshLODDistanceScale"))->Set(1.0f / Scale);
}
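The post doesn't spell out how the GameState hook is triggered; in practice, a function like this could be exposed as an editor console command or a utility-widget button so artists can toggle an "HMD preview" mode on demand.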
Takeaway
The Mannequin performance story underscores that high‑quality VR on constrained hardware is achievable through a layered approach: optimize lighting data, revive and extend occlusion tools, and employ smart culling, instancing, and LOD strategies. Each small tweak—adjusting importance volumes, customizing occluders, or batching CPD values—contributed to a cumulative performance lift that made a smooth Quest 2 experience possible.
The post serves as a valuable reference for developers facing similar constraints, showing how a disciplined, iterative optimization pipeline compounds small wins into a shippable experience.
Source: https://real-mrbeam.github.io/2025/12/11/Optimizing-Mannequin.html