Physics-Aware Flies: An Interactive Dive into SE(3) Swarm Intelligence
The Geometry of Swarm Intelligence
Imagine flies not as simple insects, but as SE(3) agents—entities with positions and orientations in 3D space, governed by the mathematics of rigid body transformations. This interactive simulation makes this abstract concept tangible. Each fly moves by sampling on-manifold increments:
$$
\text{New Pose} = \Delta_{\text{body}} \cdot \text{Current Pose}
$$

where body-frame steps (translation + rotation) are left-multiplied onto the current pose, a fundamental operation in Lie group dynamics.
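This update can be sketched in a few lines of numpy. The sketch below uses homogeneous 4×4 matrices and Rodrigues' formula for the SO(3) exponential; the function names and noise scales are illustrative, not the simulation's actual API.

```python
import numpy as np

def exp_so3(omega):
    """Rodrigues' formula: map an axis-angle vector to a rotation matrix."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)
    k = omega / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def se3_matrix(R, p):
    """Pack a rotation R and translation p into a homogeneous 4x4 pose."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def sample_increment(sigma_rot, sigma_trans, rng):
    """Draw a small random SE(3) increment: rotation + translation noise."""
    omega = rng.normal(scale=sigma_rot, size=3)
    v = rng.normal(scale=sigma_trans, size=3)
    return se3_matrix(exp_so3(omega), v)

rng = np.random.default_rng(0)
pose = np.eye(4)                       # start at the identity pose
delta = sample_increment(0.05, 0.1, rng)
pose = delta @ pose                    # New Pose = Δ_body · Current Pose
```

Because the increment is sampled on the manifold and composed by group multiplication, the rotation block of `pose` stays a valid member of SO(3) at every step, with no renormalization needed.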
The Dance of Generator and Critic
At each step, a generator proposes movements from four strategies:
1. Haar: Uniform random rotation (measure-preserving)
2. Goal: Attractor-seeking vectors
3. Plume: Gradient-following in attention fields
4. Explore: Curiosity-driven jitter
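A minimal sketch of such a four-way mixture generator is below. The Haar branch really is measure-preserving: a normalized Gaussian quaternion is uniform on SO(3). Everything else (names, jitter scale, the softmax parameterization) is an illustrative assumption, not the simulation's code.

```python
import numpy as np

def haar_rotation(rng):
    """Haar-uniform rotation on SO(3) via a normalized Gaussian quaternion."""
    q = rng.normal(size=4)
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def propose(logits, pos, attractor, plume_grad, rng):
    """Pick one of four strategies by softmax weight; return (name, step)."""
    names = ["haar", "goal", "plume", "explore"]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    i = rng.choice(4, p=probs)
    if names[i] == "haar":
        step = haar_rotation(rng) @ np.array([1.0, 0.0, 0.0])  # random heading
    elif names[i] == "goal":
        step = attractor - pos                 # attractor-seeking vector
    elif names[i] == "plume":
        step = plume_grad                      # follow the local field gradient
    else:
        step = rng.normal(scale=0.3, size=3)   # curiosity-driven jitter
    return names[i], step
```

Drawing `propose` many times per fly and rendering each candidate step is exactly what the ghost trails visualize: samples from the agent's current proposal distribution.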
Ghost trails visualize these proposals near select flies—a rare window into an agent's decision distribution. A GAN-like critic then scores proposals using rewards:
"Closeness to attractors, avoidance of repellents, plume alignment, and collision prevention dynamically shape the reward function," explains the simulation's design.
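One way to combine those four terms is a weighted sum, sketched below. The weights, the inverse-distance shaping, and the collision radius are all assumptions chosen for illustration; the simulation exposes its own weights through the critic sliders.

```python
import numpy as np

def reward(pos, heading, attractors, repellents, plume_grad, neighbors,
           w_goal=1.0, w_rep=1.0, w_plume=0.5, w_coll=2.0):
    """Weighted reward: goal proximity, hazard avoidance, plume alignment, spacing."""
    r = 0.0
    for a in attractors:
        r += w_goal / (1.0 + np.linalg.norm(pos - a))   # closer to gold is better
    for h in repellents:
        r -= w_rep / (1.0 + np.linalg.norm(pos - h))    # closer to red is worse
    g = np.linalg.norm(plume_grad)
    if g > 1e-9:
        r += w_plume * float(heading @ plume_grad) / g  # reward gradient alignment
    for n in neighbors:
        d = np.linalg.norm(pos - n)
        if d < 0.5:                                     # assumed collision radius
            r -= w_coll * (0.5 - d)                     # penalize near-collisions
    return r
```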
When evolution is enabled (REINFORCE-style updates), the generator's mixture weights adapt in real time. Sliders adjust critic weights for goal, plume, and repellent preferences, creating an emergent ballet of swarm intelligence.
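A REINFORCE-style update on the mixture weights can be sketched directly: for softmax-parameterized weights, the score function is a one-hot vector minus the probabilities. The learning rate and baseline handling here are illustrative assumptions.

```python
import numpy as np

def reinforce_update(logits, chosen, reward, baseline, lr=0.05):
    """One REINFORCE step on mixture logits: ∇ log softmax = onehot - probs."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    grad = -probs
    grad[chosen] += 1.0                    # gradient of log π(chosen | logits)
    return logits + lr * (reward - baseline) * grad
```

A strategy that earned more than the running baseline has its weight nudged up; one that underperformed is nudged down, which is how the mixture drifts toward whatever the critic currently rewards.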
Interactive Experimentation
Users directly influence dynamics by clicking to place:
- Attractors (gold): Goal locations
- Repellents (red): Hazard zones
- Plumes (blue): Diffusing signal gradients
Shift-click removes objects, while parameter knobs tweak concentration (κ), noise (σ), and sampling density. This hands-on control reveals how swarm behavior emerges from individual agents' learning loops.
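The diffusing plume signal can be modeled as an isotropic Gaussian kernel that spreads over time, which is what the sketch below assumes (diffusion coefficient `D` and the closed-form gradient are illustrative, not necessarily the simulation's exact field model):

```python
import numpy as np

def plume_concentration(pos, source, t, D=0.1):
    """Isotropic diffusion kernel: a Gaussian plume spreading with time t."""
    var = 2.0 * D * t
    d2 = np.sum((pos - source) ** 2)
    return np.exp(-d2 / (2.0 * var)) / (2.0 * np.pi * var) ** 1.5

def plume_gradient(pos, source, t, D=0.1):
    """Analytic gradient of the plume; it points back toward the source."""
    var = 2.0 * D * t
    return -(pos - source) / var * plume_concentration(pos, source, t, D)
```

Because the gradient is available in closed form, plume-following flies can query it directly instead of finite-differencing the field, which keeps the per-step cost of the "Plume" strategy trivial.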
Why SE(3) Matters
Most swarm simulations operate in Euclidean space. By contrast, SE(3)'s non-Euclidean structure captures real-world constraints like drone orientation or robotic arm kinematics. The simulation showcases:
- Manifold-Aware RL: Policy gradients operating directly on SO(3) rotations
- Attention as Physics: Plumes modeled as diffusing scalar fields
- Emergent Coordination: Flows that balance exploration and goal pursuit
This framework bridges geometric control theory and multi-agent reinforcement learning—a potential blueprint for designing resilient autonomous systems that navigate complex 3D environments.
As you watch flies swarm toward gold or flee red spheres, consider the implications: These physics-aware agents aren't just algorithms. They're prototypes for future robots that must reason about orientation, turbulence, and collective intelligence in an unruly world.