# Hardware

SynapX Unveils SYNData: A Multimodal Capture Suite for Dexterous Manipulation

AI & ML Reporter
4 min read

SynapX released SYNData, a hardware kit that records egocentric video, EMG, and exoskeleton glove data to build large‑scale manipulation datasets. The company claims the system solves the data bottleneck for embodied AI, but the novelty lies more in integration than in new sensing technology, and practical scaling still faces usability and standardization hurdles.

What SynapX claims

SynapX announced SYNData, a turnkey kit for collecting multimodal manipulation data. The package bundles three pieces of hardware:

  1. Quad‑camera ego headset – four synchronized fisheye lenses mounted on a head‑mounted display to capture first‑person video.
  2. EMG wristbands – dry‑electrode arrays that record muscle activation from forearm flexors and extensors.
  3. Bionic exoskeleton data glove – a force‑sensing glove that reports joint angles, contact locations, and distributed pressure across the palm.

According to the press release, the system can record all streams simultaneously, producing a synchronized dataset that includes:

  • RGB video from four viewpoints
  • 3‑D hand pose derived from the glove’s kinematics
  • Contact‑force maps over the entire palm
  • Raw EMG waveforms
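To make the synchronization claim concrete, one time-aligned sample from such a rig might look like the sketch below. The field names, shapes, and channel counts are illustrative assumptions, not SynapX's published schema.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class CaptureFrame:
    """One time-aligned sample of the four streams SYNData reportedly records.
    All field names and array shapes are hypothetical, for illustration only."""
    timestamp_ns: int        # shared capture clock, nanoseconds
    rgb: np.ndarray          # (4, H, W, 3) - four fisheye viewpoints
    hand_pose: np.ndarray    # (21, 3) - joint positions from glove kinematics
    contact_force: np.ndarray  # (n_taxels,) - pressure map over the palm
    emg: np.ndarray          # (channels, samples) - raw EMG window


frame = CaptureFrame(
    timestamp_ns=0,
    rgb=np.zeros((4, 480, 640, 3), dtype=np.uint8),
    hand_pose=np.zeros((21, 3)),
    contact_force=np.zeros((64,)),
    emg=np.zeros((8, 200)),
)
print(frame.rgb.shape)  # (4, 480, 640, 3)
```

The point of bundling the streams into one timestamped record is that downstream models can consume vision, force, and biosignals without per-lab alignment code.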

SynapX argues that the real bottleneck for embodied AI is not model size or compute, but the lack of scalable, high‑quality interaction data. By packaging these sensors together and providing a “Bio2Robot” conversion model that maps biological signals to robot‑usable representations, they claim researchers can collect “human‑level” manipulation data at a rate comparable to video‑only pipelines.

The company also highlighted a recent competition result: a second‑place finish in the AGIBOT World Challenge – Reasoning to Action track at ICRA 2026, achieved only three weeks after the firm’s founding.


What is actually new?

Integration, not sensor invention

The individual components in SYNData are not new. Multi‑camera rigs for egocentric vision have been commercially available for years (e.g., Intel RealSense T265, GoPro Fusion). EMG wristbands are widely used in prosthetics research (e.g., Myo, Delsys Trigno) and open‑source projects. Force‑sensing data gloves such as Manus VR have long been on the market, and research platforms like the Shadow Dexterous Hand demonstrate comparable tactile sensing on the robot side.

What SYNData does differently is bundle these devices, provide a synchronized software stack, and ship a pre‑trained Bio2Robot model that translates raw EMG into high‑level intent signals. From an engineering perspective, that reduces the integration effort for a lab that would otherwise have to write custom drivers, time‑align streams, and calibrate force sensors.
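The time-alignment glue that SYNData's stack reportedly replaces is simple but fiddly to get right. A minimal nearest-timestamp sketch (our illustration, not SynapX's code) for matching a slow video stream against a fast EMG stream:

```python
import numpy as np


def align_to_video(video_ts, emg_ts):
    """For each video frame timestamp, return the index of the nearest EMG
    sample. Both arrays are sorted timestamps (e.g., nanoseconds on a shared
    clock). This is the kind of per-stream glue code labs otherwise hand-roll."""
    idx = np.searchsorted(emg_ts, video_ts)
    idx = np.clip(idx, 1, len(emg_ts) - 1)
    left = emg_ts[idx - 1]   # nearest sample at or before each frame
    right = emg_ts[idx]      # nearest sample after each frame
    # Step back one index wherever the earlier sample is strictly closer
    idx -= (video_ts - left) < (right - video_ts)
    return idx


frames = np.array([0, 9, 14])
emg = np.array([0, 10, 20])
print(align_to_video(frames, emg))  # [0 1 1]
```

Multiply this by four cameras, an EMG band per arm, and a glove, and the appeal of a pre-built synchronized stack becomes clear.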

The Bio2Robot mechanism

The press release mentions a “Bio2Robot mechanism” – an AI model that maps human biological signals (EMG, glove kinematics) to robot‑friendly representations (e.g., joint torque commands). The company released a brief white‑paper that describes a two‑stage pipeline:

  1. Signal preprocessing – band‑pass filtering, RMS envelope extraction, and dimensionality reduction via PCA.
  2. Cross‑modal translation – a lightweight transformer that learns a joint embedding of EMG and glove data, then decodes to a robot‑centric action space.
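The stage‑1 preprocessing described in the white‑paper can be sketched as follows. This is our reading of the listed steps with assumed filter parameters (a typical 20–450 Hz surface‑EMG band) and PCA implemented via SVD; it is not SynapX's released code.

```python
import numpy as np
from scipy.signal import butter, filtfilt


def preprocess_emg(emg, fs=1000, band=(20.0, 450.0), win=100, n_components=4):
    """Sketch of the white-paper's stage-1 pipeline:
    band-pass filter -> RMS envelope -> PCA.

    emg: (channels, samples) raw waveforms at sample rate fs (Hz).
    Returns (n_windows, n_components) reduced features."""
    # 4th-order Butterworth band-pass in the assumed surface-EMG band
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg, axis=1)
    # Sliding-window RMS envelope via convolution of the squared signal
    kernel = np.ones(win) / win
    envelope = np.sqrt(np.apply_along_axis(
        lambda x: np.convolve(x, kernel, mode="valid"), 1, filtered ** 2))
    # PCA across channels via SVD: treat each window as one observation
    X = envelope.T - envelope.T.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T
```

The stage‑2 transformer then consumes these reduced features alongside glove kinematics; no architectural details beyond "lightweight transformer" were disclosed.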

In benchmark tests on a Franka Emika Panda equipped with a parallel‑jaw gripper, the model achieved a mean absolute error of 0.12 N·m on joint torque prediction, comparable to a supervised baseline trained on 200 hours of paired human‑robot data. The improvement over a vision‑only baseline was modest (≈ 8 % lower error), suggesting the added modalities help but are not a silver bullet.

Dataset scale claim

SynapX promises “scalable collection without interfering with natural human behavior.” Their demo video shows a user performing a kitchen‑task sequence while wearing the headset and glove; the hardware appears lightweight enough not to impede motion. However, the demo only covers a single participant performing 15 minutes of activity. No public dataset has been released yet, so the claim of “large‑scale” remains unverified.


Limitations and practical concerns

  1. Usability overhead – Even with synchronized software, setting up four cameras, EMG bands, and a data glove takes at least 30 minutes per participant. Calibration of the glove’s force sensors requires a separate rig and can drift over time.
  2. Signal quality variance – Dry‑electrode EMG is notoriously sensitive to skin preparation and motion artefacts. In uncontrolled environments (e.g., a real kitchen), signal dropout could be frequent, reducing the usefulness of the Bio2Robot model.
  3. Standardization gap – The community currently lacks a common format for fusing video, EMG, and contact‑force data. SYNData ships its own JSON‑based schema; converting to widely used formats like ROS bag or HDF5 will require extra tooling.
  4. Cost barrier – The quoted price for the full kit (≈ $7,500) is comparable to a high‑end motion‑capture system. Smaller labs may still find it prohibitive, especially when multiple participants are needed for statistical power.
  5. Benchmark relevance – The AGIBOT competition focuses on reasoning over a pre‑recorded dataset, not on real‑time control. A high placement there does not directly prove that SYNData improves robot manipulation performance in the wild.
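To illustrate the standardization gap in point 3: until common formats emerge, each lab will write conversion tooling like the sketch below, which flattens vendor JSON records into the columnar arrays an HDF5 or ROS bag exporter expects. The record fields here are hypothetical, not SYNData's actual schema.

```python
import json

import numpy as np


def frames_to_columns(json_lines):
    """Convert line-delimited JSON capture records (field names assumed for
    illustration) into columnar numpy arrays - the layout HDF5 datasets or
    a ROS bag exporter would consume."""
    records = [json.loads(line) for line in json_lines]
    return {
        "timestamp_ns": np.array([r["timestamp_ns"] for r in records],
                                 dtype=np.int64),
        "emg": np.array([r["emg"] for r in records], dtype=np.float32),
        "contact_force": np.array([r["contact_force"] for r in records],
                                  dtype=np.float32),
    }


lines = [
    json.dumps({"timestamp_ns": 0, "emg": [0.1] * 8,
                "contact_force": [0.0] * 64}),
    json.dumps({"timestamp_ns": 1_000_000, "emg": [0.2] * 8,
                "contact_force": [0.0] * 64}),
]
cols = frames_to_columns(lines)
print(cols["emg"].shape)  # (2, 8)
```

A community-agreed schema would make this step unnecessary; shipping yet another proprietary JSON layout pushes the cost onto users.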

Outlook

SynapX’s SYNData is a useful step toward more holistic manipulation datasets. By packaging existing sensors and providing a baseline translation model, the company lowers the entry barrier for researchers who need synchronized vision, force, and biosignal streams. The real test will be whether the community adopts a common data format and whether the Bio2Robot model can generalize across tasks, users, and robot platforms.

If SynapX releases an open‑source dataset covering diverse objects, multiple users, and realistic occlusions, the claim of “solving the data bottleneck” will gain substance. Until then, the system should be viewed as a well‑engineered integration kit rather than a transformative breakthrough in embodied AI.

