Six innovations that make Softer Robotics the most capable and efficient legged robot platform for real-world deployment.
Robots operating near humans must do more than avoid collisions — they must understand and respond to physical interaction in real time.
Our approach combines high-fidelity sensing with high-frequency joint-level control that captures the complex, non-linear characteristics of each actuator. Built-in mechanical compliance at each joint absorbs unexpected forces naturally, making the robot inherently safe by design, not just by software.
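Joint-level compliance of this kind is commonly realised as an impedance control law, where each joint behaves like a spring-damper around its target position. A minimal sketch of the idea follows; the gains and the numeric values are illustrative assumptions, not the platform's actual controller:

```python
def impedance_torque(q, dq, q_des, kp=40.0, kd=2.0, tau_ff=0.0):
    """Joint-level impedance control: the joint acts like a spring-damper
    around the desired position q_des, so an unexpected external force
    displaces the joint instead of being fought rigidly."""
    return kp * (q_des - q) - kd * dq + tau_ff

# An external push moves the joint 0.1 rad off target at 0.5 rad/s...
tau = impedance_torque(q=0.1, dq=0.5, q_des=0.0)
# ...and the controller answers with a bounded restoring torque.
print(tau)  # -5.0
```

Because the restoring torque scales with displacement rather than jumping to a hard position correction, contact forces stay bounded even before any software-level safety logic runs.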
Data-augmented robot models running at the edge enable high-fidelity environment interaction, allowing safe unsupervised deployment without constant external oversight. Our preliminary work focuses on data-driven modelling that predicts and manages interaction forces accurately.
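One simple form of such data-driven modelling is fitting a residual torque model on top of an analytic one, so that unmodelled effects (e.g. friction) can be subtracted and the remaining torque attributed to external interaction. The toy sketch below uses synthetic data and an assumed viscous-plus-Coulomb friction structure purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic joint velocities and a "true" friction-like residual torque
dq = rng.uniform(-2, 2, size=(500, 1))
residual = 0.8 * dq + 0.3 * np.sign(dq) + 0.05 * rng.normal(size=(500, 1))

# Fit viscous and Coulomb friction coefficients by least squares
X = np.hstack([dq, np.sign(dq)])
coeffs, *_ = np.linalg.lstsq(X, residual, rcond=None)

# The predicted residual can then be subtracted from measured torque,
# exposing the genuine external interaction force.
print(coeffs.ravel())  # ≈ [0.8, 0.3]
```

The same fit-predict-subtract pattern scales from this two-parameter friction model up to learned models of full actuator dynamics.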
Looking ahead, we are building toward TPU-style inference chips that enable on-device prediction of complex interaction scenarios — making safe human-robot collaboration faster, more reliable, and truly scalable.
Every wheeled robot has a boundary — the first step it cannot climb. Beyond that line, automation fails and humans step in. That boundary has defined the limits of automated delivery since the first autonomous delivery trials.
Legged robots have no such boundary. Our platform handles stairs, slopes, cobblestones, and outdoor terrain as naturally as flat ground. With four legs, adaptive locomotion and intelligent leg coordination, it goes where wheels cannot — making true end-to-end automation finally possible.
Direct-drive and quasi-direct-drive (QDD) actuators are characterised by low friction and good backdrivability, but this comes at a cost: maintaining a standing position or holding a payload drains power continuously, drastically limiting autonomous operating time.
Our spring-loaded actuators solve this at the root. By storing and releasing energy dynamically, they draw as little as one-ninth the power of conventional QDD systems under static load, as shown in the comparison above.
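The scale of the saving follows from motor copper losses, P = i²R: when a preloaded spring cancels part of the gravity torque, the motor current, and hence the holding power, drops quadratically. The back-of-envelope sketch below uses purely illustrative numbers (torque constant, winding resistance, and load torques are assumptions, not measured values) chosen so that a 3x torque reduction yields the 9x power reduction:

```python
def hold_power(load_torque, spring_torque, kt=0.25, R=0.5):
    """Copper loss while holding a static load: the motor only has to
    supply the torque the spring does not (i = tau / k_t, P = i^2 * R)."""
    current = (load_torque - spring_torque) / kt
    return current ** 2 * R

p_qdd    = hold_power(load_torque=3.0, spring_torque=0.0)  # no spring assist
p_spring = hold_power(load_torque=3.0, spring_torque=2.0)  # spring offsets 2/3
print(p_qdd / p_spring)  # 9.0
```

Because the loss is quadratic in current, even a partial spring offset yields a disproportionately large energy saving during standing and payload-holding.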
Combined with an integrated wheel system, the robot switches seamlessly between rolling on flat ground and walking over obstacles. Wheels handle the distance. Legs handle the terrain. The result: dramatically lower energy consumption and longer operational range.
Our robot doesn't follow pre-programmed motion sequences — it learns. Using reinforcement learning trained in NVIDIA Isaac, the robot develops controllers that react to the environment in real time: adapting its gait to uneven terrain, recovering from disturbances, and handling edge cases it has never explicitly been programmed for. Simulation lets us run millions of scenarios overnight, and the resulting policies transfer directly to hardware — keeping development cycles short as the platform scales.
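A key ingredient in making simulation-trained policies transfer to hardware is domain randomization: each simulated episode draws new physical parameters so the policy cannot overfit to one idealized world. The sketch below shows the pattern in isolation; the parameter names, ranges, and the stubbed training calls are illustrative assumptions, not our actual training configuration:

```python
import random

def sample_env_params(rng):
    """Domain randomization: draw fresh physics parameters per episode
    so the learned controller stays robust to real-world variation."""
    return {
        "ground_friction": rng.uniform(0.3, 1.2),
        "payload_mass_kg": rng.uniform(0.0, 5.0),
        "motor_strength_scale": rng.uniform(0.8, 1.1),
        "push_force_n": rng.uniform(0.0, 50.0),
    }

rng = random.Random(42)
for episode in range(3):
    params = sample_env_params(rng)
    # reset_simulation(params); rollout(policy); update(policy)  # hypothetical
    print(params)
```

Run across thousands of parallel simulated environments, this loop is what lets millions of randomized scenarios accumulate overnight.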