robotics hardware embodied-ai unitree rosclaw

OpenClaw Goes Physical: From Screens to Robots with Unitree G1 and ROSClaw

OpenClaws.io Team

@openclaws

March 5, 2026

7 min read

Beyond the Screen

For most of its short life, OpenClaw has lived inside terminals and chat windows — answering questions, running scripts, managing schedules. But a wave of projects in early 2026 is pushing OpenClaw into the physical world, giving AI agents bodies, cameras, and the ability to move through real space.

The most striking example: a Unitree G1 humanoid robot running OpenClaw, capable of understanding rooms, recognizing people, and remembering what happened and when. This is not science fiction. It is running today, and it is fully open source.

Unitree G1 + Spatial Agent Memory

A project called Dimensional integrated OpenClaw with the Unitree G1 humanoid robot and introduced a capability called Spatial Agent Memory — essentially giving the robot "world memory."

The agent understands physical space and temporality:

  • Where things are: It knows the layout of rooms, the location of objects, and where people tend to be
  • What happened when: It maintains a temporal log of events — who entered a room, when an object was moved, what was said during a conversation
  • Camera integration: It connects to any camera system, processing visual input in real time to update its spatial model

This transforms OpenClaw from a text-based assistant into a spatially aware agent that can navigate and reason about the physical world.
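As a rough illustration of what a "world memory" like this might look like under the hood, here is a minimal sketch combining a spatial index with a temporal event log. The class and method names are invented for this post, not Dimensional's actual API:

```python
import bisect
from dataclasses import dataclass, field

@dataclass
class SpatialAgentMemory:
    """Illustrative world-memory store: where things are, plus what happened when."""
    locations: dict = field(default_factory=dict)  # object name -> (room, x, y)
    events: list = field(default_factory=list)     # sorted list of (timestamp, description)

    def observe(self, obj, room, x, y, timestamp):
        """Update an object's last known location and log the observation."""
        self.locations[obj] = (room, x, y)
        bisect.insort(self.events, (timestamp, f"{obj} seen in {room}"))

    def where_is(self, obj):
        """Answer 'where things are' queries from the spatial index."""
        return self.locations.get(obj)

    def what_happened(self, start, end):
        """Answer 'what happened when' queries over the interval [start, end)."""
        lo = bisect.bisect_left(self.events, (start,))
        hi = bisect.bisect_left(self.events, (end,))
        return [desc for _, desc in self.events[lo:hi]]
```

A camera pipeline would call `observe` as frames are processed, so the agent can later answer both kinds of query from the same store.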

ROSClaw: Hackathon Champion

At the SF OpenClaw Hackathon, a project called ROSClaw won first place by building a bridge between OpenClaw and the Robot Operating System (ROS 2) — the industry-standard middleware for robotics.

ROSClaw's architecture:

  1. Plugin layer: A custom OpenClaw skill that translates natural-language commands into ROS 2 topics and services
  2. WebRTC connection: Low-latency, secure remote control over the internet — operate a robot in Tokyo from a laptop in San Francisco
  3. Sensor fusion: The agent receives camera feeds, LIDAR data, and joint states, then reasons about what to do next
  4. Action execution: The agent can drive motors, move arms, and trigger grippers — all through conversational commands
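The plugin layer in step 1 amounts to a command router: parse a natural-language-ish instruction, pick a ROS 2 topic, and build a message payload. Here is a stdlib-only sketch of that idea; the topic names, verbs, and payload shapes are hypothetical (a real bridge would publish typed messages via rclpy rather than return JSON):

```python
import json
import re

# Hypothetical verb -> (ROS 2 topic, payload builder) routing table.
# Real messages would be geometry_msgs/Twist etc.; here we emit plain dicts.
ROUTES = {
    "move":  ("/cmd_vel",     lambda arg: {"linear_x": float(arg)}),
    "grip":  ("/gripper/cmd", lambda arg: {"close": arg == "close"}),
    "speak": ("/tts/say",     lambda arg: {"text": arg}),
}

def route_command(text):
    """Translate a command like 'move 0.5' into a (topic, JSON payload) pair."""
    match = re.match(r"(\w+)\s+(.+)", text.strip())
    if not match:
        raise ValueError(f"unparsed command: {text!r}")
    verb, arg = match.groups()
    if verb not in ROUTES:
        raise ValueError(f"unknown verb: {verb}")
    topic, build = ROUTES[verb]
    return topic, json.dumps(build(arg))
```

In a deployed bridge, the agent's language model would normalize free-form chat into these canonical verbs before routing, and the returned payload would be published on the chosen topic.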

In the hackathon demo, participants chatted with their OpenClaw agent to command a robot to pick up objects, navigate around obstacles, and report on its environment.

Hardware Compatibility

OpenClaw's robotics integration is not limited to humanoid robots. The community has demonstrated deployments on:

  • Unitree G1 and H1: Full-size humanoid robots with walking, manipulation, and camera capabilities
  • Unitree Go2: Quadruped robot dogs used for patrol, inspection, and delivery
  • DJI drones: Aerial agents that can survey areas, track objects, and respond to natural-language flight commands
  • Custom ROS 2 robots: Any robot running ROS 2 can connect via the ROSClaw bridge

The peaq Robotics SDK

The peaq network released a Robotics SDK specifically designed to make robots "OpenClaw-ready." The SDK handles:

  • Device identity and authentication for robots
  • Secure communication channels between OpenClaw agents and robot hardware
  • Data logging and audit trails for autonomous robot actions

This infrastructure layer addresses one of the biggest concerns in robotic AI: accountability. When a robot takes an action in the physical world, you need to know which agent authorized it, what data informed the decision, and how to audit the entire chain.
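One common way to build such an audit trail is a hash-linked log, where every record commits to the record before it, so any after-the-fact edit is detectable. This is a conceptual sketch of that pattern, not the peaq SDK's actual interface:

```python
import hashlib
import json

def append_action(log, agent_id, action, inputs):
    """Append a tamper-evident record: each entry hashes the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"agent": agent_id, "action": action, "inputs": inputs, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every hash in order; a modified record breaks the chain."""
    prev = "0" * 64
    for rec in log:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Each record answers the accountability questions directly: which agent authorized the action, what inputs informed it, and (via the hash chain) whether the history has been altered since.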

What This Means

OpenClaw's move into robotics represents something bigger than a cool hack. It is the convergence of three trends:

  1. AI agents mature enough to reason about complex, multi-step physical tasks
  2. Robot hardware cheap enough for individuals and small teams to experiment with (the Unitree Go2 starts under $2,000)
  3. Open-source infrastructure that lets anyone connect the two without vendor lock-in

We are watching the early days of a new paradigm: AI agents that do not just think and type, but move, see, and act in the real world. And OpenClaw is at the center of it.

For more details, check out the ROSClaw project and the peaq Robotics SDK documentation.
