Coming Soon

RoboPlayground

Democratizing Robotic Evaluation
through Structured Physical Domains

Yi Ru Wang* Carter Ung* Evan Gubarev Christopher Tan Siddhartha Srinivasa Dieter Fox

University of Washington  ·  Allen Institute for AI

Evaluation of robotic manipulation systems has largely relied on fixed benchmarks authored by a small number of experts, where task instances, constraints, and success criteria are predefined and difficult to extend. This paradigm limits who can shape evaluation and obscures how policies respond to user-authored variations in task intent, constraints, and notions of success.

We present RoboPlayground, a framework that enables users to author executable manipulation tasks using natural language within a structured physical domain. Natural language instructions are compiled into reproducible task specifications with explicit asset definitions, initialization distributions, and success predicates. Each instruction defines a structured family of related tasks, enabling controlled semantic and behavioral variation while preserving executability and comparability.
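As a purely illustrative sketch of what such a compiled specification might look like (RoboPlayground's actual schema, names, and API are not described here, so every identifier below is a hypothetical assumption), a task specification could bundle explicit assets, a seeded initialization distribution, and a success predicate:

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# Hypothetical sketch only: pairs asset definitions, an initialization
# distribution, and a success predicate into one reproducible task spec.

Pose = Tuple[float, float, float]  # simplified (x, y, z) pose

@dataclass
class TaskSpec:
    instruction: str
    assets: Dict[str, str]                                  # name -> asset id
    sample_init: Callable[[random.Random], Dict[str, Pose]] # init distribution
    success: Callable[[Dict[str, Pose]], bool]              # success predicate

def make_stacking_task() -> TaskSpec:
    """One member of a task family: 'stack the red cube on the blue cube'."""
    def sample_init(rng: random.Random) -> Dict[str, Pose]:
        # Initialization distribution: randomized planar cube positions.
        return {
            "red_cube": (rng.uniform(0.0, 0.1), rng.uniform(0.0, 0.1), 0.02),
            "blue_cube": (rng.uniform(0.0, 0.1), rng.uniform(0.0, 0.1), 0.02),
        }

    def success(state: Dict[str, Pose]) -> bool:
        # Success predicate: red cube rests (roughly) on top of the blue cube.
        rx, ry, rz = state["red_cube"]
        bx, by, bz = state["blue_cube"]
        return abs(rx - bx) < 0.02 and abs(ry - by) < 0.02 and rz > bz

    return TaskSpec(
        instruction="stack the red cube on the blue cube",
        assets={"red_cube": "cube_red_4cm", "blue_cube": "cube_blue_4cm"},
        sample_init=sample_init,
        success=success,
    )

task = make_stacking_task()
init = task.sample_init(random.Random(0))   # seeded, hence reproducible
goal = {"blue_cube": (0.05, 0.05, 0.02), "red_cube": (0.05, 0.05, 0.06)}
print(task.success(goal))                   # True
```

Because the sampler is seeded and the predicate is explicit, many variations of the same instruction remain executable and directly comparable, which is the property the paragraph above describes.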

Full project page under construction