Autonomous science requires robots that understand science, not just motion and manipulation. OSS treats robots as first-class lab task operators, alongside humans and instruments.
To enable robots to operate in dynamic workspace environments such as science R&D labs, OSS has developed the Robotic Application Stack (RAS), a flexible middleware that simplifies robotics application development.
RAS includes four modules, as shown below:
The Model module of RAS is leveraged by ULA to create a digital twin of the physical lab. The other three modules (Viewer, Planner, and Controller) allow ULI to seamlessly leverage robots to perform the various operator tasks needed to handle a wide spectrum of scientific instruments across a diversity of labs.
Model includes a lab simulation environment built using Gazebo and Isaac Sim. The lab model is kept in sync with the physical lab environment via a perception system that constantly monitors the actual lab. The model enables the lab workspace to be defined in terms of logical locations rather than specific physical coordinates. Overall, the model enables robotics developers to build and validate their robotic programs even before they get access to the actual robots.
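The logical-location idea above can be sketched in a few lines. This is a minimal illustration, not RAS's actual API: the `LabModel` class, `Pose` type, and location names are all hypothetical, showing only how applications could refer to logical names while perception keeps the underlying coordinates in sync.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float

class LabModel:
    """Hypothetical digital-twin registry mapping logical locations to poses."""
    def __init__(self):
        self._locations: dict[str, Pose] = {}

    def register(self, name: str, pose: Pose) -> None:
        self._locations[name] = pose

    def resolve(self, name: str) -> Pose:
        # Applications refer to logical names; the model supplies coordinates.
        return self._locations[name]

    def sync_from_perception(self, observed: dict[str, Pose]) -> None:
        # Perception updates keep the twin aligned with the physical lab.
        self._locations.update(observed)

model = LabModel()
model.register("sample_rack_A", Pose(0.40, -0.10, 0.05))
# Perception later observes that the rack was nudged:
model.sync_from_perception({"sample_rack_A": Pose(0.42, -0.10, 0.05)})
```

A robotic program written against `"sample_rack_A"` keeps working after the rack moves, because only the model's mapping changes.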
Viewer includes a perception system that provides 3D visualization of instruments and the lab space. It encodes the full state of the lab’s physical environment: the locations of target objects and obstacles in the real world. The viewer enables robotic applications to work in a flexible and scalable manner across different lab environments.
The viewer is capable of handling transparent labware such as test tubes and flasks. In addition, it properly maps closed and constrained concave spaces within instruments.
Planner converts operator tasks into motion plans and manipulation trajectories for handling samples, operating instruments, and similar actions. The planner generates optimal, collision-free trajectories to handle objects and operate instruments from different orientations; e.g., working in the constrained space inside an Opentrons instrument. As mentioned earlier, ULI consists of a small set of atomic and stateless operator tasks, which are needed to handle and operate lab equipment. These atomic and stateless tasks are implemented in ROS2 so that the AI science labs can work with any ROS2-compliant robots.
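The atomic, stateless task design can be illustrated with a minimal sketch. The registry, decorator, and task names below are hypothetical (the real tasks are ROS2 actions, not plain functions): the point is that each task receives its full context as arguments and holds no state between invocations, so higher-level protocols can compose them freely.

```python
from typing import Callable

# Hypothetical registry of atomic, stateless operator tasks.
TASKS: dict[str, Callable[..., str]] = {}

def task(name: str):
    """Register a function as an operator task under a logical name."""
    def register(fn):
        TASKS[name] = fn
        return fn
    return register

@task("pick")
def pick(obj: str, frm: str) -> str:
    # Stateless: everything the task needs arrives as arguments.
    return f"picked {obj} from {frm}"

@task("place")
def place(obj: str, to: str) -> str:
    return f"placed {obj} at {to}"

# A higher-level protocol composes atomic tasks into a sequence:
plan = [("pick", {"obj": "vial_3", "frm": "rack_A"}),
        ("place", {"obj": "vial_3", "to": "centrifuge_slot_1"})]
log = [TASKS[name](**args) for name, args in plan]
```

Because no task depends on hidden state from a previous call, a failed step can be retried or replanned in isolation.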
Controller uses real-time visual feedback and dynamic adjustment of robot parameters (such as poses, coordinates, torque limits, and speed profiles) to ensure precise manipulation, allowing tasks to complete successfully.
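The closed-loop correction described above can be sketched as a simple proportional visual-servoing loop. This is an illustrative one-dimensional toy, not the controller's actual implementation: `observe` stands in for visual feedback and `move` for a parameter adjustment command.

```python
def visual_servo(target, observe, move, tolerance=0.001, gain=0.5, max_steps=50):
    """Closed loop: observe the error, adjust the command, repeat until converged."""
    for _ in range(max_steps):
        error = target - observe()
        if abs(error) < tolerance:
            return True  # within tolerance: task step succeeded
        move(gain * error)  # dynamic adjustment based on visual feedback
    return False  # failed to converge within the step budget

# Simulated 1-D gripper position; a real system would read this from perception.
state = {"pos": 0.0}
ok = visual_servo(
    0.25,
    observe=lambda: state["pos"],
    move=lambda delta: state.__setitem__("pos", state["pos"] + delta),
)
```

Each iteration halves the remaining error, so the loop converges to the 0.25 target well within the step budget; in practice the gain and tolerance would be tuned per task and per robot.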
While ROS2 provides hardware abstraction, its APIs are too low-level for most developers. RAS builds on ROS2 but abstracts away technical complexities like DDS configuration, topic management, node lifecycle, and message serialization. RAS enables robotics developers to build general-purpose robotic applications without dealing directly with these low-level details, as shown below:
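The kind of abstraction described above can be sketched as a small facade. The `Middleware` class below is hypothetical and not RAS's actual interface: it shows the shape of the idea, where application code deals only in topic names and plain messages, while node lifecycle, DDS configuration, and serialization would live behind the facade.

```python
# Hypothetical facade in the spirit of RAS: applications publish and
# subscribe by topic name; transport details stay hidden.
class Middleware:
    def __init__(self):
        self._subs: dict[str, list] = {}

    def subscribe(self, topic: str, callback) -> None:
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic: str, msg: dict) -> None:
        # In a real stack, serialization, QoS, and DDS transport happen here;
        # the application never touches them.
        for cb in self._subs.get(topic, []):
            cb(msg)

bus = Middleware()
received = []
bus.subscribe("/gripper/state", received.append)
bus.publish("/gripper/state", {"open": True})
```

The application-facing surface is two calls; everything ROS2-specific is an implementation detail behind them, which is the essence of the abstraction RAS provides.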
To enable robotics developers worldwide to build their applications using this middleware, the first version of RAS was open-sourced in December 2024. Though we use RAS to develop robotic support for intelligent science labs, it is a general-purpose middleware that the broader robotics developer community can use to build their applications. The RAS code is available at https://github.com/ras-ros2.
