NVIDIA announces new AI perception for ROS developers

Update: September 24, 2021

NVIDIA and Open Robotics have entered into an agreement to accelerate ROS 2 performance on NVIDIA’s Jetson edge AI platform and GPU-based systems.

A number of initiatives aim to reduce development time and improve performance for developers incorporating computer vision and AI/machine learning functionality into their ROS-based applications.

Open Robotics will enhance ROS 2 to enable efficient management of data flow and shared memory across GPU and other processors present on the NVIDIA Jetson edge AI platform. This will help to significantly improve the performance of applications that have to process high-bandwidth data from sensors such as cameras and lidars in real time.
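The interfaces coming out of this collaboration have not been published, but a minimal sketch of the existing ROS 2 mechanism it builds on helps make the goal concrete: with intra-process communication enabled, rclcpp can move a large message (here, a camera frame) between nodes in the same process by transferring ownership rather than copying. The node and topic names below are illustrative, not part of any announced API.

```cpp
// Sketch: zero-copy message passing in ROS 2 via intra-process communication.
// Publishing a std::unique_ptr transfers ownership of the frame, so rclcpp can
// deliver it to same-process subscribers without copying the pixel buffer.
#include <chrono>
#include <memory>
#include <utility>

#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"

class CameraPublisher : public rclcpp::Node
{
public:
  CameraPublisher()
  : Node("camera_publisher",
         rclcpp::NodeOptions().use_intra_process_comms(true))  // enable the zero-copy path
  {
    pub_ = create_publisher<sensor_msgs::msg::Image>("image", 10);
    timer_ = create_wall_timer(std::chrono::milliseconds(33), [this] {
      // Allocate the frame once; ownership moves to the middleware on publish.
      auto frame = std::make_unique<sensor_msgs::msg::Image>();
      frame->height = 1080;
      frame->width = 1920;
      frame->encoding = "rgb8";
      frame->step = frame->width * 3;
      frame->data.resize(frame->step * frame->height);
      pub_->publish(std::move(frame));
    });
  }

private:
  rclcpp::Publisher<sensor_msgs::msg::Image>::SharedPtr pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<CameraPublisher>());
  rclcpp::shutdown();
  return 0;
}
```

Extending this kind of ownership transfer to buffers held in GPU memory, so that sensor data need not round-trip through the host CPU, is the sort of efficiency the collaboration targets.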

In addition, Open Robotics and NVIDIA are working to enable seamless simulation interoperability between Open Robotics' Ignition Gazebo and NVIDIA Isaac Sim on Omniverse. Isaac Sim already supports ROS 1 and ROS 2 out of the box, and it features an ecosystem of 3D content through its connections to applications such as Blender and Unreal Engine 4.

Ignition Gazebo has a long track record and is used widely by the robotics community, including in high-profile competition events such as the ongoing DARPA Subterranean Challenge.

“As more ROS developers leverage hardware platforms that contain additional compute capabilities designed to offload the host CPU, ROS is evolving to make it easier to efficiently take advantage of these advanced hardware resources,” said Brian Gerkey, CEO of Open Robotics. “Working with an accelerated computing leader like NVIDIA and its vast experience in AI and robotics innovation will bring significant benefits to the entire ROS community.”

With the two simulators connected, ROS developers will be able to move their robots and environments between Ignition Gazebo and Isaac Sim, run large-scale simulations, and take advantage of each simulator's advanced features, such as high-fidelity dynamics, accurate sensor models, and photorealistic rendering, to generate synthetic data for training and testing AI models.
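None of the announced tooling has shipped yet, so the following is only a rough illustration of the synthetic-data workflow: a short ROS 2 node can subscribe to a camera topic published by either simulator and write frames to disk as a training dataset. The topic name /camera/image and the file-naming scheme are assumptions for the sketch, not part of any announced API.

```cpp
// Sketch: record simulator-published camera frames to disk as a synthetic
// dataset. Works with any sensor_msgs/Image source, whether the frames come
// from Ignition Gazebo or Isaac Sim.
#include <cstddef>
#include <string>

#include "cv_bridge/cv_bridge.h"
#include "opencv2/imgcodecs.hpp"
#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/image.hpp"

class FrameRecorder : public rclcpp::Node
{
public:
  FrameRecorder()
  : Node("frame_recorder")
  {
    sub_ = create_subscription<sensor_msgs::msg::Image>(
      "/camera/image", 10,  // hypothetical topic; remap to the simulator's camera topic
      [this](sensor_msgs::msg::Image::ConstSharedPtr msg) {
        // Convert the ROS image to an OpenCV matrix and write it out as a PNG.
        auto cv_image = cv_bridge::toCvCopy(msg, "bgr8");
        cv::imwrite("frame_" + std::to_string(count_++) + ".png", cv_image->image);
      });
  }

private:
  rclcpp::Subscription<sensor_msgs::msg::Image>::SharedPtr sub_;
  std::size_t count_{0};
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<FrameRecorder>());
  rclcpp::shutdown();
  return 0;
}
```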

Software resulting from this collaboration is expected to be released in the spring of 2022.