Designing safer, more intelligent, and more efficient autonomous robots

Update: November 9, 2021

Autonomous robots are intelligent machines that can understand and navigate their environment without human control or intervention. Although autonomous robot technology is relatively young, it already has many use cases in factories, warehouses, cities, and homes. For example, autonomous robots can transport goods around warehouses, as shown in Figure 1, or perform last-mile delivery, while other kinds of autonomous robots can vacuum homes or mow lawns.


Figure 1: A robot performs tasks in a warehouse. (Source: Texas Instruments)

Autonomy requires that robots can sense and orient themselves within a mapped environment, dynamically detect the obstacles around them, track those obstacles, plan a route to a specified destination, and control their motion to follow that plan. In addition, the robot must perform these tasks only when it is safe to do so, avoiding situations that pose risks to humans, property, or the autonomous system itself.
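To make that pipeline concrete, below is a minimal sense-plan-act loop in Python. It is a toy sketch, not a real robotics framework: the robot lives on a one-dimensional line, the "sensor" reports the distance to the nearest obstacle ahead, and the planner steps toward the goal only when there is safe clearance.

```python
# A minimal sense-plan-act loop. Everything here is illustrative;
# real systems replace each stage with far richer components.

def read_range_sensor(position, obstacles):
    """Sense: distance from the robot to the nearest obstacle ahead."""
    ahead = [o - position for o in obstacles if o > position]
    return min(ahead) if ahead else float("inf")

def plan_step(position, goal, clearance, safety_margin=1.0):
    """Plan: step toward the goal only if there is safe clearance."""
    if clearance <= safety_margin:
        return 0.0                    # unsafe: stop rather than risk a collision
    return min(1.0, goal - position)  # move up to 1 unit toward the goal

def run(goal=10.0, obstacles=(4.5,)):
    position = 0.0                    # localization is trivial in this toy world
    while abs(goal - position) > 1e-6:
        clearance = read_range_sensor(position, obstacles)  # sense
        step = plan_step(position, goal, clearance)         # plan
        if step == 0.0:
            print(f"Stopped at {position:.1f}: obstacle {clearance:.1f} units ahead")
            return
        position += step                                    # act
    print(f"Reached goal at {position:.1f}")

run()
```

The safety check in plan_step mirrors the requirement above: when clearance is too small, the robot halts instead of proceeding.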

With robots working in closer proximity to humans than ever before, they must not only be autonomous, mobile, and energy-efficient but also meet functional safety requirements. Sensors, processors, and control devices can help designers meet the rigorous requirements of functional safety standards such as International Electrotechnical Commission (IEC) 61508.

Considerations for sensing in autonomous robots

A robot without sensors will inevitably crash into obstacles such as walls, other robots, or humans, potentially causing serious injury or damage. Several different types of sensors can help solve the sensing challenges that autonomy poses.

Vision sensors closely emulate human vision and perception. Vision systems can address the challenges of localization, obstacle detection, and collision avoidance because they offer high-resolution spatial coverage and the ability not only to detect objects but to classify them. Vision sensors are also more cost-efficient than sensors such as LiDAR. However, processing vision data is computationally intensive.

Power-hungry central processing units (CPUs) and graphics processing units (GPUs) can pose a challenge in power-constrained autonomous robot systems. When designing an energy-efficient robotic system, CPU- and GPU-based processing should be kept to a minimum.

The system-on-chip (SoC) in an efficient vision system should process the vision signal chain at high speed and low power, with optimized system cost. For maximum system efficiency, the SoC must also offload computationally intensive tasks such as raw image processing, dewarping, stereo depth estimation, scaling, image pyramid generation, and deep learning to dedicated hardware accelerators. SoCs used for vision processing must be smart, safe, and energy-efficient, which high levels of on-chip integration in a heterogeneous SoC architecture can achieve.
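As a concrete example of one of those stages, the snippet below generates an image pyramid with OpenCV, a common way to give a deep-learning detector multiple scales of the same frame. On an SoC of the kind described here, this work would run on a dedicated accelerator rather than in Python; the snippet only illustrates the operation itself.

```python
# Image pyramid generation, one of the vision-pipeline stages named
# above. cv2.pyrDown Gaussian-blurs and downsamples by 2x in each
# dimension, producing the multi-scale inputs a detector consumes.
import cv2
import numpy as np

def build_pyramid(image, levels=4):
    pyramid = [image]
    for _ in range(levels - 1):
        image = cv2.pyrDown(image)  # blur + 2x downsample
        pyramid.append(image)
    return pyramid

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a camera frame
for i, level in enumerate(build_pyramid(frame)):
    height, width = level.shape[:2]
    print(f"level {i}: {width}x{height}")
```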

Let’s take a closer look at Texas Instruments’ (TI’s) millimeter-wave (mmWave) radar sensing as an example in autonomous robots. Using TI mmWave radar in robotic applications is a relatively new concept, but the idea of using mmWave sensing for autonomy has been around for a while. In automotive applications, TI mmWave radar is a key component of advanced driver-assistance systems (ADAS) and monitors a vehicle’s surroundings. Some of those same ADAS concepts, such as surround-view monitoring and collision avoidance, apply directly to autonomous robots.

TI mmWave radar is unique from a sensing-technology perspective because it provides range, velocity, and angle-of-arrival information about objects, which tells the robot how to navigate to avoid collisions. Using radar sensor data, the robot can decide whether to safely continue on its path, slow down, or stop, depending on the position, speed, and trajectory of an approaching person or object, as shown in Figure 2.

TI mmWave radar views the environment in three dimensions, which enables the sensor to perceive objects that might not be directly in the driving path of the robot. Because of this 3D detection capability, TI mmWave radar sensors also provide height information, which is critical for detecting not only objects lying on the ground but also objects protruding into a robot’s path from above.

TI mmWave sensors can also reliably detect glass and other transparent materials, which sensors such as cameras and LiDAR might “see” through and thus fail to detect. TI mmWave radar is also more robust in challenging environmental conditions where optical sensors tend to struggle. Because it uses radio waves instead of light to detect objects, mmWave radar is largely unaffected by environmental factors such as low lighting, rain, fog, dust, and smoke.

Figure 2: A warehouse robot uses radar sensing. (Source: Texas Instruments)
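As a sketch of how such radar measurements might drive the continue, slow, or stop decision, the Python below scores each detection by range and time to collision. The detection fields (range, radial velocity, azimuth, height) match the kind of point data a mmWave sensor reports, but the thresholds and corridor logic are illustrative assumptions, not values from a TI device.

```python
# Illustrative continue / slow / stop logic driven by radar detections.
# All thresholds are made up for the example.
from dataclasses import dataclass

@dataclass
class RadarDetection:
    range_m: float       # distance to the object
    velocity_mps: float  # radial velocity; negative means approaching
    azimuth_deg: float   # angle of arrival, 0 = straight ahead
    height_m: float      # 3D sensing also reports height above the floor

def decide(detections, robot_height_m=1.2):
    action = "continue"
    for d in detections:
        # Ignore objects outside the robot's corridor or above its
        # height envelope; a low overhang is still a hazard.
        if abs(d.azimuth_deg) > 45 or d.height_m > robot_height_m:
            continue
        ttc = (d.range_m / -d.velocity_mps
               if d.velocity_mps < 0 else float("inf"))
        if d.range_m < 0.5 or ttc < 1.0:
            return "stop"    # imminent collision: halt immediately
        if d.range_m < 2.0 or ttc < 3.0:
            action = "slow"  # nearby or closing fast: reduce speed
    return action

print(decide([RadarDetection(1.5, -0.8, 10.0, 0.4)]))  # -> slow
```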

Addressing complex autonomous robot problems with sensor fusion and AI

For more complicated autonomous robot applications, no single sensor, whatever its type, may be sufficient to enable autonomy on its own. Each sensing modality has distinctive strengths and limitations.

Radar is a good fit for object detection and provides long-range visibility in challenging environments, but it has limitations in object classification and object edge precision. LiDAR sensors offer precision and accuracy but can be costly and power-hungry. Vision sensors provide object classification and scene intelligence at high resolution, but their data is computationally intensive to process, and they require an external light source to operate. Ultimately, sensors such as cameras and radar should complement each other in a system. Leveraging the strengths of different sensing modalities through sensor fusion can help solve some of the more complex autonomous robot challenges.
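One simple form of sensor fusion is late fusion: each sensor produces its own detections, and the results are associated afterward. The sketch below pairs camera detections (which carry the class label) with radar detections (which carry range and velocity) by azimuth angle; the data structures and the 5-degree matching gate are illustrative assumptions.

```python
# Toy late fusion: match camera detections (good at classifying) with
# radar detections (good at range and velocity) by azimuth angle.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CameraDetection:
    label: str          # e.g., "person" or "box"
    azimuth_deg: float  # bearing estimated from the pixel position

@dataclass
class RadarReturn:
    azimuth_deg: float
    range_m: float
    velocity_mps: float

@dataclass
class FusedObject:
    label: str
    range_m: Optional[float]
    velocity_mps: Optional[float]

def fuse(camera_dets, radar_dets, gate_deg=5.0):
    fused = []
    for cam in camera_dets:
        # Nearest radar return in angle, if one falls within the gate.
        match = min(radar_dets,
                    key=lambda r: abs(r.azimuth_deg - cam.azimuth_deg),
                    default=None)
        if match and abs(match.azimuth_deg - cam.azimuth_deg) <= gate_deg:
            fused.append(FusedObject(cam.label, match.range_m, match.velocity_mps))
        else:
            fused.append(FusedObject(cam.label, None, None))  # camera-only object
    return fused

print(fuse([CameraDetection("person", 12.0)],
           [RadarReturn(10.5, 3.2, -0.6)]))
```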

While sensor fusion helps autonomous robots perceive more accurately, artificial intelligence (AI) at the edge is what enables them to interpret and act on that perception. Incorporating AI into autonomous robot systems enables robots to intelligently perceive, make decisions, and perform actions.

An autonomous robot with AI can detect an object and its position, classify the object, and act accordingly. For example, when a robot is navigating a busy warehouse, AI can help it infer what kinds of objects are in its path, including humans, boxes, machinery, or even other robots, and decide what actions are appropriate to navigate around them.
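One way to sketch the "decide what actions are appropriate" step is a policy table that maps each inferred object class to a navigation behavior; the class names and behaviors below are assumptions for illustration.

```python
# Illustrative mapping from an inferred object class to a behavior.
POLICY = {
    "human": "stop_and_wait",       # people get the widest safety margin
    "robot": "yield_right_of_way",  # coordinate with other robots
    "box": "replan_around",         # static obstacle: route around it
    "machinery": "replan_around",
}

def action_for(detected_class, default="slow_and_reassess"):
    # Unknown classes fall back to a conservative behavior.
    return POLICY.get(detected_class, default)

for obj in ("human", "box", "pallet"):
    print(obj, "->", action_for(obj))
```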

AI can also help robots perform specific tasks more autonomously. For example, if a robot is moving a dolly around a warehouse, vision-based AI helps it detect the dolly and infer the dolly's pose and position so that the robot can accurately position itself, attach to the dolly, and then move it around the warehouse floor.
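The dolly example rests on pose estimation. One classical way to recover a pose, sketched below, is a perspective-n-point solve: given 2D image keypoints of the dolly (assumed here to come from an upstream AI detector) and their known 3D positions on the dolly, OpenCV's solvePnP returns the dolly's position and orientation relative to the camera. The keypoint coordinates and camera intrinsics are placeholders.

```python
# Recover a dolly's pose from known 3D keypoints and their detected 2D
# image positions. The numbers are placeholders; in a real system the
# 2D points would come from an AI keypoint detector.
import cv2
import numpy as np

# Four corners of the dolly base in the dolly's own frame (meters).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.6, 0.0, 0.0],
                          [0.6, 0.4, 0.0],
                          [0.0, 0.4, 0.0]], dtype=np.float64)

# Where a detector found those corners in the image (pixels, made up).
image_points = np.array([[320.0, 280.0],
                         [420.0, 283.0],
                         [418.0, 350.0],
                         [322.0, 347.0]], dtype=np.float64)

# Pinhole camera intrinsics (placeholder focal length and center).
camera_matrix = np.array([[600.0,   0.0, 320.0],
                          [  0.0, 600.0, 240.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume an already-undistorted image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    print("dolly position relative to camera (m):", tvec.ravel())
```

From the recovered translation and rotation, the robot can compute the approach trajectory it needs to align with and attach to the dolly.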

When designing a robot system that incorporates AI, designers should consider both hardware and software. Ideally, the SoC should include hardware accelerators for AI functions to perform computationally intensive tasks in real time. Access to an easy-to-use AI software development environment can also simplify and speed up application development and hardware deployment.

Conclusion

Designing more intelligent and autonomous robots is necessary to keep improving automation. Robots in warehouses and delivery fleets can keep pace with and support e-commerce growth, while robots in homes can perform mundane tasks like vacuuming and mowing. Autonomous robots unlock productivity and efficiency that improve and add value to our lives.

About the authors

Manisha Agrawal is a product marketing engineer for the Jacinto processor product line. She has years of experience in end-to-end vision signal processing on TI SoCs through various roles in software, applications, and systems engineering. Her recent focus is on AI and robotics. Manisha has an M.S. in electrical engineering from IIT Kanpur, India, and holds three patents.

Jitin George is a product marketing engineer for industrial mmWave radar sensors at Texas Instruments. Since 2019, he has led the worldwide marketing efforts for industrial radar in factory automation, with a specific focus on growing business in the robotics market.

Sam Visalli is the systems manager for the Sitara MCU product line. Sam has spent the last several years working as the functional safety manager for the Jacinto and Sitara processor product lines. He has helped TI design products and systems for such diverse functional safety applications as autonomous driving, factory automation, and robotics. Sam also serves on the U.S. committees for the IEC 61508 and ISO 26262 functional safety standards and works with multiple TI-wide functional safety initiatives.
