Sensor Fusion for Mobile Robot Navigation

Moshe Kam, Xiaoxun Zhu, Paul Kalata

Abstract

We review techniques for sensor fusion in robot navigation, emphasizing algorithms for self-location. These find use when the sensor suite of a mobile robot comprises several different sensors, some complementary and some redundant. Integrating the sensor readings, the robot seeks to accomplish tasks such as constructing a map of its environment, locating itself in that map, and recognizing objects that should be avoided or sought. Our review describes integration techniques in two categories: low-level fusion is used for direct integration of sensory data, resulting in parameter and state estimates; high-level fusion is used for indirect integration of sensory data in hierarchical architectures, through command arbitration and integration of control signals suggested by different modules.

The review provides an arsenal of tools for addressing this (rather ill-posed) problem in machine intelligence, including Kalman filtering, rule-based techniques, behavior-based algorithms, and approaches that borrow from information theory, Dempster–Shafer reasoning, fuzzy logic and neural networks. It points to several further research needs, including: robustness of decision rules; simultaneous consideration of self-location, motion planning, motion control and vehicle dynamics; the effect of sensor placement and attention focusing on sensor fusion; and adaptation of techniques from biological sensor fusion.
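As a rough illustration of the high-level (command-arbitration) category named above, the sketch below blends the steering and speed commands suggested by two hypothetical behavior modules. The module names, weights, and the weighted-averaging rule are illustrative assumptions, not an architecture taken from the paper.

```python
# Hypothetical sketch of high-level fusion by command arbitration.
# Module names, weights, and the blending rule are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Command:
    heading: float   # suggested heading change (rad)
    speed: float     # suggested forward speed (m/s)
    weight: float    # module's confidence in its own suggestion


def avoid_obstacles(range_reading_m: float) -> Command:
    """Reactive module: steer away and slow down when an obstacle is close."""
    if range_reading_m < 1.0:
        return Command(heading=0.8, speed=0.1, weight=1.0)
    return Command(heading=0.0, speed=0.5, weight=0.1)


def go_to_goal(bearing_to_goal_rad: float) -> Command:
    """Goal-seeking module: steer toward the goal at cruise speed."""
    return Command(heading=bearing_to_goal_rad, speed=0.5, weight=0.6)


def arbitrate(commands: list[Command]) -> Command:
    """Blend the suggested commands by their weights (one possible arbitration rule)."""
    total = sum(c.weight for c in commands)
    return Command(
        heading=sum(c.heading * c.weight for c in commands) / total,
        speed=sum(c.speed * c.weight for c in commands) / total,
        weight=total,
    )


if __name__ == "__main__":
    suggestions = [avoid_obstacles(range_reading_m=0.6),
                   go_to_goal(bearing_to_goal_rad=-0.3)]
    print(arbitrate(suggestions))
```

Other arbitration rules surveyed in the literature (e.g., strict priority or subsumption) would simply replace the weighted average with a selection step.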

Introduction

Robot navigation requires guiding a mobile robot to a desired destination or along a desired path in an environment characterized by a terrain and a set of distinct objects (such as obstacles, milestones, and landmarks). Motion planning is often designed to optimize specific performance criteria and to satisfy constraints on the robot’s motion. Typical performance criteria are minimum time to arrive at a milestone and minimum control effort. Typical constraints are obstacle avoidance and a maximum robot velocity. The complexity and variety of environments where robots need to navigate, and the large number of objectives and constraints that they must satisfy, make the mobile-robot navigation problem ill posed. Moreover, physical platforms and sensor suites vary significantly from system to system, complicating further the task of generating a unified framework for sensing and control.

Mobile robot designers can choose from a large number of sensor types and sensing modules. These are sometimes complementary, sometimes redundant, and there exist architectures where sensors are used in both fashions. Many mobile robots carry sensors for dead reckoning (such as optical encoders and geomagnetic sensors) and for map making and self-location (such as time-of-flight ultrasonic systems and laser-based ranging systems). Some use active beacons (such as the global positioning system), or landmarks whose positions in the robot’s environment are known. Other mobile robots use tactile sensors to touch obstacles in the environment and plan paths around them. In almost all designs, several sensors are operated simultaneously, and in most, sensors of different sensing principles, capabilities, and volumes of coverage are used in parallel. Consequently, methods of sensor fusion are needed to translate the different sensory inputs into reliable estimates and environment models that can be used by other navigation subsystems. Sensor fusion in this context is the process of integrating data from distinctly different sensors for detecting objects, and for estimating parameters and states needed for robot self-location, map making, path computing, motion planning, and motion execution.
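To make the low-level fusion idea concrete, the following sketch runs a one-dimensional Kalman filter that combines dead-reckoning displacements (e.g., from wheel encoders) with noisy absolute position fixes (e.g., from a beacon or a known landmark). The scalar state and the noise variances are simplifying assumptions for illustration; the paper surveys the general multivariate formulation.

```python
# Minimal 1-D Kalman filter fusing odometry (dead reckoning) with an
# absolute position fix from an external sensor. Noise values and the
# scalar setup are illustrative assumptions.

def kalman_fuse(x, P, u, z, q=0.02, r=0.25):
    """One predict/update cycle for position estimate x with variance P.

    u: odometry displacement since the last step (prediction input)
    z: absolute position measurement from the external sensor
    q: variance added by odometry noise per step
    r: variance of the position measurement
    """
    # Predict: dead reckoning accumulates odometry and its uncertainty.
    x_pred = x + u
    P_pred = P + q
    # Update: the Kalman gain weights the external fix against the prediction.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new


if __name__ == "__main__":
    x, P = 0.0, 1.0
    odometry = [0.52, 0.48, 0.51]      # measured wheel displacements (m)
    beacon_fix = [0.45, 1.02, 1.49]    # noisy absolute positions (m)
    for u, z in zip(odometry, beacon_fix):
        x, P = kalman_fuse(x, P, u, z)
        print(f"fused position {x:.2f} m, variance {P:.3f}")
```

Each call performs one predict/update cycle; the gain K determines how strongly the external fix corrects the drifting odometry estimate, which is the essence of low-level fusion of redundant position information.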

The full text can be downloaded from: http://www.control.aau.dk/~tb/ESIF/00554212.pdf

