Abstract: Visual navigation is a major goal in machine vision research, and one of both practical and basic scientific significance. The practical interest reflects a desire to produce systems that move about the world with some degree of autonomy. The scientific interest arises from the fact that navigation seems to be one of the primary functions of vision in biological systems. Navigation has typically been approached through reconstructive techniques, since a quantitative description of the environment allows well-understood geometric principles to be used to determine a course. However, reconstructive vision has had limited success in extracting accurate information from real-world images. This thesis argues that a number of basic navigational operations can be realized using qualitative methods based on inexact measurement and pattern recognition techniques.
Navigational capabilities form a natural hierarchy beginning with simple abilities such as orientation and obstacle avoidance, and extending to more complex ones such as target pursuit and homing. Within a system, the levels can operate more or less independently, with only occasional interaction necessary. This thesis considers three basic navigational abilities: \fIpassive navigation\fR, \fIobstacle avoidance\fR, and \fIvisual homing\fR, which together represent a solid set of elementary navigational tools for practical applications. It is demonstrated that all three can be approached by qualitative, pattern-recognition techniques. For passive navigation, global patterns in the spherical motion field are used to robustly determine the motion parameters. For obstacle avoidance, divergence-like measurements on the motion field are used to warn of potential collisions. For visual homing, an associative memory is used to construct a system which can be trained to home visually in a wide variety of natural environments. Theoretical analyses of the techniques are presented, and the implementation and testing of working systems are described.
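To illustrate the divergence-based collision-warning idea mentioned above, the following is a minimal sketch only: the function names, the finite-difference divergence estimator, and the time-to-contact threshold are illustrative assumptions, not the implementation described in the thesis. It relies on the standard observation that, for an approaching fronto-parallel surface, the divergence of the image motion field is roughly 2 divided by the time to contact.

.nf
import numpy as np

def divergence_map(u, v):
    # u, v: 2-D arrays of horizontal and vertical flow components
    # (pixels per frame) on a regular image grid.
    du_dx = np.gradient(u, axis=1)   # derivative of u along image x
    dv_dy = np.gradient(v, axis=0)   # derivative of v along image y
    return du_dx + dv_dy             # div = du/dx + dv/dy

def collision_warning(u, v, ttc_threshold=30.0):
    # Flag regions whose flow divergence implies a time to contact
    # below ttc_threshold frames (an illustrative value, not one
    # taken from the thesis): div > 2 / ttc_threshold.
    div = divergence_map(u, v)
    return div > (2.0 / ttc_threshold), div

# Usage: a synthetic purely expanding field, as produced by approach
# toward a frontal surface, triggers the warning.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
u = (xs - w / 2) * 0.05
v = (ys - h / 2) * 0.05
danger, div = collision_warning(u, v)
print(div.mean(), danger.any())      # divergence ~0.1/frame -> warning
.fi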