Object Recognition
Visual Control
Visual Motion
Boundary Extraction
Randal Nelson currently runs an undergraduate class in robot construction. His basic interests are in machine vision and robotics, with an emphasis on systems that engage in sensory-mediated interaction with the physical world. Previous research projects include work on object recognition and learning representations at the interface between feature-based and appearance-based approaches, vision for manipulation and hand-eye coordination, motion recognition and analysis, and visual navigation.
One previous area of activity is feature-based recognition of complex 3-D objects, using massive, interactively acquired databases to achieve robustness to clutter, lighting, and orientation, as well as generalization over categorical classes. This work is built on the robust, probabilistic extraction of intermediate-level features, including object boundaries. Work with Andrea Selinger in this area involves the development of methods for learning recognition representations from imagery that is unlabeled, cluttered, or both.
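To give a rough flavor of this kind of feature-based recognition, the following is a minimal Python sketch of evidence accumulation: boundary-fragment features extracted from an image vote for the (object, pose) hypotheses filed under matching keys in a pre-built model index. The index structure, key scheme, and vote threshold are assumptions for illustration, not the representation used in the actual system.

```python
from collections import defaultdict

def vote_for_hypotheses(image_feature_keys, model_index, min_votes=5):
    """Toy evidence accumulation: each boundary-fragment key extracted from the
    image votes for every (object, pose) hypothesis stored under that key in a
    pre-built model index; hypotheses with enough votes are returned, strongest first."""
    votes = defaultdict(int)
    for key in image_feature_keys:
        for hypothesis in model_index.get(key, ()):  # hypothesis = (object, pose)
            votes[hypothesis] += 1
    return sorted(
        ((hyp, v) for hyp, v in votes.items() if v >= min_votes),
        key=lambda item: item[1],
        reverse=True,
    )
```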
Another area of interest involves the integration of vision with action, for example in hand-eye coordination. Two primary goals here are the development of strategies for decomposing complex sensorimotor tasks into loosely coupled modules that can be developed and tested independently, and the use of interaction with the physical world to simplify visual perception. This includes work with Martin Jagersand on visual-space control through perceptual actions and differential visual feedback, and with Olac Fuentes on virtual tool interfaces for simplifying the use of complex robot devices.
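The differential visual feedback idea can be sketched in the style of uncalibrated visual servoing: an image Jacobian relating joint motion to image-feature motion is estimated online from observed moves, and each control step drives the feature error toward zero through its pseudo-inverse. This is a generic sketch assuming numpy; the rank-one update form, gains, and function names are illustrative assumptions, not the specific controller from that work.

```python
import numpy as np

def broyden_update(J, dq, dy, alpha=0.1):
    """Rank-one (Broyden-style) update of the estimated image Jacobian J from an
    observed joint move dq and the resulting image-feature change dy."""
    residual = dy - J @ dq
    return J + alpha * np.outer(residual, dq) / (dq @ dq + 1e-9)

def servo_step(J, y, y_goal, gain=0.5):
    """Compute a joint-space step that moves image features y toward y_goal using
    the pseudo-inverse of the current Jacobian estimate."""
    error = y_goal - y
    return gain * np.linalg.pinv(J) @ error
```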
A somewhat older project concerns the development of robust, qualitative methods for the detection and recognition of movement. This has included the development of a real-time system for recognizing moving objects from a moving platform. Work with Ramprasad Polana concerns the use of motion features and statistics (temporal texture) to identify particular sources of image motion (e.g. is it a man walking or a tree waving in the breeze?).
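As a loose illustration of what temporal-texture-style features might look like, the sketch below computes simple statistics of a dense flow field (mean magnitude and a direction histogram weighted by magnitude), which a classifier could then use to separate motion sources. It assumes numpy arrays of flow components; these particular statistics are illustrative stand-ins, not the exact feature set used in that work.

```python
import numpy as np

def temporal_texture_features(flow_u, flow_v):
    """Simple motion statistics from a dense flow field: mean flow magnitude,
    the fraction of flow energy in the dominant direction, and a coarse
    magnitude-weighted direction histogram."""
    mag = np.hypot(flow_u, flow_v)
    ang = np.arctan2(flow_v, flow_u)
    hist, _ = np.histogram(ang, bins=8, range=(-np.pi, np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-9)
    return np.concatenate(([mag.mean(), hist.max()], hist))
```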
Older navigation work centered on the development of a repertoire of primitive operations for visual navigation that are demonstrably usable in a wide range of real-world environments. Examples include robust methods for obstacle avoidance, determination of rotational and translational movement, and visual homing.
The above research was supported by the URCS vision and robotics lab, equipped with two PUMA robots, a two-eyed, three-degree-of-freedom robot head, a UTAH-MIT hand, a large number of computational sensors (dual-processor workstations, each equipped with a pair of pan-tilt-zoom cameras and microphones, and networked together), and two networked cluster computational resources.
Randal Nelson was a member of the University of Rochester's Center for Future Health. The center was a consortium of groups from engineering, computer science, research and clinical medicine, and health services, founded with the goal of using technology to empower individuals in their own health care, provide whole-person diagnosis and point-of-need care, and make more efficient use of available human resources.