Abstract: The original list of physical interaction tasks that served as inspiration for this workshop includes several that involve sophisticated manipulation of complex objects in three dimensions. Unless the initial shape and configuration of all relevant objects are precisely known a priori, in which case a fixed manipulator trajectory can be used, such manipulation is a difficult problem. On closer examination, we find difficulties that are both computational and representational. Computationally, the relatively high dimensionality of the configuration spaces of both the manipulated object and the manipulator precludes a direct search for a solution. Representationally, we need compatible descriptions for three distinct regimes: first, a description of the world that is dependent on, though not necessarily fully derived from, sensor data; second, a means of specifying the task to be carried out; and third, information about the state and capabilities of the manipulator itself. We describe the behavioral vision paradigm as a means of unifying these problems, discuss some of its implications, and propose some methods of solution.