Next: Identifying Motion Events Up: State Modeling and Previous: Developing the Observer


Experiments were performed to observe the robot hand. The Lord experimental gripper was used as the manipulating hand; different views of the gripper are shown in Figure 8. A number of features on the gripper are tracked in real time, and the visual tracking system supplies a position control vector to the observer manipulator.

Some of the visual states for a grasping task using the Lord gripper, as seen by the observer's camera, are shown in Figure 9. The sequence is defined by our model, and the visual states correspond to the gripper's movement as it approaches an object and then grasps it.

The full system was implemented and tested on some simple visual action sequences. One such example is shown in Figure 10. The automaton encodes an observer that tracks the hand by keeping a fixed geometric relationship between the observer's camera and the hand, so long as the hand does not approach the camera rapidly. If it does, the observer moves sideways, that is, it dodges and starts viewing and tracking from the side. This can be thought of as a collision-avoidance action, since the intersection of the workspaces of the two robots is not empty. The first state represents the visual situation in which the hand is centered in the image with respect to the observer and viewed from a frontal position. The second state represents the hand in a non-centered position, tending to escape the field of view but not approaching the observer rapidly. The third state represents a ``dangerous'' situation in which the hand has approached the observer rapidly. The fourth state represents the hand viewed from the side and centered within the imaging plane.
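The automaton described above can be sketched as a small transition table. This is a hypothetical illustration, not the paper's implementation: the state and event names are placeholders chosen to match the four visual situations and the controllable actions described in the text.

```python
# Hypothetical sketch of the observer automaton described above.
# State names are placeholders, not identifiers from the paper.

CENTERED_FRONT = "centered_front"   # hand centered, frontal view
OFF_CENTER     = "off_center"       # hand drifting off-center, moving slowly
DANGER         = "danger"           # hand approaching the camera rapidly
CENTERED_SIDE  = "centered_side"    # hand centered, viewed from the side

# Transition table: (state, event) -> next state.
TRANSITIONS = {
    (CENTERED_FRONT, "no_motion"):      CENTERED_FRONT,
    (CENTERED_FRONT, "slow_motion"):    OFF_CENTER,
    (CENTERED_FRONT, "rapid_approach"): DANGER,
    (OFF_CENTER,     "track"):          CENTERED_FRONT,  # controllable tracking event
    (OFF_CENTER,     "rapid_approach"): DANGER,
    (DANGER,         "dodge"):          CENTERED_SIDE,   # controllable dodging event
    (CENTERED_SIDE,  "no_motion"):      CENTERED_SIDE,
}

def step(state, event):
    """Advance the automaton; unmodeled pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

The controllable events (`track`, `dodge`) are the ones the observer can enable or disable, while the hand-motion events are uncontrollable inputs observed from the image stream.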

Having defined the states, the events causing state transitions can be described easily. One event represents no hand movement; another represents all hand movements in which the hand does not approach the camera rapidly; a third represents a large movement towards the observer. Two further events are controllable tracking events: the first always compensates for the hand's motion in order to keep a fixed 3-D relationship, and the second is the ``dodging'' action, in which the observer moves to start viewing from the side while keeping the hand in a centered position.

The events can thus be defined precisely as ranges over the recovered 3-D world motion parameters; each event corresponds to a set of threshold conditions on those parameters.
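Such range definitions amount to thresholding the recovered translational motion. The sketch below is an assumed illustration: the threshold values and the sign convention (positive `vz` toward the camera) are placeholders, since the paper's exact ranges were in formulas not reproduced here.

```python
import math

# Hypothetical thresholds (mm/frame); the paper defines events as ranges
# on the recovered 3-D motion parameters, but the exact bounds are not given.
EPS_STILL = 0.5    # below this overall speed: "no movement"
Z_RAPID   = 20.0   # beyond this speed toward the camera: "rapid approach"

def classify_event(vx, vy, vz):
    """Map a recovered translational velocity to a discrete event.
    Convention (assumed): vz > 0 is motion toward the observer's camera."""
    speed = math.sqrt(vx * vx + vy * vy + vz * vz)
    if speed < EPS_STILL:
        return "no_motion"
    if vz > Z_RAPID:
        return "rapid_approach"
    return "slow_motion"
```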

It should be noted that defining the events in this manner substantially suppresses noise. Once the events are defined, the task reduces to computing the relevant areas under the distribution curves for the various 3-D motion parameters, that is, the probabilities that the parameters fall within the events' ranges at each state. A state transition is asserted and reported when this probability value exceeds a preset threshold. A subset of the states is designated as the set of stable states; by enabling the controllable tracking events, the system can be made stable with respect to that set.
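Computing an area under a distribution curve for an event's range can be sketched as follows, assuming (as an illustration only) that each recovered motion parameter is modeled as a Gaussian with an estimated mean and standard deviation; the 0.9 threshold is likewise a placeholder for the paper's preset value.

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of N(mu, sigma^2), via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def prob_in_range(mu, sigma, lo, hi):
    """Probability mass of the estimated motion parameter inside [lo, hi],
    i.e., the area under the distribution curve over the event's range."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

THRESHOLD = 0.9  # hypothetical preset threshold for asserting a transition

def transition_asserted(mu, sigma, lo, hi):
    """Assert and report a state transition when the probability that the
    parameter lies in the event's range exceeds the preset threshold."""
    return prob_in_range(mu, sigma, lo, hi) > THRESHOLD
```

Integrating a range rather than testing a point estimate is what gives the noise suppression noted above: a single noisy frame rarely shifts enough probability mass into an event's range to cross the threshold.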

The low-level visual feature acquisition is performed at frame rate on the MaxVideo pipelined video processor. The state machine resides on a Sun SparcStation. The Lord gripper is mounted on a PUMA 560 arm, and the observer's camera is mounted on a second PUMA 560.

Tue Nov 22 21:30:54 MST 1994