In conducting our experiments, we use a B/W CCD camera mounted on a Puma 560 robot arm to observe and guide the interaction between the CMM probe and the machined part (see Figure 16). For the state machine to provide control, it must be aware of state changes in the system. As inspection takes place, the camera supplies images that are interpreted by a set of 2D and 3D vision-processing algorithms and used to drive the DRFSM. These algorithms, described in greater detail in other publications, include thresholding, edge detection, region growing, and stereo vision. The robot arm is used to position the camera in the workspace and to move it when occlusion problems arise.
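As an illustration of the low-level 2D routines mentioned above, the following sketch shows thresholding followed by a simple edge-pixel test on the resulting binary image. This is a minimal NumPy rendering of the general techniques, not the implementation used in the experiments; the threshold value and the 4-neighbourhood edge definition are assumptions for the example.

```python
import numpy as np

def threshold(image, t):
    """Binarize a grayscale image: pixels brighter than t become 1, others 0."""
    return (image > t).astype(np.uint8)

def edge_pixels(binary):
    """Mark object pixels whose 4-neighbourhood contains a background pixel."""
    padded = np.pad(binary, 1, mode="edge")
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    differs = (up != binary) | (down != binary) | (left != binary) | (right != binary)
    return binary & differs.astype(np.uint8)

# A 5x5 image with a bright 3x3 block: the block's 8 border pixels are edges.
img = np.zeros((5, 5), dtype=np.uint8)
img[1:4, 1:4] = 200
b = threshold(img, 100)
e = edge_pixels(b)
```

Region growing and stereo matching would operate on similar binary or grayscale arrays; only the connectivity and matching logic differ.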
The objective of these experiments was to test the operation of the vision system with the state machine. Two facets of this were the generation of an initial model from stereo vision and the generation of events describing the probe's relationship to features in that model.
The stereo process used the Puma arm to gather pairs of images. The resulting model was used to determine the feature relationships needed by the DEDS controller. The models shown are from this initial visual inspection.
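The core of recovering depth from such image pairs is triangulation over disparity. The sketch below assumes a rectified stereo pair with known focal length (in pixels) and baseline; it is an illustrative formula, not the calibration or matching pipeline used in the experiments.

```python
def depth_from_disparity(focal_px, baseline, x_left, x_right):
    """Triangulate the depth of a matched point in a rectified stereo pair.

    Z = f * B / d, where d = x_left - x_right is the horizontal
    disparity in pixels, f the focal length in pixels, and B the
    baseline between the two camera positions.
    """
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline / d

# Example: f = 500 px, baseline = 0.1 m, disparity = 20 px -> Z = 2.5 m.
z = depth_from_disparity(500.0, 0.1, 120.0, 100.0)
```

With the Puma arm supplying the two camera poses, the baseline is known from the commanded motion rather than a fixed two-camera rig.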
The event-generation method, consisting of 2D image-processing routines, was used to detect the relationship of a simulated (hand-held) CMM probe to the features in the initial model. These events were processed by the controller, which output text messages guiding the experimenter to move the probe or indicating that a touch had occurred.
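One simple way to turn probe and feature positions into discrete events for the controller is to classify the probe-to-feature distance against tolerance bands. The following sketch is an assumed, illustrative scheme (the event names and pixel thresholds are not from the original system):

```python
import math

def probe_event(probe_xy, feature_xy, touch_tol=2.0, near_tol=20.0):
    """Classify the probe/feature relationship into a discrete event.

    Distances are in image pixels; the tolerances are illustrative
    assumptions. The returned token would be fed to the DEDS controller.
    """
    dist = math.dist(probe_xy, feature_xy)
    if dist <= touch_tol:
        return "TOUCH"
    if dist <= near_tol:
        return "NEAR"
    return "FAR"
```

A controller consuming these tokens could then print guidance such as "move the probe toward the feature" on `NEAR` and record a measurement on `TOUCH`.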
The automaton used in the environment is shown in Figure 17. This machine has the following states: