Due to increases in processor speed, the complex calculations required by 3D algorithms are now practical at production rates. This opens new possibilities for solving problems not easily addressed with traditional 2D techniques, across a wide variety of identification, inspection, and guidance applications.
Laser Profiling
Laser profiling captures the 3D surface of a part through triangulation. A laser line is projected onto the part and is deformed by the height variations of the part. Through calibration, the position of the laser line within the camera's image corresponds to a Z-height. By stitching together multiple profiles, a depth map of the whole part can be generated.
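As a sketch of the calibration step, suppose the image row of the laser line has been measured at a few known heights (the sample pairs below are hypothetical, and a linear row-to-height mapping is assumed); a least-squares line fit then converts any observed row to a Z-height:

```python
def calibrate(samples):
    """Least-squares fit z = a*row + b from (row, z_height) calibration pairs."""
    n = len(samples)
    sr = sum(r for r, _ in samples)
    sz = sum(z for _, z in samples)
    srr = sum(r * r for r, _ in samples)
    srz = sum(r * z for r, z in samples)
    a = (n * srz - sr * sz) / (n * srr - sr * sr)
    b = (sz - a * sr) / n
    return a, b

def row_to_height(row, a, b):
    # map an observed laser-line row in the image to a Z-height
    return a * row + b
```

In practice the mapping is often nonlinear across the field of view, so real systems calibrate with a denser target and a higher-order model; the linear fit only illustrates the idea.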
Multi-Camera Stereo Vision
Multi-camera stereo vision captures 3D image information from multiple 2D views of a part or product. Stereo vision is often used in robot navigation to estimate the distance (range) of a particular object from a camera. Image information is presented in a stereo disparity map created by matching corresponding image coordinates gathered from multiple cameras.
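For a rectified two-camera rig, each value in the disparity map converts to a range via the standard triangulation relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A minimal sketch (the focal length and baseline values in the example are hypothetical):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Range of a point from its disparity in a rectified stereo pair.

    Z = f * B / d: larger disparity means the point is closer.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with an 800-pixel focal length and a 10 cm baseline, a 40-pixel disparity corresponds to a 2 m range; note that range resolution degrades quadratically with distance, since far points differ by only fractions of a pixel of disparity.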
Photometric Stereo
Photometric stereo uses a number of images to reconstruct the object surface. The camera and the object are fixed, while the object is illuminated by turning on multiple light sources, one light at a time for each image. By knowing the orientations and positions of the various light sources, as well as the reflectance properties of the object, the surface can be reconstructed from the multiple images.
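Under the simplest (Lambertian) reflectance assumption, each pixel's intensity in image i is I_i = albedo · (l_i · n), where l_i is the unit direction to light i and n is the surface normal. With three non-coplanar lights this is a 3×3 linear system per pixel; a sketch in pure Python (the light directions and intensities used in the example are hypothetical):

```python
import math

def solve3(A, b):
    """Solve a 3x3 linear system A x = b by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    def col_replaced(col):
        return [[b[r] if c == col else A[r][c] for c in range(3)] for r in range(3)]
    return [det(col_replaced(c)) / d for c in range(3)]

def normal_from_intensities(light_dirs, intensities):
    """Recover surface normal and albedo at one pixel.

    Lambertian model: I_i = albedo * dot(l_i, n). Solving L g = I gives
    g = albedo * n, so albedo = |g| and n = g / |g|.
    """
    g = solve3(light_dirs, intensities)
    albedo = math.sqrt(sum(x * x for x in g))
    normal = [x / albedo for x in g]
    return normal, albedo
```

Real implementations use more than three lights with a least-squares solve, which also helps reject shadowed or specular pixels that violate the Lambertian assumption.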
Depth From Focus
A three-dimensional object within a scene has multiple points at different distances from the camera. The camera's optics are configured to have a limited depth of field, so that at any given focal distance some object points are in focus and others are not. By acquiring multiple images at various focal distances, each object point appears sharp in at least one image. Algorithms then determine in which image an object point is projected most sharply, and from that image's focal distance the distance to the point can be determined.
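One common sharpness score is simple local intensity variance: for each small pixel neighborhood, the score is compared across the image stack and the focal distance of the sharpest image is taken as the depth. A minimal sketch (the patch values and focal distances below are hypothetical):

```python
def focus_measure(patch):
    # variance of a small grayscale patch: high for sharp texture, low for blur
    mean = sum(patch) / len(patch)
    return sum((p - mean) ** 2 for p in patch) / len(patch)

def depth_at_point(patch_stack, focal_distances_mm):
    """Pick the focal distance whose image shows this neighborhood most sharply.

    patch_stack holds the same pixel neighborhood sampled from each image
    in the focus stack, one entry per focal distance.
    """
    scores = [focus_measure(p) for p in patch_stack]
    return focal_distances_mm[scores.index(max(scores))]
```

Production systems typically use gradient- or Laplacian-based focus measures and interpolate the score peak between focal steps for sub-step depth resolution; the variance score here only illustrates the principle.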
Time of Flight
The time-of-flight method is a hardware-based technology that uses a camera with an integrated light source. A sensor in the camera measures the time between the emission of light by the camera and the return of light reflected from the object's surface. Different distances correspond to different time gaps as measured by the camera.
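Since the measured interval covers the round trip out to the surface and back, the range is half the light's travel distance, d = c·t/2. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def range_from_round_trip(round_trip_s):
    # light covers the camera-to-object distance twice, so halve the trip
    return SPEED_OF_LIGHT * round_trip_s / 2.0
```

A round trip of roughly 6.67 nanoseconds corresponds to a 1 m range, which shows why time-of-flight sensors need picosecond-scale timing resolution (or indirect phase-shift measurement) to achieve millimeter accuracy.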