Vendors of machine vision equipment sell hardware; they rarely guarantee the solution. Suppose the LED lighting, lens, and sensor combination purchased cannot deliver the required measurement accuracy because of poor contrast, or because not all of the object is in focus; in that case, one or more pieces of equipment may become surplus, sitting on a shelf collecting dust in the hope that it can be used on the next project.

Machine vision is a field where mathematical principles play a crucial role in ensuring the robustness of the system. Although a rough estimation of the camera’s location and the required lens can be made, more detailed calculations can mitigate risks and improve image quality.

The spatial resolution of a vision system is much more than how many megapixels the camera has. One interesting fact is that a camera with higher resolution and smaller pixels may produce a poorer quality image because the smallest achievable spot size of the lens exceeds the size of the sensor’s pixel.

When light rays pass through a small aperture, they begin to diverge and interfere. The interference becomes more significant as the size of the aperture decreases relative to the wavelength of light passing through, but it occurs to some extent for any aperture or concentrated light source. Since the divergent rays now travel different distances, some move out of phase and begin interfering with each other — adding in some places and partially or completely canceling out in others. This interference produces a diffraction pattern with peak intensities where the amplitudes of the light waves add and less light where they subtract. If one were to measure the intensity of light reaching each position along a line, the measurements would appear as alternating bright and dark bands.

For an ideal circular aperture, the 2-D diffraction pattern is called an “Airy disk.” The width of the Airy disk, defined as the diameter of the first dark ring, is used to define the theoretical maximum resolution of an optical system.

However, when the diameter of the Airy disk’s central peak becomes large relative to the pixel size of the camera, it begins to have a visible impact on the image. Once two Airy disks are closer together than half their diameter, they can no longer be resolved — this is the diffraction limit.

Diffraction thus sets a fundamental resolution limit that is independent of the number of megapixels and can be calculated with the following formula:

**∅Airy Disk ≈ 2.44λN**

Where:

• **N** is the f-number (f/#) of the lens

• **λ** is the wavelength of the light

For a particular wavelength, it is worth checking the math so that the Airy disk diameter does not exceed roughly two pixels; beyond that, it appears as blur in the image. This two-pixel rule of thumb (three is pushing it) defines the circle of confusion for the sensor.
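As a quick sanity check, the Airy disk formula can be evaluated in a few lines of Python. This is a minimal sketch; the 520 nm wavelength and the 3.45 µm pixel pitch are assumptions chosen only for illustration, not values from any particular camera.

```python
def airy_disk_diameter_um(wavelength_nm: float, f_number: float) -> float:
    """Diameter of the Airy disk (to the first dark ring), in micrometres."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# Assumed example: green light (~520 nm) through an f/4 lens.
d = airy_disk_diameter_um(520, 4.0)   # ≈ 5.08 µm
pixel_um = 3.45                        # assumed sensor pixel pitch
print(f"Airy disk: {d:.2f} µm spans {d / pixel_um:.1f} pixels")
```

With these numbers the Airy disk spans about 1.5 pixels, which stays within the two-pixel rule of thumb; stopping the same lens down to f/8 would double the disk to roughly three pixels and visibly blur the image.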

In gauging applications, it is especially important NOT to use white light, but monochromatic light instead. A mono camera sensor is sensitive to a broad range of wavelengths, which are averaged together to produce the image; with white light, each wavelength diffracts and focuses slightly differently, smearing the result.

To recap: the shorter the wavelength, the smaller the Airy disk diameter, and the smaller the f-number, the smaller the Airy disk diameter. Depending on the sensor chosen, red LED lighting might not be the best choice; a shorter wavelength such as green may produce better contrast and a sharper image.

(source - VS Technology)

The challenge when designing the optics for a machine vision system is that spatial resolution and depth of field pull in opposite directions: as the f-number gets smaller, spatial resolution improves but the depth of field (DOF) shrinks. DOF refers to the range of distances (in the object plane) that appears acceptably sharp in an image. It is a critical parameter in imaging systems, particularly in machine vision applications, where features are often at varying distances from the camera. A larger depth of field keeps more of the scene in focus, but the larger f-number it requires increases diffraction and reduces contrast, which becomes a balancing act for the engineer designing the imaging system.

The depth of field is affected by several factors, including the lens aperture, focal length, and the distance between the camera and the object. When the object distance is much greater than the focal length, the total DOF can be approximated by:

**DOF ≈ 2Ncs^2/f^2**

Where:

• **N** is the f-number of the lens

• **f** is the focal length of the lens

• **c** is the circle of confusion

• **s** is the distance from the lens to the object in focus

As stated previously, the circle of confusion represents the maximum allowable blur diameter that can still be perceived as sharp by the human eye. It is typically determined by the pixel size of the camera sensor and the viewing distance.
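The trade-off can be sketched numerically using the common thin-lens DOF approximation, valid when the object distance is much greater than the focal length. The f/4, 25 mm, 7 µm, and 300 mm figures below are illustrative assumptions only.

```python
def dof_mm(N: float, f_mm: float, c_mm: float, s_mm: float) -> float:
    """Approximate total depth of field (thin lens, s much greater than f)."""
    return 2 * N * c_mm * s_mm**2 / f_mm**2

# Assumed setup: f/4 aperture, 25 mm lens, 7 µm circle of confusion,
# object focused at 300 mm from the lens.
dof = dof_mm(4, 25, 0.007, 300)   # ≈ 8.06 mm of acceptable focus
```

Doubling the f-number to f/8 doubles the DOF to roughly 16 mm, but — as discussed above — at the cost of a larger Airy disk and lower contrast.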

As the f-number gets smaller, the depth of field decreases while the image’s sharpness increases, because the larger aperture reduces diffraction effects. Conversely, a smaller aperture (larger f-number) increases the depth of field but also increases diffraction, which reduces contrast. Increasing the distance between the camera and the object in focus can increase the depth of field, but this is not always possible in machine vision applications where the object’s position may be fixed.

The focus distance mentioned above is not located at the center of the DOF; the split is roughly a 1:2 ratio. Of the total DOF, approximately one third lies in front of and two thirds behind the plane of best focus.
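This asymmetry can be checked with the standard thin-lens near/far-limit formulas. The numbers below (f/4, 25 mm lens, 7 µm circle of confusion, focus at 7.4 m) are assumptions chosen so the object distance sits in the range where the 1:2 split roughly holds; note that the split drifts toward 1:1 at close range and becomes extreme near the hyperfocal distance.

```python
def dof_limits_mm(N: float, f_mm: float, c_mm: float, s_mm: float):
    """Near and far limits of acceptable focus (standard thin-lens formulas)."""
    x = N * c_mm * (s_mm - f_mm)
    near = s_mm * f_mm**2 / (f_mm**2 + x)
    far = s_mm * f_mm**2 / (f_mm**2 - x)
    return near, far

# Assumed setup: f/4, 25 mm lens, c = 7 µm, focused at 7.4 m.
near, far = dof_limits_mm(4, 25, 0.007, 7400)
front, behind = 7400 - near, far - 7400   # front:behind comes out near 1:2
```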

Lens selection can get far more involved. Lens aberrations such as distortion, chromatic aberration, and spherical aberration can also affect the accuracy and quality of machine vision systems. Performance characteristics, like the total distortion percentage, need to be reviewed on a lens specification sheet. Still, at a minimum, the spatial resolution of the lens should exceed the spatial resolution of the camera’s sensor.

The spatial resolution of a lens refers to its ability to distinguish between two closely spaced objects or features in an image. It is determined by the lens’s optical quality and design, as well as its aperture and focal length. A lens with higher spatial resolution can produce sharper and more detailed images with better contrast and less distortion. The spatial resolution of a lens is typically specified in terms of its resolving power, which is measured in line pairs per millimeter (lp/mm) on the image plane. The higher the resolving power, the finer the details the lens can resolve in an image.

The spatial resolution of an imaging system is ultimately determined by the lower of the two resolutions. That means that if the lp/mm of the lens is higher than the spatial resolution determined by the pixel pitch of the sensor, the full resolution of the lens cannot be realized in the final image. Conversely, if the lp/mm of the lens is lower than the spatial resolution determined by the pixel pitch, the final image will be limited by the resolution of the lens.

The spatial resolution of a sensor can be calculated by:

**Sensor lp/mm = r/2w**

Where:

• **w** is the width of the sensor (in mm)

• **r** is the number of pixels across the sensor’s width
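This follows from the Nyquist limit: one line pair needs at least two pixels to be resolved. The sketch below puts the sensor and lens numbers side by side; the 2448-pixel, 8.45 mm sensor and the 120 lp/mm lens figure are illustrative assumptions, not values from any particular datasheet.

```python
def sensor_lp_per_mm(pixels_across: int, width_mm: float) -> float:
    """Nyquist-limited sensor resolution: one line pair spans two pixels."""
    return pixels_across / (2 * width_mm)

# Assumed 5 MP sensor: 2448 pixels across an 8.45 mm wide active area.
sensor_res = sensor_lp_per_mm(2448, 8.45)   # ≈ 145 lp/mm
lens_res = 120.0                             # assumed lens resolving power
system_res = min(sensor_res, lens_res)       # system limited by the lower value
```

Here the assumed lens, not the sensor, is the bottleneck: pairing this sensor with a sharper lens would improve the final image, while a higher-megapixel sensor would not.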

In summary, machine vision is a field that relies heavily on mathematical principles to ensure the accuracy and robustness of imaging systems. The diffraction limit sets a fundamental resolution limit that is independent of the number of megapixels for a camera, and the depth of field is a critical parameter that needs to be balanced against the spatial resolution in imaging systems. By understanding these concepts and applying them in the design and implementation of machine vision systems, engineers can ensure optimal performance and reliability.