Before cement is applied, the smooth side of the tread surface must be inspected to ensure quality adhesion of the tread to the tire carcass. A human inspector can easily find defects such as blisters and cracks, but small defects are a challenge, and some surface anomalies can only be seen by shifting viewing position to catch variations in surface sheen.
Our team shadowed human inspectors while collecting and annotating images of the product, then developed an image-based deep learning model to detect surface anomalies. Through further data collection, annotation, and model iteration, the system gained the ability to classify the anomalies into several categories. Traditional rule-based vision algorithms were then added to establish pass/fail criteria based on shape, size, and brightness for each classification, reducing the need for 100% human inspection.
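The rule-based pass/fail stage described above can be sketched as a per-class threshold check. This is a minimal illustration only: the class names, measured properties, and limit values below are hypothetical, not the project's actual criteria.

```python
# Hypothetical per-class limits (area in mm^2, length in mm, brightness 0-255).
# The real project developed criteria per anomaly classification; these
# values are purely illustrative.
CRITERIA = {
    "blister": {"max_area": 4.0, "max_brightness": 180},
    "crack":   {"max_length": 2.5, "max_brightness": 150},
}

def evaluate(classification, measurements):
    """Return 'pass' if every measured property stays within its
    class-specific limit, otherwise 'fail'."""
    limits = CRITERIA.get(classification)
    if limits is None:
        return "fail"  # unknown class: flag for human review
    for key, limit in limits.items():
        prop = key.replace("max_", "")
        if measurements.get(prop, 0.0) > limit:
            return "fail"
    return "pass"
```

In practice the measurements would come from the deep learning model's detected region (contour area, bounding-box extent, mean pixel intensity), with the rule layer making the final disposition.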
The inside of a tire is manually inspected for surface defects and penetration by foreign objects. The inspection is physically awkward, and small objects are hard to see by eye, so the vision system handles the imaging task.
For this pilot project, a camera on a linear slide and a rotating mirror, constrained to a 12" x 3" package, were mounted on the end of a linear arm and joysticked into the center of the tire. Laser gauges provide placement feedback to the operator. The interior is laser-profiled to calculate the optimal camera distance so that the entire field of view is in focus. The mirror is centered in the field of view, and the tire is rotated while the camera acquires a series of images. After each series, the mirror is stepped to the next field of view, and the process repeats until the interior is fully imaged. The images are then presented to the operator on a 34" curved monitor for manual inspection. The operator reviews the images and draws boxes around possible defects, and this information is archived with an identifier. At a later date, this data can be used to train a deep learning system.
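The archived operator annotations could be stored as simple JSON records keyed by the identifier. The record layout below is a hypothetical sketch, not the project's actual schema; field names are assumptions for illustration.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical annotation record: one entry per reviewed image,
# archived under the tire's identifier for later training use.
@dataclass
class Annotation:
    tire_id: str                # identifier the series is archived under
    image_index: int            # position within the acquired image series
    boxes: list = field(default_factory=list)  # operator-drawn (x, y, w, h) boxes
    label: str = "possible_defect"

def to_record(ann):
    """Serialize one annotation to a JSON line for the training archive."""
    return json.dumps(asdict(ann))
```

Storing one JSON line per image keeps the archive append-only and easy to convert into a training set later.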
An animal health sciences start-up was seeking a subject matter expert to develop a machine vision solution to determine the orientation of a baby chicken's head for vaccine application.
For this SBIR Phase 1 project, a custom lighting and optical solution was developed to acquire images of baby chicks as they passed under the camera. 1,000 images were collected and annotated; two-thirds of the images were then used to train deep learning algorithms. Detection performance was 97%, with an execution time of 19-22 ms.
Welding tips used to spot weld the housing to the band produce varied results as they wear from new to end of life. From time to time, a weld would fail to fully engage, producing a light weld or no weld at all.
5,000 images were collected and annotated, then used to train a neural network-based decision engine (a predecessor to modern deep learning algorithms). The system performed well once new tips had been broken in, after roughly 2,000 cycles (average tip life cycle: 35k-40k cycles).