A shrink-wrap product sleeve is placed onto the bottle before it enters a steam tunnel. Occasionally the sleeve is out of position or gets snagged, producing an unaesthetic package that is not suitable for retail, so each bottle must be manually inspected prior to filling. Sleeve colors span the rainbow and many shades in between, including shades of white placed on a white bottle.
The bottle is side-belt transferred to an exit conveyor, where it is inspected with a GigE camera. Using a combination of UV and white LED lighting along with a specialty cut filter, the UV lighting causes a blue shift in the color of the product sleeve, while the white bottle appears grey. Because traditional color segmentation is unreliable on greys, an in-house custom color segmentation routine was used to segment the exposed bottom of the bottle not covered by the product sleeve. This allowed the bottle to be profiled and measured, enabling a quantitative pass/fail decision.
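The grey-on-grey segmentation can be sketched as a simple chromaticity test: a pixel counts as "grey" when its channels are close together and its brightness falls in a mid range. This is a minimal illustration, not the in-house routine; the thresholds and function names are assumptions.

```python
def is_grey(r, g, b, max_chroma=20, min_value=60, max_value=220):
    """Classify an RGB pixel as 'grey': channels close together (low
    chroma) and brightness in a mid range, avoiding specular highlights
    and deep shadow. Thresholds here are illustrative assumptions."""
    v = max(r, g, b)
    chroma = v - min(r, g, b)
    return chroma <= max_chroma and min_value <= v <= max_value

def segment_bottle_bottom(rows):
    """Count grey pixels per image row -- a rough profile of the exposed
    (unsleeved) bottle bottom that can be measured against the expected
    band height for a pass/fail decision."""
    return [sum(1 for px in row if is_grey(*px)) for row in rows]
```

Comparing the resulting per-row profile against the expected exposed-band height is one way a quantitative pass/fail threshold could be applied.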
When customers are finished with a laser ink cartridge, they often send it back using the return box provided with new cartridges. These cartridges arrive at centralized recycling centers across the country by the truckload. Used cartridges are removed from their packaging and thrown onto a conveyor belt for manual sorting. As volumes steadily increase, additional manpower is required to aid in the sorting. Since the recycling center is credited for each cartridge processed, and certain cartridges have a higher number of recyclable components than others, an accurate count of each model is highly desired.
A PC-based vision system monitors the conveyor for cartridges as they enter the field of view, at which point it records each cartridge's encoder location and orientation. Using a blob tool, the system pre-classifies cartridges based on their shape, running parallelized pattern matching on a high-end multi-core processor. In some instances, secondary pattern matching is performed to identify other key markers when subcategorization is required. A local register retains counts of each cartridge identified. Encoder location, orientation, and cartridge ID are passed to a spider robot, which picks and places the cartridges into large bins. Multiple instances and orientations of well over one hundred cartridges were trained into the system.
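The blob-based pre-classification followed by count accumulation can be illustrated with a coarse shape-feature lookup. The model names, feature ranges, and thresholds below are invented for illustration; the production system used a blob tool and trained pattern models rather than hand-set ranges.

```python
from collections import Counter

# Hypothetical shape library: model_id -> (area range in px, aspect-ratio range)
CARTRIDGE_LIBRARY = {
    "MODEL-A": ((9000, 11000), (1.8, 2.2)),
    "MODEL-B": ((6000, 8000), (1.2, 1.6)),
}

def preclassify(area, aspect):
    """Blob-style pre-classification: match coarse shape features first,
    so the more expensive pattern matching only runs on candidates."""
    for model, ((a_lo, a_hi), (r_lo, r_hi)) in CARTRIDGE_LIBRARY.items():
        if a_lo <= area <= a_hi and r_lo <= aspect <= r_hi:
            return model
    return None

# A local register of counts, as in the source: one tally per model seen.
counts = Counter()
for area, aspect in [(10000, 2.0), (7000, 1.4), (10500, 1.9)]:
    model = preclassify(area, aspect)
    if model:
        counts[model] += 1
```

In the real system each classified blob would also carry its encoder location and orientation downstream to the spider robot.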
The inside of a tire is manually inspected for surface defects and penetrations by foreign objects. Because the task is physically awkward and small objects are difficult to see by eye, the vision system handles the imaging.
For this pilot project, a camera on a linear slide with a rotating mirror, constrained to a 12” x 3” package, was mounted on the end of a linear arm and joysticked into the center of the tire. Laser gauges provide feedback to the operator for ideal placement. The interior is laser-profiled to calculate the optimal camera distance such that the entire field of view is in focus. The mirror is centered on the field of view and the tire is rotated while the camera acquires a series of images. After each series, the mirror is adjusted to the next field of view and the process is repeated until the interior is fully imaged. The images are then presented to the operator on a 34” curved monitor for manual inspection. The operator reviews the images and draws boxes around possible defects, and this information is archived with an identifier. At a later date, the data can be used to train a deep learning system.
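The mirror-stepping coverage reduces to simple tiling arithmetic: given the interior width to cover and the field-of-view extent at the working distance, with some overlap between adjacent views, compute how many mirror positions are needed. A minimal sketch; the overlap fraction and dimensions are assumptions, not values from the project.

```python
import math

def mirror_positions(interior_width_mm, fov_mm, overlap=0.15):
    """Number of mirror steps needed to tile the tire interior width,
    with a fractional overlap between adjacent fields of view so no
    seam is missed. Overlap of 0.15 is an assumed value."""
    step = fov_mm * (1.0 - overlap)          # effective advance per step
    return math.ceil((interior_width_mm - fov_mm) / step) + 1
```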
The QR code and human-readable text on the trading card would occasionally not match. The user configuration and faulty remote software were updated.
Deployed in less than two weeks with surplus stock, using 4VT’s standard framework and a VisionPro QuickBuild script. Cards were fed onto a vacuum belt and triggered by a photoeye; if the QR code and text did not match, the system blew the product off at the air gap between the vacuum belt and the accumulator belts, at a rate of 10 cards per second.
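The match/no-match decision can be sketched as a normalized comparison between the decoded QR payload and the OCR'd human-readable text, tolerant of case and punctuation differences. This is an illustration of the concept only; the deployed script's exact normalization rules are not known here.

```python
def normalize(s):
    """Strip everything but alphanumerics and fold case, so 'ABC-123'
    and 'abc 123' compare equal."""
    return "".join(ch for ch in s.upper() if ch.isalnum())

def card_passes(qr_payload, ocr_text):
    """Pass only when the decoded QR payload matches the human-readable
    text; a False result triggers the air-gap reject."""
    return normalize(qr_payload) == normalize(ocr_text)
```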
An Animal Health Sciences start-up was seeking a subject-matter expert to develop a machine vision solution to determine the orientation of a baby chicken's head for vaccine application.
For this SBIR Phase 1 project, a custom lighting and optical solution was developed to acquire images of baby chicks as they passed under the camera. 1,000 images were collected and annotated; two-thirds of the images were then fed into deep learning algorithms. Detection performance was 97%, with an execution time of 19-22 ms.
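The two-thirds training split can be sketched as a deterministic shuffle-and-cut over the annotated images. The helper name and seed are assumptions; the project's actual pipeline is not described in detail.

```python
import random

def split_dataset(items, train_frac=2/3, seed=42):
    """Shuffle deterministically (fixed seed for reproducibility) and
    split annotated items into train / held-out sets, 2/3 : 1/3 as in
    the project description."""
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = round(len(items) * train_frac)
    return items[:cut], items[cut:]
```

The held-out third is what makes the reported 97% detection figure meaningful as an out-of-sample measurement.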
From new to end of life, the welding tips used to spot-weld the housing to the band produce varied results. From time to time, a weld would fail to fully engage and would produce a light weld or no weld at all.
5,000 images were collected and annotated, then applied to a neural network-based decision engine (a predecessor to modern deep learning algorithms). The system performed well once new tips were broken in, after about 2,000 cycles. (Average tip life cycle: 35k-40k cycles.)
International mail brought in through airline processing centers often arrives in large sacks with a 4”x6” card attached. Depending on the country of origin, some of these cards are handwritten and others are printed. The individual at the processing window must manually enter all of the information on the mail tag into a data entry system before the sack can be thrown onto a conveyor belt for processing. This creates a bottleneck in the workflow, since only so many transfer windows can be manned and managed to allow baggage/cargo trucks to be unloaded.
A portable prototype imager with a flat surface allowed the mail handlers to position the mail tag for image acquisition. At the start of a new load, the mail handler would enter the airline and flight number along with the number of pieces. The handler would take an image of the tag, and the local vision PC would attempt to read all of the text on it, then present the image along with the data to the handler. Once approved, the data would be saved to a database. If the tag was not readable, the image was sent to a remote location where it was presented to a person for manual entry. This allowed a load to be reconciled, with mail bags moved as quickly as the manual imaging process would allow.
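The read-or-escalate routing can be sketched as a per-field confidence check: if any OCR'd field falls below a confidence threshold, the tag image is routed to the remote operator for manual keying; otherwise it goes to the local handler for approval. The field names and threshold below are assumptions for illustration.

```python
def route_tag(fields, min_conf=0.85):
    """Decide where an OCR'd mail tag goes. `fields` maps a field name
    to (recognized_text, confidence). Any low-confidence field sends
    the whole tag to remote manual entry; threshold is an assumption."""
    low = [name for name, (_, conf) in fields.items() if conf < min_conf]
    return "remote_manual_entry" if low else "local_approval"
```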
The supplier of the machines that manufacture Permanent Residency Cards needed to ensure that the text printed on each card matched what was contained in the database, that the person's face was printed clearly, and to verify the presence of several security features.
Standard lighting techniques and Optical Character Recognition routines were used to validate the printed text against the data contained in the database. Image correlation was used to validate the person's face printed on the cards. Multiple lighting techniques and LED wavelengths were combined with different image processing algorithms to verify the presence of all security features.
The supplier of the machines that manufacture Driver's Licenses needed to ensure the text printed on each card matched what was contained in the database, that the person's face was printed clearly, and to verify the presence of the engraved birth date on the card.
Standard lighting techniques and Optical Character Recognition routines were used to validate the printed text against the data contained in the database. Image correlation was used to validate the person's face printed on the cards. A photometric stereo imaging technique with IR LED lighting was used to extract the engraving and filter out the print on the driver's license.
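Photometric stereo recovers surface orientation from images taken under multiple known light directions; an engraved character tilts the surface normal while flat ink print does not, which is how the engraving can be separated from the print. A minimal sketch under a Lambertian model with three idealized axis-aligned lights (an assumption for clarity; real rigs calibrate arbitrary light directions and solve a least-squares system per pixel):

```python
import math

# Assumed idealized rig: three unit lights along the x, y, z axes.
# Under Lambert's law, pixel intensity I_k = albedo * (L_k . n), so with
# axis-aligned lights the three intensities are proportional to n itself.
def surface_normal(intensities):
    """Recover the unit surface normal at one pixel from its three
    intensity measurements under the axis-aligned lights above."""
    norm = math.sqrt(sum(i * i for i in intensities))
    return tuple(i / norm for i in intensities)
```

A flat printed mark leaves the normal pointing straight at the camera, whereas an engraving edge tilts it sideways, so thresholding the tilt isolates the engraved birth date from the surrounding print.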