A shrink-wrap product sleeve is placed onto the bottle before it enters a steam tunnel. Occasionally the sleeve is out of position or gets snagged, resulting in an unaesthetic package that is not suitable for retail. Each bottle must be manually inspected prior to filling. Sleeve colors span the rainbow and many shades in between, including shades of white placed on a white bottle.
The bottle is side-belt transferred to an exit conveyor, where it is inspected with a GigE camera. Using a combination of UV and white LED lighting along with a specialty cut filter, the UV lighting causes a blue shift in the color of the product sleeve, while the white bottle appears grey. Because traditional color segmentation is unreliable with greys, an in-house custom color segmentation routine was used to segment the exposed bottom of the bottle not covered by the sleeve. This allowed the bottle to be profiled and measured, enabling a quantitative pass/fail decision.
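The core idea — sleeve pixels shift toward blue under UV while the white bottle renders as low-chroma grey — can be sketched as a simple per-pixel chroma test. This is a minimal illustration, not the in-house routine; the thresholds and the row-voting rule are assumptions.

```python
def is_grey(pixel, chroma_max=25, val_min=60):
    """Classify a pixel as 'grey bottle' rather than 'blue-shifted sleeve'.

    Under the UV/white mix, sleeve pixels shift toward blue (high chroma),
    while the white bottle renders as low-chroma grey. Chroma here is the
    simple max-minus-min channel spread; thresholds are illustrative.
    """
    r, g, b = pixel
    chroma = max(r, g, b) - min(r, g, b)
    return chroma <= chroma_max and max(r, g, b) >= val_min

def exposed_rows(image):
    """Return indices of rows that are mostly grey, i.e. exposed bottle.

    `image` is a list of rows of (R, G, B) tuples. A row counts as
    'exposed' when more than half its pixels classify as grey.
    """
    rows = []
    for y, row in enumerate(image):
        grey = sum(is_grey(p) for p in row)
        if grey > len(row) / 2:
            rows.append(y)
    return rows

# Synthetic 4-row strip: two blue-shifted sleeve rows over two grey rows.
sleeve_row = [(40, 60, 200)] * 8
bottle_row = [(180, 182, 185)] * 8
strip = [sleeve_row, sleeve_row, bottle_row, bottle_row]
print(exposed_rows(strip))  # rows 2 and 3 are exposed bottle
```

The height of the exposed band (here, the count of grey rows) is the kind of measurement that feeds a quantitative pass/fail threshold.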
When customers are finished with a laser ink cartridge, they often send it back using the return box provided with new cartridges. These cartridges arrive at centralized recycling centers across the country by the truckload. Used cartridges are removed from their packaging and thrown onto a conveyor belt for manual sorting. As volumes steadily increase, additional manpower is required to aid in the sorting. Since the recycling center is credited for each cartridge processed, and certain cartridges have a higher number of recyclable components than others, an accurate count of each model is highly desired.
A PC-based vision system monitors the conveyor for cartridges as they enter the field of view, at which point it records each cartridge's encoder location and orientation. Using a blob tool, the system pre-classifies cartridges by shape, with a high-end multi-core processor parallelizing the pattern matching. In some instances, secondary pattern matching identifies other key markers when subcategorization is required. A local register retains counts of each cartridge identified. Encoder location, orientation, and cartridge ID are passed to a spider robot, which picks and places the cartridges into large bins. Multiple instances and orientations of well over one hundred cartridge models were trained into the system.
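Shape-based pre-classification with a blob tool boils down to comparing a few geometric descriptors against trained templates. A minimal sketch of that idea, assuming templates trained offline (the feature set and model names are illustrative; a production blob tool offers a far richer descriptor set):

```python
def shape_features(mask):
    """Compute simple blob descriptors from a binary mask (rows of 0/1):
    area, bounding-box aspect ratio, and extent (area / bbox area)."""
    ys = [y for y, row in enumerate(mask) if any(row)]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    area = sum(sum(row) for row in mask)
    h = ys[-1] - ys[0] + 1
    w = max(xs) - min(xs) + 1
    return (area, max(w, h) / min(w, h), area / (w * h))

def classify(mask, templates):
    """Pre-classify a blob by the nearest trained template in feature space."""
    f = shape_features(mask)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(f, templates[name]))
    return min(templates, key=dist)

# Hypothetical trained templates: (area, aspect ratio, extent) per model.
templates = {"toner_long": (12, 3.0, 1.0), "toner_square": (9, 1.0, 1.0)}
mask = [[0] * 8,
        [0, 1, 1, 1, 1, 1, 1, 0],
        [0, 1, 1, 1, 1, 1, 1, 0],
        [0] * 8]
print(classify(mask, templates))  # → toner_long
```

Nearest-template matching like this is cheap enough to parallelize across cores, which is what makes the pre-classification step worth doing before the heavier pattern matching.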
The inside of a tire is manually inspected for surface defects and penetrations by foreign objects. The work is physically awkward and small objects are difficult to see, so the vision system handles that task.
For this pilot project, a camera on a linear slide with a rotating mirror, constrained to a 12” x 3” package, was mounted on the end of a linear arm and joysticked into the center of the tire. Laser gauges provide feedback to the operator for ideal placement. The interior is laser profiled to calculate the optimal camera distance such that the entire field of view is in focus. The mirror is centered on the field of view and the tire is rotated while the camera acquires a series of images. After each series, the mirror is adjusted to the next field of view and the process repeats until the interior is fully imaged. Images are then presented to the operator on a 34” curved monitor for manual inspection. The operator reviews the images and draws boxes around possible defects. This information is archived with an identifier so that, at a later date, the data can be used to train a deep learning system.
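Choosing a camera distance so the whole field of view stays in focus is a depth-of-field calculation. A hedged sketch of the underlying arithmetic using the standard thin-lens approximations (the lens and sensor values below are illustrative, not the pilot system's actual optics):

```python
def depth_of_field(f_mm, n, s_mm, coc_mm=0.02):
    """Approximate near/far limits of acceptable focus (thin-lens model).

    f_mm:   focal length
    n:      f-number (aperture)
    s_mm:   subject (working) distance
    coc_mm: circle of confusion for the sensor
    """
    h = f_mm ** 2 / (n * coc_mm) + f_mm            # hyperfocal distance
    near = h * s_mm / (h + (s_mm - f_mm))
    far = h * s_mm / (h - (s_mm - f_mm)) if s_mm < h else float("inf")
    return near, far

# Example: 12 mm lens at f/8, profiled working distance of 300 mm.
near, far = depth_of_field(f_mm=12, n=8, s_mm=300)
print(near, far)  # the tire surface must fall between these limits
```

The laser profile tells the system how much the interior surface deviates from the nominal distance; as long as that deviation fits inside the near/far window, the whole field of view is acceptably sharp.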
To prevent injection of counterfeit product into the supply chain, a client asked the print shop manufacturer to serialize all products placed onto consumer shelves, along with their inner boxes, master cases, and pallets.
A custom ASP.Net application was developed to augment the manual process of packing products into inner boxes and master cases and stacking them on pallets. The system generated barcode labels and printed them on a Zebra printer; the labels were affixed to the products and boxes. Each product was scanned with a Cognex Dataman handheld ID reader prior to being packed into a box. Once the box was filled, the box's barcode was scanned and the relational data was stored to a database. Similarly, filled inner boxes were added to a master case, the boxes were scanned, and the relational data stored. Master cases were then added to a pallet, and once the pallet was full, a pallet flag was generated containing all the master cases, along with total product and inner box counts.
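The serialization hierarchy is relational at heart: product → inner box → master case → pallet, with the pallet flag's counts rolling up through the joins. A minimal sketch of that data model using SQLite (table and column names are illustrative, not the production schema):

```python
import sqlite3

# product -> inner box -> master case -> pallet, each child keyed to its parent.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE pallet      (id TEXT PRIMARY KEY);
    CREATE TABLE master_case (id TEXT PRIMARY KEY, pallet_id TEXT REFERENCES pallet(id));
    CREATE TABLE inner_box   (id TEXT PRIMARY KEY, case_id   TEXT REFERENCES master_case(id));
    CREATE TABLE product     (serial TEXT PRIMARY KEY, box_id TEXT REFERENCES inner_box(id));
""")

# Simulate scanning: two products into a box, the box into a case, the case onto a pallet.
db.execute("INSERT INTO pallet VALUES ('PAL-1')")
db.execute("INSERT INTO master_case VALUES ('CASE-1', 'PAL-1')")
db.execute("INSERT INTO inner_box VALUES ('BOX-1', 'CASE-1')")
db.executemany("INSERT INTO product VALUES (?, 'BOX-1')", [("SN-001",), ("SN-002",)])

# The pallet flag's total product count rolls up through the relations.
(count,) = db.execute("""
    SELECT COUNT(*) FROM product
    JOIN inner_box   ON product.box_id   = inner_box.id
    JOIN master_case ON inner_box.case_id = master_case.id
    WHERE master_case.pallet_id = 'PAL-1'
""").fetchone()
print(count)  # → 2
```

Keeping every scan as a parent-child row is what makes the counterfeit check possible later: any serial found in the field can be traced back through box, case, and pallet.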
The QR code and human-readable text on the trading card would occasionally not match. The user configuration and the faulty remote software were updated.
Deployed in less than two weeks with surplus stock, using 4VT's standard framework and a VisionPro Quickbuild script. Cards were fed onto a vacuum belt and triggered by a photoeye; if a card did not match, the system blew the product off at the air gap between the vacuum belt and the accumulator belts, at a rate of 10 cards per second.
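At 10 cards per second, the reject valve's firing time is set by the travel from the photoeye to the air gap. A tiny sketch of that timing arithmetic (the distance and card pitch are assumed figures, not measurements from the line):

```python
def blowoff_delay_ms(gap_distance_mm, belt_speed_mm_s):
    """Time from the photoeye trigger until the failed card reaches the
    air gap, where the reject valve fires. Figures are illustrative."""
    return 1000.0 * gap_distance_mm / belt_speed_mm_s

# At 10 cards/second with cards pitched 150 mm apart, the belt moves
# 1500 mm/s; a 300 mm run to the air gap gives a 200 ms valve delay.
print(blowoff_delay_ms(300, 1500))  # → 200.0
```

The delay budget also bounds how long the read-and-compare step may take: the match decision must be ready before the card reaches the gap.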
The existing smart sensor does not have the tools to determine whether label placement is out of spec. The Quality Department requested the ability to verify that the Lot/Date code is present.
The line was moved from another facility to a local one, and no documentation could be found. The existing controls system was reverse engineered, and the vision camera was upgraded to a Cognex In-Sight that mimicked the old system. The retrofit required some of the existing material handling components to be removed, modified, and reinstalled over a long weekend. The system was successfully brought online for the start of production.
An animal health sciences start-up was seeking a subject matter expert to develop a machine vision solution to determine the orientation of a baby chick's head for the application of vaccine.
For this SBIR Phase 1 project, a custom lighting and optical solution was developed to acquire images of baby chicks as they passed under the camera. 1,000 images were collected and annotated, and two-thirds of the images were then fed into deep learning algorithms for training. Detection performance was 97%, with an execution time of 19-22 ms.
After the nail polish bottle has been capped, a consumer label is applied to the top of the cap indicating the color name and product number. Space constraints prevent an inspection camera from being placed after the label applicator, so the vision system needs to compensate for the full 360° range of label orientations.
Using blob morphology techniques, the orientation of the text can be determined. An OCR tool was then applied to read the text twice, at orientations 180 degrees apart. If the text found did not match the selection on the HMI, the product was rejected.
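The two steps above — estimate orientation from blob morphology, then attempt OCR at two orientations 180° apart — can be sketched as follows. The moment-based angle estimate is a standard morphology technique; `ocr` here is a stand-in for the real OCR tool, and the read-twice logic is a simplified illustration of the decision:

```python
import math

def blob_orientation(mask):
    """Principal-axis angle (radians) of a binary blob from second-order
    central moments -- the morphology step that orients the text."""
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    n = len(pts)
    cx = sum(x for x, _ in pts) / n
    cy = sum(y for _, y in pts) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pts) / n
    mu02 = sum((y - cy) ** 2 for _, y in pts) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pts) / n
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

def rot180(img):
    """Rotate a row-of-lists image by 180 degrees."""
    return [row[::-1] for row in img[::-1]]

def read_label(img, expected, ocr):
    """Read once as found and once rotated 180 degrees; pass if either
    read matches the HMI selection."""
    return ocr(img) == expected or ocr(rot180(img)) == expected

# Demo with a stand-in OCR that "reads" the first row of the image.
ocr = lambda img: "".join(img[0])
upside_down = rot180([list("RED 042"), list("       ")])
print(read_label(upside_down, "RED 042", ocr))  # → True
```

The principal axis only resolves orientation modulo 180°, which is exactly why the text must be read twice.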
A scheduling system dictates the components to be assembled on a diesel engine head. This information is written to an RFID tag attached to a carriage. The client needed a way to validate that the correct engine head was loaded onto the carriage and to marry its information (production date, parent, and serial number) to the RFID tag. Seven different head types needed to be handled by this system.
Multiple Cognex Dataman cameras are oriented in a work cell. As the carriage enters the work cell and stops, an RFID reader gets the part type and passes it to the HMI software, which triggers one or two cameras. The cameras extract the date, parent, and serial number from a 2D Data Matrix dot-peen barcode, and the HMI software writes the data to the RFID tag and an edge database. If the parent number is not valid for the current part type, the error is flagged for an operator to review. In the event of a poor barcode, the operator has the ability to enter the information manually.
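The work-cell sequencing is essentially table-driven: part type selects the cameras, the decoded barcode is parsed into fields, and the parent number is validated before the RFID write. A minimal sketch of that decision flow (the field layout, part-type table, and head names are assumptions, not the plant's actual data):

```python
# Hypothetical lookup tables: which cameras fire per head type, and
# which parent numbers are valid for each type.
CAMERAS_FOR_TYPE = {"HEAD_A": [1], "HEAD_B": [1, 2]}
VALID_PARENTS = {"HEAD_A": {"P100"}, "HEAD_B": {"P200", "P201"}}

def process_carriage(part_type, barcode):
    """Return (action, payload) for one carriage stop.

    `barcode` is the decoded dot-peen string, assumed pipe-delimited as
    date|parent|serial for this sketch.
    """
    date, parent, serial = barcode.split("|")
    if parent not in VALID_PARENTS.get(part_type, set()):
        return "flag_operator", parent          # wrong head for this carriage
    payload = {"date": date, "parent": parent, "serial": serial}
    return "write_rfid", payload                # also logged to the edge database

print(process_carriage("HEAD_A", "20240105|P100|S-778"))   # valid -> write
print(process_carriage("HEAD_A", "20240105|P200|S-779"))   # wrong parent -> flag
```

The manual-entry fallback for a poor barcode would simply feed an operator-typed string into the same `process_carriage` path, so the validation rules stay in one place.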
From new to end of life, the welding tips used to spot weld the housing to the band produce varied results. From time to time, a weld would fail to fully engage, producing a light or missing weld.
5,000 images were collected and annotated, then applied to a neural network-based decision engine (a predecessor to modern deep learning algorithms). The system performed well once new tips were broken in after 2,000 cycles. (The average life cycle of a tip is 35k-40k cycles.)
International mail brought in through airline processing centers often arrives in large sacks with a 4”x6” card attached. Depending on the country of origin, some of these cards are handwritten and others are printed. The individual at the processing window must manually enter all of the information on the mail tag into a data entry system before the sack can be thrown onto a conveyor belt for processing. This creates a bottleneck in the workflow, since only so many transfer windows can be manned and managed to allow baggage/cargo trucks to be unloaded.
A portable prototype imager with a flat surface allowed the mail handlers to position the mail tag for image acquisition. At the start of a new load, the mail handler would enter the airline and flight number, along with the number of pieces. The handler would take an image of each tag, and the local vision PC would attempt to read all of the text on the tag, then present the image along with the data to the handler. Once approved, the data was saved to a database. If a tag was not readable, the image was sent to a remote location, where it was presented to a person for manual entry. This allowed a load to be reconciled and mail bags to be moved as quickly as the manual imaging process would allow.
The supplier of the machines that manufacture Permanent Residency Cards needed to ensure that the text printed on each card matches what is contained in the database. The system also had to ensure the person's face was printed clearly and verify the presence of several security features.
Standard lighting techniques and Optical Character Recognition routines were used to validate the text against the data contained in the database. Image correlation was used to validate the person's face printed on the cards. Multiple lighting techniques and LED wavelengths were combined with different image processing algorithms to verify the presence of all security features.
The supplier of the machines that manufacture Driver's Licenses needed to ensure the text printed on each card matched what is contained in the database, ensure the person's face was printed clearly, and verify the presence of the engraved birth date on the card.
Standard lighting techniques and Optical Character Recognition routines were used to validate the text against the data contained in the database. Image correlation was used to validate the person's face printed on the cards. A photometric stereo imaging technique with IR LED lights was used to extract the engraving and filter out the print on the driver's license.
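Photometric stereo separates engraving from print because engraving changes the surface normals while flat ink does not. A minimal sketch of the underlying math: with three or more images of the same card under known light directions, per-pixel normals fall out of a least-squares solve. The light directions and image data below are synthetic, and a real system would use calibrated IR illumination geometry:

```python
import numpy as np

# Three known light directions (roughly unit vectors).
L = np.array([[0.5, 0.0, 0.87],
              [-0.5, 0.0, 0.87],
              [0.0, 0.5, 0.87]])

# Synthetic scene: a flat card with one "engraved" pixel whose normal tilts.
h, w = 4, 4
true_n = np.zeros((h, w, 3))
true_n[..., 2] = 1.0
true_n[1, 1] = [0.3, 0.0, 0.95]
albedo = np.ones((h, w))

# Render the three images under the Lambertian model: I_k = albedo * (L_k . n).
I = np.einsum("kd,ijd->kij", L, true_n) * albedo

# Recover G = albedo * normal by solving L @ G = I per pixel, then normalize.
G = np.linalg.lstsq(L, I.reshape(3, -1), rcond=None)[0].T.reshape(h, w, 3)
normals = G / np.linalg.norm(G, axis=2, keepdims=True)

# The engraved pixel's tilt is recovered; printed ink would stay flat,
# so thresholding the normal deviation isolates the engraved birth date.
print(normals[1, 1])
```

Printed features only modulate albedo, which the normalization divides out — that is the filtering effect the case study describes.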