The reshoring of manufacturing to stabilize supply chains, together with increased requirements on production quality, is driving rapid growth in automation. Tolerances are getting tighter, cost pressures are rising, and traceability requirements are making Machine Vision increasingly common across all manufacturing industries.
Some modern manufacturing processes just wouldn't be possible without Machine Vision or Deep Learning technology.
Machine Vision solutions are most effective when they are integrated strategically: they must enhance the manufacturing process and enable the team on the plant floor to be more effective.
Major hardware manufacturers prioritize selling hardware over complete solutions. Their positioning and messaging emphasize simplicity, yet when systems are deployed without specialist knowledge, the reality often fails to deliver on those promises.
The intricacies of machine vision are often underestimated. Optical and lighting design have a major impact on the success of a project; in some cases, the same light moved a few inches can produce a very different image.
Poorly executed solutions degrade confidence among stakeholders, diminishing the perceived value and potential of machine vision.
What is needed is a shift in industry norms, where machine vision solutions are sold not merely as hardware but as comprehensive solutions that include proper training and guidance, ensuring a higher success rate once deployed.
Even if initially successful, many deployed vision solutions do not sustain their accuracy or reliability throughout the entire product life cycle.
Factors such as external interferences (like light, vibration, or dust), wear and tear on fixtures, or even accidental bumps to cameras can change system parameters. Additionally, a lack of preventative maintenance can exacerbate these issues.
This decrease in performance leads to a rise in false positives, increasing inefficiency and the costs associated with secondary inspections or scrapping product.
Robust vision systems with a feature set that can adapt and be validated after changes, combined with a strong preventive maintenance protocol, ensure longevity and reliability.
Whenever a new product is introduced, there's often a need for a Subject Matter Expert (SME) to modify or adapt the existing machine vision solution.
Often, time and effort were not invested in the initial system design to handle multiple product families, and the system lacks a process for Production or QA staff to add new SKUs.
When adapting to new products is delayed, the vision system gets bypassed, leading to a potential loss of containment and the added cost of bringing in an expert.
A more modular and adaptable machine vision system, designed with the foresight of future product additions, makes transitions smoother and reduces the need for extensive expert interventions.
Quality Assurance (QA) teams, who are critical in ensuring product quality, often lack direct access to machine vision data.
Current systems don't always prioritize data accessibility for all stakeholders, whether because of system restrictions, data silos, or a lack of integration between machine vision and other enterprise systems.
Without access to relevant data, QA cannot effectively understand reject reasons or bring awareness to optimizing upstream processes, potentially leading to reduced product quality, missed non-conformities, and risks to consumer safety and brand reputation.
Transparent and accessible data solutions would give QA teams a comprehensive archive of rejected and passed products (images and inspection results), allowing for better analysis, optimization, and overall product quality.
Machine Vision in manufacturing is increasing in maturity, but it is still a specialized discipline.
Poorly architected inspection systems lead to...
• unpredictable long-term reliability,
• inspection data that is difficult to extract, and
• a lack of documentation and substandard support.
This only makes the job of the team on the production floor that much more challenging.
Off-the-shelf machine vision packages are great for rapid prototyping, but a fully functional vision system needs to interact with the rest of the industrial environment. And commercial deep learning packages require significant time collecting images, organizing the data, and feeding and tuning the deep learning network. A vision systems integrator's role is to drive the solution so that it interacts seamlessly, shorten the timeline to commissioning, and, most importantly, guarantee the system is reliable and consistent.
The era of Industry 4.0 is creating an environment where automation needs to capture the physical world and transform it into networked digital data; not only to drive efficiency, but also to provide information transparency. In an increasingly complex automation environment, today's Machine Vision solutions must also address intelligent support for Maintenance, Quality Assurance, and Process Engineers.
How do deep learning and machine vision solutions benefit manufacturing?
Deep learning and machine vision systems can be used to inspect, identify, and measure a product. With the increased processing power of CPUs and GPUs, more complex quality inspection problems can be solved with 3D or deep learning solutions. Regardless, a vision system's robustness depends on the machine vision integrator designing the system's architecture. This includes lighting, optics, camera hardware, a control system, and, most importantly, making it all work within the existing production equipment.
It's true that cutting material costs can save pennies per unit and, when done over millions of units, the savings can be significant. However, another approach is for a machine vision integrator to strategically place systems inline with the process at specific intermediate steps. This way, rejected products won't have further value added to them.
Unfortunately, opening up tolerances to minimize false positives on a vision system is a typical response; doing so can increase the risk of a true reject making it out into the marketplace. One of 4th Vector Technologies' strategies for mitigating this risk is to keep a directory of images on hand, against which an offline playback script can validate tolerance changes.
Capturing trending data of inspection results for both rejected and good products is important for tracking performance. A vision systems integrator can also enhance a system by having it monitor inspection result trends against inspection tolerances. If inspection result trend lines are creeping up toward tolerance thresholds, this could indicate that it is time for preventative maintenance on an upstream process, before marginal conformances become true rejects.
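As a minimal sketch of that kind of trend monitoring (the measurement, window, and margin values here are hypothetical), a rolling mean of a gauged dimension can be compared against a fraction of its reject tolerance:

```python
from statistics import mean

def trend_alert(measurements, tolerance, window=50, margin=0.90):
    """Flag when the rolling mean of an inspection measurement creeps
    past a fraction (margin) of its reject tolerance."""
    if len(measurements) < window:
        return False
    return mean(measurements[-window:]) >= margin * tolerance

# A hypothetical gap measurement with a 2.0 mm reject threshold,
# drifting upward as an upstream fixture wears:
history = [1.2 + i * 0.02 for i in range(60)]
if trend_alert(history, tolerance=2.0):
    print("Trend nearing tolerance -- schedule upstream preventative maintenance")
```

In practice the alert would feed a dashboard or an email to the Process Engineer rather than a console print.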
Traceability is becoming increasingly important to minimize recall costs and prevent counterfeit products from being injected into the supply chain. In the event of a recall, the affected population can be narrowed down to only the assembled products containing the defect.
A vision systems integrator can collect and send system data to the plant's SCADA or MES. Alternatively, 4th Vector Technologies has a standalone fileserver, database, and dashboard solution that is the size of a shoebox and can be installed on the production floor.
Whether it is a PLC, robot, or .NET program, a vision systems integrator should design their systems with fault tolerance in mind. If an error or fault occurs, the program needs to exit its routine cleanly and log errors so the fault can be identified later.
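A minimal sketch of that pattern in Python (the acquire and inspect callables are placeholders for whatever the real acquisition and inspection routines do):

```python
import logging

# On a production PC this would point at a log file for Maintenance:
logging.basicConfig(level=logging.ERROR,
                    format="%(asctime)s %(levelname)s %(message)s")

def run_inspection_cycle(acquire, inspect):
    """Run one acquire/inspect cycle. On any fault, log which step
    failed and exit the routine cleanly instead of crashing."""
    step = "acquire"
    try:
        image = acquire()
        step = "inspect"
        return inspect(image)
    except Exception as exc:
        logging.error("Fault during %s step: %s", step, exc)
        return None  # a clean 'no result' the control system can handle
```

The logged step name is what lets Maintenance tell a camera-side fault from an algorithm-side fault at 3 a.m.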
A lean manufacturing process is dependent on minimizing downtime. When a system goes sideways, a lack of documentation makes it harder for Maintenance to diagnose the problem. A vision systems integrator must create training and documentation that is targeted towards both the Operator and Maintenance staff.
COVID-19 has forced many industries to pivot and, during that time, 4th Vector Technologies has matured its remote support package. A small, open-source, IT-friendly firewall/VPN now comes standard with our turnkey solutions. A toggle switch on the front of the control panel enables Maintenance to turn it on only when needed.
A discovery process is required to understand your unique requirements so that 4VT can design and build a vision solution that meets your objectives and manufacturing needs regarding performance, reliability, and adaptability.
The Project Brief allows us to collect some necessary information about you, the best time to call back, and some details on your objectives. Filling out the form should take less than 10 minutes.
You will receive a callback or email engagement from us either the same day or on the next business day. The Project Brief and notes from our conversation will allow 4VT to dive deeper into the discovery process.
To determine the best components and approach for using machine vision technology to meet your objectives, we will need to do some testing in our lab.
Ideally, you should collect 5-10 good products and several rejected products representing the defects you are looking to detect. Once collected, ship us the samples.
Some products might not be practical to ship or may require our eyes on the overall process, in which case one of 4VT's team members will come to your facility.
After we have received your samples, testing will begin in our lab. These samples will allow 4VT to determine the lighting requirements to enhance the visibility of the features you are interested in inspecting and suppress the contrast of everything that is not important.
Then 4VT will dig into the math related to the optics, camera, and acquisition system to ensure that any hardware selected will meet your objectives.
Once we are done in the lab and finish checking the optics/performance numbers in our spreadsheets, we will email screenshots or a little demo video of our preliminary results.
The final stage of the discovery process is to flesh out the Functionality & Constraints of the system and submit them for your review. Once signed off, budgetary numbers can be generated.
The purpose of lighting in machine vision is to:
• enhance the contrast of features we are interested in inspecting,
• suppress the contrast of everything that is not important, and
• minimize external influences.
Understanding when and where to use particular lighting techniques is like a boxer learning when to apply a jab, cross/hook, or uppercut.
There is the theory of what should work, and then there is what will actually work without interfering with production. Environmental factors need to be taken into consideration when installing a vision system: hot or cold rooms, conveyor vibration, space restrictions from surrounding equipment, and interface requirements. The system's design engineer needs to review the plan with Production's maintenance crew to avoid interfering with routine preventative maintenance of existing equipment.
80% of a project's success is in the lighting and optical design. Not all glare, uneven illumination, motion blur, optical distortion, overhead lighting, or dirt can be overcome with magical filters or cutting-edge algorithms in software. It is better to light it right than to write around it!
Back-of-the-napkin calculations using similar triangles to calculate Field of View and Working Distance work for most cases when the subject is more than 300mm away. Still, there are other details to be considered when trying to produce a quality image. We'll use this comic strip from Buddy Gator Comics as an example.
After taking some pixel measurements in an art program and assuming the camera is a Canon EOS 80D, it can be determined that the lens in use is 18mm. The distance from the Rabbit to the camera is 334mm, and the distance from the Elephant to the camera is 3503mm. Keeping both in focus requires a depth of field of 3170mm, which works out to an f/# of 116. The pupil diameter of the aperture would be 0.15mm (2x the thickness of a human hair).
With an f/# of 116, that would require more light than a bright summer day at noon, and the image would still have low contrast and appear blurry. The low contrast is due to the modulation transfer function (MTF) of the lens. Edmund Optics has a good article explaining what MTF is.
Why does it matter? Without a high-contrast image, gauging results would have high variance, and pattern matching inspection time would increase dramatically. It looks doable, but the Math Says No. There is no need to "try and buy" to see if it will work.
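The back-of-the-napkin math above can be sketched in a few lines. The thin-lens approximations, the 80D's 22.3 mm sensor width, and a one-pixel (~3.9 µm) circle of confusion are assumptions, so the result lands near, not exactly on, the article's f/116:

```python
def fov_width(sensor_width_mm, working_distance_mm, focal_length_mm):
    """Similar-triangles field of view for a subject well beyond the lens."""
    return sensor_width_mm * working_distance_mm / focal_length_mm

def required_f_number(focal_mm, near_mm, far_mm, coc_mm):
    """f/# needed for the depth of field to span near..far.
    Thin-lens approximation: 1/near - 1/far = 2/H, where the
    hyperfocal distance H = f^2 / (N * c) + f."""
    hyperfocal = 2.0 / (1.0 / near_mm - 1.0 / far_mm)
    return focal_mm ** 2 / ((hyperfocal - focal_mm) * coc_mm)

# Assumed Canon EOS 80D figures: 22.3 mm sensor width, ~3.9 um (one-pixel)
# circle of confusion. Rabbit at 334 mm, Elephant at 3503 mm, 18 mm lens.
print(round(fov_width(22.3, 334.0, 18.0)))   # ~414 mm wide at the Rabbit
f_num = required_f_number(18.0, 334.0, 3503.0, 0.0039)
print(round(f_num), round(18.0 / f_num, 2))  # lands near f/115 and a ~0.16 mm pupil
```

The exact f/# depends on which circle-of-confusion value is plugged in, but any reasonable choice lands in triple digits, which is the point of the exercise.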
When a Vision System Engineer collects the product's images, they deconstruct which "features" determine a good product versus a reject. Lighting is adjusted to enhance those features (or their absence). Each feature is codified into the system with a set of rules and respective thresholds.
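For illustration, codified features might look like the following sketch; the feature names and thresholds are invented for a hypothetical bottling inspection:

```python
# Hypothetical codified features; each rule returns True when the
# extracted feature value is within tolerance.
RULES = {
    "cap_present":    lambda f: f["cap_score"] >= 0.80,
    "fill_level_mm":  lambda f: 118.0 <= f["fill_level_mm"] <= 122.0,
    "label_skew_deg": lambda f: abs(f["label_skew_deg"]) <= 2.0,
}

def grade(features):
    """Return ('pass', []) or ('reject', [names of failed rules])."""
    failed = [name for name, ok in RULES.items() if not ok(features)]
    return ("pass" if not failed else "reject", failed)

print(grade({"cap_score": 0.95, "fill_level_mm": 119.5, "label_skew_deg": 0.4}))
# -> ('pass', [])
```

Keeping the thresholds in one table like this is what makes it practical to adjust a tolerance later and re-validate.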
The effort put into collecting images of product variances and changes in orientation helps produce a robust system. The challenge is that not all variances created by the process or material handling are available, or can be predicted, at the time a project is initially commissioned.
As time goes by, process and production variances can cause false rejects, and tolerances are "opened up" to reduce the number of good products being rejected. Unfortunately, if inspection tolerances are opened too far, there is a potential for true non-conforming products to reach the consumer. A lack of validation or reference images makes it difficult to know what has changed between now and when the system first went live.
Red herring tests could mitigate bad products getting accepted, but consider what could be possible if a vision system had a directory of images against which it could perform a "Validation Check" every time a tolerance is changed.
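Such a Validation Check can be sketched as a replay over a folder of reference images with known-correct outcomes; the `inspect_image` callable and the pass/reject folder layout are assumptions for illustration:

```python
from pathlib import Path

def validation_check(reference_dir, inspect_image):
    """Replay a directory of reference images after a tolerance change.
    Images sit in 'pass/' and 'reject/' subfolders according to their
    known-correct outcome; any disagreement is returned."""
    mismatches = []
    for expected in ("pass", "reject"):
        for img_path in sorted(Path(reference_dir, expected).glob("*.png")):
            result = "pass" if inspect_image(img_path) else "reject"
            if result != expected:
                mismatches.append((img_path.name, expected, result))
    return mismatches  # an empty list means the tolerance change validated
```

If the list comes back non-empty, the new tolerances either reject known-good parts or, worse, pass known-bad ones, and the change can be rolled back before the line ever sees it.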
At 4th Vector Technologies, Machine Vision, Deep Learning, and related disciplines are what we do. It is our craft. Our Founder grew up blue collar in a manufacturing town and, having integrated industrial machine vision systems since 1999, understands the needs of the team on the production floor.
Here is how our integrated solutions focus on your team's success:
Systems are architected by someone with 20+ years of experience in industrial machine vision integration who can design a robust solution
Installing machine vision systems between high value-add processes removes rejects before they become completed products, mitigating the need for your team to run extra shifts to catch up on production quotas at the end of the month
When changed tolerances can be validated against test data (that was collected and organized by our team), it reduces the chance of you getting a call back during 3rd shift
Give your team alerts when trends shift so that potential problems can be addressed before they occur
Auto-saving images/data by accept, warning, reject, product number, and shift saves your team time in collating the data
Production floor database/file server solutions allow your team to begin collecting and tweaking data before getting IT involved
Your team will know from our error logs what step or line of code caused a fault
If the Field Manual (or thumb drive system backup) ever gets lost, we'll gladly provide your team a new one
Our Remote support package is IT-friendly and shortens our response time for supporting your team
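The auto-save organization mentioned above can be sketched as a simple path-building convention; the root folder, product number, and file naming are illustrative:

```python
from datetime import datetime
from pathlib import Path

def archive_path(root, product_number, result, shift, stem=None):
    """Build a save path keyed by product number, inspection result
    (accept/warning/reject), and shift, so saved images and data
    collate themselves as they are written."""
    stem = stem or datetime.now().strftime("%Y%m%d_%H%M%S_%f")
    return Path(root) / product_number / result / f"shift_{shift}" / f"{stem}.png"

p = archive_path("archive", "SKU-1042", "reject", shift=3, stem="unit_000123")
# create p.parent (mkdir with parents=True) before writing the image
print(p.as_posix())   # archive/SKU-1042/reject/shift_3/unit_000123.png
```

Because the folder tree mirrors the questions QA actually asks ("show me third-shift rejects for this SKU"), no after-the-fact sorting is needed.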
We invest the time to understand your unique requirements. We will design and build a vision solution that meets the functionality and constraints for your specific manufacturing needs regarding performance, reliability, and adaptability.
Whether the application is Inspection, Identification, Vision-Guided Robotics, 3D Vision, or Deep Learning: We Own The Result.
We can provide stand-alone turnkey Machine Vision systems with Cognex Dataman, In-Sight, VisionPro, or MvTec Halcon for a wide range of manufacturing, assembly, and packaging applications. Our in-house front-end HMI decouples the Camera Acquisition and Vision Algorithms so that we can choose the best hardware and software to solve the problem.
As Machine Vision and Deep Learning Integrators, we can surgically add machine vision into existing equipment and ensure consistent quality after changeover. We can also engineer in excess capacity to meet future needs for faster line rates. Our understanding of the math behind the optics of a machine vision system, and our experience in illumination design, allow us to know, prior to an installation, what we can make fit and keep robust in the available space on a machine.
Our engineering studies can narrow down a project's risk and cost by putting together a proof-of-concept demo. We start by bringing samples into our lab (or going onsite) and piecing together a lighting mockup system to acquire a series of images. From there, we develop a vision script and tabulate results and performance. All of this is then summarized into a report.
Engineering time and key components may qualify for up to 60% of the cost as a Research and Innovation tax credit in the US.
Quality inspection and traceability are becoming essential factors in both recalls and prevention of counterfeit products introduced into the supply chain. Getting the data and tracking trends from the machine vision systems on the production floor can be a challenge, and with our software and database experience, we can bridge the gap and save time.
A crucial part of keeping a machine vision system running effectively is verifying that all devices, equipment, and software that comprise a vision system are "five by five." Regular quarterly checkups and routine maintenance can prevent unexpected downtime.
Read our Tech Briefs for information on lighting, optics, algorithms and trends in industrial machine vision technology.
Industrial Machine Vision and Deep Learning use cameras, lighting, and computer processors to automatically extract information from digital images, with ruggedized hardware suitable for a manufacturing environment. As product moves along a production line, the system analyzes features against defined criteria, and products with features outside of tolerance are removed from the line.
Deep learning and machine vision used in inspection applications can perform presence or absence verification, gauge a product's dimensions, or look for surface defects or contaminants.
Deep learning and machine vision used in identification applications can read data codes, barcodes, and printed lettering, and locate unique patterns on products based on color, shape, or size.
Machine vision used in guidance applications can locate a part's position and orientation in 2D or 3D space. The positional information can be sent to a robot controller so that a robot can position an end-effector to pick or place a part. Alternatively, it can set a camera into an optimal position for inspection.
Multiple cameras, or a laser projection paired with one or more cameras, can be used to generate points for a 3D image. Point clouds can produce a highly accurate surface representation of a product so that features can be found and measured where traditional 2D machine vision technology might fall short.
Deep Learning can be a powerful tool for automating industrial quality inspection because it excels at finding anomalies or non-conformances. Some defects easily found by a deep learning system would be complicated to quantify with a traditional rule-based machine vision system.
4th Vector Technologies specializes in Machine Vision Integration and Deep Learning Integration and currently serves industries in NC, SC, and VA. We integrate top-tier commercial machine vision packages, including MvTec Halcon and Cognex's full product line. We provide industrial vision solutions for inspection, identification, gauging, barcode reading, OCR, guidance, 3D vision, and image-based deep learning.
We also provide preventative maintenance and support contracts, industrial vision research and development feasibility studies, and serve as subject matter experts on SBIR projects.