Computer Vision in Robotics
Computer vision systems are revolutionizing the manufacturing and automation industry, improving the quality of products and providing greater efficiency and safety. Here's how they work.
A Short Computer Vision Story
Computer vision is a field of study that enables computers to replicate the human visual system.
It has been an active field of research for around sixty years, advancing in fits and starts: long quiet stretches punctuated by sudden breakthroughs that open up new progress.
Within this field, there are numerous components to consider, but today we will be focusing specifically on computer vision systems in the manufacturing and automation industry. By incorporating computer vision systems, manufacturers are able to reduce human error and automate tasks, improve product quality and increase the speed of production. Additionally, they can be used to monitor the production line, detect defects and ensure safety protocols are being properly followed. So, they’re really useful, and we do love them.
What are Computer Vision Systems?
Computer vision systems analyze and interpret digital images or videos to extract information.
They aren’t the computer’s eyes (I mean, they are), but mostly, they are the computer’s brain.
They’ll use algorithms and other techniques, enabling them to recognize items, understand scenes and track objects, amongst other things. There are several types of computer vision systems, ranging from 1D to 3D, and though the process does vary, the overall goal remains the same: perceive, recognize, and, more importantly, understand.
The output of the computer vision system can then be used for various applications, such as guiding robots, identifying objects for inspection, counting and measuring parts, and monitoring and controlling industrial processes.
A computer vision system will usually include:
- A camera to capture the visual information
- A set of algorithms and computer vision techniques (which vary widely depending on whether 1D, 2D or 3D is used)
- An analysis stage that extracts information from the digital images or videos
- An output stage, where results are generated and fed into control and decision-making processes
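To make that pipeline concrete, here is a minimal capture-analyze-output sketch in Python with OpenCV. It assumes a webcam at index 0, and the threshold and area values are purely illustrative; a real system would use calibrated optics and far more robust analysis.

```python
# Minimal capture -> analyze -> output loop with OpenCV (a sketch, not a product).
# The camera index, threshold and area values are purely illustrative.
import cv2

cap = cv2.VideoCapture(0)                     # camera: captures the visual information
if not cap.isOpened():
    raise RuntimeError("no camera found at index 0")

while True:
    ok, frame = cap.read()                    # acquisition
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                # preprocessing
    _, mask = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)    # segmentation
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)       # analysis

    # output generation: a yes/no signal a robot controller or PLC could consume
    part_present = any(cv2.contourArea(c) > 5000 for c in contours)
    print("part present" if part_present else "no part")

    cv2.imshow("mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):     # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```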
The global 3D machine vision market is growing year on year and is expected to reach USD 3.46 billion by 2027, driven by the increasing demand for quality inspection and automation across industries.
It’s a system that is gaining traction, and as demand for automation increases, computer vision is becoming an increasingly important technology for robotics. Thanks to computer vision, robots can perform more complex tasks with greater accuracy, resulting in improved safety, efficiency, and productivity. It’s a gold mine.
Types of Computer Vision Systems in Manufacturing Industries
In manufacturing industries, computer vision is generally used on robots to automate certain tasks, such as robot guidance for random bin-picking, picking and placing in logistics, or quality control for part inspection and measurement. Let’s skip 1D computer vision and move directly to the interesting ones: 2D and 3D.
2D Vision
2D vision is used widely in the manufacturing industry, and some of our fiercest competitors here at Inbolt build on it. It isn't problematic per se, and it has been deployed across industries for years for inspection, measurement, and quality control purposes. These applications rely on relatively simple artificial intelligence (AI) algorithms that analyze images and detect objects based on their physical characteristics (shape, size, contour).
A 2D machine vision system processes a flat, two-dimensional image of a target object, with no height or depth information. This limits it in applications where shape information is crucial, but it is widely used for many tasks: feature verification, dimension checking, barcode reading, character recognition, label verification, surveillance, object tracking, and presence detection. The algorithms used in 2D vision systems are effective for basic inspection tasks but fall short on more complex, sophisticated tasks that require a deeper understanding of objects in three dimensions.
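As an illustration of the kind of shape-and-size check described above, here is a minimal 2D inspection sketch using OpenCV. The file name, nominal area and tolerance are made-up values.

```python
# Sketch of a 2D shape-and-size check: threshold the image, find the part's
# contour, then compare its area against a nominal spec.
# "part.png", the nominal area and the tolerance are placeholder values.
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)      # assume the largest blob is the part

area = cv2.contourArea(part)
x, y, w, h = cv2.boundingRect(part)

NOMINAL_AREA = 12000.0                         # illustrative spec, in pixels
TOLERANCE = 0.05                               # +/- 5 %
ok = abs(area - NOMINAL_AREA) / NOMINAL_AREA <= TOLERANCE
print(f"area={area:.0f}px, bbox={w}x{h}px -> {'PASS' if ok else 'FAIL'}")
```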
Technically, a 2D setup can recover 3D information using specific methods such as multiple cameras or lasers, though even with multiple cameras the system relies on their relative positioning and orientation to calculate depth accurately.
2D vision also has drawbacks, especially for model-based tracking in environments where lighting is a sensitive factor: it works best with textured objects, requires precise calibration, and needs a powerful hardware platform to achieve decent frame rates.
3D Vision
3D vision systems are the USB Type C of the industry. Remember that strange period when people kept telling you your old USB cables were has-beens and Type C would soon replace them, and we'd laugh because "what a stupid concept", and yet three years later everything is Type C? Right.
3D vision systems analyze and interpret three-dimensional data, such as point clouds, 3D models, or stereo images, using algorithms and techniques from computer vision to extract information from that data. Both 2D and 3D algorithms are complex, but 3D data is easier to exploit when the analysis is based on 3D models, which makes 3D scanning, quality control, and model-based tracking applications more efficient when 3D data is used as input. The result is a faster, more accurate vision system that can be used to optimize production processes, improve quality control, and achieve better results in a variety of industries, especially automation. The 3D data itself can be constructed in several ways; the article linked below goes into detail, but here is a condensed version.
Read our article on 2D vs 3D vision here.
Laser triangulation is one of the most commonly used methods for 3D machine vision. It pairs an active light source with a camera mounted at an angle: the laser is projected as a cross-sectional line, the line is deformed by the object's shape, and the camera recovers a detailed height profile from that deformation.
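The geometry behind this boils down to simple trigonometry. Here is a minimal sketch assuming a configuration where the laser hits the surface at a known angle from vertical and the camera looks straight down; the angle, image scale and measured pixel shift are illustrative numbers.

```python
# Laser triangulation: recover a height change from the lateral shift of the
# laser line. Assumed setup: laser tilted at a known angle from the surface
# normal, camera looking straight down. All numbers are illustrative.
import math

laser_angle_deg = 30.0      # angle between the laser beam and the surface normal
mm_per_pixel    = 0.05      # image scale at the working distance (assumed)
pixel_shift     = 42.0      # measured lateral shift of the laser line, in pixels

lateral_shift_mm = pixel_shift * mm_per_pixel
height_mm = lateral_shift_mm / math.tan(math.radians(laser_angle_deg))
print(f"height change: {height_mm:.2f} mm")
```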
Stereo vision is another method, using two cameras at different viewpoints to capture images of the same object. By comparing the images captured by each camera, the machine vision system can calculate the depth and shape of the object. This method is more commonly used for applications like obstacle detection, autonomous navigation, and robot guidance. A variant is temporal stereo, where a single lens in motion takes two pictures, producing a stereo pair as if there were two lenses.
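In code, stereo depth comes down to matching pixels between the two views and converting the resulting disparity d into depth with Z = f * B / d. The sketch below uses OpenCV's block matcher; the image files, focal length and baseline are placeholder values, and the pair is assumed to be already rectified.

```python
# Depth from a rectified stereo pair: disparity d maps to depth via Z = f * B / d.
# "left.png"/"right.png", the focal length and the baseline are placeholder
# values, and the images are assumed to be already rectified.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # BM output is fixed-point

focal_px = 700.0        # focal length in pixels (from calibration)
baseline_m = 0.12       # distance between the two cameras, in metres

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
print("median scene depth:", float(np.median(depth_m[valid])), "m")
```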
The time-of-flight method measures the time it takes for light to travel from the machine vision system to the object and back; it works much like radar, but with light instead of radio waves. By measuring this round-trip time, the system can calculate the distance to the object and build a 3D image. It is often used in applications where high precision and accuracy are required (collision-avoidance systems in the automotive industry, for instance).
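The distance calculation itself is just the round-trip time multiplied by the speed of light and halved. A tiny worked example, where the 6.67 ns timing is an invented measurement:

```python
# Time of flight: distance = speed of light * round-trip time / 2.
# The 6.67 ns timing below is an invented measurement (roughly one metre away).
SPEED_OF_LIGHT = 299_792_458.0      # m/s
round_trip_s = 6.67e-9              # measured round-trip time of the light pulse

distance_m = SPEED_OF_LIGHT * round_trip_s / 2
print(f"distance to object: {distance_m:.3f} m")
```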
Finally, structured light works by projecting a pattern of light onto the object being viewed. It is similar to laser triangulation, but instead of a single beam it uses a whole field of light. The camera captures the distorted pattern, and by analyzing the geometric distortions the machine vision system can determine the shape and depth of the object.
Active stereo vision and time-coded structured light are other options, though we won't go into detail here.
Each of these methods can produce 3D data of varying accuracy and quality depending on the application. None of them is inherently better or worse; the right choice depends on the mission you wish to bestow upon your robot.
Now, we are obviously biased, but there are a few reasons why 3D can be better for your automation process. Light agnosticity, for one: 3D can be used in the dark, while 2D is affected by even small changes in lighting. Depth perception is another: it makes grasping possible, which is quite useful for picking missions.
Algorithms that analyze and interpret 3D data
These are specifically designed to interpret generated 3D data, in particular data derived from SLAM (Simultaneous Localization and Mapping) and 3D reconstruction technologies.
This type of data provides valuable insights into the physical environment and its features, enabling us to make informed decisions based on the data that has been collected. These algorithms can be used in a wide range of applications, from creating detailed 3D models of physical objects to providing augmented reality experiences.
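As one example of what such an algorithm can look like in practice, the sketch below registers a reference point cloud of a workpiece against a scan with ICP (iterative closest point) to recover its pose, using the open-source Open3D library. The file names, voxel size and distance threshold are assumptions, and this is a generic illustration rather than Inbolt's own pipeline.

```python
# Register a reference point cloud of a workpiece against a scan with
# point-to-plane ICP to recover its pose. File names, voxel size and the
# correspondence distance are assumptions; this is a generic illustration,
# not Inbolt's pipeline.
import numpy as np
import open3d as o3d

model = o3d.io.read_point_cloud("workpiece_model.ply")   # reference model
scan = o3d.io.read_point_cloud("scene_scan.ply")         # output of the 3D camera

voxel = 0.005                                            # 5 mm downsampling
model_d = model.voxel_down_sample(voxel)
scan_d = scan.voxel_down_sample(voxel)
for pc in (model_d, scan_d):
    pc.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))

result = o3d.pipelines.registration.registration_icp(
    model_d, scan_d,
    max_correspondence_distance=2 * voxel,
    init=np.eye(4),                                      # assumes a rough initial guess
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane(),
)
print("fitness:", result.fitness)
print("model-to-scene transform:\n", result.transformation)   # 4x4 pose of the workpiece
```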
As for programming libraries (pre-written chunks of code that developers can use to build applications more easily), OpenCV (the Open Source Computer Vision Library, designed for computational efficiency with a strong focus on real-time applications) and Halcon (a proprietary machine vision library more widely used in industrial automation) are among the most widely used in 3D vision.
When it comes to building the intelligence behind a specific automation process, the options abound, and the decision is ultimately driven by the outcome you are after. Most companies settle on a method that suits their needs and lets them handle the particular constraints of their automation applications.
Applications of Computer Vision Systems in the Manufacturing and Automation Industry
Computer vision is very useful in the manufacturing industry because, unlike human eyes, computers do not get tired. As such, vision systems are given a variety of roles:
- Automated assembly
- 3D vision monitoring
- Quality inspection (anomaly detection)
- Safety (object tracking)
- Predictive maintenance
And much more.
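To make the quality-inspection item above concrete, here is a toy anomaly-detection sketch: it compares an image of the current part against a "golden" reference and flags pixels that differ. The file names and thresholds are invented, and it assumes the two images are already aligned.

```python
# Toy anomaly detection: compare the current part against a "golden" reference
# image and flag regions that differ. File names and thresholds are invented,
# and the two images are assumed to be aligned (same pose, same framing).
import cv2

golden = cv2.imread("golden_part.png", cv2.IMREAD_GRAYSCALE)
current = cv2.imread("current_part.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(golden, current)                          # per-pixel difference
_, defects = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
defects = cv2.morphologyEx(defects, cv2.MORPH_OPEN, kernel)  # drop single-pixel noise

defect_pixels = cv2.countNonZero(defects)
print("DEFECT" if defect_pixels > 50 else "OK", f"({defect_pixels} anomalous pixels)")
```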
Companies like Tesla are pioneering the complete automation of their manufacturing processes, using computer vision as the central nervous system to control and coordinate all the moving parts.
Challenges and Limitations
Industrial machine vision represents the future of smart, automated manufacturing.
As the "eyes" of the industry, computer vision can be used for non-contact measurement and for detecting elements invisible to the human eye, and it operates continuously 24/7, even in challenging work conditions.
However, not all vision systems have been created equal.
2D has a lighting constraint, which we discussed earlier.
Hardware is another constraint. Industrial machine vision is held back by imperfect camera lens distortion correction, inconsistent calibration, a limited viewing angle range, demanding installation conditions and site requirements, and more.
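Calibration and distortion correction are usually tackled with a checkerboard procedure. Here is a minimal sketch using OpenCV; the 9x6 board, the 25 mm square size and the file names are assumptions.

```python
# Checkerboard calibration: estimate intrinsics and lens distortion, then
# undistort an image. The 9x6 board, 25 mm squares and file names are assumed.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                          # inner corners of the board
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # metres

obj_points, img_points = [], []
for path in glob.glob("calib_*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points,
                                         gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)

undistorted = cv2.undistort(cv2.imread("calib_0.png"), K, dist)  # corrected image
```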
Another challenge is computing power: industrial products have complex architectures, and the models needed to handle them are demanding. If the device terminal doesn't have enough memory, the model must be trained in the cloud, which adds extra work and slows down real-time performance.
And finally, developing machine vision system components, including vision sensors and the underlying vision software, often requires a substantial investment.
The inbolt advantage
This isn't the case with Inbolt. Inbrain is today's most efficient 3D matching vision technology: AI-based, it processes massive amounts of 3D data at high frequency, identifies the position and orientation of a workpiece, and adapts the robot trajectory in real time, which makes it ideal for manufacturing automation.
Computer vision has come a long way in recent years, and its applications in fields such as manufacturing and automation have been instrumental in boosting efficiency and productivity. It has become an invaluable tool for many industries, and its capacity to improve processes in these industries is virtually limitless.
The benefits of computer vision in industry are indisputable and continue to grow, evolve, and improve along with the technology. There are opportunities like never before as the technology becomes more mainstream and more easily accessible to large and small industries alike.
Reach out to us to learn more about how you can automate your production process.