In 2020, we’ve seen the accelerated adoption of deep learning as a part of the so-called Industry 4.0 revolution, in which digitization is remaking the manufacturing industry. This latest wave of initiatives is marked by the introduction of smart and autonomous systems, fueled by data and deep learning—a powerful breed of artificial intelligence (AI) that can improve quality inspection on the factory floor.
The benefit? By pairing smart cameras with software on the production line, manufacturers are seeing improved quality inspection at high speeds and low costs that human inspectors can’t match. And given the mandated restrictions on human labor as a result of COVID-19, such as social distancing on the factory floor, these benefits are even more critical to keeping production lines running.
While manufacturers have used machine vision for decades, deep learning-enabled quality control software represents a new frontier. So, how do these approaches differ from traditional machine vision systems? And what happens when you press the “RUN” button for one of these AI-powered quality control systems?
To understand what happens in a deep learning software package that’s running quality control, let’s take a look at the previous standard. The traditional machine vision approach to quality control relies on a simple but powerful two-step process:
Step 1: An expert decides which features (such as edges, curves, corners, color patches, etc.) in the images collected by each camera are important for a given problem.
Step 2: The expert creates a hand-tuned rule-based system, with several branching points—for example, how much “yellow” and “curvature” classify an object as a “ripe banana” in a packaging line. That system then automatically decides if the product is what it’s supposed to be.
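The two-step process above can be sketched in a few lines of code. The feature names, values, and thresholds below are hypothetical illustrations, not taken from any real production system:

```python
# A minimal sketch of a traditional rule-based inspection check.
# All feature values and thresholds here are made-up examples.

def extract_features(image):
    """Step 1 stand-in: hand-engineered feature extraction.
    A real system would measure color patches, edges, curvature, etc."""
    return {"yellow_ratio": 0.82, "curvature": 0.65}  # dummy values

def classify_ripe_banana(features):
    """Step 2: expert-written branching rules decide pass/fail."""
    if features["yellow_ratio"] < 0.7:
        return "reject: not yellow enough"
    if not (0.4 <= features["curvature"] <= 0.9):
        return "reject: wrong curvature"
    return "accept: ripe banana"

print(classify_ripe_banana(extract_features(None)))  # → accept: ripe banana
```

The key property, which matters later in this article, is that the features and the rules live in separate places: the extractor changes rarely, while the expert-tuned thresholds must be rewritten for every new product or defect.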
This method was simple and effective enough. But manufacturers’ needs for quality control have rapidly evolved over the years, pushing demand to the next level. There aren’t enough human experts to support manufacturers’ increased appetite for automation. And while traditional machine vision works well in some cases, it is often ineffective in situations where the difference between good and bad products is hard to detect. Take bottle caps, for example—there are many variations depending on the beverage, and if one has even the slightest defect, you run the risk of having the whole drink spill out during the manufacturing process.
The new breed of deep learning-powered software for quality inspections is based on a key feature: learning from the data. Unlike their older machine vision cousins, these models learn which features are important by themselves, rather than relying on the experts’ rules. In the process of this learning, they create their own implicit rules that determine the combinations of features that define quality products. No human expert is required, and the burden is shifted to the machine itself! Users simply collect the data and use it to train the deep learning model—there’s no need to manually configure a machine vision model for every production scenario.
Data is the key to deep learning’s effectiveness. Systems such as deep neural networks (DNNs) are trained in a supervised fashion to recognize specific classes of things. In a typical inspection task, a DNN might be trained to visually recognize a certain number of classes, say pictures of good or bad ventilator valves. Assuming it was fed a good amount of quality data, the DNN will come up with precise, low-error, confident classifications.
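To make the supervised setup concrete, here is a toy version of the idea: a single logistic neuron trained on labeled examples of the two classes. A real inspection system would train a deep network on raw images; the two "features" and all the numbers below are invented for illustration:

```python
# Toy illustration of supervised good/bad training, not a real DNN.
# A single logistic neuron learns from labeled feature vectors.
import math

def train(samples, labels, epochs=500, lr=0.5):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - y  # gradient of the log-loss w.r.t. the pre-activation
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b))) > 0.5

# Made-up feature vectors: [seal_roundness, surface_smoothness]
good = [[0.9, 0.8], [0.85, 0.9], [0.95, 0.85]]
bad = [[0.3, 0.4], [0.2, 0.5], [0.4, 0.3]]
w, b = train(good + bad, [1, 1, 1, 0, 0, 0])
print(predict(w, b, [0.9, 0.9]))  # typical good valve  → True
print(predict(w, b, [0.3, 0.3]))  # defective valve     → False
```

Note that the model only knows the two classes it was shown: both features *and* rules are baked in during training, which is exactly why retraining is needed whenever the product or the defects change.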
Let’s look at the example of spotting good and bad ventilator valves. As long as the valve stays the same, all manufacturers have to do is hit the “RUN” button and inspection of the production line can begin. But if the line switches to a new type of valve, the data collection, training, and deployment must be performed anew.
For conventional deep learning to be successful, the data used for training must be “balanced.” A balanced data set has as many images of good valves as it has images of defective valves, including every possible type of imperfection. While collecting the images of good valves is easy, modern-day manufacturing has very low defect rates. This situation makes collecting defective images time-consuming, especially when you need to collect hundreds of images of each type of defect. To make things more complex, it’s entirely possible that a new type of defect will pop up after the system is trained and deployed—which would require that the system be taken down, retrained, and redeployed. With wildly fluctuating consumer demands for products brought on by the pandemic, manufacturers risk being crippled by this production downtime.
There may yet be a lesson to be learned from the traditional machine vision process for quality control that we described earlier. Its two-step process had an advantage: The product features change much more slowly than the rules. This setup meshes well with the realities of manufacturing, as the features of ventilator valves persist across different production types, but new rules must be introduced with each new defect discovered.
Conventionally, a deep learning model has to be retrained every time a new rule must be included. To do that retraining, the new defect must be represented by the same number of images as all the previous defects, and all the images must be combined in one database so that the system relearns all the old rules plus the new one.
To solve this conundrum, a different category of DNNs is gaining traction. These new DNNs learn rules in a much more flexible way, to the point that new rules can be learned without even stopping the running system and taking it off the floor.
These so-called continual or lifelong learning systems, and in particular lifelong deep neural networks (L-DNN), were inspired by brain neurophysiology. These deep learning algorithms separate feature training and rule training and are able to add new rule information on the fly.
While they still learn features slowly using a large and balanced data set, L-DNNs don’t learn rules at this stage. And they don’t need images of all known valve defects—the dataset can be relatively generic as long as the objects possess similar features (such as curves, edges, surface properties). With L-DNNs, this part of model creation can be done once, and without the help of the manufacturers.
What our hypothetical valve manufacturer needs to know is this: After the first step of feature learning is completed, they need only provide a small set of images of good valves for the system to learn a set of rules that define a good valve. There’s no need to provide any images of defective valves. L-DNNs will learn on a single presentation of a small dataset using only “good” data (in other words, data about good ventilator valves), and then advise the user when an atypical product is encountered. This method is akin to the process humans use to spot differences in objects they encounter every day—an effortless task for us, but a very hard one for deep learning models until L-DNN systems came along.
Rather than needing thousands of varied images, L-DNNs only require a handful of images to train and build a prototypical understanding of the object. The system can be deployed in seconds, and the handful of images can even be collected after the L-DNN has been deployed and the “RUN” button has been pressed, as long as an operator ensures none of these images actually shows a product with defects. Changes to the rules that define a prototypical object can also be made in real time, to keep up with any changes in the production line.
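The prototype idea can be sketched as a frozen feature extractor plus a running average of a few “good” examples: anything whose features fall too far from that prototype is flagged as atypical. This is only an illustration of the concept under simplified assumptions (images are treated as ready-made feature vectors, and the distance threshold is invented), not Neurala-style L-DNN internals:

```python
# Conceptual sketch of "good only" prototype inspection in the spirit
# of L-DNN. Illustrative only: images are assumed to already be feature
# vectors, and the threshold is a made-up number.
import math

def features(image):
    """Stand-in for a pretrained, frozen feature extractor."""
    return image

def build_prototype(good_images):
    """Average the feature vectors of a handful of good examples."""
    vecs = [features(img) for img in good_images]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_defective(image, prototype, threshold=0.3):
    """Flag anything too far from the good-valve prototype."""
    return distance(features(image), prototype) > threshold

# A handful of "good valve" feature vectors (made up for illustration)
good = [[0.9, 0.8, 0.85], [0.88, 0.82, 0.9], [0.92, 0.79, 0.87]]
proto = build_prototype(good)
print(is_defective([0.9, 0.81, 0.86], proto))  # typical   → False
print(is_defective([0.3, 0.2, 0.4], proto))    # atypical  → True
```

Because the prototype is just an average, updating the rules in real time amounts to folding new good examples into that average—no gradient retraining, no downtime.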
In today’s manufacturing environment, machines are able to produce extremely variable products at rates that can easily surpass 60 items per minute. New items are constantly introduced, and previously unseen defects show up on the line. Traditional machine vision could not tackle this task—there are too many specialized features and thresholds for each product.
When pressing the “RUN” button on quality control software that’s powered by L-DNN systems, machine operators can bring down the cost and time of optimizing quality inspection, giving the manufacturing industry a fighting chance of keeping up with the pace of innovation. Today, global manufacturers such as IMA Group and Antares Vision have already begun implementing such technologies to help with quality control, and I expect that we’ll see many others begin to follow suit in order to stay competitive on the global stage.
Source: IEEE Spectrum