Machine Learning System Detects Manufacturing Defects using Photos

July 26th, 2019, in Hardware


A prototype L16 inside an Instrumental station. Photo Credit: Light

Machine learning can be used for more than violating your privacy for a social media challenge.

For example, one fascinating application has been developed by Instrumental AI, which uses machine learning to detect defects and anomalies in photographs of parts during various stages of assembly, primarily in the electronics manufacturing industry.

Instrumental was founded by Anna Shedletsky, a former Apple engineer with two degrees from Stanford. A mechanical engineer by training, Shedletsky spent six years at Apple, where she led system product design for the Apple Watch. “I led the team that designed the physical product itself, as well as being responsible for the first production line. We had to prove mass production yields at mass production speeds on the first line,” said Shedletsky.

In this role at Apple, she noticed how very small defects on the line could cause huge delays and problems in the product lifecycle and profitability. So, she set out to build technology tools that could address these defects. “I had this kind of naïve, but correct insight that there aren’t great tools for this, that we struggle by just throwing people and money at these problems. And if Apple didn’t have it, it didn’t exist.”

Shedletsky left Apple in 2015 to start Instrumental, a software company that has developed a toolkit for detecting manufacturing defects in digital photos taken on the manufacturing line. “We’ve been at it for four years. We work primarily with Fortune 500 companies in the consumer electronics and electronics space. We do have customers outside of electronics, but consumer electronics is kind of the hardest beast, because the volumes are pretty high,” she explained. “The products themselves are high value, but they go through production super fast. So, some of our customers only run products in production for six months. It’s just these super fast cycles of development and production. So that’s where we kind of specialize today. We build technology that helps accelerate the development process and then maintain control in production.”

The system creates a database of images taken during selected stages of assembly. This database can then be analyzed and searched to find specific defects or uncategorized anomalies.
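As a rough illustration of what such a searchable per-unit image record might look like, here is a minimal sketch using SQLite. The schema and field names are assumptions made for illustration, not Instrumental's actual data model.

```python
# Minimal sketch of a searchable per-unit image database (illustrative schema,
# not Instrumental's actual data model).
import sqlite3

conn = sqlite3.connect("inspection.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS unit_images (
        serial_number TEXT,
        station       TEXT,   -- assembly stage where the photo was taken
        captured_at   TEXT,   -- ISO-8601 timestamp
        image_path    TEXT,
        defect_label  TEXT    -- NULL until an engineer (or a model) tags it
    )
""")
conn.execute(
    "INSERT INTO unit_images VALUES (?, ?, ?, ?, ?)",
    ("SN-000123", "main-board-install", "2019-07-26T08:15:00",
     "images/SN-000123_main_board.jpg", None),
)
conn.commit()

# Later: pull every photo from a given stage that was tagged with a given defect.
rows = conn.execute(
    "SELECT serial_number, image_path FROM unit_images "
    "WHERE station = ? AND defect_label = ?",
    ("main-board-install", "missing-screw"),
).fetchall()
```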

Engineering.com sat down with Shedletsky to ask questions about the technology and applications.

Would a system like this be typically used to trace issues in components prior to assembly, or during final assembly?

All of the above. Process issues, part quality issues, workmanship issues, and design issues. We primarily do our first deployment in the new product introduction (NPI) stage. We’ll primarily be looking for a combination of part quality and design issues in the design phase. Some of our customers, like Motorola, have actually pushed us up in their supply chain. So they’re actually using this technology with some of their suppliers to prevent poor quality parts from leaking into their final assembly stage. We had a customer where we’re in three phases of their supply chain. Those parts start in Thailand, they go to Korea, and then they end up in China.

We work with those factories, as well as the main China assembly facility. So, we can look at both quality issues created by suppliers, as well as quality issues that are created in on-site final assembly.

How is all this data collected? Does it require human input, such as a SKU barcode that’s scanned?

The unique insight that we had is that often when a problem occurs, there’s not the right kind of high-resolution data available as soon as the problem is discovered. So, then an engineer has to go collect additional data before you can even get started on solving the problem. What Instrumental does is collect data proactively. Specifically, we take images of every single unit at key stages of assembly. And those images, as you might have heard, are worth a thousand words. You don’t need to know what you’re looking for when you take the image, but that image could be very valuable for identifying these types of issues that we were discussing.

The way that we get these images is with drop-in camera boxes, or we can take images off of pre-existing automation equipment, if our customers have cameras already. We read barcodes out of the images to make those traceable, or an operator can manually scan a barcode if one isn’t visible in the image.
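Reading a barcode out of the station photo itself lends itself to a simple illustration. The sketch below assumes a Python pipeline with OpenCV and pyzbar; those are illustrative library choices, not necessarily Instrumental's stack.

```python
# Hypothetical sketch: decode a serial-number barcode out of a station photo
# so the image can be filed against the unit it shows. OpenCV and pyzbar are
# illustrative library choices, not necessarily Instrumental's stack.
import cv2
from pyzbar import pyzbar

def serial_from_image(image_path):
    """Return the first barcode payload found in the photo, or None."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # barcodes decode fine in grayscale
    codes = pyzbar.decode(gray)
    return codes[0].data.decode("utf-8") if codes else None

if __name__ == "__main__":
    serial = serial_from_image("station3_unit.jpg")
    print(serial or "no barcode visible; fall back to a manual scan")
```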

You mentioned to me that the technology uses machine learning to identify defects that conventional vision systems cannot. So, what are you detecting that conventional systems can’t?

Conventional systems are usually used for very specific applications. For example, measuring a gap, or detecting if certain screws are present or not. For these systems, there’s a preconceived notion of what the value of that system is to justify buying it and putting it on the line. That’s how vision has been used in the past. These vision scenarios are very rules-based: the screw is either there or not. But that detection won’t find if the screw is stripped, for example, unless it’s been pre-programmed to do that as part of that specific deployment.
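To make that contrast concrete, here is a toy version of the kind of rules-based check described above: a screw-presence test decided by a fixed threshold on a fixed region of the image. The region coordinates and threshold are made-up values for illustration only.

```python
# Toy rules-based vision check: "is the screw present?" decided by a fixed
# threshold on a fixed region of the image. ROI coordinates and threshold are
# made-up values for illustration only.
import cv2
import numpy as np

SCREW_ROI = (120, 180, 40, 40)   # x, y, width, height of the screw location (assumed)
PRESENCE_THRESHOLD = 60.0        # mean brightness below this counts as "present" (assumed)

def screw_present(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    x, y, w, h = SCREW_ROI
    roi = gray[y:y + h, x:x + w]
    # A dark, metallic region reads as "screw present"; a bright, empty boss does not.
    # Note what this rule cannot see: a stripped screw head still looks "present."
    return float(np.mean(roi)) < PRESENCE_THRESHOLD
```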

Instrumental is a generalized tool. It can actually discover issues that it’s never seen before and that our customers have never seen before. So that means we ultimately go into places where industrial vision or human inspectors typically haven’t been able to provide value in the past. For example, we’ll go in after major sub-assemblies have been built and do inspection across those entire sub-assemblies. And we can identify a wide variety of different types of issues that would be difficult to wrap specific specifications around.

For example, in a system which measures a gap, you could say 0.4 plus or minus 0.1; there is a very specific specification. But what about glue volume and glue dispense? What about bubbles in that glue? What if you have solder? Does the solder look right? Is it a cold solder joint, or is it going to make a good connection? Conventional vision could inspect for these things if it were pre-planned. However, it’s often too expensive to actually deploy vision systems to do this, because each individual algorithm you have to set up on a machine vision system will cost you many hours of a consultant’s time, whereas Instrumental programs itself.
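One simple way to approximate the "discover issues you've never seen" behavior is to model what good units look like and flag anything that deviates. The sketch below does this with per-pixel statistics over aligned photos of known-good units; it illustrates the general class of technique only and is not Instrumental's actual algorithm.

```python
# Sketch of generalized anomaly detection on aligned station photos: model
# per-pixel statistics from known-good units, then flag regions of a new unit
# that deviate strongly. Illustrates the general idea only; this is not
# Instrumental's actual algorithm.
import numpy as np

def fit_golden_model(good_images):
    """good_images: array of shape (N, H, W), aligned photos of known-good units."""
    mean = good_images.mean(axis=0)
    std = good_images.std(axis=0) + 1e-6  # avoid divide-by-zero in perfectly flat regions
    return mean, std

def anomaly_mask(image, mean, std, k=4.0):
    """Boolean mask of pixels more than k standard deviations from 'normal'."""
    z = np.abs(image.astype(np.float64) - mean) / std
    return z > k

# Usage sketch: flag a unit if more than 0.1% of its pixels look abnormal.
# golden = np.stack(list_of_good_unit_photos).astype(np.float64)
# mean, std = fit_golden_model(golden)
# defective = anomaly_mask(new_photo, mean, std).mean() > 1e-3
```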

Read more at ENGINEERING.com

By ENGINEERING.com

ENGINEERING.com provides a variety of news and services to the engineering discipline worldwide and publishes a popular online blog focusing on the art of making in the industrial world.