The Cost of Lost Asset Utilization

Lost utilization is expensive. It’s as simple as that. Every misplaced part and every recipe imbalance, even if imperceptible, costs money. This is especially true for electronics manufacturing services (EMS) contract manufacturers, where every hour of slow cycles and lost production minutes equates to thousands of missed placements, and with them lost product and revenue. In an increasingly high-mix, low-volume environment, their surface-mount-technology (SMT) factories are under pressure to improve capacity utilization of their expensive assets, and none more so than their pick-and-place (P&P) machines.

The P&P is the heartbeat of any SMT line: a complex, modern marvel with incredible capability. Once factories start collecting line utilization KPIs as part of an asset management program, they are often quite surprised to find their P&P utilization is a small fraction of the benchmark placement rates for the machines. During a typical deployment, customers almost always describe how challenging it has been to find the root causes of low utilization and to know where to focus limited expert SMT resources for the greatest impact.

Revealing Hidden Losses

Factories are used to thinking about standard sources of idle line time: unplanned downtime, factory closures, unscheduled shifts, engineering work, etc. However, work with top SMT factories around the world has shown us that, very often, one of the most significant utilization losses during mass production is caused by production programs that do not evenly distribute work between all identical machines on the line. 

As SMT experts and industrial engineers know well, imbalances in cycle time between different work areas on any assembly line result in bottlenecks that idle some work areas and reduce utilization. This imbalance loss is not new, and SMT experts will recognize that it has always been present.

So, what is different now? Modern big data analysis techniques, aided by machine learning and artificial intelligence algorithms, have largely removed the traditional barriers to extracting value from rich data. Combined with secure, state-of-the-art methods for efficient, full-machine-data extraction to the cloud, these advanced analysis techniques can automatically identify the root causes of these imbalances so they can be fixed, even by non-experts.

A Changing Landscape

Previously, it was necessary to have SMT experts closely watching local real-time monitoring displays on every line to check for imbalances that occurred product-by-product and line-by-line. This expert-led process was extremely time-consuming and, as such, only made sense for specific single high-runner products with large run sizes. As modern SMT lines have been asked to run an increasingly high mix of products, this outdated manual balancing process has become not only incredibly challenging but also rapidly obsolete.

Engagements with top factories routinely reveal immediate gains of over 5% absolute line utilization (LU) improvement simply by identifying and fixing imbalanced P&P production recipes. This means that for a factory line running at 20% absolute LU today, a 5% absolute improvement (to 25% LU) is a 25% relative improvement in production throughput, all without any equipment, labor, or process changes. Let’s see how it’s possible.
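To make the arithmetic concrete, here is a minimal sketch of that absolute-versus-relative calculation in Python (the 20% baseline and 5-point gain are the example figures above, not measurements):

```python
# Example figures from the text: a line at 20% absolute line utilization (LU)
# recovering 5 absolute points by rebalancing recipes.
baseline_lu = 0.20        # absolute LU today
absolute_gain = 0.05      # absolute LU points recovered

new_lu = baseline_lu + absolute_gain
relative_gain = absolute_gain / baseline_lu   # relative throughput improvement

print(f"new absolute LU: {new_lu:.0%}")                   # 25%
print(f"relative throughput gain: {relative_gain:.0%}")   # 25%
```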

Harnessing the Value of Rich Machine Data

All modern P&P machines capture incredibly detailed information on product build, including the workload of each individual placement head. When investigating a specific performance issue, it’s standard practice for seasoned SMT experts to find and leverage this information on local machines or line displays. However, until now, it has not been standard practice to capture all of these rich performance details for centralized analysis. There is simply too much data to make sense of by hand; even identifying the data that is available for analysis is often challenging.

Rich data, in this context, is defined as detailed machine activity reported individually for each PCB panel processed. Think of it as a giant spreadsheet where there would be one row for each panel passing through the machine and dozens or hundreds of columns capturing data fields about how the machine processed that panel. Examples of these columns would include the exact duration of each internal processing step, a list of all nozzle and feeder errors that occurred (even non-fatal errors), details on all consumed parts, etc.
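As an illustration, a single rich record might look like the sketch below; the field names are hypothetical, chosen for readability rather than taken from any particular machine’s schema.

```python
# Hypothetical example of one "row" of rich data: everything recorded about a
# single panel as it passed through one P&P machine. Field names are
# illustrative only, not an actual machine schema.
panel_record = {
    "panel_id": "PNL-000123",
    "recipe": "BOARD-A_rev3",
    "machine": "PnP-2",
    "start_time": "2021-06-01T08:14:32Z",
    "cycle_time_s": 41.7,                                # total processing time
    "stage_times_s": {"head_1": 38.2, "head_2": 41.7},   # per-head workload
    "placements": 612,                                   # components placed
    "pickup_errors": [{"feeder": "F17", "nozzle": "N3", "fatal": False}],
    "consumed_parts": {"R0402_10k": 220, "C0402_100n": 180},
}
```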

By contrast, examples of non-rich data would include reporting only the total number of components placed on each panel without more details, or capturing only a periodic summary of activity such as the number of panels processed in 30 minutes. These summary KPIs can be calculated on top of rich data, but if only the summary KPI is captured from the machine instead of the rich details behind it, there is not enough context preserved to make sense of it later.
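The one-way relationship is easy to see in code: summary KPIs can always be derived from rich records, but the rich detail can never be recovered from the summary. A minimal sketch, reusing the hypothetical record structure above:

```python
# Summary KPIs are a lossy aggregation of rich data: simple to compute forward,
# impossible to reverse. Assumes a list of records shaped like panel_record above.
from statistics import mean

def summarize(panel_records):
    """Collapse rich panel-level records into the kind of summary a machine
    might report on its own, e.g. per 30-minute interval."""
    return {
        "panels": len(panel_records),
        "avg_cycle_time_s": mean(r["cycle_time_s"] for r in panel_records),
        "total_placements": sum(r["placements"] for r in panel_records),
    }

# Per-head timings, error details, and consumed-part context are all gone from
# the summary, which is why root-cause analysis needs the rich records themselves.
```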

Our standard approach is to start by capturing every bit of the rich P&P machine performance data available for every product cycle from the beginning and to efficiently store it in the cloud, where it can be used for a broad variety of use cases.

Data that Drives Direction

Advanced big-data time-series analysis algorithms are then used to automatically segment the data into “work sessions”: periods of time where the same product is running on the line and where each production cycle should be identical. This automated process gives our data science algorithms thousands of examples of each machine building the same product. Because the cycles should be identical, the algorithms compare them all against each other to automatically identify the “golden cycle” for each product, without anyone needing to manually enter a “standard cycle time” or “production target”. This process is highly robust: it does not get confused by transient operational problems or machine errors, and it works even with notably short lot sizes, as little as 20 minutes or 20 panels.
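The article does not spell out the algorithm itself, but the general idea can be sketched as follows, assuming per-panel records like the hypothetical ones above, ordered by time; the robust-percentile choice is an illustrative assumption, not Arch’s actual method.

```python
# Illustrative sketch only: group panels into "work sessions" (consecutive runs
# of the same recipe), then take a robust low percentile of cycle times as the
# "golden cycle" so transient errors and slow outliers do not skew it.
from itertools import groupby
from statistics import quantiles

def work_sessions(panel_records):
    """Yield (recipe, records) for each consecutive run of the same recipe."""
    for recipe, run in groupby(panel_records, key=lambda r: r["recipe"]):
        yield recipe, list(run)

def golden_cycle_s(session, pct=10):
    """A low percentile of observed cycle times: the demonstrated best pace,
    found without any manually entered 'standard cycle time'."""
    times = sorted(r["cycle_time_s"] for r in session)
    return quantiles(times, n=100)[pct - 1]

# Usage, given panel_records (a time-ordered list of per-panel dicts):
# for recipe, session in work_sessions(panel_records):
#     if len(session) >= 20:   # e.g. require roughly 20 panels per session
#         print(recipe, round(golden_cycle_s(session), 1), "s")
```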

Looking at a true “golden cycle” for each product allows us to immediately compare cycle times not just between machines, but at a much more valuable level: between each individual work stage or head inside the P&P machines, to look for imbalance losses.

We routinely find recipes that are highly imbalanced, sometimes running at half the rate they could achieve with rebalancing. Often this is because the line is a high-mix line and there are too many recipes to check individually. In other instances, the imbalance is not between multiple machines but between stages inside a single machine. Without visibility and tools to assess the rich data, it is simply not possible to find these kinds of imbalances manually, since they are internal to a single machine.
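A minimal sketch of the imbalance check itself, assuming per-head work times are available for a recipe’s golden cycle (field names are again hypothetical): the slowest head sets the line’s effective cycle time, so the gap to the fastest head is recoverable time on every panel.

```python
# Illustrative imbalance check: compare per-head (or per-machine) work time
# within a golden cycle. The slowest stage gates the whole cycle, so the gap
# between slowest and fastest is idle time that rebalancing could recover.
def imbalance_report(stage_times_s, threshold=0.05):
    """Return the fractional imbalance and whether it exceeds a threshold.

    stage_times_s: dict mapping head/stage name -> seconds of work per panel,
    e.g. the "stage_times_s" field of a golden-cycle record.
    """
    slowest = max(stage_times_s.values())
    fastest = min(stage_times_s.values())
    imbalance = (slowest - fastest) / slowest
    return imbalance, imbalance > threshold

# Example: head_2 carries more work than head_1 for this recipe, so roughly 8%
# of every cycle is idle time on head_1 that a rebalanced recipe could recover.
imbalance, needs_rebalance = imbalance_report({"head_1": 38.2, "head_2": 41.7})
print(f"imbalance: {imbalance:.0%}, rebalance recommended: {needs_rebalance}")
```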

Taking Data Intelligence to the Next Level

In nearly every organization, it’s still people who perform corrective and predictive actions based on the information given to them by new tools. It’s not hard to imagine the efficiencies and value that could be recaptured if local workers and experts could skip the countless hours or even days of analysis currently dedicated to solving problems. With advanced systems providing both analysis and guidance, precious human hours can be refocused on creating immediate value. Adoption and scaling of technology across factories is at the top of the priority list for virtually all modern manufacturers. Those that have found success have done so by pairing their in-house experts with the right technological tools and teams to support them through the process.

That SMT lines, especially high-mix lines, are difficult to balance is well known to EMS manufacturers; it has been a common struggle for years. But manufacturers now have access to world-class technology that is not only effective but also affordable. Even more compelling is that the same rich data collected from each machine goes on to power a growing set of Industry 4.0 use cases across the factory, including predictive maintenance, predictive quality, and advanced statistical process control applications.

– TB

About the Author

Tim Burke is co-founder and Chief Technology Officer of Arch Systems where he works to accelerate Industry 4.0 by standardizing connectivity and data gathering across the factory. He has broad expertise in industrial communication protocols as well as the challenges of working with the diverse set of machines found in electronics and semiconductor factories. Tim’s published work on the device physics of organic photovoltaics has been cited by thousands of researchers.

Arch’s efficient method of full-machine data extraction to the cloud, combined with advanced analytics techniques, can automatically identify the root causes of these imbalances so they can be fixed, even by non-experts. This enables low- and even no-cost trials of the ArchFX cloud-based SMT Utilization Suite, and the promise of 5% absolute LU improvements in the first three months.