When to Use Statistics, One-Piece Flow, or Mistake-Proofing to Improve Quality

by Michel Baudin, MMTI – Manufacturing Management & Technology Institute

Abstract

Shigeo Shingo coined the term “Pokayoke” and has been a major contributor to lean manufacturing. Mikel Harry is the father of Six Sigma and has trained thousands of black belts in statistically-based methods. They both make extreme and opposite claims about the right approach to quality improvement. Shingo has no use for statistical methods; Harry believes that the future of industry depends on an understanding of statistics. There are raging debates in Internet discussion groups and in the trade press about the relevance of the statistical approach to quality improvement. While their perspectives seem diametrically opposed, neither Shingo nor Harry takes the trouble to specify the context of their remarks. Shingo’s world is primarily automotive; Harry’s, electronics. In fact, they are both correct in their non-overlapping universes.

The approach to quality improvement in your plant should not be dictated by a philosophy, but by the quality problems you are facing. If your main problem is lack of process capability, you need to establish it, and the most effective approach is the combination of statistical methods with knowledge of the physics and chemistry of the process. There is anecdotal evidence that the results of efforts centered on process capability level off around a few percentage points of internally detected failures. The next order-of-magnitude improvement in quality performance then comes from reducing problem detection times by moving to one-piece flow with first-in, first-out (FIFO) sequencing.

This change alone, however, will not enable you to achieve low ppm rates of defectives because human error affecting individual parts is still possible, and this is where mistake-proofing kicks in. Mistake-proofing is the strategy that takes you from a low failure rate to an extremely low rate, like 15 ppm, or between five and six sigma in the vocabulary of the Six Sigma movement. The religious war between believers in statistics and mistake-proofing zealots is absurd. Shingo and Harry are both correct in that their methods work but incorrect in portraying them as panaceas. Statistical methods help you most when your process is worst. Then they pass the baton to cellular manufacturing and one-piece flow for the next step, which in turn pass it to mistake-proofing to achieve the world’s highest quality levels.

Introduction

Lean manufacturing not only improves quality, but can make a company achieve the best quality worldwide in its industry, as it has done for Toyota. For this reason, lean quality should have drawn the attention of the quality profession but, to date, has not, as evidenced by the absence of offerings on the lean approach to quality assurance from the ASQ or academia, except for the course that we have developed for the University of Dayton.

When manufacturing professionals think about it at all, the keywords that come to mind immediately are “Pokayoke” or its translation: mistake-proofing. But mistake-proofing is in fact to quality assurance as kanbans are to production control: an innovative and powerful tool, but only a component of a comprehensive approach that must also handle inspections, testing, failure analysis/process improvement, emergency response, and external audits.

When addressing the subject with quality engineers or managers, we have found it controversial, if not explosive, because lean quality denies statistical methods the central role they have enjoyed in the field for the past 80 years, first in the form of Deming’s and Shewhart’s SPC, and more recently in the form of Six Sigma.

Shigeo Shingo coined the term “Pokayoke” and has been a major contributor to lean manufacturing. Mikel Harry is the father of Six Sigma and has trained thousands of black belts in statistically-based methods. As shown in Table 1, their perspectives seem diametrically opposed.

Shigeo Shingo: “When I first heard about statistics in 1951, I firmly believed it to be the best technique around, and it took me 26 years to be completely free of its spell.” [1], p. 54

Mikel Harry: “We believe that statistical knowledge is to the information age what fossil fuel was to the industrial age. In fact, the future of industry depends on an understanding of statistics.” [2], p. 24

Table 1. Shigeo Shingo versus Mikel Harry

Before taking sides in this debate, we should wonder where their strongly worded convictions come from. Neither author specifies the context of his remarks. Shingo’s world is primarily automotive; Harry’s, electronics. It is quite possible that both are correct in their different universes. Let us attempt to clarify the issue through examples.

Case studies

A diffusion furnace
Figure 1 shows a setup that is common in semiconductor wafer fabrication. One load of 200 wafers is typically worth about $250,000. This is an industry in which products are obsolete before the processes to make them reach the level of maturity that is the starting point of high-volume production elsewhere. Let us examine two problems this operation can face.

Figure 1. A diffusion furnace with a station controller

Insufficient process capability
These machines typically have difficulty producing consistent doping profiles from one end of the batch to the other, and engineers keep trying to extend the flat zone in the middle where the results are within specs. In addition, small fluctuations in the results of this operation interact with process variables set at upstream and downstream operations to cause circuit malfunctions that may not be detected until final test.

These issues are addressed by a team of process and yield enhancement engineers who conduct statistically designed experiments to make the process more stable and increase the number of good die per wafer. The process engineers who run the experiments understand the physics and chemistry of solid-state diffusion and thin film deposition, as well as analysis of variance. What they do clearly has to do with controlling the characteristics of the outgoing product and eliminating the production of defectives. They do not belong to the quality department, but what they do is quality control in a broad sense. It is an essential function in this business, and it is centered on statistical methods.
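
As an illustration of the kind of analysis involved, here is a minimal sketch of a one-way analysis of variance testing whether position in the furnace affects the doping result. The zone names and sheet-resistance values are hypothetical, chosen only to make the example run; they are not data from an actual furnace.

```python
# Minimal sketch: one-way ANOVA on hypothetical monitor-wafer data to test
# whether furnace zone position affects the doping result (here, sheet
# resistance in ohms/square). The values are illustrative only.
from scipy.stats import f_oneway

source_zone = [101.2, 100.8, 101.5, 100.9]   # wafers near the gas source
center_zone = [100.1, 100.3,  99.9, 100.2]   # wafers in the "flat zone"
door_zone   = [ 98.7,  99.1,  98.9,  99.3]   # wafers near the furnace door

f_stat, p_value = f_oneway(source_zone, center_zone, door_zone)
print(f"F = {f_stat:.1f}, p = {p_value:.5f}")
# A small p-value indicates that zone-to-zone differences are real, i.e.,
# the flat zone does not yet extend across the whole load.
```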

These issues are in fact so vital to the semiconductor industry that it is the largest industrial user of statistical analysis software. The other major market for statistical software is the pharmaceutical industry, but for different reasons. Semiconductor engineers use these methods because they work; their pharmaceutical counterparts, because they are told to by the Food & Drug Administration.

Human errors at the controls
This, however, does not exclude the presence and high cost of human error. Before the machine was fitted with the external controller shown in Figure 1, the selection of the process program was based on an operator reading a run sheet and typing an identification code on the keyboard of the embedded controller. On average four times a year, an operator loaded the wrong process program, destroying an entire load and costing about $1 million per year. At $50K, the external controller was an expensive mistake-proofing device, but it paid for itself almost immediately.

In a typical semiconductor manufacturing plant, you find engineers specialized in computer systems to control process equipment. The embedded control systems are themselves PCs on most machines, and sometimes UNIX workstations. The embedded controls communicate with outside computers using industry-specific protocols. Using commercially available packages, in-house computer engineers can program an external PC to communicate with the embedded controller on one side and with a plant-wide system on the other to prevent the wrong process program from being executed.
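
As a rough illustration, the logic such an external controller enforces can be as simple as the following sketch. The function and variable names are hypothetical placeholders for calls to the plant-wide system and to the furnace’s embedded controller; a real implementation would sit on top of the industry-specific protocols mentioned above.

```python
# Minimal sketch of the interlock an external station controller can enforce.
# The lookup and start functions stand in for communication with the
# plant-wide system and the furnace's embedded controller; they are
# hypothetical placeholders, not a real equipment API.

def required_recipe(lot_id: str, routes: dict) -> str:
    """Recipe the plant-wide system prescribes for this lot at this step."""
    return routes[lot_id]

def start_run(lot_id: str, selected_recipe: str, routes: dict) -> bool:
    expected = required_recipe(lot_id, routes)
    if selected_recipe != expected:
        # Refuse to download or start the program; alert the operator instead.
        print(f"BLOCKED: lot {lot_id} requires '{expected}', "
              f"got '{selected_recipe}'")
        return False
    print(f"Starting '{selected_recipe}' on lot {lot_id}")
    return True

# Example: the operator picks the wrong program; the run never starts.
routes = {"LOT-0042": "P-WELL-DRIVE-IN"}
start_run("LOT-0042", "FIELD-OX-1000C", routes)   # blocked
start_run("LOT-0042", "P-WELL-DRIVE-IN", routes)  # allowed
```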

It may be expensive, but it is cost-justified and implementing it does not take resources away from process engineering and yield enhancement. In this case, mistake-proofing and statistical methods are complementary and both relevant, because quality losses occur both for lack of process capability and because of human error.

Oil filling of automobile parts
Let us now shift our attention to a much less exotic operation. Figure 2 shows an automotive part weighing about 20 pounds that must be filled with 1 pound of oil prior to assembly. This is a quite different situation, and an operation that requires no $2M piece of equipment. Here we don’t have different groups of engineers addressing different issues. At most, we may have one engineer working on this operation some of the time. In this context, it’s reasonable to assume that, unless we know how to dispense 1 pound of oil accurately and consistently, not much else matters. If we have process capability issues, they are addressed first. Once the process is under control, human error becomes the next focus of improvement.

Figure 2. Oil-filling of an automotive part

If we know how to dispense a pound of oil accurately and consistently, then a part coming out of this operation can be defective only if the operator omitted to fill it or filled it more than once. Figure 3 shows a filling method that is conducive to just this kind of error.

Figure 3. Batch method of oil filling

The operator works on a batch of parts arrayed on top of a cart, locates the dispensing nozzle in the filling hole and presses a trigger. The controller then dispenses the specified amount and beeps when done, which causes the operator to index over to the next part. The work position is ergonomically questionable, and filled and empty parts coexist side by side. The slightest distraction can cause the operator to forget where she was, and the difference in weight between a 20-pound empty part and a 21-pound full part cannot be felt by lifting it.

Contrast this with the filling station shown in Figure 4. Setting up one-piece flow in front of a fixed dispenser eliminates most error opportunities. It cleanly separates the unfilled units from the filled ones and makes sure every unit gets a turn on the filler.

Figure 4. Sequential filling station

This eliminates most, but not all, of the errors, since the operator can still move an unfilled unit forward. A person cannot feel the weight difference, but an automatic check weigher can. So in a second round of improvement, a check weigher can be retrofitted to the dispenser to disallow double filling and prevent the unloading of unfilled parts. This mistake-proofing device then eliminates the last opportunities for errors.
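
A minimal sketch of the interlock logic such a check weigher can enforce is shown below. The nominal weights come from the example (a 20-pound empty part and 1 pound of oil); the tolerance is an assumed value.

```python
# Minimal sketch of the check-weigher interlock on the sequential filler.
# Nominal weights come from the example (20 lb empty, 1 lb of oil);
# the tolerance is an assumption.

EMPTY_LB, FILL_LB, TOL_LB = 20.0, 1.0, 0.2

def may_dispense(weight_lb: float) -> bool:
    """Allow filling only if the part weighs in as empty (blocks double fills)."""
    return abs(weight_lb - EMPTY_LB) <= TOL_LB

def may_unload(weight_lb: float) -> bool:
    """Release the part only if it weighs in as filled (blocks skipped fills)."""
    return abs(weight_lb - (EMPTY_LB + FILL_LB)) <= TOL_LB

print(may_dispense(20.1))  # True  -> empty part, filling allowed
print(may_dispense(21.0))  # False -> already filled, double fill blocked
print(may_unload(20.1))    # False -> unfilled part cannot leave the station
print(may_unload(21.0))    # True  -> filled part released
```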

Unlike the diffusion furnace example, this is a case where statistical methods make no contribution to the solution. At best, they can be used to assess the frequency of errors.

What is your quality problem, and what is the right approach to solve it?

As these two examples show, the approach to quality improvement in your plant should be dictated by the quality problems you are addressing. Unless you have the wealth of engineering resources of a semiconductor company, you have to make choices.

If your main problem is lack of process capability, your failure rate will typically be high, and this issue should be addressed ahead of anything else. There is empirical evidence that the results of efforts centered on process capability level off around a few percentage points of internally detected failures. After that point, it’s no longer the identification of root causes that matters most but instead the speedy detection of problems. Once you know that something in the process has gone wrong, you usually know what to do about it, but your discovery of the problem is delayed by batch-and-queue operations in random sequences.

The next order-of-magnitude improvement in quality performance then comes from moving to one-piece flow with FIFO sequencing. It is well known that quality improves as a result of converting a sequence of operations to a U-shaped cell. The proportion of defectives traced to the operations in a cell drops by at least 50 percent, and we have seen it go down as much as 90 percent. While the oil filling example above suggests some of the reasons this happens, there are others, and we will elaborate below on this matter. The main point here is that the effect of one-piece flow and FIFO in turn levels off at percentages of defectives commonly on the order of 1 percent down to 0.1 percent (1,000 ppm). It is not able to achieve low ppm rates by itself.

Human error affecting individual parts is still possible, and this is where mistake-proofing comes in: it is the strategy that takes you from, for example, 1,000 ppm to 15 ppm, which corresponds to between five and six sigma in the vocabulary of the Six Sigma movement.
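
For readers who want to relate defect rates to sigma levels, the conversion can be done as in the sketch below, using the usual Six Sigma convention of a one-sided defect rate and a 1.5-sigma long-term shift; the exact figures depend on that convention.

```python
# Converting a defect rate in ppm to a "sigma level", using the Six Sigma
# convention of a one-sided defect rate and a 1.5-sigma long-term shift.
from scipy.stats import norm

def sigma_level(ppm: float) -> float:
    return norm.isf(ppm * 1e-6) + 1.5

for ppm in (1000, 15, 3.4):
    print(f"{ppm:>7} ppm  ->  {sigma_level(ppm):.1f} sigma")
# 1,000 ppm comes out near 4.6 sigma, 15 ppm near 5.7 sigma (between five
# and six), and 3.4 ppm recovers the canonical 6.0 sigma.
```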

Table 2 summarizes this discussion. The figures in the left column are to be interpreted as orders of magnitude. They correspond to figures we have observed in automotive parts companies. Rapid product obsolescence in high-technology manufacturing companies may prevent them from ever experiencing the stages described beyond the first row.

Defective parts   Quality problem          Solution
-------------------------------------------------------------------------------
30% to 3%         Process capability       Statistical methods + Process engineering
3% to 1%          Problem detection time   Cells, flow lines, FIFO, visual controls
1% to 15 ppm      Human error              Mistake-proofing

Table 2. Quality levels and improvement approaches

Statistical methods for quality improvement with high failure rates

As discussed before, low yields and high failure rates are a fact of life in high-technology manufacturing. Perhaps not surprisingly, just as the old SPC came out of Western Electric, the high technology of the 1920s, Six Sigma came out of Motorola, a company in a similar situation in the 1980s and 1990s. This does not mean that the relevance of the approach is limited to high technology. Even in mature industries, there are processes, such as painting car bodies, which have double-digit rework rates even among the best manufacturers. Finally, even stable processes temporarily exhibit this type of behavior during new product introduction.

By “statistical methods,” most quality professionals understand the combination of control charts, histograms, scatter plots, sampling methods, inspection plans, and other tools collectively known as SPC or SQC. These methods, pioneered 70 years ago by Walter Shewhart and later promoted and enhanced by W. E. Deming, A. Wald, and K. Ishikawa, were targeted at an environment where measurements were taken manually and recorded on paper spreadsheets, and where calculations were done with slide rules. The production of these charts continues today as a result of customer mandates enforced by audits, but they play almost no role in problem solving, even where statistical methods are needed.

Since the 1930s, not only has data acquisition technology changed, but statisticians like John Tukey [3], Stuart Hunter [4], Genichi Taguchi [5], and Brad Efron [6] have developed many tools that take advantage of the processing power of computers. Semiconductor testers gather hundreds of variables on each circuit, but, even in mature industries, spindle probes in machining centers, coordinate measurement machines (CMMs), or supervisory control and data acquisition (SCADA) systems can collect numerous product characteristics and feed them immediately to analysis software. Even the electronic spreadsheets found on every engineer’s desk come with a data analysis package that is far more powerful than traditional SPC. Beyond this, specialized statistical software products supporting multivariate exploratory data analysis and design of experiments are available for less than the cost of one engineer-week.
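
As one example of the computer-intensive tools alluded to above, the following sketch computes a bootstrap confidence interval in the spirit of [6]. The measurements are hypothetical; the point is simply that such a calculation, unthinkable with a slide rule, is now trivial on any engineer’s desktop.

```python
# Minimal sketch of a computer-intensive method: a bootstrap confidence
# interval for a process mean. The measurements are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
measurements = np.array([10.02, 9.98, 10.05, 9.97, 10.01, 10.04, 9.99, 10.03])

# Resample the data with replacement many times and record each mean.
boot_means = [rng.choice(measurements, size=measurements.size, replace=True).mean()
              for _ in range(10_000)]

low, high = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap CI for the mean: [{low:.3f}, {high:.3f}]")
```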

Yet, except in special industries like semiconductors, the power of these tools remains largely untapped. The reason is not technology but lack of skills. Not only are people who understand tools like the Fisher discriminant or fractional factorial designs of experiments not commonly found in factories, but their knowledge is of limited value unless they also understand plasma etching, the appropriate feeds and speeds for milling titanium, or whatever other process they are using. And people combining statistical and process knowledge are almost nowhere to be found.

The black belt system that is part of Six Sigma can be viewed as an attempt to resolve this dilemma. Mikel Harry borrowed the term from karate, so let us stay with the same martial arts metaphor. It is well known that masters in the Chinese art of Wushu, such as the late Bruce Lee, could easily defeat karate champions. The problem with Wushu, as with modern statistics, was that one master can only train one or two disciples in a lifetime, whereas a karate instructor can turn out hundreds of black belts. Likewise, Six Sigma black belts may not have a level of knowledge comparable to that of academic statisticians, but they know how to interpret the outputs of statistical software, and manufacturing engineers with process knowledge can be trained into Six Sigma black belts in large numbers in nine months to a year.

One-piece flow and first-in, first-out

Once process capability is established, the plant faces a change in the nature of its quality problems, from drifts in process parameters to infrequently occurring discrete events like tool breakage, skipped steps, or mislabeled components. The challenge at this point is to detect them quickly and react promptly. The problems that require an elaborate root cause analysis, by definition, have been solved already, and now speed takes precedence over analytical sophistication.

The one-piece flow pattern is shown in Figure 5. From the time it leaves the input buffer to the time it reaches the output buffer, each part moves individually from step to step without any accumulation of WIP allowed between machines or work stations.

Figure 5. One-piece flow through a cell

This has several consequences for quality:

  • The lead time through this sequence of operations is an order of magnitude shorter than with batch and queue between every two operations. If parts are tested after coming out, negative test results can be fed back to the line before the problem has a chance to damage more than a handful of parts, as the sketch after this list illustrates.
  • Because parts move FIFO at every step, the process sequence is preserved, and the parts affected by any defect detected at testing can be identified, as shown in Figure 6.
  • Within the process, any defect introduced on one part at a step that prevents successful execution of the next step is detected immediately, and production of further defectives is prevented.
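
The first bullet can be made concrete with a back-of-the-envelope count of the parts at risk when a defect introduced at the first operation is only caught at final test. The batch size and number of operations below are assumptions for illustration, not figures from a specific plant.

```python
# Minimal sketch of why shorter lead times limit quality damage: count the
# parts in flight between the operation where a defect originates and the
# final test that catches it. Batch size and operation count are assumed.

def parts_at_risk(n_operations: int, transfer_batch: int) -> int:
    """Parts exposed between the defect's origin and final test."""
    # With batch-and-queue, roughly one full batch waits between every two
    # operations; with one-piece flow the transfer batch is a single part.
    return (n_operations - 1) * transfer_batch + 1

print(parts_at_risk(n_operations=6, transfer_batch=50))  # batch & queue: 251
print(parts_at_risk(n_operations=6, transfer_batch=1))   # one-piece flow: 6
```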

Figure 6. Quality impact of preserving process sequence

The effects of one-piece flow and FIFO are backed up by in-process quality checks that take the form of go/no-go gauge checks or of what Shigeo Shingo called successive inspection in assembly, where each assembler touches the parts installed by the previous one. Examples of go/no-go gauges are shown in Figure 7.

Figure 7. Examples of go/no-go gauges


For any characteristic, the options are:

  1. No checking. If a particular dimension has never been found out of spec for 10 years, it probably doesn’t need to be checked.
  2. First and last piece checking. It is applicable to production runs where the characteristics for parts in-between can be interpolated. The checking operations then become part of the setups rather than of production tasks.
  3. 100% checking. As part of their routines, operators check every part and place defectives in a red box, where they are collected by the quality department.

These concepts are difficult for quality professionals trained on SPC to accept because they exclude both measurements and sampling. They have been trained to think that you should never use a go/no-go gauge where you can take a measurement, because the measurement is much richer in information. The measurement tells you more than whether a dimension falls between two goal posts; it tells you exactly where. They have also been told that measuring 100% of the parts is unaffordable, and that well-designed sampling plans can give you the same information at a much lower cost.

Sampling is not allowed inside cells, for several reasons:

  • It allows defective parts to wait and accumulate between measurements, which does not serve the objective of detecting and responding to problems quickly.
  • It disrupts operations. Operator jobs are choreographed into a sequence of motions that is repeated with every part. Add measurements to be taken on every fifth part at one operation and on every seventh part at another, and you throw off the operators’ routine. On the other hand, a check on every part can be made a part of this routine, and go/no-go gauges can be engineered to make these checks fast enough not to slow down production.

Also, given that the problems we are trying to detect are discrete state changes rather than drifts or fluctuations, the information loss from using go/no-go checks instead of measurements is not as great as it would be if we were still tracking drifts, and it is more than made up for by applying the check to every part rather than to a sample and by the increased speed of problem detection.
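
The following sketch quantifies that trade-off under simple assumptions: once a discrete failure such as a broken tool occurs, every subsequent part is defective; a go/no-go check on every part catches the first bad part, while a measurement taken on a sample catches the problem only at the next sampled part. The sampling interval is an assumed value.

```python
# Minimal sketch of the detection-speed argument. After a discrete failure,
# every subsequent part is bad. 100% go/no-go checking catches the first bad
# part; sampled measurement waits until the next sampled part.
import math
import random

random.seed(1)
SAMPLE_EVERY = 20          # measure one part in twenty (assumed interval)
TRIALS = 10_000
extra_defectives = []

for _ in range(TRIALS):
    failure_at = random.randint(1, 1000)   # part index at which the tool breaks
    next_sampled = math.ceil(failure_at / SAMPLE_EVERY) * SAMPLE_EVERY
    extra_defectives.append(next_sampled - failure_at)

print("100% go/no-go check: 0 extra defectives after the first bad part")
print(f"Measuring every {SAMPLE_EVERY}th part: "
      f"{sum(extra_defectives) / TRIALS:.1f} extra defectives on average")
```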

As noted earlier, the collection of measurements and their compilation into control charts is commonly done to satisfy external mandates. This can still be done without disrupting the flow of production, for example, when parts are finished or while they are between segments of the process that have been organized into cells. Measurements, of course, also have a place in the solution of problems once they have been detected.

One-piece flow and FIFO do not imply and are not always implemented in the form of a U-shaped cell. The U-shape, however, also has an impact on quality, simply because seeing incoming and outgoing parts side-by-side makes operators more conscious of quality issues. This is best understood by considering the opposite, as shown in Figure 8.


Figure 8. Quality impact of contiguous input and output buffers


The methods described in this section cannot work on processes with double digit percentages of defectives. High volume production with an unstable process requires buffers between every two operations, simply to protect the downstream operation against fluctuations in the output of the upstream one. The process characterization and stabilization work done with the statistical tools is an enabler for one-piece flow and FIFO, which in turn enables the process to reach the next plateau.

Mistake-proofing

When your main remaining quality problem is operator errors, then clearly the last frontier for improvement is their prevention. But if your main problem is the inability to hold tolerances consistently, spending your resources on mistake-proofing would be lining up deck chairs while the ship is sinking. Conversely, statistical methods are not much help in the prevention of errors that occur once or twice a year.

Shigeo Shingo recounts in [1] how he coined the term “poka-yoke” in 1963 after an operator was offended by the word “baka-yoke,” a common everyday word which translates to foolproofing. The operators are not fools, but everybody makes mistakes. The more sensitive new word translates to “mistake-proofing,” which has been gaining currency in the US in the past few years, to the point of being used in quality standards documents like QS-9000.

Besides [1], the main source of information on mistake-proofing in the US is [7], a compilation of 240 examples from Japanese industry, presented in the form of before and after drawings with a few comments. While a valuable source of ideas, this encyclopedia of mistake-proofing has two limitations:

  1. The examples are showing their age. For example, none of them deals with modern, computer-controlled equipment, which can be error-prone, but cannot be mistake-proofed with tangs or guiding pins.
  2. While most examples are clear and enlightening, a few are baffling and probably wrong. Example 57, for instance, claims that inserting a stopper inside a nut runner would prevent it from running a nut all the way down when a washer is missing. Neither we nor any engineer we discussed it with could understand how it would work.

Few factories in the US practice mistake-proofing on their shop floors. On the other hand, there are commercial products that are loaded with mistake-proofing features, as shown in the two examples of Figure 9. It is common with traditional alarm clocks for users to change the time setting when they really intended to set the alarm. The Swiss Army alarm clock prevents this by keeping the rarely used time setting knob under a hood, leaving only the frequently used alarm setting button exposed. The HP 2500C printer goes out of its way to prevent users from inserting cartridges of the wrong color into a slot. The slots are, of course, color-coded, but they also have a groove pattern matching a tang pattern on the corresponding cartridge, that turns each color cartridge into a key that fits in only one lock. It is impossible to put the cyan cartridge into the magenta slot.

The discipline of designing products to prevent user errors is known as “usability engineering,” and is a field of active research in the U.S., at least in software, in the automotive industry, and in aerospace. Don Norman, in [8], explains how to design objects like door handles, light switches or faucets in a way that leads people to use them as intended with little or no training. Until we brought it to his attention, he had never heard of mistake-proofing on the shop floor, but his concepts are highly applicable to the design of operator-machine interfaces, an area that is not addressed in [1] and [7].

Figure 9. Mistake-proofing in commercial products


NASA has a group in its Ames Research Center in Mountain View, Calif., working on cockpit design to prevent pilot errors in airliners. The motivation is that 60 percent of the crashes are due to pilots misinterpreting the data on control panels when switching from autopilot to manual. In [9], NASA’s Asaf Degani describes a formal system he called OFAN for modeling in parallel a device, its control system, and its operator interface. An OFAN analysis brings to light discrepancies between these entities that can be conducive to deadly errors.

The work done in usability engineering is particularly relevant on the shop floor to mistake-proof computer-controlled equipment. To date, as shown in Figure 10, the operator interfaces to CNCs are confusing and error-prone, and they make no use of what has been learned in this area in the past 30 years.

Figure 10. Operator interface to the control system

There are results and concepts that can be applied to take mistake-proofing beyond the applications to conventional equipment shown in [1] and [7].

Conclusion

This discussion has been focused on technical tools. To improve quality, there are of course many other issues that need to be addressed, but they are beyond the scope of this article. Our goal was to properly position different approaches that are often perceived to be in conflict, and also to draw attention to the middle layer, where one-piece flow and FIFO play the central role. That flow lines and cells improve quality is well known, but this effect is rarely mentioned among the tools to that end in the literature on quality.

Biographical Sketch


Over the past 14 years, Michel Baudin has been consulting for such clients as Honda of America, Canon Virginia, Boeing, Raytheon, Unilever, MetalEurop, the CIADEA automotive group, and Hoechst on lean manufacturing implementation, and for high-technology companies like Hewlett Packard, Intel, Motorola, Winbond, and National Semiconductor on production scheduling, process transfer from R&D to production, and computer system architecture for manufacturing applications. He also designed the MS/X OnTime production scheduler marketed by Tyecin Systems and led the EU-funded INRECA research project.

Michel Baudin is the founder of MMTI, the Manufacturing Management & Technology Institute in Palo Alto, Calif. His prior experience includes being a director of the Menlo Park Technology Center of Teknekron Corporation; leading a group at Schlumberger/Fairchild that designed, tested, and supported maintenance management, production scheduling, and quality control software that is in use in semiconductor factories; giving technical support for CIM installations in Japan on behalf of Consilium Corporation; and implementing the OPT scheduling system in two General Motors factories.

Mr. Baudin is the author of Manufacturing Systems Analysis, with Application to Production Scheduling, a textbook available from Prentice Hall. His academic background includes a master's degree in engineering from the Ecole des Mines, Paris; graduate work at the Hahn-Meitner Institut in Berlin; and research at the University of Tokyo. He is an instructor with UC Berkeley Extension and a faculty member of the University of Dayton's Center for Competitive Change. Michel Baudin is fluent in French, Japanese, and German, and is learning Spanish.

References

  1. Shigeo Shingo, 1986, ZQC: Source Inspection and the Poka-Yoke System, Productivity Press, Portland, OR
  2. Mikel Harry, 2000, Six Sigma, Doubleday, New York
  3. David C. Hoaglin, Frederick Mosteller, John W. Tukey (Editors), 2000, Understanding Robust and Exploratory Data Analysis, John Wiley & Sons, New York
  4. George Box, William Hunter, Stuart Hunter, 1978, Statistics for Experimenters, John Wiley & Sons, New York
  5. Madhav Phadke, 1989, Quality Engineering Using Robust Design, Prentice Hall, Englewood Cliffs, NJ
  6. Brad Efron and Robert Tibshirani, 1998, An Introduction to the Bootstrap, Chapman & Hall/CRC, Boca Raton, FL
  7. Factory Magazine editors, 1988, Poka-Yoke: Improving Product Quality by Preventing Defects, Productivity Press, Portland, OR
  8. Donald Norman, 1990, The Design of Everyday Things, Doubleday, New York
  9. Asaf Degani, 1996, Modeling Human-Machine Systems: on modes, error, and patterns of interaction, PhD dissertation, Georgia Institute of Technology, Atlanta, GA