Reliability Hotwire, a ReliaSoft monthly eMagazine for reliability professionals, recently published two papers entitled:
“Taguchi Robust Design for Product Improvement” http://www.weibull.com/hotwire/issue122/index.htm
and “Taguchi Robust Design for Product Improvement: Part II” http://www.weibull.com/hotwire/issue123/index.htm
I read the first paper in issue 122 (post-publication) and found many issues and discrepancies. I had worked with Dr. Taguchi directly for approximately 10 years while I managed the Xerox Robust Engineering Center, and I had numerous other interactions through Taguchi’s affiliation with the American Supplier Institute in Detroit. I responded to ReliaSoft and the author of the paper regarding the first publication, but no modifications were made. I was then given a pre-read of the second paper, and several general modifications were made prior to publication. Below are my responses to the first publication and my pre-read response to the second publication.
Chris,
I just wanted to send you a few comments about the ‘Taguchi Robust Design for Product Improvement’ paper published in issue 122 of Reliability Hotwire. The stated objective of the experiment was to find the appropriate control factor levels in the design. Dr. Taguchi told me on numerous occasions that the primary objective of parameter design was to prevent a poorly understood or inadequate process design from going downstream (and creating lots of cost and trouble). Parameter design experiments should be used to inspect engineering knowledge and downstream process readiness. They verify that the engineers can prevent surprises downstream, and that they can make the process do what they want, when they want it.

If an engineering team runs an experimental design, such as the one shown, while knowing very little about the important process parameters, they will still collect data. They will still analyze the data and try to make inferences and decisions based on their results. They will not, however, be providing any protection for the downstream enterprise. If an engineering team selects several noise factors and levels which do very little to affect the function, they will still collect and analyze data and try to make inferences. They will not, however, be providing sufficient protection against the actual process noises that will arrive in downstream conditions. If the engineering team selects a response and measurement system with serious limitations, they will still collect and analyze data and try to make inferences. The response and measurement system may have any number of serious problems: lack of engineering focus, ambiguity, lack of validity, nonlinearity, large errors, … The engineering team will still collect data and do their thing. Unfortunately, the experimental results will be of little value, and later on, downstream people will tell you they were of little value.
Inspection of the data in the paper reveals that the numbers are all about the same from one noise factor level to another. In other words, the noise factors create no systematic contrast between levels. The noise factors and levels selected have very little effect, which is one indication of a poorly understood design. A good engineering team could easily come up with some potent noise factors they will have to worry about downstream. The signal-to-noise ratios show only a ~1 dB range among all eight experiments, which is quite small. The mean and standard deviation data show a bothersome trend in which the higher means have the lower standard deviations. This is probably due to saturation of the data as the numbers approach 100 on the gloss meter scale. A logistic transform of the data (Taguchi’s omega transformation) is usually applied to 0-to-100 scales, such as percentages, to minimize this rail condition.
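As a minimal sketch of that transform (in Python, using illustrative gloss readings rather than the paper’s data):

```python
import numpy as np

def omega_db(percent):
    """Taguchi omega (logit) transform for 0-100 scale data, in decibels.
    It maps the bounded 0-100 scale onto an unbounded dB scale so that
    effects crowded against the 100 'rail' are not artificially compressed."""
    p = np.asarray(percent, dtype=float) / 100.0
    return 10.0 * np.log10(p / (1.0 - p))

# Illustrative gloss readings only: values crowded near the rail
# spread out after the transform.
gloss = [90.0, 95.0, 98.0, 99.0]
print(omega_db(gloss))   # approx. [9.54, 12.79, 16.90, 19.96] dB
```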
Dr. Taguchi would probably have prescribed either an L9 (3^(4-2)) inner array or perhaps an L12 inner array with additional factors assigned. Assignment of control factor (CxC) interactions was usually discouraged. It was left to the engineering team to use their understanding of the process to assign factors appropriately. For example, most engineers would know that paint is a shear-thinning fluid, i.e., the viscosity drops as the flow rate increases. By not using this fact during the assignment, they would probably observe an interaction effect between those two factors. By adjusting the flow rate levels depending on the viscosity level, the interaction could be avoided. This was called sliding scale assignment. They could demonstrate their engineering knowledge by appropriate assignment.
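As an illustration of sliding scale assignment, here is a minimal sketch; the viscosity levels and flow setpoints are made up for the example:

```python
# Sliding scale (sliding level) assignment, sketched with made-up setpoints:
# the coded flow-rate levels 1/2/3 map to different physical flow rates
# depending on the viscosity level, so the known shear-thinning behavior is
# built into the layout instead of showing up later as a CxC interaction.
FLOW_SETPOINTS_ML_PER_MIN = {
    # viscosity level: (flow level 1, flow level 2, flow level 3)
    "low_viscosity":  (150, 200, 250),
    "mid_viscosity":  (120, 160, 200),
    "high_viscosity": (90, 120, 150),
}

def physical_flow(viscosity_level, coded_flow_level):
    """Return the physical flow rate for a coded level at a given viscosity."""
    return FLOW_SETPOINTS_ML_PER_MIN[viscosity_level][coded_flow_level - 1]

print(physical_flow("high_viscosity", 2))   # 120 mL/min
```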
Below is a simple graph showing an interaction between two control factors, A and B. The effect of factor A makes Y increase, for example, when B is at level one. The effect of factor A makes Y decrease, for example, when B is at level two. This means that the factor A effect cannot be relied on to make the response always increase (or decrease). Sometimes Y increases and sometimes Y decreases, depending on what the other factor(s) are doing. Remember that there are lots of factors not assigned to the experiment as well. If the effects of an assigned factor like factor A are different depending on what one or many other factors may be doing, that is an unreliable effect. We would prefer factor effects which always move Y in the same direction. In physics, for example, consider Newton’s second law, sometimes written as F = ma. Increasing the mass always increases the force. It is not as if sometimes a mass increase creates a larger force and other times the mass increase creates a smaller force. You can rely on the force always getting larger as the mass increases. Similarly, you can rely on the force always getting larger as the acceleration increases. An engineering team that finds lots of antisynergistic control factor interactions does not understand the design very well and should not move it to downstream conditions.
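To put hypothetical numbers on the kind of interaction that graph shows:

```python
import numpy as np

# Hypothetical cell means only: factor A raises Y when B is at level 1
# but lowers Y when B is at level 2, so the A effect by itself is unreliable.
y_b1 = np.array([60.0, 75.0])   # Y at (A1, B1) and (A2, B1)
y_b2 = np.array([70.0, 55.0])   # Y at (A1, B2) and (A2, B2)

print(y_b1[1] - y_b1[0])   # +15: A increases Y when B is at level 1
print(y_b2[1] - y_b2[0])   # -15: A decreases Y when B is at level 2
```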
A parameter design verification test was always conducted by Dr. Taguchi to see if the results were reproducible. In the example shown, there was no verification test; only a final linear model was built from the regression analysis. The final control factor combination, chosen to maximize the S/N ratio, was not checked for reproducibility. Verification tests enable consolidation of robustness gains so that the next experiment starts from a better place. They also provide new data showing that the engineering team can create reproducible results using their knowledge of the process. A verification test should be used to demonstrate that gloss can be improved, not just to provide an equation.
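As a minimal sketch of what such a test confirms (the S/N values below are made up, not taken from the paper):

```python
# Additive-model prediction of the S/N ratio at the selected control factor
# levels, to be compared against a confirmation (verification) run.
eta_bar = 20.0                                        # overall mean S/N, dB
level_means = {"A2": 21.5, "B1": 20.8, "C3": 21.1}    # mean S/N at chosen levels

eta_predicted = eta_bar + sum(m - eta_bar for m in level_means.values())
print(f"predicted S/N at the chosen combination: {eta_predicted:.1f} dB")  # 23.4

# The verification run supplies an observed S/N; if it falls well short of
# the prediction, the additive model (and the team's understanding of the
# design) is suspect, and the result should not be sent downstream.
eta_confirmed = 22.9   # hypothetical confirmation result
print(f"prediction confirmed within {abs(eta_predicted - eta_confirmed):.1f} dB")
```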
Larger-the-better and smaller-the-better S/N ratios are usually used together to develop a more positive operating window. Smaller-the-better and larger-the-better S/N ratios were used early in Taguchi’s career, when measurements were frequently made on dysfunctional outputs (ideally zero). Spray painting defects like orange peel, sags, pinholes, blisters, etc. would have been treated with smaller-the-better S/N ratios. Now the robustness effort is to work on the functions rather than the dysfunctions, if possible.
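For reference, the standard forms of those two ratios, sketched in Python with illustrative readings only:

```python
import numpy as np

def sn_smaller_the_better(y):
    """S/N for a dysfunction that is ideally zero: -10 log10(mean of y^2)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def sn_larger_the_better(y):
    """S/N for an output that is ideally infinite: -10 log10(mean of 1/y^2)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Illustrative readings only
print(sn_smaller_the_better([0.20, 0.30, 0.25]))   # defect-type measurement
print(sn_larger_the_better([85.0, 90.0, 88.0]))    # function-type measurement
```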
Spray painting ideal function development, with signal factors related to changing droplet kinetic energy (mass and velocity), would probably be the preferred approach today, given time and resources. I would first try experimentally to consistently and repetitively create the same drop volume and velocity (by changing lots of control and noise factors). A very narrow distribution of drop volumes and a very narrow distribution of velocities would be preferred. Tuning factors would be identified for changing drop volume and drop velocity. Subsequent targeting, wetting and devolatilization experiments would be developed, followed by optimization of the curing process steps. Notice the decomposition of the gloss problem into upstream time segments: generate the drop, propel the drop, deliver the drop to the surface, remove the solvent from the drop, cure the drops together… Each step would be aimed at minimizing variation without measuring the gloss itself.
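A dynamic (signal-response) analysis of such an ideal function is often summarized with a zero-point proportional S/N ratio; here is a minimal sketch with hypothetical signal levels and responses:

```python
import numpy as np

def dynamic_sn(signal, response):
    """One common form of the dynamic S/N for an ideal function y = beta*M:
    beta from zero-intercept least squares, sigma^2 from the residuals,
    S/N = 10 log10(beta^2 / sigma^2)."""
    M = np.asarray(signal, dtype=float)
    y = np.asarray(response, dtype=float)
    beta = np.sum(M * y) / np.sum(M ** 2)
    sigma2 = np.sum((y - beta * M) ** 2) / (len(y) - 1)
    return beta, 10.0 * np.log10(beta ** 2 / sigma2)

# Hypothetical numbers only: M could be a droplet kinetic-energy surrogate,
# y the delivered output of interest at each signal level.
M = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
y = [1.9, 4.1, 6.2, 2.2, 3.8, 5.9]
print(dynamic_sn(M, y))
```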
Reply to Part II:
Chris,
Interactions (CxN) between control factors (C) and noise factors (N) can be used to help with robustness improvement. If one were to use random numbers as data, however, CxN interactions could easily be observed. Discovering interactions means very little if they cannot be used to improve the design. Verification tests need to be conducted to confirm the gain predicted by taking advantage of CxN interactions. If that is not done, it is just a mathematical exercise.
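A quick simulation (illustrative only, unrelated to the paper's data) shows how readily a nonzero apparent CxN interaction contrast appears in pure random numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
contrasts = []
for _ in range(1000):
    # 2-level control factor x 2-level noise factor, 3 replicates per cell,
    # all data drawn from one common distribution (no real effects at all)
    cell_means = rng.normal(loc=50.0, scale=5.0, size=(2, 2, 3)).mean(axis=2)
    # interaction contrast: (C2N2 - C2N1) - (C1N2 - C1N1)
    contrasts.append((cell_means[1, 1] - cell_means[1, 0])
                     - (cell_means[0, 1] - cell_means[0, 0]))

print(f"typical spurious |CxN contrast|: {np.mean(np.abs(contrasts)):.1f}")
```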
What is the relationship between the mean and the standard deviation for a design? Is it correct to assume that they are independent? Is your picture correct? Most times, when the mean output is near zero, the standard deviation is quite small. As the mean output increases, the standard deviation also increases. Treating the mean and standard deviation separately sounds enticing, but it ignores this reality. One side effect of treating them independently is to make the device or design work very inefficiently: it drives the design to small variation by driving the output to small levels. The objective should be to maximize the ratio of the useful output to the harmful output, not drive both to zero. Occasionally the mean will increase while the standard deviation decreases. This is referred to as running into the rail. It may be a measurement system limitation, as I mentioned earlier.
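One common way to express that ratio of useful to harmful output is the nominal-the-best S/N ratio; a minimal sketch with illustrative numbers shows that scaling the mean and standard deviation together leaves it unchanged, which is exactly the proportionality being ignored:

```python
import numpy as np

def sn_nominal_the_best(y):
    """Nominal-the-best S/N: 10 log10(mean^2 / variance).
    Maximizing this rewards useful output (the mean) relative to harmful
    variation, rather than driving both toward zero."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(np.mean(y) ** 2 / np.var(y, ddof=1))

# Illustrative: the same coefficient of variation gives the same S/N.
print(sn_nominal_the_best([10.0, 11.0, 9.0]))     # 20 dB
print(sn_nominal_the_best([100.0, 110.0, 90.0]))  # 20 dB
```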
One way to increase the output response is to increase the power/energy into the design or process. As the power increases, however, lots of detrimental effects can be observed: temperatures change, chemical reaction rates change, optical side effects occur, mechanical problems like vibration amplitude increase, … and the output becomes more variable. Finding a way to use the power to create only more useful output, and to starve the side effects of energy, is the job of parameter design. For the current process, the flow rate of the paint and the pressure drop in the gun are control factors that affect the energy of the droplets long before they strike the target. It may well be that most of the paint is going somewhere other than the target, creating a much thinner coating but slightly higher gloss. An inefficient, costly process is one that throws paint everywhere but the target, yet maybe meets the gloss spec. A better process would be one that delivered paint with the correct drop volume and velocity and placed droplets where they were supposed to go, with the correct incremental thickness. Gloss is more a function of the substrate surface wetting and roughness characteristics and of the rheological/devolatilization behavior of the paint after deposition.
Your paper is mostly devoid of any engineering consideration. The focus is on what to do with the data (whether or not it has any meaning). In all the years I worked with Dr. Taguchi, the focus was always on the engineering, the design decomposition, the measurement improvements, the verification testing, and the gains made by running a well-planned engineering experiment. I understand why you have included the response surface approach and reintroduced some of the approaches suggested by other statisticians many years ago.
I need to know: is this real data, or is it made up?
Your conclusions suggest a single array for both noise and control factors. This has been discussed elsewhere in great detail; it is not parameter design. The layout of an experiment is usually set by the constraints of time and money. Those are starting points for the design. There are many ways to adjust the size of an experiment: compounding noise factors, assignment of only the important noise factor(s), use of only a signal factor, looser tolerances on setting factor levels, augmenting an earlier design, using difficult-to-change factors in slowly changing columns, orthogonal array selection, …
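For contrast with the single combined array, here is a minimal sketch of the crossed inner/outer layout with a compounded noise factor; the array and factor names are illustrative only:

```python
# Inner (control) array crossed with a compounded noise factor at two
# extreme conditions: every control run is exposed to both noise states.
inner_array = [      # an L4-style array: 3 two-level control factors
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]
outer_noise = ["N-", "N+"]   # compounded worst-case noise conditions

for run, (A, B, C) in enumerate(inner_array, start=1):
    for noise in outer_noise:
        print(f"run {run}: A{A} B{B} C{C} under noise condition {noise}")
```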
As it turned out, the author indicated that the data were indeed fabricated. I was offered the opportunity to provide a future paper to help engineers and eMagazine readers understand Taguchi’s robust design methodology more accurately.
Louis LaVallee
Sr Reliability Consultant
Ops a la carte