As many of you know, I have been on a mission: to bring the science of statistical process control to the color production process. In my previous blog posts (too numerous to mention) I have wasted a lot of everyone's time describing obvious methods that (if you choose to believe me) don't work as well as we might hope.
Today, I'm going to change course. I will introduce a tool that is actually useful for SPC, although one big caveat will require that I provide a sequel to this interminable series of blog posts.
The key goal for statistical process control
Just as a reminder, the goal of statistical process control is to rid the world of chaos. But to rid the world of chaos, we must first be able to identify chaos. Process control freaks have a whole bunch of tools for this rolling around in their toolboxes.
The most common tool for identifying chaos is the run-time chart with control limits. Step one is to establish a baseline by analyzing a database of some relevant metric collected over a time period when everything was running hunky-dory. This analysis gives you control limits. When a new measurement is within the established control limits, then you can continue to watch reruns of Get Smart on Amazon Prime or cat videos on YouTube, depending on your preference.
Run-time chart from Binney and Smith
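For the programmatically inclined, the baseline-and-limits recipe is only a few lines. Here is a minimal sketch in Python; the baseline numbers are made up for illustration, not pulled from any real press:

```python
import statistics

def control_limits(baseline, k=3.0):
    """Upper and lower control limits: the mean of a known-good
    baseline period, plus and minus k standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - k * sigma, mu + k * sigma

def out_of_control(measurement, limits):
    lcl, ucl = limits
    return not (lcl <= measurement <= ucl)

# Hypothetical baseline of some quality metric (say, solid ink density)
# collected while everything was running hunky-dory:
baseline = [1.42, 1.45, 1.40, 1.44, 1.43, 1.41, 1.46, 1.43, 1.42, 1.44]
limits = control_limits(baseline)

print(out_of_control(1.44, limits))  # False - keep watching Get Smart
print(out_of_control(1.60, limits))  # True - time to run diagnostics
```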
But when a measurement slips out of those control limits, then it's time to stop following the antics of agents 86 and 99 and start running some diagnostics. It's a pretty good bet that something has changed. I described all that statistical process control stuff before, just with a few different jokes. But don't let me dissuade you from reading the previous blog post.
There are a couple other uses for the SPC tool set. If you have a good baseline, you can make changes to the process (new equipment, new work flow, training...) and objectively tell whether the change has improved the process. This is what continuous process improvement is all about.
Another use of the SPC tool set is to ask a pretty fundamental question that is very often ignored: Is my process capable of reliably meeting my customer's specifications?
I would be remiss if I didn't point out the obvious use of SPC. Just admit it, there is nothing quite so hot as someone of your preferred gender saying something like "process capability index".
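And since the mere mention of a process capability index apparently gets pulses racing, here is a back-of-the-envelope sketch of one such index (Cpk), with made-up measurements against made-up spec limits:

```python
import statistics

def cpk(data, lsl, usl):
    """Process capability index: how comfortably the process sits
    inside the customer's spec limits. Bigger is better; 1.33 is a
    commonly quoted minimum."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical measurements against hypothetical spec limits:
data = [1.42, 1.45, 1.40, 1.44, 1.43, 1.41, 1.46, 1.43, 1.42, 1.44]
print(round(cpk(data, lsl=1.30, usl=1.55), 2))  # about 2.19 - very capable
```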
What SPC is not
Statistical process control is something different from "process control". The whole process control shtick is finding reliable ways to adjust knobs on a manufacturing gizmo to control the outcome. There are algorithms involved, and a lot of process knowledge. Maybe there is a PID control loop, or maybe a highly trained operator has his/her hand on the knob. But that subject is different from SPC.
Statistical process control is also something different from process diagnostics. SPC won't tell you whether you need more mag in your genta. It will let you know that something about the magenta has changed, but the immediate job of SPC is not to figger out what changed. This should give the real process engineers some sense of job security!
Quick review of my favorite warnings
I don't want to appear to be a curmudgeon by belaboring the points I have made repeatedly before. (I realize that as of my last birthday, I qualify for curmudgeonhood. I just don't want to appear that way.) But for the readers who have stumbled upon this blog post without the benefit of all my previous tirades, I will give a quick review of my beef with ΔE-based SPC.
Distribution of ΔE is not normal
I would hazard a guess that most SPC enthusiasts are not kept up at night worrying about whether the data that they are dealing with is normally distributed (AKA Gaussian, AKA bell curve). But the assumption of normality underlies practically everything in the SPC toolbox. And ΔE data does not have a normal distribution.
I left warnings about the assumption of abnormality in Mel Brooks' movies
To give an idea of how long I have been on this soapbox, I first blogged about the abnormal distribution of color difference data almost five years ago. And to show that it has never been far from my thoughts and dreams, I blogged about this abnormal distribution again almost a year ago.
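If you don't feel like digging through the archives, the abnormality is easy to demonstrate with a quick simulation. Even when each component of the color error is perfectly normal, the color difference (plain ΔE*ab here, for simplicity) cannot be:

```python
import math, random, statistics

random.seed(0)

# Even if each component of the color error is perfectly normal...
errors = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
          for _ in range(100_000)]

# ...the color difference itself cannot be. ΔE is a distance, so it
# is never negative, and it piles up on the low side:
delta_e = [math.sqrt(dL**2 + da**2 + db**2) for dL, da, db in errors]

print(min(delta_e) >= 0)  # True - there is no such thing as a negative ΔE
print(statistics.mean(delta_e) > statistics.median(delta_e))  # True - skewed right
```

The mean lands near 1.6 rather than 0, and the mean sits above the median, which is the signature of a right-skewed distribution. No bell curve here.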
A run-time chart of ΔE can be deceiving
Run-time charts can be seen littering the living rooms of every serious SPC-nic. But did you know that using run-time charts of ΔE data can be misleading? A little bit of bias in your color rendition can completely obscure any process variation, lulling you into a false sense of security.
My famous example of a run-time chart with innocuous variation (on upper right)
that hides the ocuous variation in the underlying data (underlying, on lower left)
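The arithmetic behind that deception fits in a few lines. With a hypothetical constant bias of 5 ΔE in the a* direction, a genuine 2-unit drift in b* barely registers on the ΔE run chart:

```python
import math

def delta_e(dL, da, db):
    # Plain ΔE*ab: the Euclidean distance in L*a*b*
    return math.sqrt(dL**2 + da**2 + db**2)

# Suppose the press runs with a constant 5 ΔE bias in a*:
print(delta_e(0, 5, 0))  # 5.0

# Now the process drifts a full 2 units in b* - a real change:
print(round(delta_e(0, 5, 2), 2))  # 5.39

# The ΔE run chart moved by less than 0.4, even though the color moved
# by 2. Variation perpendicular to the bias gets squashed nearly flat,
# which is exactly the false sense of security described above.
```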
The Cumulative Probability Density function is obtuse
The ink has barely had a chance to dry on my recent blog post showing that the cumulative relative frequency plot of ΔE values is just lousy as a tool for SPC.
As alluring and seductive as this plot is,
don't mix CRF with SPC!
It can be a useful predictor of whether the customer is going to pay you for the job, but don't try to infer anything about your process from it. Just don't.
A useful quote misattributed to Mark Twain
Everybody complains about the problems with using statistical process control on color difference data, but nobody does anything about it. I know, you think Mark Twain said that, but he never did. Contrary to popular belief, Mark Twain was not much of a color scientist.
The actual quote from Mark Twain
So now it's time for me to leave the road of "if you use these techniques, I won't invite you over to my place for New Year's Eve karaoke", and move onto "this approach might work a bit better; what N'Sync song would you like to sing?"

Part of the problem with ΔE is that it is an absolute value. It tells you how far, but not which direction. Another part of the problem is that color is inherently three dimensional, so you need some way to combine three numbers, either explicitly or implicitly.
Three easy pieces
Many practitioners have taken the tack of treating the three axes of color separately. They look at ΔL*, Δa*, and Δb* each in isolation. Since these individual differences can be either positive or negative, they at least have a fighting chance of being somewhat normally distributed. In my vast experience, when a color process is in control, the variations of ΔL*, Δa*, and Δb* are not far from being normal. I briefly glanced at one data set, and somebody told me something or other about another data set, so this is pretty authoritative.
Let's take an example of real data. The scatter plot below shows the a*b* values of yellow during a production run. These are the a*b* values of two solid yellow patches from each of 102 newspaper printers around the US. This is the data that went into the CGATS TR 002 characterization data set. The red bars are upper and lower control limits for a* and for b*, each computed as the mean, plus and minus three standard deviation units. This presents us with a nice little control limit box.
Scatterplot of a*b* values of solid yellow patches on 102 printers
There is a lot of variation in this data. There is probably more than most color geeks are used to seeing. Why so much? First off, this is newspaper printing, which is on an inexpensive stock, so even within a given printing plant, the variation is fairly high. Second, the printing was done at 102 different printing plants, with little more guidance than "run your press normally, and try to hit these L*a*b* values".
The variation is much bigger in b* than in a*, by a factor of about 4.5. A directionality like this is to be expected when the variation in color is largely due to a single factor. In this case, that factor is the amount of yellow ink that got squished onto the substrate, and it causes the scatterplot to look a lot like a fat version of the ideal ink trajectory. Note that often, the direction of the scatter is toward and away from neutral gray.
This is actually a great example of SPC. If we draw control limits at 3 standard deviation units away from the mean, then there is roughly a 1 in 370 chance that normally distributed data will fall outside those limits. There are 204 data points in this set, so we would expect something like one data point, at most, outside any pair of limits. We got four, which is a bit odd. And the four points are tightly clustered, which is even odder. This kicks in the SPC red flag: it is likely that these four points represent what Deming called "special cause".
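For the skeptics, the tail probability behind those 3 sigma limits - and just how odd four outliers would be - can be checked with a few lines of Python and the error function:

```python
import math

# Two-tailed probability of a normal value landing outside +/- 3 sigma:
p_outside = 1 - math.erf(3 / math.sqrt(2))
print(round(1 / p_outside))  # about 370

# Expected number of the 204 yellow patches outside one pair of limits:
lam = 204 * p_outside
print(round(lam, 2))  # about 0.55 - "maybe one"

# Chance of seeing four or more, treating the count as Poisson:
p_four_plus = 1 - sum(math.exp(-lam) * lam**k / math.factorial(k)
                      for k in range(4))
print(round(p_four_plus, 4))  # about 0.0025 - grounds for a red flag
```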
I had a look at the source of the data points. Remember when I said that all the printers were in the US? I lied. It turns out that there were two newspaper printers from India, each contributing two solid yellow patches. All the rest of the printers were from either the US or Canada. I think it is a reasonable guess that there is a slightly different standard formulation for yellow ink in that region of the world. It's not necessarily wrong, it's just different.
I'm gonna call this a triumph of SPC! All I did was look at the numbers and I determined that something was fishy. I didn't know exactly what was fishy, but SPC clued me in that I needed to go look.
Two little hitches
I made a comment a few paragraphs ago, and I haven't gotten any emails from anyone complaining about some points that I blissfully neglected. Here is the comment: "If we draw control limits at 3 standard deviation units away from the mean, then there is roughly a 1 in 370 chance that normally distributed data will fall outside those limits." There are two subtle issues with this. One makes us over-vigilant, and the other makes us under-vigilant.
Two little hitches with the approach, one up and one down
I absentmindedly forgot to mention that there are two sets of limits in our experiment. There is roughly a 1 in 370 chance of random data wandering outside of one set of limits, and a 1 in 370 chance of random data wandering outside of the other set of limits. If we assume that a* and b* are uncorrelated, then this strategy will give us about a 2 in 370 chance of accidentally heading off to investigate random data that is doing nothing more than random data does. Bear in mind that we have only been considering a* and b* - we should also look at L*. So, if we set the control limits to three standard deviation units, then we have about a 3 in 370 chance of flagging random data.
So, that's the first hitch. It's not huge, and you could argue that it is largely academic. The value of "three standard deviation units" is somewhat arbitrary. Why not 2.8 or 3.4? The selection of that number has to do with how much tolerance we have for wasting time looking for spurious problems. So we could correct this minor issue by adjusting the cutoff to about 3.3 standard deviation units. Not a big problem.
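Solving for that adjusted cutoff is a quick numerical exercise: find the per-axis limit whose combined three-axis false alarm rate matches the familiar single-axis 3 sigma rate. A sketch, using nothing fancier than bisection:

```python
import math

def p_outside(z):
    """Two-tailed normal probability of landing outside +/- z sigma."""
    return 1 - math.erf(z / math.sqrt(2))

# We want the combined false alarm rate over three independent axes
# to match the familiar single-axis 3 sigma rate (~1 in 370):
per_axis = 1 - (1 - p_outside(3.0)) ** (1 / 3)

# Bisect for the cutoff z that gives that per-axis rate:
lo, hi = 3.0, 4.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p_outside(mid) > per_axis:
        lo = mid
    else:
        hi = mid

print(round(lo, 1))  # about 3.3 standard deviation units
```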
The second hitch is that we are perhaps being a bit too tolerant of cases where two or more of the values (L*, a*, and b*) are close to the limit. The scatter of data kinda looks like an ellipsoid, so if we call the control limits a box, we are not being suspicious enough of values near the corners. These values that we should be a bit more leery of are shown in pink below. For three dimensional data, the corners we should be flagging amount to nearly half the volume of the box.
We need to be very suspicious of intruders in the pink area
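That "nearly half" is not hyperbole. The fraction of the box lying outside the inscribed ellipsoid works out the same regardless of the box's proportions:

```python
import math

# Use a box with half-widths of 1 on every axis; the half-widths
# cancel out of the ratio anyway.
box_volume = 2 * 2 * 2
ellipsoid_volume = (4 / 3) * math.pi  # the ellipsoid inscribed in the box

corner_fraction = 1 - ellipsoid_volume / box_volume
print(round(corner_fraction, 3))  # 0.476 - very nearly half the box
```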
The math actually exists to fix this second little hitch, and it has been touched on in previous blog posts, in particular this blog post on SPC of color difference data. This math also fixes the problem of the first hitch. Basically, if you scale the data axes appropriately, the squared distance from the mean follows a chi-squared distribution with three degrees of freedom.
(If you understood that last sentence, then bully for you. If you didn't get it, then please rest assured that there are at least a couple dozen color science uber-geeks who are shaking their head right now, saying, "Oh. Yeah. Makes sense.")
So, in this example of yellow ink, we would take the deviations in L*, a*, and b*, normalize them in terms of the standard deviations in each of the directions, and then add them up according to Pythagoras. This sum of squares is then compared against a critical value pulled from a table of chi-squared values to determine whether the data point is an outlier. Et voilà, or as they say in Great Britain, Bob's your uncle.
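Here is that recipe as a Python sketch, with hypothetical deviations and standard deviations, and a home-rolled chi-squared tail so no stats library is needed. Note that this still assumes the three axes are uncorrelated - a detail that comes back to bite us shortly:

```python
import math

def chi2_3df_tail(x):
    """P(X > x) for a chi-squared variable with three degrees of
    freedom (closed form, so no stats library is required)."""
    return 1 - (math.erf(math.sqrt(x / 2))
                - math.sqrt(2 / math.pi) * math.sqrt(x) * math.exp(-x / 2))

# Cutoff with the same 0.27% tail as a single +/- 3 sigma limit,
# found by bisection:
lo, hi = 5.0, 30.0
for _ in range(60):
    mid = (lo + hi) / 2
    if chi2_3df_tail(mid) > 0.0027:
        lo = mid
    else:
        hi = mid
cutoff = lo  # about 14.2

def is_outlier(dL, da, db, sL, sa, sb):
    """Normalize each deviation by its own standard deviation, sum
    the squares per Pythagoras, compare to the chi-squared cutoff.
    Assumes the three axes are uncorrelated."""
    d2 = (dL / sL) ** 2 + (da / sa) ** 2 + (db / sb) ** 2
    return d2 > cutoff

# Hypothetical yellow patches (all numbers made up for illustration):
print(is_outlier(1.0, 0.5, 4.0, sL=0.8, sa=0.6, sb=2.7))  # False - ordinary
print(is_outlier(2.5, 1.8, 9.0, sL=0.8, sa=0.6, sb=2.7))  # True - go look
```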
Is it worth doing this? I generally live in the void between (on the one side) practical people like engineers and like people who get product out the door, and (on the other side) academics who get their jollies doing academic stuff. That rarefied space gives me a unique perspective in answering that question. My answer is a firm "Ehhh.... ". Those who are watching me type can see my shoulders shrugging. So far, maybe, maybe not. But the next section will clear this up.
Are we done?
It would seem that we have solved the basic problem of SPC of color difference data, but is Bob really your uncle? It turns out that yellow was a cleverly chosen example that just happens to work out well. There is a not-so-minor hitch that rears its ugly head with most other data.
The image below is data from that same data set. It is the L* and b* values of the solid cyan patches. Note that, in this view, there is an obvious correlation between the L* and b* variation. (The correlation coefficient is 0.791.) This reflects simple physics: as you put more cyan ink on the paper, the color gets both more saturated and darker.
This image is not supposed to look like a dreidel
Once again, the upper and lower control limit box has been marked off in dotted red lines. According to the method which has been described so far, everything within this box will be considered "normal variation". (Assuming the a* value is also within its limits.)

But here, things get pretty icky. The upper left and lower right corners are really, really unlikely to appear under normal variation. I mean really, really, really. Those corner points are around 10 standard deviation units (in the 45 degree direction) from the mean. Did I say really, really, really, really unlikely? Like, I dunno, one chance in about 100 sextillion? I mean, the chance of me getting a phone call from Albert Munsell while giving a keynote at the Munsell Centennial Symposium is greater than that.
Summarizing, the method that has been discussed - applying standard one dimensional SPC tools to each of the L*, a*, and b* axes individually - can fail to flag data points that are far outside the normal variability of color. This happens whenever there is a correlation in the variation between any two of the axes. I have demonstrated with real data that such correlation is not unlikely; in fact, it is to be expected when a single pigment is used to create color at hue angles of 45, 135, 225, or 315 degrees.
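To put a number on how badly the box can fail, here is a toy calculation with cyan-flavored numbers. The standard deviations and correlation are made up, but they are in the spirit of the data above; the distance measure is the Mahalanobis distance, which is the correlation-aware cousin of the normalized Pythagorean sum:

```python
import math

# Cyan-flavored toy numbers: L* and b* standard deviations of 1.5,
# correlation 0.79 (all made up for illustration).
sL = sb = 1.5
rho = 0.79

# A point in the "upper left" corner: L* high, b* low, each coordinate
# just inside its own 3 sigma limit...
dL, db = 2.9 * sL, -2.9 * sb

# ...so axis-by-axis SPC shrugs:
print(abs(dL) < 3 * sL and abs(db) < 3 * sb)  # True: "in control"

# The Mahalanobis distance, which respects the correlation, disagrees:
zL, zb = dL / sL, db / sb
d2 = (zL**2 - 2 * rho * zL * zb + zb**2) / (1 - rho**2)
print(round(math.sqrt(d2), 1))  # about 9 sigma from the mean - wildly improbable
```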
What to do?
In the figure above, I also added an ellipse as an alternative control limit. All points within the ellipse are considered normal variation, and all points outside the ellipse are an indication of something weird happening. I would argue that the elliptical control limit is far more accurate than the box.
If we rotate the axes in the L*b* scatter plot of cyan by 44 degrees counter-clockwise, we get an ellipse that is aligned with the new horizontal and vertical axes. When we look at the points in this new coordinate system, we have rekindled the beauty that we saw in the yellow scatter plot. We can meaningfully look at the variation in the horizontal direction separately from the variation in the vertical direction. From there, we can do the normalization that I spoke of before and compare against the chi-squared distribution. This gives us the elliptical control limit shown below. (Or ellipsoidal, if we take in all three dimensions.)
It all makes sense if we rotate our head half-way on its side
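Here is a sketch of that rotation on synthetic stand-in data (the real cyan measurements aren't reproduced here). The rotation angle comes straight from the variances and the covariance, and rotating by it kills the correlation:

```python
import math, random

random.seed(2)

# Synthetic stand-in for the cyan L*/b* data: b* tracks L* (more ink
# means darker and more saturated), plus some independent scatter.
L = [random.gauss(0, 2.0) for _ in range(5000)]
b = [x + random.gauss(0, 0.8) for x in L]

def covariance(u, v):
    mu = sum(u) / len(u)
    mv = sum(v) / len(v)
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (len(u) - 1)

# The angle that lines the ellipse up with the coordinate axes:
theta = 0.5 * math.atan2(2 * covariance(L, b),
                         covariance(L, L) - covariance(b, b))

# Rotate every point into the new coordinate system:
c, s = math.cos(theta), math.sin(theta)
u = [c * x + s * y for x, y in zip(L, b)]
v = [c * y - s * x for x, y in zip(L, b)]

print(round(math.degrees(theta)))    # in the neighborhood of 45 degrees
print(abs(covariance(u, v)) < 1e-6)  # True - the correlation is gone
```

In the rotated coordinates, the horizontal and vertical variations can be treated separately, exactly as with the well-behaved yellow data.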
This technique alleviates hitches #1 and #2, and also fixes the not-so-minor hitch #3. But this all depends on our ability to come up with a way to rotate the L*a*b* coordinates around so that the ellipsoid lies along the axes. Not a simple problem, but there is a technique that can tell us how to rotate the coordinates so that the individual components are all uncorrelated.
1. thanks for the Munsell Centennial plug!
2. isn't it curious that unique yellow lies so close to the b* axis? Coincidence?
3. I can't wait for the PCA explanation.
And since all of you want to know, the guitar in my profile pic is a 1967 Fender Coronado XII. A hollow body electric 12 string. One of my favorites that I do not get to play that often.
Hurrah for ellipsoids! Way back in 1976, when I knew nothing of statistics (still don't) I found a paper or a book (can't recall now) describing these calculations, and used it to prove that my design for a color matrix chip for TV sets would not vary excessively from unit to unit, even though it used horribly variable bipolar PNP transistors in part of the circuit to allow RGB outputs that had the full range of zero volts to the power supply rail.
Thanks for the extra info, Wayne. I know that I am on a sparsely traveled path, but it helps to know that someone else has been down the path before me!