Wednesday, August 21, 2013

What color is human skin?

When I was growing up, there was only one answer to the question "What color is human skin?" At the time I was born, Binney and Smith [1] had one crayon (called "flesh") for skin tone. Then in 1962, the company renamed the "flesh" crayon "peach" [2]. Then there was the Civil Rights Act of 1964. Finally, in 1992, Binney and Smith introduced their Multicultural Crayon set. This set includes six crayons that represent an actual range of skin tones: apricot, burnt sienna, mahogany, peach, sepia, and tan. White and black are included for mixing. Or maybe they were added because of the "eight crayons in a box" rule. Either way, no one really has pure white or pure black skin.
Photo egregiously stolen from Dick Blick website

The Humanae Project

I was excited this week when I found out about the Humanae Project. A lady from Brazil by the name of Angelica Dass has taken on the task of collecting images of a zillion people [3] to capture their skin tones. A small area of the image of each face was averaged and Photoshopped in as the background.
Six of the hundreds of people who have volunteered for the project

How could I resist doing a little math on these images and blogging about it?

Visual look

At the time I looked, there were 420 images. I pulled the RGB values from the background of each of these images and went at it. The first look is a montage of all the colors. My impression? Each little box looks like a skin tone, but it does seem a bit light on the dark; that is, the darker skin tones seem under-represented.

A collection of 420 real skin tones

Here is another look at this data. In the graph below, each dot stands for one person's skin tone. The horizontal position of the dot gives the red value of the color, and the vertical position gives the green value. One thing that the graph shows is that there is a very definite set of colors that qualify as skin tones [4].

Red and green values of all skin tones in the collection

When I look at these graphs, I also see that, for the most part, the collection of skin tone colors forms a nice line. Well, a nice crooked line. And maybe the line is kinda fat. Still, this says to me that you could make a pretty decent approximation of all the skin tones by assigning each of the skin tones to a single spot on that crooked, kinda fat line. To put this a different way, if you were to collect the 420 people in a room and ask them to line up in order from darkest skin to lightest skin, they would make a more or less smooth transition.

I have done just that in the image below. Each person has a narrow vertical strip, and all the strips are arranged in order according to the average of the red, green, and blue values.

Another collection of 420 skin tones
"Everyone line up in order from darkest to lightest!"

If you stand back and defocus your eyes a bit, this looks smooth. This shows that skin tones form a line. But, if you look closely, many of the individual strips can be seen, contrasting with their neighbors as being perhaps redder or perhaps greener. This shows that the line is kinda fat. People with the same average lightness of skin vary a little bit in the hue of their skin.
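For anyone who wants to play along at home, here is roughly how that strip image can be put together. This is just a minimal sketch: the handful of RGB rows is a stand-in for the 420 real values pulled from the Humanae images.

    import numpy as np
    import matplotlib.pyplot as plt

    # skin_rgb: one row of R, G, B (0-255) per person. A few made-up
    # rows stand in here for the 420 real values.
    skin_rgb = np.array([[224, 193, 178], [168, 122, 97], [237, 217, 205],
                         [196, 160, 140], [130, 90, 70]])

    # "Everyone line up in order from darkest to lightest!"
    # Sort by the average of the red, green, and blue values.
    sorted_rgb = skin_rgb[np.argsort(skin_rgb.mean(axis=1))]

    # Turn each color into a narrow vertical strip and stack them side by side.
    height, strip_width = 100, 4
    image = np.repeat(sorted_rgb[np.newaxis, :, :], height, axis=0)
    image = np.repeat(image, strip_width, axis=1)

    plt.imshow(image.astype(np.uint8))
    plt.axis('off')
    plt.show()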

All of this is consistent with a paper that is a lot more rigorous than this blog post [5]. Their conclusion is that one number is enough to characterize around 95% of the information in the spectra of skin. Well, I didn't need none of their sophisticated "principal component analysis" to show the same thing, did I? Show-offs.

Motivation for the rest of this blog

I am about to dive into the deep end of the mathogeek pool. Before I do, let me provide some idea of the various applications--to explain why it might be worthwhile to swim in those slide-rule-infested waters.

Application #1 - Suppose someone wanted to generate skin tones of various ethnicities, maybe for a game or to create random avatars. Having a simple parametric equation that describes a wide range of plausible skin tones would be a great way to do that. The word "parametric" in the previous sentence means that you could randomly select a parameter (call it "k") that would be in some range of values (I dunno, maybe from -3 to +3), and the equation would give you an RGB triplet that would be a plausible skin tone.

Application #2 - Suppose someone wanted to find faces in an image. One of several necessary criteria for a pixel to be a face pixel is that the RGB values must belong to the club of plausible skin tones. So, it would be cool to have some sort of equation that describes the set of plausible skin tones, so that any given pixel can be tested for membership in that club.

Application #3 - Suppose someone wanted to characterize someone's skin tone. This characterization might be used to recommend makeup, or to categorize someone as having "winter" coloring. If there were a magic equation with a parameter, it would be possible to find the particular value of that parameter which best characterizes the skin for that person. That parameter would be the characterization of that person's skin tone.

These applications all have to do directly with skin tones. I have a few other applications in mind that would use the technique that I describe below.

Application #4 - Suppose some math geek (or more likely, a stats geek) needs a way to describe a set of multi-dimensional data--much like one would use mean and standard deviation to describe single-dimensional data. I describe a method below to determine the ellipse (or hyper-ellipse) that describes multi-dimensional data in a statistical way. Since I am not aware of a word for an ellipse or hyper-ellipse that serves as a proxy for a large amount of data, I will invent the word: proxellipse. The proxellipse is an extension of standard deviation to multiple dimensions.

Application #5 - Suppose some other geek (maybe a scientist of some kind) wanted to display a scatter plot of two-dimensional data. If the scientist had twenty points in that plot, it would give some appearance of the amount of spread. But, if the scientist had two hundred points, the scatter plot would give the impression that the spread of data is far broader. The analysis below is an alternative. By displaying a scatter plot along with the proxellipse, one won't be misled by the crowd effect. The proxellipse would show the boundary in which a certain percent of the data will likely fall (with all the normal assumptions about whether the data is normally distributed).

Statistical look

The graph below is the same data as the red/green plot above, only this time, I am looking at red versus blue values. This perspective shows a bit more crookedness. It looks like there are two separate populations, those with darker skin and those with lighter skin, and that two different lines are needed to describe the two sets.

Red and blue values of same data, with a discontinuity highlighted

So, there is a crooked line in RGB space that defines skin tones. For the sake of simplicity, I am going to start by looking at the statistics of just the brighter dots (where R > 180, G > 120, and B > 110). This reduced the set from 420 data points to 404. To be clear, I am quite literally discriminating here on the basis of skin color, excluding the darker skin tones. I apologize, but the darker tones belong to a different statistical distribution. 

Now it is my turn to show off some really golly-whiz-bang math. This is a green/blue view of the segregated data points, with red lines showing the three axes of an ellipse. This ellipse is the proxellipse (with coverage of 3), which is an ellipse with essentially the same shape, size, and orientation as the data.

Green and blue values of skin tones, with the axes of an ellipse shown in red

Now I'll say a bit more about the proxellipse. It's a statistical thing. Essentially, I have extended the idea of standard deviation to three dimensions. The size of this proxellipse is three standard deviation units in each of the directions. Now, I assume that I am not the first to discover the technique, but it would appear that this technique is not well known in the color science community [6]. Or maybe it's just a dumb idea?

Getting back to the kinda fatness of the line. The degree of kinda fatness can now be quantified. To demonstrate that the data points here are very close to being a line: the length of the major axis is 28.8 gray values, while the two minor axes are 5.3 and 3.3 gray values. For those who don't recall, 28.8 is much bigger than 5.3 and 3.3.
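For the geeks who want to follow along, here is a minimal sketch of one way to compute numbers like these, assuming the proxellipse axes come from the eigenvectors of the variance-covariance matrix and that a coverage of 3 makes each axis three standard deviations long. I threw in a membership test in the spirit of Application #2; the Mahalanobis distance, the function names, and the made-up data are just my choices for illustration.

    import numpy as np

    # data: one row of R, G, B per person (the lighter-skin subset).
    # A few made-up rows stand in for the 404 real data points.
    data = np.array([[224, 193, 178], [230, 200, 185], [215, 180, 165],
                     [240, 215, 200], [220, 188, 170]], dtype=float)

    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)        # 3x3 variance-covariance matrix

    # Eigenvectors give the directions of the proxellipse axes; the square
    # roots of the eigenvalues are the standard deviations along those axes.
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis_lengths = 3.0 * np.sqrt(eigvals)   # coverage of 3 standard deviations

    def is_plausible_skin(rgb, coverage=3.0):
        """Application #2: is this pixel inside the proxellipse?"""
        d = np.asarray(rgb, dtype=float) - mean
        mahalanobis = np.sqrt(d @ np.linalg.inv(cov) @ d)
        return mahalanobis <= coverage

    print(axis_lengths)                     # the three proxellipse axis lengths
    print(is_plausible_skin([225, 195, 180]))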

An algebraic look

Let's just pretend that someone wanted to come up with an equation that could be used to generate a sequence of reasonable skin tones. I called that Application #1. One approach would be to apply linear regression to any of the graphs above. One might, for example, determine a best fit function for G as a function of R, and another regression would determine the best fit of B as a function of R. In this way, the value of R is a parameter that can be used to determine the other two color coordinates.

This may work, but I have some reservations about this technique. First, such a regression treats R as an independent variable and G as a dependent one, when really the two should be on equal footing. It seems like something must go wrong. (OK, that's just a philosophical argument.)

Second, the choice of expressing G as a function of R, for example, is a bit problematic due to the relatively steep slope. A small change in R will cause a large change in G. And (the important part) a small change in the random data will cause a large change in the slope that is calculated through regression. That's a bummer.

Third, I have a little-known fact about statistics. Garden variety linear regression starts with the assumption that there is no noise in the independent variable. That is to say, it is based on the assumption that we know the R values perfectly, and any deviation from a straight line is strictly because of random variation in G. Now here is my little-known fact: if you add random noise to both of the variables in a linear regression, the slope of the regression line will move toward zero. The noise in the independent variable adds a bias. (Statisticians call this effect regression dilution.)
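Don't take my word for it. Here is a quick sketch with made-up data: the true slope is 2.0, and garden variety least squares recovers it as long as only the dependent variable is noisy. Add noise to the independent variable and the fitted slope shrinks toward zero.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 1000)
    y = 2.0 * x + rng.normal(0, 1, x.size)        # noise in the dependent variable only

    slope_clean = np.polyfit(x, y, 1)[0]          # comes out very close to 2.0

    x_noisy = x + rng.normal(0, 2, x.size)        # now the independent variable is noisy too
    slope_biased = np.polyfit(x_noisy, y, 1)[0]   # comes out well under 2.0 (roughly 1.4)

    print(slope_clean, slope_biased)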

So... how about another approach? Using my proxellipse analysis, I arrived at the following equations for the RGB values of the lighter skin tones:

     R = 224.3 + 9.6 k
     G = 193.1 + 17.0 k
     B = 177.6 + 21.0 k

When k is allowed to go from -3 to 3, this will provide RGB triplets for reasonable lighter skin tones. The following equations will give reasonable RGB triplets for darker skin tones.

     R = 168.8 + 38.5 k
     G = 122.5 + 32.1 k
     B = 96.7 + 26.3 k

I apologize again... I have created separate but equal equations. <sigh>
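For anyone who wants to put these equations to work (Applications #1 and #3), here is a minimal sketch. The generator just plugs a value of k into the lines above; the best_k function is one reasonable way to do the fitting, a least-squares projection of an RGB point onto the line. The function names and the clipping to the 0-255 range are my own choices, nothing sacred.

    import numpy as np

    # Offsets and slopes copied from the equations above.
    LIGHTER = (np.array([224.3, 193.1, 177.6]), np.array([9.6, 17.0, 21.0]))
    DARKER  = (np.array([168.8, 122.5,  96.7]), np.array([38.5, 32.1, 26.3]))

    def skin_tone(k, population=LIGHTER):
        """Application #1: generate an RGB triplet from the parameter k."""
        offset, slope = population
        return np.clip(offset + slope * k, 0, 255)

    def best_k(rgb, population=LIGHTER):
        """Application #3: the k that best characterizes a given RGB
        (least-squares projection of the point onto the line)."""
        offset, slope = population
        d = np.asarray(rgb, dtype=float) - offset
        return float(d @ slope / (slope @ slope))

    # A random plausible lighter skin tone, with k drawn from -3 to 3.
    print(skin_tone(np.random.uniform(-3, 3)))
    print(best_k([210, 170, 150]))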

Caveats

Science?
As a scientific experiment, this is perhaps not controlled enough to be accepted into a peer reviewed journal. Now, I have every reason to believe that Angelica has controlled conditions as best she can. But, she is using a camera, and cameras are not color measurement devices. As far as I know, she has not provided an image with a Munsell color checker card that would serve to calibrate the colors to real units. There isn't a statement about the type of camera or the settings. In particular, there is no statement of the gamma setting of the camera. (Gamma is a setting that will increase the brightness of midtones in order to give the picture a better appearance. Most cameras have this enabled by default, and it gets in the way of accurate color measurement.)

But these are not my big bugaboo. RGB cameras (at least almost every one on the market today) do not see color the same way we do. The red, green, and blue filters in a camera do not give the camera the same spectral response as the three cones in the eye. I investigated this in my paper Why do Color Transforms Work?

Also, the RGB response differs from camera to camera, so as they say, results may vary.

Ideally, the purist in me would like to see spectral data on everyone's skin, but the purist in me is too darn mired in the details to ever get a blog out. And besides, the images on her website do all look like reasonable skin tones on both of the computer monitors that I routinely use.

Skin blemishes and goniophotometry and umbrophotometry

This analysis is rather simplistic in that it associates one single RGB value with the color of a person's skin. That's just plain silly, for three reasons. First, skin is not uniform in color, especially as one gets older. Second, the color of the skin (and just about anything for that matter) depends on the angle that it is illuminated from and the angle from which it is viewed. If the lighting angle, surface orientation, and the camera are all at the right angle, you can see a very white specular reflection on a surface. This effect is the subject of goniophotometry, the measurement of light as a function of angle.

The third effect is that there are generally shadows on a person's face. Even when illuminated diffusely, there still may be (for example) an area under the nose [7] that is darker because of the shadow. Clearly this effect is accentuated in some people. I have just this moment coined the word "umbrophotometry" to characterize the measure of this effect. "Umbra" means shadow, and is the root of the word "umbrella".

--------------------------------------------
[1] There used to be a company named Binney and Smith. In 2007, the company name was changed to Crayola. Clearly if I talk about stuff they have done after 2007, I should refer to them as Crayola, but how do I refer to the company back before it changed its name? Is it Binney and Smith, or Crayola? And when I want to talk about the guy who won the 1985 Grammy for the Album of the Year, do I refer to him as Prince, the Artist Formerly Known as Prince, or the Artist Formerly Known as the Artist Formerly Known as Prince? [8]

[2] Personally, I don't think that "peach" is quite the right name for that crayon. Maybe I'm wrong, but I think that peaches are more of an orange color than the crayon. I don't have a better suggestion, mind you. I just like to kvetch.

[3] I could have said that she is taking pictures of a brazillion people. Give me credit for not going for that pun!

[4] I am being a bit loose here, since I have only shown the view of R and G. The other views are pretty similar, though.

[5] Sun and Fairchild, Statistical Characterization of Face Spectral Reflectances and Its Application to Human Portraiture Spectral Estimation, Journal of Imaging Science and Technology, Volume 46, 2002
http://www.cis.rit.edu/fairchild/PDFs/PAP14.pdf

[6] ASTM E2214 (2002), Standard Practice for Specifying and Verifying the Performance of Color-Measuring Instruments. Section 6.1.1 anticipates my technique. They are discussing a way to evaluate the variability of a collection of color measurements and state:

"Since color is a multidimensional property of a material, repeatability should be reported in terms of the multidimensional standard deviations, derived from the square root of the absolute value of the variance–covariance matrix."

Egg-headed stuff, indeed. If only they knew my method when they wrote that!

[7] George Carlin informed a generation of people that this part of the body is called the philtrum.

[8] This was a trick question. The correct answer is "Lionel Richie". Prince's album Purple Rain was nominated in 1985, but Lionel Richie's Can't Slow Down won the Grammy. 

7 comments:

  1. Love the writeup. I'm using these functions for procedurally generating people and skin tones. Two questions: 1) For the second set of equations, what's the best range for 'k'? I'm using -3 to 1 (as higher than 2.5 soon goes above an R of 255). 2) Any ideas for a good range of cheek colors or hair colors correlated to skin colors?

  2. First, a caveat... There is not enough data on black skin to make the second equation statistically significant, so take that with a grain of salt. Last I checked, there were a whole lot more pictures on the Humanae site, so re-running this analysis would give more reliable data on the second equation.

    As for the range of k... if you use -3.5 < k < 0.5 for the first equation, and -3.5 < k < 3.0 for the second equation, you get lines that come close to intersecting... you have a nice continuum.

    I think that's a good starting point. It depends on how sophisticated you need it to be.

    I don't have any data to make any guesses about cheek or hair colors. I think the same collection of images could be used, with a little intelligence built in to find cheeks and hair.

    Drop me an email, and we can chat in more depth.

  3. Thanks for this! I needed to generate random plausible skin tones and your equations saved me a lot of time.

  4. It's a statistical analysis, not an experiment. It works fine for practical uses, though obviously there's always room to make it a statistically stronger result. It's also worth noting that professional photographers often rely on a white balance card. In white lighting, the generated colours are quite accurate to life.

    While it may be a simple model, it is effective. It's more effective than generating or borrowing one of the numerous skin tone palettes artists use. Rather than a palette, this model creates a spectrum. That's a very powerful conclusion.

  5. Great post. I wonder what the full 3D RGB scatterplot looks like?

  6. I attempted to do something similar with the Humanae data. I thought it would be easy to get the RGB values based on the Pantone numbers. It wasn't. When I searched using the numbers from each photo, I found wildly different colors. Over a couple of years, I wrote to Ms Dass multiple times asking which Pantone color book she was using. I didn't hear back from her. I checked with Pantone. They said the numbers weren't theirs. I wrote to her again, explained my background in clinical studies, and said that if I looked at the project as a clinical study I would suspect the data was fraudulent. That got her attention. She sent a terse message stating that she is an artist and I was being passive aggressive. She offered no explanation for why her "Pantone" numbers didn't match with any Pantone numbers.
