Wednesday, April 23, 2014

TAGA 2014 in review

When I started in the printing industry, we didn't have all those fancy colors for ink. All we had was white. Decorative packaging was white, but that was ok cuz all we had to sell was rocks. Books were printed with white ink, which, at the time, was good enough. No one knew how to read. Hadn't been invented yet.

And then someone had to go and invent white paper. We were screwed and we knew it. Our response? There were, of course, the white paper deniers who eventually invented red ink. Those of us who survived the great "white ink is dead" upheaval innovated, and this is how black ink was invented.

White ink is dead? Now what are we gonna do!?!?

As I recall, we were burning the midnight oil trying to find innovations that would eclipse the disruptive technology of white paper. Some were working on creating whiter ink. Some worked on ways to package rocks that didn't require packaging. By the way, I don't think anyone ever made money on that one. They found it hard to sell a packaging material that wasn't there.

But someone had their midnight oil burning mechanism sitting too close to the window. The next morning, that same someone noticed a big smudge of serendipity on the window. "Eureka! We have discovered black ink! Let's go buy a million lanterns and a million windows so we can go into production!"

You know the rest of the story. Gutenberg and all that. But what you probably didn't know is that TAGA -- the Technical Association of the Graphic Arts -- was there the whole way, providing a forum for the technical leaders of the time to discuss the future of printing, the thixotropic properties of carbon black, and modern techniques of rock packaging.

At the time, black ink was a solution looking for a problem to solve. Very few people know this, but it was at TAGA that the people who were working to find the most efficient means for putting black ink on paper met the people who were looking for a way to record the spoken word. TAGA provided the spark that created the written word.
The 2014 TAGA conference was a continuation of this ancient tradition. Here are some highlights. I make no apologies for all the cool stuff that I missed because I was hung over.

Daniel Dejan of Sappi Paper gave one of the keynotes. Brain scans of people reading tablets and of people reading physical paper are actually different. People are more likely to read a book linearly, but read a tablet with "skim/click/jump around". Studies have shown that people tend to trust stuff committed to paper more than stuff on the web, and also have better retention for stuff on paper. (Please print this blog post out before reading.) These are reasons to use print.

How can you target advertising for all these avenues?
(from Dejan's talk)

Ian Hole of Esko gave another keynote. There have been some way cool packaging concepts recently. Bombay Sapphire gin used an electroluminescent ink to make the bottle light up in blue. Absolut vodka used a blue thermochromic ink to indicate the vodka was chilled to the proper temperature. Coke has successfully marketed cans with absolutely no words on the can. Smart wine corks can track the temperatures that a bottle has gone through. Smart packaging of medicines can record when a patient is actually taking a medicine. Pringles delivered their potato chip tubes with the logo “these are not tennis balls”. Where? At Wimbledon.

Show stopper from Ian Hole's presentation

Tim Claypole of Swansea University gave yet another of the keynotes. 98% of printed electronics is done with screen printing. Flexo is another technology that could work, but solvents are an issue. The only application of printed electronics that has been successful (from a business standpoint) to date is the printing of blood glucose sensors, which has been very successful indeed. One university group created a 3D printer for chocolates. Look to the Korean Olympics for applications of large area printed displays.

As if three keynotes were not enough, Paul Cousineau and Mark Bohan collaborated to give the fourth keynote. The most important thing I learned is that wide format printing on shrink wrap is being used to make customized coffins.

A brilliant scientist with a huge ego gave a talk about intra-instrument agreement between spectrophotometers. His message is that standardizing one spectro to match another is a tricky business. Due to actual physical differences in the spectros, standardization can make inter-instrument agreement worse. Look for this self-important dude to give a blog summary of the paper soon.

Tony Stanton (Carnegie Mellon) looked at color uniformity of various types of presses. Variability for ink jet: 0.2 ΔE00, for litho: 0.5 ΔE00, for electrophotographic: 1.0 ΔE00.

Soren Jensen (Danish School of Media and Journalism) spoke on an approach to meet the color targets of ISO 12647-2. This standard has target L*a*b* values for CMY solids and tolerances that they must be within. It also has target values for the RGB overprints, but does not have tolerances for these. Soren worked out a means for determining the densities that are a) within tolerance for CMY, and b) that minimize the overall error in hitting the RGB targets. He's my buddy cuz he used Beer's law.

Perifarbe plots from Soren's presentation

Bruce Leigh-Myers (RIT) looked at inter-instrument agreement of M1 spectros measuring paper with fluorescent whitening agents. One instrument in his corral was way different.

Ragy Isaac (Goss) gave a tutorial on statistical methods, driving home the point that you shouldn't make important decisions without doing a little bit of statistics. His talk involved paper helicopters.

Xiaoyin Rong (Cal Poly) discussed screen printing of electronics. Cell phone touch screens are a potential application for printed electronics. To do this, you want a tiny silver grid laid down on glass. The trace width should be around 2 µm. Current limitations are about 30 µm for gravure and flexo, and 40 µm for screen printing. She looked at using a solid mesh screen as opposed to a woven wire screen. She is still working on this.

Gary Field (Cal Poly) investigated the difference between printing in the order KCMY versus CMYK. Previous studies would run in one order, bring the press down and swap plates, and then run again. The reliability of this sort of test is questionable. Was the change due to changing conditions on the press, or due to the change in the print order? He got around this limitation by running on a five color press with K in the first and last positions. The conclusion is that CMYK produces a bigger gamut. The Dmax is higher, the print is glossier, and the four color solid is closer to neutral.

Martin Habekost (Ryerson University) looked at the print quality (resolution) that is produced by various RIP software. Surprisingly (or maybe not), all RIPs are not created equal. Some RIPs produce plates that result in higher resolution on the printed page.

Doug Bousfield (University of Maine) looked at the tack force that ink applies to paper. One of his comments was that a tack force meter might not be really measuring what we think it’s measuring. Heresy. Pure heresy.

Michael Carlisle (ArjoWiggins creative papers) spoke on the development of a special paper for printed electronics. Printed electronics are typically done on plastic substrates, which then need to be laminated to paper. This special paper is especially smooth, which is what is needed to make those 5 micron traces.

Really cool SEM images from Michael's presentation

Sasha Pekarovicova (Western Michigan University) talked about the use of soybean oil to take the ink out of recycled paper. Bet you didn't stop to think about where the ink goes when you put the newspaper into the recycle bin!

Joerg Daehnhardt (Heidelberg) spoke about fanout on extra-wide sheet-fed presses. The cause is different from the fanout that we all know and love on web presses, but the results are the same. It is caused by the grippers pulling laterally, and by uneven inking changing the elasticity of the paper. They use a system of cameras to determine the misregister and actually stretch the plate to compensate.

Don Duncan (Wikoff Color) gave a talk on migration of icky stuff from the printing process into the stuff inside the packaging. This is a hot political issue, and Nestlé and Mueslix have found it can be a big source of bad PR. Nestlé developed a list of naughty chemicals that they don’t want used in the print process. The Swiss government rubber stamped this, and the list is on its way to becoming an EU ruling. The list is perhaps a bit stringent, maybe even draconian. It is also a bit of an issue, since Nestlé requires that fluorescent inks be used on the packaging of certain candies, and all fluorescent pigments are on Nestlé's list of forbidden chemicals.

Mandy Wu (Appalachian State University) chronicled a project at her university for the students to offer commercial printing to businesses in the community. They made a point of using green technology.

Thomas Klein (Esko) gave a very informative presentation about high definition flexo plates. The normal flexo plate has flat-topped cells. This causes problems with highlights, since you can’t reliably print below a 12% dot, and also with solids because the solids get mottled. You can deal with the solids by creating a cell that has microcells. This evens out the laydown and gives a big boost in density. At the highlight end, the issue is dealt with by making cells that are rounded at the top.

Nir Mosenson (HP) spoke on an electrophotographic printer, in particular how they managed to scale up their original system to one twice as wide. He provided a very clear description of how this sort of printer works. One thing I found interesting was the use of a rotating set of mirrors mounted together as a hexagonal prism being used to guide the laser beam to charge the cylinder.

Kirk Szymanski (Ricoh) described the process by which one assures that a toner-based printer is made consistent in color. Inline measurement of color is important. Even for a printer like this with very quick startup (compared with web offset) there is a small warm-up time. Typically 7 to 10 sheets are required to provide stable printing. The lasers need to be linearized across the sheet. There are inherent color differences between this process and web offset (and ISO 12647-2) that must be calibrated out. Density is a good start, but tweaking must be done in CIELAB.

Stephen Lapin (PCT Electron Beam) compared electron beam (EB) curing to conventional ink drying. In heatset web offset, the ink is “dried” by flashing off the oils through the use of heat. In UV cured flexo, a photoinitiator is added to the ink. This photoinitiator is a good absorber of UV, so it readily captures the energy from the UV lights. EB curing relies on the fact that all matter will absorb energy from an electron beam. The higher the specific gravity (I hate to say “density”), the more absorption of the energy. EB curing uses a special ink that polymerizes when the energy is absorbed; there are no oils to flash off.

Tim Claypole (Swansea) presented a service that they have started. Newspaper printers will contract with Swansea to rate them. The printer prints a test target, which is measured and graded at Swansea. This service differs from other software packages in that it is an independent evaluation, and in that it compares one printer against another.

Sasha Pekarovicova (Western Michigan University) gave a talk about their experiences in creating a printed electronics capacitor where phthalocyanine ink (commonly called cyan) is used as the dielectric. The pinholing effect was problematic. (If I understand the issue correctly, this limits the breakdown voltage of the capacitor.)

Bruce Leigh-Myers (RIT) described a G7 calibration that was done on a gravure press by his colleague Bob Chung. Typically, a G7 calibration takes a few iterations to dial the press in. On gravure, this is very costly, since it costs an arm and a leg and two tickets to Disneyland to engrave the cylinders. The idea is to do the first press run virtually. The first run is performed by attaching a gravure ICC profile to the P2P target file and reading the values out of the digital image. Simple enough, and it did work. Only one set of cylinders needed to be engraved.

Udi Arieli (EFI) gave an enlightening talk (with many cute videos) about his theory of global optimization. This is the idea that, to be truly efficient, a printing operation should be optimized as a whole rather than one process unit at a time. Concentrating on optimizing just one process unit can hurt the efficiency of another. I have known Udi for years, but had never heard him speak until this conference. I had no idea he was such a smart guy.

I know I missed some good talks, but I was either hung over (as I said), forgot my pen, was signing autographs for a student, or the topic was way over my head. For lack of time, I have omitted reviews of all the students' journals - which I thought were fabulous this year!

Oh... I almost forgot one thing. I am now the VP of papers. That means - talk to me if you want a spot in next year's conference.

Wednesday, April 16, 2014

Stupid piano tricks

Your grandparents may have told you tales of a musical instrument called a "piano". This crude instrument had keys on it, much like today's pianos, but it lacked the ability to produce its own drum track, the ability to sound like a horn section, and could not (in most cases) play itself. How quaint! And if you had one of these old relics, you needed to hire someone every year or so to come to your house and tune the darn thing. I can't see why anyone would want one, really.

Some of the best piano players of all time -
Ray Charles, Elton John, Jerry Lee Lewis, Vladimir Horowitz, Bugs Bunny,
Victor Borge, Doctor John, Fats Waller, 
Linus Van Pelt, Liberace, and Dawg

But just in case you do have one, you could try a little experiment that I just tried on one of the two pianos in my house. Very slowly press down on middle C. If you are careful, you can get the key all the way down without the hammer touching the string, that is, you can do it silently. The cool thing is that while you hold the key down, the damper will be released from the string so that it is free to vibrate.

Now comes the truly amazing part. While you are holding down middle C, sharply strike the C below middle C. Let it up right away so that the damper clamps down again on that string. The piano will still be echoing the sound, but in an odd way. You will hear the sound of middle C, and not the sound of the note that was hit. On most pianos you will hear a lot of tinny vibrations from all over the piano, but the loudest will probably be middle C. If you let up on middle C, you will see that the middle C string was indeed vibrating.

First experiment in sympathetic vibrations

What we have discovered is called sympathetic vibration. (Sympathetic vibrations have nothing to do with the Beach Boys' Good Vibrations, in case you were wondering.) The middle C string has "sympathy" for the vibration of the lower C. Air and that big piece of cast iron that the strings are attached to both carry the vibration from one piano wire to the other and set that second wire to vibrating.

So far we have learned that sympathetic vibrations cause the middle C wire to sound middle C. But that's a bit odd. It makes sense that if it's going to vibrate, it will vibrate at middle C. After all, that's how it was designed to vibrate. But why does it vibrate at all? Can a piano wire vibrate sympathetically to any other note, or are there some rules to it?

Vibration modes of a piano wire

We can answer that by looking at how a piano wire (or guitar string, or vocal cord, or a column of air in an organ pipe) can vibrate. In the drawing below, we see one mode of vibration. The whole string flexes up and down as a whole. This is the fundamental tone; the vibration is at about 262 Hz (vibrations per second) for the middle C piano wire.

Fundamental vibration of piano wire

But that's not the only mode of vibration for the middle C wire on the piano. It can also vibrate in the funny shape shown below where one half of the wire is flexing up and the other half down. This is called the first overtone, and it is at twice the frequency, or about 524 Hz. If we heard this all by itself, we would hear this as the C above middle C.

First harmonic vibration of piano wire

Is that it? Hardly! Piano wires (or practically anything else) are capable of all kinds of vibrations. Below we see the second harmonic vibration of the piano wire. If a middle C piano wire is set to vibrate in this mode, the frequency would be around 3 X 262 Hz, or about 786 Hz. If we heard this vibration all by itself, we would hear the G an octave and a fifth above middle C.

Second harmonic vibration of piano wire

Why does middle C sound when the lower C is played?

Back to the original question. When a piano wire vibrates, it normally vibrates in a collection of all the vibration modes. When middle C is played, there will generally be vibrations of 262 Hz, 524 Hz, 786 Hz, 1048 Hz, 1310 Hz, and so on. Our ear and brain conspire to hear a single note and not the complex chord.
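That collection of frequencies is nothing more than the integer multiples of the fundamental. For readers who like to see it computed, here is a two-line sketch in Python (using the rounded 262 Hz figure from above; a real middle C is closer to 261.6 Hz):

```python
f0 = 262  # middle C fundamental in Hz (rounded; real pianos are nearer 261.6 Hz)
modes = [n * f0 for n in range(1, 6)]  # fundamental plus the first four overtones
print(modes)  # [262, 524, 786, 1048, 1310]
```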

So... for the purpose of this discussion, it is important that when the C below middle C is played, it vibrates not only at its fundamental of 131 Hz, but also at the first harmonic of 262 Hz. That first harmonic is what sets middle C to vibrating.

The rule of sympathy is thus pretty simple. A string will vibrate sympathetically to the frequency that it was tuned for.

But it gets a tiny bit more complicated

Let's go back to the piano and repeat the experiment, only this time we will plunk a different note. We will hold down middle C so that this wire is free to vibrate, but will plunk the F below. When the F is released we will hear a C, but this time the C is the C above middle C. One octave higher. 

Second experiment in sympathetic vibrations

Confused? It doesn't follow our first guess at the rule for sympathetic vibrations. 

But it shouldn't be all that confusing, provided you consider that the F below middle C has its overtone series, and middle C has its overtone series. From a previous blog post on tuning a piano, the avid JMG blog reader will recall that F, being a fifth below middle C, will have a frequency that is 2/3rds that of middle C, or about 175 Hz. That is the fundamental frequency for this wire, but it will also vibrate at 2 X 175 Hz (the F above middle C) and at 3 X 175 Hz (the C above middle C).

In other words, in the second experiment, the middle C did not vibrate at middle C because the F didn't provide that frequency in its overtone series. But the C above middle C (which the middle C wire likes to vibrate at) is a frequency that the F is more than happy to provide.

And another example

Here is another example that should help solidify the concept. While the note held down is middle C as before, the note plunked down in the third experiment is G below middle C.  From experimenting on my own piano, I found the following results.

Third experiment in sympathetic vibrations

This can be explained by a similar analysis, as organized in the following table. The first column shows all the frequencies that are made available by the G that is plunked. The second column shows all the frequencies at which the middle C wire would like to vibrate. As can be seen, the lowest frequency provided by the G that the C will respond to is 786 Hz, the second G above middle C. Hence, that's what we hear.

G (plunked)          C (held down)
196.5 Hz (fund)
                     262 Hz (fund)
393 Hz (1st)
                     524 Hz (1st)
589.5 Hz (2nd)
786 Hz (3rd)         786 Hz (2nd)
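The bookkeeping in that table is easy enough to automate. Here is a small Python sketch of the matching rule; it assumes idealized strings whose modes are exact integer multiples of the fundamental (real piano strings are slightly inharmonic, hence the tolerance):

```python
def mode_frequencies(fundamental, n_modes=6):
    """Vibration modes of an idealized string: integer multiples of the fundamental."""
    return [k * fundamental for k in range(1, n_modes + 1)]

g_modes = mode_frequencies(196.5)  # the plunked G below middle C
c_modes = mode_frequencies(262.0)  # the held-down middle C

# The middle C wire responds to any G-supplied frequency near one of its own modes.
TOLERANCE = 2.0  # Hz, to forgive a little inharmonicity
matches = [g for g in g_modes if any(abs(g - c) < TOLERANCE for c in c_modes)]
print(min(matches))  # 786.0 -- the second G above middle C, just as heard
```

Run the same matching with 175 Hz (the F below middle C) in place of 196.5 Hz and the lowest match comes out near 524 Hz, which is exactly the second experiment.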

Summary

Things like piano wires naturally vibrate at a number of different frequencies. Sympathetic vibration occurs when one vibrating object has some loose connection to another object. The sympathetic vibration that is induced in this way depends on finding a match between those frequencies that are available and those frequencies that the second object has an affinity for.

Wednesday, April 9, 2014

Silly-opathy

What is homeopathy?

Homeopathy is a bit like the old "hair of the dog that bit you". If I have a headache, I should be treated with something that gives me a headache. That something will be highly diluted, so as not to kill me - which is a good thing. This sort of treatment will muster my body's own resources to combat whatever is ailing me.

Homeopathic cure for a hangover?

Homeopathetic claims

Here are some of the claims about homeopathy that I found on the internet:

Homeopathy is a safe, gentle, and natural system of healing that works with your body to relieve symptoms, restore itself, and improve your overall health.
http://www.nationalcenterforhomeopathy.org/

Homeopathy is extremely effective. ...
Homeopathy is completely safe. ...
Homeopathy is natural...
Homeopathy works in harmony with your immune system, unlike some conventional medicines which suppress the immune system...
Homeopathic remedies are not addictive...
Homeopathy is holistic...
https://abchomeopathy.com/homeopathy.htm

One of the ways homeopathy works is by helping to balance your body’s energy, or chi as it’s called in traditional Chinese medicine. This energy is circulated through your body along specific meridians, and when this circulation gets disrupted -- something you can test for using electrodermal screening -- illness can result.
http://articles.mercola.com/sites/articles/archive/2008/07/24/ever-wonder-why-homeopathy-works.aspx

Homeopathy is holistic because it treats the person as a whole, rather than focusing on a diseased part or a labeled sickness.  Homeopathy is natural because its remedies are produced according to the U.S. FDA-recognized Homeopathic Pharmacopoeia of the United States from natural sources, whether vegetable, mineral, or animal in nature.
http://homeopathyusa.org/homeopathic-medicine.html

Homeopathy is probably the most difficult medical discipline to master because it is based on the pure observation of nature, and the strict application of a natural law.  All symptoms (physical, mental or emotional) need to be considered for an accurate prescription to be given. The goal of the homeopath is to recognize, through the unique expression of their patients’ symptoms, the pattern of disturbed energy and identify the correct homeopathic medicine (remedy) that is most ‘similar’ to them.
http://greensquarecenter.com/therapies/homeopathy/

I noticed a funny thing on these sites. These websites never come right out and make specific claims about which ailments might be best treated by homeopathy. They tell a bit about what homeopathy is, and maybe some of the history. But they are very reluctant to say "good for a sore throat", or "if your doctor has diagnosed you with..."

The British Homeopathic Association is one of the few sites that makes claims, albeit indirect claims. They provide a list of 75 papers where homeopathy has been tested on various conditions. These are valid research papers in respected journals. According to this web page, homeopathy is effective at treating brain injuries, bronchitis, childhood diarrhea, the common cold, depression, fatigue, fibromyalgia, hay fever, post-operative ileus, immune function, influenza, insomnia, low back pain, post-operative oedema, otitis media, perennial allergic rhinitis, plantar fasciitis, post-operative wound healing, postpartum bleeding, premenstrual syndrome, psoriasis, radiodermatitis, renal failure, rheumatic diseases, seborrhoeic dermatitis, sepsis, sinusitis, snoring, sports injury, stomatitis, tracheal secretions, upper respiratory tract infections, uraemic pruritus, varicose veins, and vertigo.

Wow! How could anyone possibly be skeptical after that?

What is Science?

Medical treatment is a tricky thing. Symptoms are not always clear cut, so diagnosis is not simple. People sometimes fail to respond to proper treatment, and sometimes spontaneously recover despite lack of treatment. Measuring the effects of medications is a statistical thing, and as we know, seven out of five people have difficulty with statistics.

Science is not hearsay or anecdotal. Testimonials are appropriate for revival meetings, but not for science. Good medical research relies on the idea of randomized, controlled trials.

The typical design of a medical experiment starts by recruiting a bunch of volunteers who have all been diagnosed with a certain condition, the more the merrier. Ten is not such a big group. A hundred? Not bad. A thousand? Yeah... that would be good. If you have too few, then you run the chance of falling into the "maybe it worked, but it might just be the roll of the die" zone.

Some studies will administer the same test treatment to all the volunteers and then see how they fare. These studies have a name. They are called "inconclusive". As compelling as the results may seem, you can never tell whether the volunteers got better because of the treatment or in spite of the treatment.

It is important, therefore, to run studies that compare one potential treatment against another treatment. Half of the volunteers get treatment A, while the other half get treatment B. Oftentimes, the "other treatment" is "no treatment", but humans are tough to work with. They generally have some belief in medicine, and their brain and body can team up to enhance recovery even when the pill is nothing more than a placebo (sugar pill). To make for a level playing field of "treatment A" versus "no treatment", the volunteers are not allowed to know whether they are getting a real medicine or a sugar pill. This is one half of the double blind.

The other half of a double blind experiment is that the people doing the caregiving -- the ones administering the medicines, taking in the volunteer's data, or otherwise interacting with the volunteers -- must also be kept in the dark about who is taking a medicine and who is taking a placebo.

Assigning volunteers to treatment groups is another delicate matter. When I have run randomized trials, I generally try to get all the attractive young brunette ladies in whatever group I will be involved with. For some reason, I don't get invited to run all that many drug trials. Volunteers are generally assigned to the groups at random, although the significance of a trial can be improved by selecting treatments in a quasi-random manner. For example, the computer may be told to randomize in such a way that the treatment groups all have roughly the same number of people between the ages of 60 and 70.

At the end, we wind up with a statistical question: How statistically significant is the difference between the results in the two groups? Even if the treatment being tested has absolutely no effect on the body, there is a 50% chance that it will outperform the placebo. But if the treatment way outperforms the placebo, then we can be pretty sure that there is something going on. Statistics allows us to put a number on "way outperforms".

Here comes the tough part. Human bodies, illness, treatment... these things have a lot of variability. If you run a drug trial with just a handful of people, the luck of the draw could easily tell you that the same drug is fabulous, ineffective, or lethal. So we need large studies before we can be confident that a treatment is effective.
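To see why size matters, let's put a number on "luck of the draw". Suppose (hypothetical numbers, purely for illustration) that 70% of the treated group improves, and that under the "treatment does nothing" assumption each patient is a 50/50 coin flip. A quick Python sketch:

```python
from math import comb

def tail_probability(successes, trials):
    """Chance of seeing at least this many successes if the treatment
    does nothing and each patient is a fair coin flip."""
    return sum(comb(trials, k) for k in range(successes, trials + 1)) / 2 ** trials

# The same 70% response rate, two very different study sizes:
print(tail_probability(7, 10))    # about 0.17 -- could easily be luck
print(tail_probability(70, 100))  # well under 0.001 -- "way outperforms"
```

With ten volunteers, a 7-out-of-10 result happens by pure chance about 17% of the time; with a hundred volunteers, 70-out-of-100 almost never does. That's the sense in which statistics puts a number on "way outperforms".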

Sometimes, it is possible to bring together a large number of smaller studies and form a firm conclusion. If one study with ten people says the drug is helpful, well, that's not that exciting. If there are ten studies out there that all came to that conclusion, then the evidence is a bit more convincing. They call this meta-analysis.

Meta-analysis is a bit tricky, though. First, it's complicated by the fact that the ten studies were probably all run just a little differently, so it's hard to combine the results. More important, however, is publication bias. If a researcher does a small study and the results are kinda blah, there is a tendency for no one to get all that excited about publishing it. So, the studies that actually make it into the journals are biased toward optimism.

Those are the rules for doing science.

How does homeopathy hold up?

I took a look at the list of research papers provided by the British Homeopathic Association. For almost all of them, they gave links to the Pubmed abstracts. I consider this good scholarship. Good for you guys. Below I have quotes from the "Conclusion" sections of the four studies that investigated the effectiveness of homeopathy for treating childhood diarrhea. (I added the bold type.)

Study 1: The results from these studies confirm that individualized homeopathic treatment decreases the duration of acute childhood diarrhea and suggest that larger sample sizes be used in future homeopathic research to ensure adequate statistical power.

Study 2: The statistically significant decrease in the duration of diarrhea in the treatment group suggests that homeopathic treatment might be useful in acute childhood diarrhea. Further study of this treatment deserves consideration.

Study 3: These results are consistent with the finding from the previous study that individualized homeopathic treatment decreases the duration of diarrhea and number of stools in children with acute childhood diarrhea.

Study 4: The homeopathic combination therapy tested in this study did not significantly reduce the duration or severity of acute diarrhea in Honduran children.

My summary: One provided negative results. Three provided positive results, with two of these clearly saying that the question is still open. Overall, this is favorable, but please bear two things in mind. First, these are the four studies that actually got published. Who knows how many studies were performed and subsequently discarded because the results were uninteresting?

The second thing to bear in mind... I will add one additional quote from Study 3, which was the most positive of the four: "The mean number of stools per day over the entire 5-day treatment period was 3.2 for the treatment group and 4.5 for the placebo group." Hmmmm... If I were to stop at Walgreen's to pick up some loperamide for my ummm... unappealing symptoms, and if the ummm... frequency of unpopular symptoms dropped from 4.5 occurrences a day all the way down to 3.2, would I recommend loperamide to a friend? I think not. There is a difference between "statistical significance" and "practical significance".


I did a little study of my own, wandering around Pubmed looking for meta-studies on homeopathy. I found six such studies.

Overall, the literature concerning a total of 83 original studies suggests that homeopathy may have significant effects in some conditions, ... A larger number of observational studies and of clinical trials -- conducted in a methodologically correct manner without altering the treatment setting-- are needed before sure conclusions concerning the application of homeopathy for specific diseases can be drawn.
http://www.ncbi.nlm.nih.gov/pubmed/21622275/

The evidence demonstrates that in some conditions homeopathy shows significant promise, ... A general weakness of evidence derives from lack of independent confirmation of reported trials and from presence of conflicting results, ...
http://www.ncbi.nlm.nih.gov/pubmed/17173103

When account was taken for these biases in the analysis, there was weak evidence for a specific effect of homoeopathic remedies, but strong evidence for specific effects of conventional interventions. This finding is compatible with the notion that the clinical effects of homoeopathy are placebo effects.
http://www.ncbi.nlm.nih.gov/pubmed/16125589

There is some evidence that homeopathic treatments are more effective than placebo; however, the strength of this evidence is low because of the low methodological quality of the trials. Studies of high methodological quality were more likely to be negative than the lower quality studies. Further high quality studies are needed to confirm these results.
http://www.ncbi.nlm.nih.gov/pubmed/10853874

The central question of whether homeopathic medicines in high dilutions can provoke effects in healthy volunteers has not yet been definitively answered, because of methodological weaknesses of the reports.
http://www.ncbi.nlm.nih.gov/pubmed/17227742

Placebo effects in RCTs [Randomized Clinical Trials] on classical homeopathy did not appear to be larger than placebo effects in conventional medicine.
http://www.ncbi.nlm.nih.gov/pubmed/20129180

Based on this, I am inclined to go along with what the National Center for Complementary and Alternative Medicine has to say about homeopathy:
Most rigorous clinical trials and systematic analyses of the research on homeopathy have concluded that there is little evidence to support homeopathy as an effective treatment for any specific condition... A number of the key concepts of homeopathy are not consistent with fundamental concepts of chemistry and physics.

But, but, but...

It worked for my cousin's neighbor

I'm glad to hear that your cousin's neighbor improved. But, there is a possibility that your cousin's neighbor might have recovered without the treatment. And what about your other cousin's boss who got the same treatment and grew a third foot?

It's not a chemical, it's natural

Dihydrogen monoxide, sodium chloride, l-tryptophan, and disaccharides are all chemicals, so therefore, they must be harmful. These are better known as water, table salt, that amino acid in turkey that supposedly makes you sleepy, and certain sugars, like sucrose. All matter is a chemical. Rather than tell me again that homeopathic remedies are not chemicals, please just paste a sign on your forehead that says "I don't understand chemistry".

By the way, rattlesnake venom, arsenic, stinging nettles, radon gas... these are all natural, so they must be good for you?

It's holistic, so it's better

There is something appealing about a doctor who considers your whole condition, rather than primarily looking at a biopsy of your liver. But I will repeat a quote from one of the homeopathy sites mentioned above: "Homeopathy is probably the most difficult medical discipline to master because it is based on the pure observation of nature, and the strict application of a natural law."

"Science has been wrong on a lot of things"

I have heard this argument quite a bit. It is used to prove that there are martians, that aluminum foil hats keep the CIA from reading your brain, and that dinosaurs roamed the Earth just a few years before Ronald Reagan was born.

Let me flesh out the argument a bit. I think the complete train of thought goes something like this: "Science has said that homeopathy is ineffective, but Science has been wrong in the past. That means that there is a chance that Science could be wrong in its condemnation of homeopathy. Therefore, it is certain that homeopathy works."

I followed that argument right up until the last little bit. And I might add, one of the wonderful things about science is that it does evolve. When reality disagrees with Science, Science is adapted to resolve the conflict.

Conspiracy theory

Homeopathy would be proven if the major drug companies weren't colluding to squash all funding, and if medical journals would allow the research to be published.

The second part of this is just plain not true. A Pubmed search on the word "homeopathy" gets 4,850 hits.

As to the first part, homeopathy is currently big business. I could not find a single consistent number, but according to one source, there are 4,000 businesses accounting for $360M in annual income. Another source says that “U.S consumer sales of Homeopathic treatments reached $870 million in 2009, growing 10% over the previous year.” A third source gives a much larger number for the worldwide market: "In dollars the world homoeopathy market according to ASSOCHAM is $5.35 billion."

I'm not going to quibble about hundreds of millions versus billions of dollars. The important thing is that, by all accounts, there is a huge market potential. If the technique could be proven to the point where it was accepted by insurance companies, then the potential is astronomical. I am probably not the first person to ponder the fact that investment in some good science could make someone a lot of money. Well, if it panned out.

"Not consistent with fundamental concepts of chemistry and physics"

I'm going to go back to a comment from the National Center for Complementary and Alternative Medicine. Where does homeopathy run afoul of established science?

The difficulty comes from the fact that homeopathic remedies are highly diluted dilutions of highly diluted dilutions, which are then highly diluted by highly diluting them again and again. The active ingredient is first diluted in water or alcohol to one part in a hundred. The result of this is then diluted again to one part in a hundred, and then again, and again, and yet again. This dilution may be performed as few as 6 times, but preferably 30 times.

Excuse me... We start out with a finite number of molecules. After one dilution we have 100 times fewer. After two dilutions, the number of molecules is reduced by a factor of 10,000. After three dilutions, it is reduced by a factor of a million. After somewhere around 12 dilutions, we have about one molecule left.
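The arithmetic is easy to check. A short Python sketch, assuming a generous starting dose of one full mole of active ingredient (Avogadro's number of molecules), counts how many survive each 1:100 dilution:

```python
# Count the molecules surviving repeated 1:100 dilutions, assuming we
# start with one whole mole of active ingredient (a generous assumption).
AVOGADRO = 6.022e23  # molecules per mole

molecules = AVOGADRO
for dilution in range(1, 31):
    molecules /= 100.0
    if molecules < 1:
        # 100**12 = 1e24 > 6.022e23, so this fires at dilution 12
        print(f"after {dilution} dilutions: less than one molecule left")
        break
    print(f"after {dilution} dilutions: ~{molecules:.3g} molecules")
```

At dilution 11 roughly 60 molecules remain; at dilution 12 the expected count drops below one, matching the "somewhere around 12 dilutions" figure. The preferred 30 dilutions overshoot an empty bottle by a factor of about 10^36.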

After a few more dilutions - for good measure - it would be awfully hard for a lab with 18 million dollars worth of analytical equipment, a staff of 43 PhDs, and a couple of cages of lab rats to tell the difference between a tablet for angina and a tablet for morning sickness. So... how is my body gonna be able to tell the difference?

The homeopathicists have heard this one before, so they have a ready response. There may not be any molecules of belladonna extract left, but the belladonna has left its "memory" in the water.


Ok. Sorry. I just got off the bus. Homeopathic memories in the water are able to cause a profound effect on the human body, but to date we have not been able to invent an instrument or test procedure that is sensitive enough to see these "memories"?

Even in the unlikely event that there is something like a memory that gets imprinted in the distilled water, any drop of distilled water must also have the imprint of the millions of different molecules that it has come into contact with.

One can certainly argue that many medications work for reasons that we don't understand. But homeopathy goes a step beyond just benign ignorance. Any scientific explanation of homeopathy must start by inventing a new branch of physics that has a powerful effect on the body, but for some reason has evaded our detection up until this point. 

My assessment

Send me a check for $100 and I will send you my remedy for whatever ails you. Just to warn you, my cure will likely involve painting your toenails orange and dancing naked around a spruce tree under a full moon while singing a certain Van Morrison song. For $1000, you might coerce me to join you. I'll bring along the KFC to sacrifice.

Wednesday, April 2, 2014

Arranging paint and (once again) vermillion

If you have ever picked up a card of paint samples and said to yourself "Gee whiz! I love the feel of the way these colors go together. The aesthetic beauty is overwhelming!" then you should probably contact Phil Kenyon to thank him for providing you with a few moments of sensual pleasure in your otherwise drab, wretched existence. If, on the other hand, you have picked up a paint card and said to yourself "OMG! Were these paint chips arranged by a left-footed Batswanian bandersnatch with halitosis?" then you might want to call up the paint company and tell them about Phil Kenyon.

Which color set has that "I gotta buy some paint right now" look?

It is very unlikely that the paint cards were actually arranged by Phil. Phil is not in the hallowed profession of paint sample card designers. But he is the president of Chromalyzer, a company which provides software to help paint sample card designers lay out their paint sample cards in a way that feels good. I bet you didn't even stop to think that such software might exist. I didn't give it any thought until I met this gentleman who treads amongst the artists and the scientists.



Phil has access to a huge database of names and spectra of all the colors of paints offered by all the paint companies in the universe. Naturally, in my unrelenting quest to find out what color vermilion is, I contacted him. His answer is worthy of being my first guest blog.

What's in a name?
by Phil Kenyon

While it is likely that furthering a discussion on finding ways to convert grass cuttings or the gas from old sneakers into a new energy source would result in a more tangible benefit to mankind, sadly I know nothing whatever about this topic other than I may well be a great source for at least one of these basic ingredients. This being the case, I am limited to selecting John’s Discussion on The Definition of Vermilion to expand on, rather than attempting any loftier goal.

I have spent a great deal of time striving to better understand and navigate the largely subjective languages and landscapes in the world of color and attempt to bring objectivity and order to the process for those that seek to use the power that color wields over our lives and emotions for the purposes of good rather than evil.  

John’s Question “What is the definition of Vermilion” could possibly be answered with another question, “What is in a name”.  Since the original question was asked by a mathematician, the answer to my second question can be stated in precise mathematical terms as “a lot”.


Occurrences (Gamut in L*a*b*) of the use of Vermilion or Vermillion (sp) in color names taken from a compiled list of over 35,000 color names in use in decorative paint palettes in North America

The fascinating thing about color is that it generates a great deal of passion not only from the Physiological influence that it has over us from a visual perspective, but that it also creates a great deal of passionate discussion over how we define it in terms of an adjective, or in terms of a colorimetric value. 

Since color is derived essentially from Light we can define the visible spectrum in terms of wavelengths measured in Nanometers (400 to 700nm give or take….).  No argument there right? The problems start when you introduce the stuff that absorbs or reflects these wavelengths and therefore what it is that we perceive as a single color. Shine a different kind of light on a painted piece of paper and it looks different. Keep the light the same and make the sample shiny or rough, and it looks different.  Did it change color or just appearance?

This is one problem with trying to define Vermilion. Vermilion, like most objects that we perceive as colorful, is something of a chameleon. We have to understand the nature of the chameleon.

Back to the question “What is in a name?” There is a commercial answer and one which actually does also have a loftier goal if it can be answered correctly.

The commercial answer is, people don’t want a color that is described as a nanometer, or an RGB or LAB or any other notation that seeks to define a color in numerical form that allows us to manufacture and reproduce it with some degree of accuracy and control.  They want to connect on a more personal level. Try answering “how do I look” from your husband or wife with “5’7”, 140 lbs” and see where that gets you.

If the description does not meet some realistic expectation, irrespective of whether you actually quite like what you see, the disconnect is likely to result in a negative reaction: No Sale.  It is important to understand what the accepted boundaries of Vermilion, Terracotta, Ivory, and Bronze are.  The broader the terms, the more difficult this becomes; consider Cactus, Denim, Candy, Stone.  It is important to use objectivity in establishing these boundaries of subjective opinion.  Finding the center, or in some cases centers, of the color space and making sure the color you use is nearer the center and not outside the lines makes sound commercial sense.
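The "find the center" step lends itself to a simple computation. Here is a minimal Python sketch, assuming the palette data looks like a name-to-L*a*b* mapping; the names and coordinates below are invented for illustration, not taken from Chromalyzer's database:

```python
import math

# Hypothetical (L*, a*, b*) coordinates for paints whose names
# contain "Vermilion" -- invented values, for illustration only.
vermilion_samples = {
    "Vermilion A": (48.0, 58.0, 42.0),
    "Vermilion B": (52.0, 55.0, 38.0),
    "Vermillion C": (45.0, 62.0, 45.0),
    "Vermilion D": (50.0, 57.0, 40.0),
}

def centroid(points):
    """Component-wise mean of a list of (L*, a*, b*) triples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def delta_e76(c1, c2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.dist(c1, c2)

center = centroid(list(vermilion_samples.values()))
nearest = min(vermilion_samples,
              key=lambda name: delta_e76(vermilion_samples[name], center))

print(f"centroid (L*,a*,b*): {center}")
print(f"most central sample: {nearest}")
```

CIE76 distance is a crude proxy for perceived difference (a production tool would likely use something like ΔE2000), but the "most central address in the zip code" idea is the same.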

As for the Lofty goal, Color has power. It affects mood and behavior and while there may be no unequivocal evidence to support this, there is sufficient evidence to suggest that the right color environment will aid in more rapid recovery from illness, stimulate creativity, calm aggression etc.
  
The purists in the Color Science community will be quick to point out that it is impossible to define precisely which colors result in these behaviors, since color and appearance are actually not one and the same, which is accurate.  The appearance of a color varies as we described before and is also influenced by many other factors not described here.  However, irrespective of the challenges of being able to specifically define “Vermilion” or “Calming Blue”, the ability to define the zip code that either of these colors lives in, and the most central address to go look for it, is way better than saying it was last seen west of the Rockies.

When you need help choosing a color or defining an entire palette for any purpose, you probably select someone you feel has a good sense of what the color should be.  Just like a great musician, some people just have a natural gift for working with color and a sense of what goes together.  But just like any natural talent, it takes more than that to become the best at what you do. You have to understand the process, learn the nuances of the different genres, and most of all understand your audience.  I may never be the most naturally gifted musician; my role in color is to help everyone in the band be as good as they can be and make sure the audience hears some great music. And if it’s Reggae, it won’t sound like hip hop.


Phil Kenyon is president of Chromalyzer llc, a company that creates and sells software for benchmarking analysis and development of large color palettes and provides consultancy services for clients in the USA and Europe.

Just in case you wanted a bit more of a sense of his software, here are a few movies showing two other words that are in the vocabulary of color naming people.
Bronze

Cactus