Building the perfect weather satellite
Many years ago, I had a brief stint at a university,
writing navigation software for the next generation of weather satellites [1].
These new satellites were going to be slicker than, well, slick stuff. The new design would solve all the problems of the previous satellite
designs.
The university I worked at, however, had not been
well-connected to the design process for this satellite. Very little input from
the group I was in went into this marvelous new creation. Needless to say, the folks I
worked with had a rather negative opinion about just how marvelous the
satellite was.
Or should I say, “was to be”? This marvelous project was
hopelessly late.
To make matters worse, there was an impending crisis. When
the new design had begun, there were two weather satellites parked in orbit
above the United States, one above the east coast, and one above the west coast. When
an LED in an encoder of one of the satellites burned out, we were left with a
single operational satellite. Since a single geosynchronous satellite cannot
get a good view of our entire country, the satellite needed to be moved
seasonally to track areas of critical weather. Normally parked over the Midwest, it was slid east during hurricane season.
The failure of the satellite left the US in a bad situation. First, weather coverage was lacking, with only one vantage point from which to view the continent. Second, we were especially vulnerable to a similar failure of the remaining satellite, whose loss would deal a harsh blow to weather forecasting.
The forecast was a bit odd...
It was the opinion of some of my coworkers that the fancy
new satellite was a mistake from the start. The features that were added were
mostly golly-whiz-bang features that engineers can get excited about, but which
offer little to the end-user of the satellite imagery. This in itself was not the
direct problem. The direct problem was that the new design was being chronically
delayed in order to get these wonderful new features to work right. In the
opinion of my coworkers, it would have been much better to have built several
more of the previous generation of satellites.
In the end, the new weather satellite project was way over
budget, and very late.
Welcome to the sandbox.
Let’s face it. All of us engineers who have been at it
awhile are guilty of playing in our sandbox. We got into engineering because we
are smart, and we like the kinds of toys that engineers get to play with. Every
once in a while, we get dazzled by the light of our oscilloscopes, seduced by a
tantalizing algorithm beckoning us to write it, or beguiled by the charms of the ultimate gizmo.
We sit mesmerized, unable to take our eyes off the Lissajous figure.
Enraptured, we rationalize the benefits of this more
complicated approach. “Yes, it will take a bit longer to design, but it will be
more reliable in the field.” As if anyone else will understand it well enough to
assemble it correctly!
Against our own better judgment, we pursue this Holy Grail of Engineering, fully convinced that it is the absolute best choice. We dismiss critics of our design as “plebeian” or “short-sighted.” Disagreements only tend to polarize the issue.
If I were without sin, I would have no qualms about casting stones at any and all. But, since I am as guilty as any, I must
cast some stones upon myself.
I was once called upon to build a software tool that would
help measure the resolution of images from an electron microscope. A sample
with a clean edge would be put in the microscope, and an image would be taken
of the edge. A line of data taken from this edge would show the black-to-white
transition. The software I was to write would graph this line of data on the
screen along with a computed transition. The user (my boss) would adjust the
parameters of the computed line until he was satisfied with the fit to the
actual data.
ADEM, the electron microscope I helped build [2]
Of course, as a mathematician and software guy, I knew that
the computer could do a far better job at fitting a curve to data than any old
user. The fact that the fit was nonlinear not only made it considerably more
difficult, but also made it more interesting. So, I embarked upon a project of
building software to do an automated fit.
All in all, the guy who requested this software from me (my
boss) was remarkably patient. He needed a quick answer. He had been a
programmer, and had done a fair amount of curve-fitting software in his time.
He would have written it himself, but it had been years since he had programmed, and he had not learned the programming language we were working in. To
nudge me out of the sandbox, he would say things like, “Well, you know it is
really tough to avoid local minima when fitting such noisy data.”
Eventually he hounded me enough so that I compromised. I
automated the initial settings for the curve parameters, and provided a user
interface to tweak the parameters from there. The software was late, but it did
what he needed it to do. He was even gracious enough to tell me that the initial settings that my software generated were really quite good.
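For the curious, the final compromise looked roughly like the sketch below, redone in modern Python with NumPy and SciPy (none of which existed at the time). The error-function edge model and the parameter names are my inventions for illustration; the original code was nothing so tidy.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    def edge_model(x, lo, hi, center, width):
        # Assumed model, for illustration only: the black-to-white
        # transition as a scaled error function.
        return lo + (hi - lo) * 0.5 * (1.0 + erf((x - center) / (width * np.sqrt(2.0))))

    def initial_guess(x, y):
        # Automated starting values, estimated from the data itself.
        # Good starting values are what keep a nonlinear fit to noisy
        # data out of local minima.
        lo0, hi0 = float(y.min()), float(y.max())
        center0 = float(x[np.argmin(np.abs(y - (lo0 + hi0) / 2.0))])
        width0 = float(x.max() - x.min()) / 10.0
        return [lo0, hi0, center0, width0]

    def fit_edge(x, y):
        params, _ = curve_fit(edge_model, x, y, p0=initial_guess(x, y))
        return params  # fitted lo, hi, center, width

My compromise stopped at initial_guess and handed the knobs to the user; the fully automated fit_edge is what I spent those extra weeks chasing.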
It is only in moments of abject honesty that I stop patting
myself on the back long enough to remember that I could have satisfied my
customer weeks earlier if I had not stopped to play in the sandbox.
Then there was the time that I wasted months developing the
absolutely most way-cool disk file structure ever witnessed. It could allocate
partitions and coalesce them when done. There were files and linked lists of
files. The software used semaphores to protect against multiple concurrent calls
to the same routine. The whole thing fit into a structure which had relocatable
pointers and a check-sum. The directory was duplicated on disk so that it could
be recovered if power was lost during a write.
I wrote a test suite, complete with a random number
generator to test this code. I wrote thirty pages of documentation. It was a
crowning accomplishment, and a testament to my awesome programming skills.
But my systems analyst skills were the pits. I moved from that project to another, and a “cut through the BS” kind of guy took over.
He read my prolific documentation, looked through the code, and spent a week
writing code for a simple file structure with only necessary features.
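I never saw his code, but in spirit it was something like this sketch (in Python, purely for illustration; the record layout and sizes are my guesses): a flat directory of name/offset/length records at the front of the disk, and nothing else.

    import struct

    # One directory entry (invented layout): 16-byte name,
    # byte offset, byte length.
    ENTRY = struct.Struct("<16sII")
    MAX_FILES = 64
    DIR_BYTES = ENTRY.size * MAX_FILES

    def write_directory(disk, entries):
        # 'disk' is any seekable binary file; 'entries' is a list of
        # (name, offset, length) tuples. The directory lives at offset 0.
        disk.seek(0)
        for name, offset, length in entries:
            disk.write(ENTRY.pack(name.encode().ljust(16, b"\0"), offset, length))

    def read_directory(disk):
        disk.seek(0)
        raw = disk.read(DIR_BYTES)
        entries = []
        for i in range(len(raw) // ENTRY.size):
            name, offset, length = ENTRY.unpack_from(raw, i * ENTRY.size)
            if length:  # unused slots are all zeros
                entries.append((name.rstrip(b"\0").decode(), offset, length))
        return entries

No semaphores, no linked lists of files, no relocatable pointers. It did everything the project actually needed.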
In retrospect, he caught me playing in the sandbox. I had
added features which were above and beyond the call of duty. I had missed one
of the most critical features – time to market.
Lest the reader start to get the impression that this author
is somehow connected to all techno-boondoggles, I will add examples from the
literature. The first I quote from Gerald M. Weinberg [3]:
A case in point is the semi-professional programmer who was
commissioned by a physics professor to write a program to find the inverses of
some matrices. As there were too many matrices to keep in storage at once, he
needed a routine for reading them from tape [4] one at a time for processing. He had little experience with input-output
programming, so he decided that this would be a good chance to learn something,
and he set out to get some advice.
Was this one Rachmaninov?
“How can I program the input from tape so as to buffer the input from processing?” he asked a somewhat more professional colleague. Being somewhat more professional, the colleague didn't answer the question, but put one of his own. “Why do you want to buffer the input?”
“To save time, of course.”
“Have you estimated how much time you will save?”
“Not exactly, but it will be a lot, because there are a lot of matrices.”
“How many?”
“I don’t know exactly. A lot.”
“Approximately how many?”
“Maybe a hundred.”
“Good. And how large are they?”
“Ten by ten.”
The colleague did a quick calculation on the blackboard which showed that these matrices would require about a minute to read.
“See,” said the semi-pro, in triumph. “That’s a lot of time.”
“Perhaps–or perhaps not. How many times will you run this program?”
“What do you mean?”
“I mean, if you write a buffering routine, you’re going to have to test it, and I doubt if you can do that with less than one minute of machine time [5]. So if you only have one set of matrices, I’d advise you to forget it. Just the computer time in testing will cost more than you could possibly save–not to speak of your time.”
“But you don’t understand,” said the semi-pro, who was not willing to see his chance of writing a new and interesting program slip away. “This has got to be an efficient program!”
His colleague should have been discouraged by this response, but
could not stop himself from trying to rephrase the arguments. But, alas, it was
all in vain, and the next time he chanced to see his friend–which was the next
semester–he was still having problems getting his buffering routines working.
The poor physics professor, still waiting for his matrices, was completely
unaware of what was going on–but was mildly flattered that his programming
problem was so complex.
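(The blackboard arithmetic is easy enough to reconstruct: a hundred 10-by-10 matrices is 10,000 numbers, and at, say, a dozen characters per number, that is on the order of 100,000 characters; at the tape speeds of the day, a few thousand characters per second, reading them takes about a minute. My character counts and tape speeds are guesses, but the order of magnitude is the point.)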
Freeman Dyson [6] has some strong comments to make about big science. Referring to the
development of the Zelenchukskaya observatory in the Soviet Union, he writes:
The committee of academicians decided to build the biggest
telescope in the world....[A] Soviet astronomer told me that this one instrument
had set back the progress of optical astronomy in the Soviet Union by twenty
years. It had absorbed for twenty years the major part of funds assigned to
telescope building, and it was in many ways already obsolete before it began to
operate.
One of the factors which the committee planning the observatory
did not worry about was the Zelenchukskaya weather. I was on the mountain for
three nights and did not see the sky....at Zelenchukskaya the weather is
consistently bad for the greater part of the year.
For those who are not yet convinced of the ubiquity of the sandbox, I recommend the book Drunken Goldfish & Other Irrelevant Scientific Research (William Hartston, Ballantine Books, 1987). In this book, you will learn about the effect of earplugs on a chick’s ability to recognize its mother, references to double puns in Vietnamese, how to make a rat fall in love with a tennis ball, and other research which you probably cannot live another day without. Absolutely hilarious reading, from cover to cover!
Sandboxes are everywhere, and they are alluring. I believe this has led to a general disdain (particularly in industry) for research groups. We must be aware of the lure of the sandbox, and be prepared to substitute small-science solutions for our big-science approaches.
Some other suggestions to keep the sand out of our undies:
Stay customer-focused.
Don’t be afraid to scrap an idea if it is taking a long time.
Avoid getting too many levels deep.
-----------------------------------------------------
[1] This is actually just a little bit less exciting than it sounds. I never actually got to put my hand on the steering wheel. I wrote software to identify the latitude and longitude of satellite images.
[2] Unlike most of the lies I tell in this blog, this lie is absolutely true. Among other things, I wrote auto-focus and AGC for the first digital electron microscope back in the mid 1980's.
[3] From The Psychology of Computer Programming, by Gerald M. Weinberg.
[4] This example shows that the sandbox has been around for quite a while!
[5] Back in the olden days, when programmers were real programmers, CPU time was far more expensive than programmers’ salaries.
[6] See From Eros to Gaia, by Freeman Dyson, Pantheon Books, 1992.