
Monday, July 29, 2013

Ghost in the Machine

It's bad enough that the climate change models are so poor at predicting actual climate, but now we find that the machine code that runs those models produces different output from the exact same model running the exact same data.

Fancy that:

New peer reviewed paper finds the same global forecast model produces different results when run on different computers ...

The paper was published 7/26/13 in the Monthly Weather Review, a publication of the American Meteorological Society. It finds that the same global forecast model (one for geopotential height) run on different computer hardware and operating systems produces different output, with no other changes.

The post describes several sources of the problem, including different rounding rules applied during calculations on various operating systems. If that type of error affects other models, that's a problem for the science, isn't it?
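You don't need a weather model to see how the "same" calculation can come out differently. Here's a toy Python illustration of my own (nothing from the paper): floating-point addition is not associative, so merely evaluating a sum in a different order, which compilers and hardware are free to do, changes the answer.

    # Toy example: floating-point addition is not associative, so the order
    # in which the same three numbers are added changes the result.
    a, b, c = 1e16, -1e16, 1.0

    left_to_right = (a + b) + c   # 0.0 + 1.0 gives 1.0
    right_to_left = a + (b + c)   # -1e16 + 1.0 rounds back to -1e16, so 0.0

    print(left_to_right)  # 1.0
    print(right_to_left)  # 0.0

Different compilers, chips, and math libraries make different choices about ordering and rounding intermediate results, and tiny differences like that can grow when a model grinds through millions of iterations.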

My life as a budding programmer is distant and was brief, lasting from 11th grade to my first semester of college. But I'll never forget a lesson in programming oddities from a program designed to model the Game of Life.

By the rules, any square of life on the grid would live or die depending on how much life was in the squares around it. You'd run this through cycles with different starting patterns to see whether the life died out, grew, or settled into a repeating pattern. It was pretty cool.

For whatever reason, everyone else in the class counted all the life in the 9 squares and (assuming life in the center) simply subtracted 1 from the total to implement the rules. My program was virtually identical in approach, except for one small difference.

I wrote the code to count only the 8 squares around the target square, so there was nothing to subtract. I skipped the center square when counting, while everyone else counted it and then subtracted 1 to cancel it back out. That's a trivial difference in implementation and should not have changed the results at all, right?
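Decades later, the two approaches are easy to lay side by side. This is a modern Python sketch, obviously not the original late-1970s program, but it shows why the difference should have been purely cosmetic:

    # The two neighbor-counting approaches from the story (a modern sketch,
    # not the original program). Cells hold 1 for life, 0 for empty.

    def neighbors_skip_center(grid, r, c):
        # My approach: count the 8 surrounding squares and never touch the center.
        count = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue  # skip the target square itself
                rr, cc = r + dr, c + dc
                if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
                    count += grid[rr][cc]
        return count

    def neighbors_subtract_center(grid, r, c):
        # The class's approach: count all 9 squares, then subtract the center.
        count = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]):
                    count += grid[rr][cc]
        return count - grid[r][c]

    # On paper the two are interchangeable: every cell gets the same count.
    grid = [[0, 1, 0],
            [0, 0, 1],
            [1, 1, 1]]
    for r in range(3):
        for c in range(3):
            assert neighbors_skip_center(grid, r, c) == neighbors_subtract_center(grid, r, c)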

Except my program did not work. The teacher could not figure out why. I sure couldn't. It should have worked. The teacher made it a class project to find the flaw in my approach, and the class could not find it either. In the end, the teacher could only blame the ghost in the machine that interpreted my programming in ways we did not expect. So I changed that line of my program and it worked just fine.

I did not become a computer programmer. But I learned a valuable lesson: don't simply trust the output as if it were handed down from God. Sometimes the computer just does things we don't understand. If that problem raised its ugly head in my simple high school program, running on the almost infinitely simpler computers of the late 1970s, how much more of a problem could it be in the massively complex (and poorly documented, as the ClimateGate emails revealed) programs that predict our doom from carbon dioxide output?

But by all means, bankrupt us to combat that predicted global warming (that your models predict is determined by man's puny role) that will cause your predicted harm using policies you predict will solve the predicted problem that your models predict--and all under your guidance without ignorant denialist voters predictably getting in the way.

It's science rounding error, damn it!