Archive for March 2008
I was pretty excited when I heard that Microsoft was getting into scientific computing. As the world’s biggest desktop software company, I figured they might understand that scientific computing and high-performance computing are not automatically the same thing, and that reliability and reproducibility are more important than peak performance. It turns out I was wrong: the workshop I attended last September was dominated by discussion of bleeding-edge topics like GPU programming and computational grids, rather than the nuts and bolts that would actually help most scientists be productive day to day; Microsoft’s new HPC++ Computational Finance lab’s site has a lot on speed but nothing on correctness; and so on.
So where should they be spending their time? If I ran the world, they’d start by reading Buckheit and Donoho on reproducible research, double back to Jon Claerbout’s notes on the same, check out the Madagascar project, and then try to figure out how to scale up those ideas to hundreds of thousands of scientists and publications in as diverse a range of fields as possible. It won’t give the senator something to stand beside on opening day, but it’ll do science a lot more good.
An article about computational science in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures.
(via Andrew Lumsdaine)
Posted on behalf of Daniel Hook and Diane Kelly:
We are members of a software research group at Queen’s University investigating ideas and tools to assist with the development of scientific software. We are starting a project focused on finding silent or hidden errors in scientific code. (Silent errors are errors that do not produce a crash, an error message, or any other obvious indicator of a problem.) To build a catalogue of common silent errors, we would like to hear debugging “war stories” from computational scientists. Using these stories, we hope to develop improved code-testing techniques specifically for scientists.
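To make the idea of a silent error concrete, here is a small, hypothetical illustration (not from the Queen’s study itself): the classic one-pass variance formula is algebraically correct, yet for data with a large mean it suffers catastrophic cancellation in floating point. The program runs to completion with no crash or warning; it simply returns a wrong answer.

```python
# Hypothetical example of a silent numerical error: the one-pass variance
# formula E[x^2] - (E[x])^2 loses almost all precision when the data have
# a large mean and a small spread. Nothing crashes; the result is just wrong.

def variance_one_pass(xs):
    # Algebraically correct, numerically fragile.
    n = len(xs)
    s = sum(xs)
    sq = sum(x * x for x in xs)
    return (sq - s * s / n) / n

def variance_two_pass(xs):
    # Subtract the mean first, so the squared terms stay small.
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / n

# Measurements with a large offset and a small spread: the true
# (population) variance is that of [0.1, 0.2, 0.3], i.e. 0.02/3.
data = [1e9 + 0.1, 1e9 + 0.2, 1e9 + 0.3]

print(variance_one_pass(data))  # silently wrong: the 0.02 signal is
                                # swamped by rounding error in x*x
print(variance_two_pass(data))  # close to the true value, 0.00666...
```

A test suite that only checks for crashes or exceptions would pass both versions; only a test that compares against a known-correct value catches the silent error, which is exactly the kind of technique the study aims to support.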
To conduct our study, we are looking for scientific software debugging stories: just a few lines explaining what the problem was and how you managed to solve it. You can contribute to our study by sending us a story and/or by passing this email along to colleagues who might be able to help with a story.
Please navigate to http://www.cs.queensu.ca/~hook/err_intro.html for more details if you are interested in contributing an error story to this study.
Thank you for your interest.
Daniel Hook and Diane Kelly