Software Carpentry

Helping scientists make better software since 1997

Archive for May 2009

Error Handling

One topic that isn’t currently in the curriculum, but that I’d really like to add, is detecting, handling, reporting, and recovering from errors. Error handling makes up 10-30% of the code in real applications, but is almost always omitted from textbook examples and tutorials for the sake of clarity (Tanenbaum’s Minix book being a laudable exception). I have asked elsewhere for someone to write an entire book on the subject; if anyone wants to take a crack at an hour-long lecture, please get in touch.
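To make the scale of the problem concrete, here is a minimal Python sketch showing all four activities; the file format, function name, and retry policy are invented for illustration, not taken from any course material:

    import logging
    import time

    logging.basicConfig(level=logging.WARNING)

    def read_measurements(path, retries=3, delay=2.0):
        """Read one number per line, coping with bad records and I/O failures."""
        for attempt in range(retries):
            try:
                with open(path) as reader:               # detect: the open may fail
                    values = []
                    for line in reader:
                        try:
                            values.append(float(line))
                        except ValueError:               # handle: a malformed record
                            logging.warning("skipping bad line: %r", line)  # report
                    return values
            except IOError as err:
                logging.error("attempt %d failed: %s", attempt + 1, err)    # report
                time.sleep(delay)                        # recover: wait, then retry
        raise RuntimeError("could not read %s after %d attempts" % (path, retries))

Notice that even in this toy, roughly half the lines exist only to deal with things going wrong—and those are exactly the lines textbooks leave out.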

Written by Greg Wilson

2009/05/12 at 23:58

Posted in Content

Links for Summer Interns

Our summer interns started today; our first job is to define exactly what they’ll be working on this summer, so it seems like a good time to round up a few links on interesting topics. My apologies for those hidden behind paywalls…

Steve’s Project Ideas

  1. Social network analysis for scientists
  2. Electronic lab notebooks

Reproducible Research

If I said, “I just got a really interesting result in the lab, but I didn’t record the steps I took or the settings on the machine,” no reputable journal would publish my paper. If I said, “I just got a really interesting computational result,” most reviewers and editors wouldn’t even ask whether I’d archived my code and the parameters I used, or whether that code would run on someone else’s machine. Reproducible research (RR) is the idea of making computational science as trustworthy as experimental science by creating tools and working practices that allow scientists to re-create past results; a small sketch of the record-keeping this implies follows the links below.

  1. WaveLab and Reproducible Research
  2. The Madagascar project
  3. The Sweave project
  4. Special issue of Computing in Science & Engineering on reproducibility
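As a purely illustrative sketch of RR-style record-keeping—the file names and the toy computation are made up, not drawn from any of the projects above—here is how a result might be archived together with everything needed to re-create it:

    import json
    import random

    def run_experiment(params):
        """Stand-in for a real computation; its output depends only on params."""
        random.seed(params["seed"])
        return sum(random.random() for _ in range(params["trials"]))

    params = {"seed": 20090511, "trials": 1000}
    result = run_experiment(params)

    # Archive the exact inputs beside the output so the run can be repeated.
    with open("result.json", "w") as writer:
        json.dump({"params": params, "result": result}, writer, indent=2)

    # Re-running with the archived parameters should give back the same answer.
    archived = json.load(open("result.json"))
    assert run_experiment(archived["params"]) == archived["result"]

Real tools like Sweave go much further—weaving code, parameters, and prose into a single document—but the principle is the same.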

Data Provenance

The “provenance” of an object is the history of where it came from and how it got here. The provenance of a piece of data is similar: which raw values was it derived from, and what processing was done to create it? Ideally, every piece of scientific software would track this automatically; in practice, very few packages do, and most scientists don’t take advantage of the capability when it is there. That’s changing, though, particularly as emphasis on reproducibility grows; a minimal sketch of manual provenance tracking follows the links below.

  1. The Provenance Challenge: a series of competitions to benchmark provenance tools against one another.
  2. Special issue of Concurrency and Computation: Practice & Experience reporting the results of the first challenge
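Here is a minimal sketch of what manual provenance tracking might look like; the smoothing step and the sidecar-file convention are invented for illustration:

    import hashlib
    import json
    import time

    def fingerprint(path):
        """Hash a file so we can tell later exactly which version was used."""
        with open(path, "rb") as reader:
            return hashlib.sha1(reader.read()).hexdigest()

    def smooth(raw_path, out_path, window=5):
        """Toy processing step that records where its output came from."""
        values = [float(line) for line in open(raw_path)]
        averaged = [sum(values[i:i + window]) / window
                    for i in range(len(values) - window + 1)]
        with open(out_path, "w") as writer:
            for value in averaged:
                writer.write("%f\n" % value)
        # Sidecar file: the raw inputs, their fingerprints, and the processing done.
        record = {"derived_from": {raw_path: fingerprint(raw_path)},
                  "step": "smooth",
                  "parameters": {"window": window},
                  "created": time.strftime("%Y-%m-%d %H:%M:%S")}
        with open(out_path + ".provenance.json", "w") as writer:
            json.dump(record, writer, indent=2)

Anyone holding the output and its sidecar can check whether the raw file has changed and see exactly which settings produced the result.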

Science 2.0

Also called “computer-supported collaborative science”, this is the idea of leveraging modern web-based collaboration tools to better connect scientists, their experiments, and their results. It encompasses a broad range of ideas, but “social networking for scientists” based on their interests is near the core, as is “open science” (the idea of making scientific results public in the same way as open source software or Creative Commons publications).

  1. Overview article in Scientific American
  2. Jon Udell’s Internet Groupware for Scientific Collaboration may be several years old, but it’s still prescient
  3. Jean-Claude Bradley’s blog
  4. Cameron Neylon’s personal blog (see for example his post on “FriendFeed for Scientists”) and lab blog

Scientific Programming Environments

Compared to professional software developers, most scientists use fairly primitive programming environments, in part because they’ve been too busy learning quantum chemistry to learn distributed version control, and in part because software developers seem to go out of their way to make tools hard to set up and learn. Lots of people have tackled this from a variety of angles. Unfortunately, a lot of work to date has focused on supercomputing, which is sort of like studying modern medicine by focusing on heart surgeons…

  1. Greg Wilson’s “Where’s the Real Bottleneck in Scientific Computing?” and “Software Carpentry”
  2. Carver, Kendall, Squires, and Post’s “Software Development Environments for Scientific and Engineering Software: A Series of Case Studies”
  3. Matthews, Wilson, and Easterbrook’s “Configuration Management for Large-Scale Scientific Computing at the UK Met Office” is an example of tools done right

Written by Greg Wilson

2009/05/11 at 15:51

Posted in Content, Version 3

How Scientists Use Computers: Survey Part 2

Thank you once again for taking part in our Fall 2008 survey of how scientists use computers in their research. We will present a paper describing our findings at ICSE’09 in Vancouver on May 23, and will make the results public as soon after that as possible. An article discussing what you’ve told us will also appear in American Scientist magazine some time this summer.

Our next step is to figure out what makes some scientific computer users so much more productive than others. We would therefore be grateful if you would take a few minutes to answer the questions below and email the result to gvwilson@cs.utoronto.ca:

  1. If you think that you use computers more effectively in your work than some of your peers:
    1. explain why you think so
    2. describe what you do or know that they don’t
  2. If you can think of someone in your research area who uses computers more effectively in their work than you do:
    1. explain why you think so
    2. describe as best you can what they do or know that you don’t

If you answered either question, we would be very grateful if you could pass this email on to the colleague or colleagues you were thinking of and ask them to answer it as well—we believe we will learn a great deal by comparing responses, as well as from the responses themselves. If they wish to remain anonymous, please ask them to return their response to you for forwarding to us. Otherwise, please have them reply directly to us. (It would be very helpful in the second case for them to mention your name, so that we can pair their response with yours.)

As with the original survey, only the researchers directly involved in this study will have access to respondents’ contact information and/or identities. This information will not be shared with any third party in any way.

Thanks in advance for your help—we hope you’ll find the results useful.

Prof. Greg Wilson

Dept. of Computer Science

University of Toronto

gvwilson@cs.utoronto.ca

http://www.cs.toronto.edu/~gvwilson

Written by Greg Wilson

2009/05/09 at 15:14

Posted in Community, Content, Research

Topics and Schedule Posted

We’ve put up a list of the topics we intend to cover, and the order in which we intend to cover them.  It’s very provisional; we’ll update it regularly, and your comments would be very welcome.

Written by Greg Wilson

2009/05/06 at 13:26

Entrance Requirements

Our rough guide to what students should know before taking this course is now on the Prerequisites page. If you don’t feel confident you know this material, but still want to take the course in July, please let us know: we’re organizing some tutorial sessions in May and June.

Written by Greg Wilson

2009/05/04 at 12:38

Posted in Alberta, Toronto, Version 3

What If Scientists Didn’t Compete?

Interesting summary in the New York Times of some collaborative science done by Dr. Sean Cutler. One of the goals of this course is to give scientists the skills they need to do things like this routinely.  This raises a few questions:

  1. Should one of the “graduation exercises” for the course be to write some kind of search tool (or a wrapper around several search tools)?  Finding potential collaborators is sometimes the biggest challenge.
  2. Should more emphasis be placed on sharing and merging data sets? Or on looking for inconsistencies in them?
  3. What else do you need to be able to do in order to collaborate more effectively with your colleagues?

Written by Greg Wilson

2009/05/01 at 12:43

Posted in Noticed
