
Thursday, October 08, 2015

Debugging the good results

You have a data set and an idea for modeling it, in the hope that the model will provide some insight or a solution to a problem. In an ideal world, you would simply cast the idea onto the data like a never-failing spell and, ta-da, the solution would pop out of thin air.

It does not work that way in the real world. Even in the wizarding world, when an angry Harry Potter tried to cast the Cruciatus Curse on an opponent, he failed. You-Know-Who commented, "You've got to mean it, Harry." It takes both strong willpower and skill to execute a powerful spell. The execution is the vessel that carries an idea; if the vessel sinks, the idea goes down with it.

Now let's talk about debugging. The reason we debug our analysis and our codes is that the codes, and the results they generate, are prone to mistakes. We are all aware of that. However, our drive to debug diligently is strong only when the codes are not giving us the results we want.

Regina Nuzzo (@ReginaNuzzo) wrote in her recent Nature news feature:
“When the data don't seem to match previous estimates, you think, 'oh, boy! Did I make a mistake?'”
However, not all coding errors give silly results. Some, on the contrary, give pretty "good" results: the very results we have been hunting for. It takes strong willpower to check, proofread, and debug in order to reduce the chance of false results.

Over the years, I have had my fair share of false good results produced by programming errors. Therefore, to reduce the risk of cardiac arrest, members of my group tend to be extra diligent when the results are extremely exciting. Incidentally, both of the graduate students who recently presented good results (results that agree with our intuition) at our weekly meeting said, at the end of the meeting, that they would check their codes more carefully.
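
To make the point concrete, here is a minimal, hypothetical sketch (not from the original post) of how a programming error can produce a deceptively good result. It assumes Python with numpy and scikit-learn; the bug is that a column derived from the labels leaks into the feature matrix, so a model trained on pure noise looks nearly perfect.

    # Hypothetical illustration: a label-leakage bug that yields false "good" results.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    n = 1000
    X = rng.normal(size=(n, 5))       # features: pure noise, no real signal
    y = rng.integers(0, 2, size=n)    # labels: random coin flips

    # The bug: a column derived from y is accidentally stacked into the features,
    # e.g. through a sloppy merge.
    X_buggy = np.column_stack([X, y])

    for name, features in [("buggy", X_buggy), ("fixed", X)]:
        X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
        acc = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
        print(name, "test accuracy:", round(acc, 2))
    # buggy: ~1.00 (exciting, and wrong); fixed: ~0.50 (chance level, as it should be)

The exciting number in the buggy run is exactly the kind of result that deserves the most scrutiny.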
