Welcome back to the series of posts on experimental error! As a physics tutor, I am following up on my first post, which explored some of the ways even a fairly simple experiment is prone to unavoidable experimental error. In the second post, we looked at noise and the statistical methods you can use to mitigate its impact on your final uncertainty.
Noise is random, bias skews your data
Now, though, let’s turn to bias, noise’s more pernicious cousin. When you take a measurement, there are generally two ways in which the value you record can fail to reflect the quantity you are trying to measure. The first is noise, an offset that varies randomly from run to run. The second is bias, which skews all of your measurements in one direction away from the true value.
Missing the shoes
To illustrate the difference, let’s return to the example we used at the end of the last post. Imagine you want to know the typical natural height of a male high school basketball player. You start off by measuring the heights of your three friends on the basketball team: they are 6’0”, 5’9”, and 6’2”. You want better statistics, so you go on and measure the rest of the team. By now, you can say that the average player is around 5’11” and there is a range of heights from roughly 5’8” to 6’4”. Next, you become really ambitious and measure every player in the country. Now, you can state with great certainty that the dead-center average player is 5’11.9” tall and there is a 95% chance that some randomly chosen player is between 5’8.3” and 6’5.4” tall. Beautiful result!
But … did you ask them to take off their shoes? If not, then each player’s shoes and socks would have added an inch or more of elevation to his natural height and every single measurement you made would have been too big. This is bias: your dogged pursuit of great statistics took care of the noise, but a mistake in your experimental technique has thrown off your result.
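We can sketch the shoe story numerically. This is a minimal simulation with made-up numbers (the true mean height, the spread, and the one-inch shoe offset are all assumptions for illustration), showing that averaging beats down the noise but leaves the bias untouched:

```python
import random

random.seed(42)

TRUE_MEAN = 71.9      # true average natural height in inches (illustrative)
NOISE_SPREAD = 2.0    # player-to-player variation (the "noise" here)
SHOE_BIAS = 1.0       # every player measured with shoes on (the bias)

# Simulate measuring many players, shoes and all.
measurements = [random.gauss(TRUE_MEAN, NOISE_SPREAD) + SHOE_BIAS
                for _ in range(10_000)]

mean = sum(measurements) / len(measurements)

# With 10,000 players, the random scatter averages away almost entirely,
# but the sample mean converges to TRUE_MEAN + SHOE_BIAS, not TRUE_MEAN.
print(f"sample mean: {mean:.2f} in (true average: {TRUE_MEAN} in)")
```

No amount of extra data fixes this: doubling the sample size shrinks the noise on the mean, but the one-inch offset sits there no matter how many players you measure.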
Whenever you perform an experiment, even a very simple one, you should always ask yourself if you are forgetting the shoes. Is there some effect, some quirk of your experiment, that is throwing off all of your data?
Bias is sneaky
I say that bias is noise’s more pernicious cousin because it is sneaky. You can look at your data and see the effect of noise. In the example above, you can immediately tell that you are seeing some natural player-to-player variation after you have measured your three friends. It might take much more work to quantify your noise (more measurements, taking an average, maybe fitting a distribution), but you can at least see that the noise is there.
Looking at just these three measurements taken under the same conditions, though, doesn’t tell you anything about your bias. All three heights might be too large by an inch or too small by three inches, but you would have no way of knowing. Even after sampling the entire nation, you would have no way of detecting your error if you didn’t have some standard answer to compare your result against. If bias is so hard to detect, then how do we go about fighting it?
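A small sketch makes that invisibility concrete. Everything you can compute from the biased data alone looks perfectly healthy; only a comparison against an external reference exposes the offset. All numbers below are invented for illustration:

```python
import random
import statistics

random.seed(3)

TRUE_HEIGHT = 71.9   # suppose an accepted "standard answer" exists (illustrative)

# Your measurements, all shifted by an unknown bias (shoes, in this story).
data = [random.gauss(TRUE_HEIGHT, 2.0) + 1.2 for _ in range(5_000)]

# Internal statistics of the dataset give no hint that anything is wrong:
print(f"mean:   {statistics.mean(data):.2f} in")
print(f"spread: {statistics.stdev(data):.2f} in")

# Only holding the result up against the known true value reveals the bias.
offset = statistics.mean(data) - TRUE_HEIGHT
print(f"offset from the true value: {offset:.2f} in")
```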
Examine your biases
Now let’s return to the experiment we described in the first post on experimental errors: we are trying to measure g, the acceleration due to gravity, by measuring how long it takes a ball to fall from a given height. Specifically, we focused on the act of using a stopwatch to measure the time that elapses between when your partner lets go of the ball and when it hits the floor. We performed a quick analysis of the noise that we might expect to see in our data due to our slow and imperfect trigger fingers, but that was the easy part: this source of error was fairly obvious and we could easily average over many runs.
Then we started throwing out questions that could potentially confound our measurement. What if you say “go” and start timing, then your partner drops the ball some time later? What if your partner is the one who says “go” and you are the one with the delayed reaction? Would that bias the time you measure one way or another? What if you watch the ball as it falls and anticipate when it will hit the ground? What if, what if, what if? In even such a simple experiment, we can find a multitude of ways in which our method might push our result one way or another away from the true value.
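To see how an asymmetric reaction delay pushes the result one way, here is a quick simulation of the stopwatch experiment. The specific numbers (a 2 m drop, a 0.15 s average reaction lag at the stop, an anticipated and therefore roughly unbiased start) are assumptions chosen purely for illustration:

```python
import math
import random

random.seed(1)

G_TRUE = 9.81    # m/s^2
HEIGHT = 2.0     # drop height in meters (illustrative)

t_true = math.sqrt(2 * HEIGHT / G_TRUE)   # ideal fall time

def timed_drop():
    """One stopwatch run. You anticipate the drop, so the start is jittery
    but roughly on time; the stop lags by your reaction time. All delay
    numbers here are made up."""
    start_delay = random.gauss(0.00, 0.05)   # noisy but unbiased start
    stop_delay = random.gauss(0.15, 0.05)    # human reaction lag at the stop
    return t_true + stop_delay - start_delay

# Average over many runs: the random jitter cancels, the 0.15 s lag does not.
runs = [timed_drop() for _ in range(10_000)]
t_avg = sum(runs) / len(runs)

# The systematically long fall time drags the estimate of g below the truth.
g_est = 2 * HEIGHT / t_avg**2
print(f"estimated g: {g_est:.2f} m/s^2 (true: {G_TRUE} m/s^2)")
```

Notice that the noise on the start and stop is identical in size; only the fact that the stop delay has a nonzero average turns it into a bias, and averaging over ten thousand runs does nothing to remove it.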
Whenever you are trying to weed out your biases, the first step is usually to sit down, think critically through your experiment in gory detail, and just brainstorm things that could go wrong. Once we have a list of possible culprits, how can we go about determining how much they are throwing off our result?
In the next post, we’ll take a look at two common techniques that are used to explore the biases inherent in your experiment and to quantify how much they might be hurting your result.