
Error and variation in simple calibration (theory)

Posted: Fri Aug 25, 2017 8:23 pm
by stu_wms
Hello, all--

Somewhat off topic, but we amateurs do not understand the practical ramifications of the relationship between error and variation in an instrument; pls impart the knowledge.

We have a simple adjustable pipette which we periodically calibrate gravimetrically with water, taking a weight of 1.000 g to indicate a dispensed volume of 1.000 mL (pls ignore density and temperature considerations for the sake of simplicity). We covered accuracy and repeatability.

Afterward, we needed to dispense 5 mL accurately, which I assumed we could do by simply using the pipette for 5 aliquots. My comrade opted for a different method, claiming that it is a bad idea to use an instrument sequentially (the fewer uses the better), as any error would accumulate and the final volume would be off by the per-use error multiplied by the number of reps.

But the purpose of calibration is to correct for that error. In theory, while calibration minimizes systematic error (improves accuracy), random variation (precision) remains, and that is addressed in a different way (technique, etc.). The usual way to compensate for random variation is to increase the number of trials; thus, in general, I'd expect a higher count of aliquots to average out the variation.
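For what it's worth, that distinction can be sketched in a few lines of Python. All the numbers here are hypothetical: a pipette that systematically delivers 0.980 mL when set to 1.000 mL, with a random scatter of 0.005 mL per dispense.

```python
import random
import statistics

random.seed(1)  # reproducible run

TRUE_MEAN = 0.980   # hypothetical: pipette actually delivers 0.980 mL when set to 1.000 mL
SIGMA = 0.005       # hypothetical random scatter per dispense, in mL

def dispense():
    """One dispense: a systematic offset plus random scatter."""
    return random.gauss(TRUE_MEAN, SIGMA)

# Calibration: weigh 10 dispenses and average them.
calibration_mean = statistics.mean(dispense() for _ in range(10))
correction = 1.000 / calibration_mean   # scale factor applied to future dispenses

# After calibration the *systematic* error is largely gone...
corrected = [dispense() * correction for _ in range(10_000)]
print(f"mean after correction: {statistics.mean(corrected):.4f} mL")  # should land close to 1.000
# ...but the *random* scatter per dispense is untouched.
print(f"SD per dispense:       {statistics.stdev(corrected):.4f} mL")  # still roughly 0.005
```

The point of the sketch: averaging more weighings tightens the calibration (the estimate of the systematic offset), but no amount of calibration shrinks the per-dispense scatter.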

So here are some questions:
1) Assuming the calibration was performed correctly, does increasing the number of sequential aliquots increase or decrease the likelihood of the total volume being inaccurate?
2) Are these likelihoods contingent on the calibration method (and would they change if the calibration were performed differently)?
3) If cumulative trials magnify error but reduce variability, should one quantify both and use the instrument in whichever way gives the lowest overall uncertainty? Hmm

Pls don’t judge us too harshly; we paid attention in chemistry but not statistics.

Thanks.

Re: Error and variation in simple calibration (theory)

Posted: Tue Aug 29, 2017 11:45 am
by lmh
Accuracy and precision are two separate things. Calibration can only deal with accuracy.
If, for example, you have a flask with a nominal volume of 1 L but you have measured it reliably as 990 mL, then you've done a calibration, and when you use it to prepare a solution of 5 g of stuff made up to the 1 L line, you are justified in declaring your solution to be 5 g/990 mL.

Making multiple pipettings rather than a single large measurement is a matter of precision. When you make a lot of successive pipettings, you are carrying out a "random walk". If the pipette is +/- 1%, then each time you move a random distance, +/- 1%, away from the correct value. The error isn't proportional to the number of pipettings you make, because the first might be too large, the second too small, and so on. It is like a beetle wandering around at random, gradually moving away from its starting point over time.
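The beetle is easy to simulate. A minimal sketch in Python, assuming (hypothetically) that each 1 mL dispense from a well-calibrated pipette carries an independent random error of about 1%, modeled as a normal distribution with sigma = 0.010 mL:

```python
import math
import random
import statistics

random.seed(42)  # reproducible run

SIGMA = 0.010     # assumed 1% random scatter on a 1.000 mL dispense, in mL
N_ALIQUOTS = 5    # five sequential 1 mL pipettings to make up 5 mL
N_TRIALS = 50_000

# Simulate the 5 mL total many times over.
totals = [
    sum(random.gauss(1.000, SIGMA) for _ in range(N_ALIQUOTS))
    for _ in range(N_TRIALS)
]

# Because some errors are positive and some negative, they partly cancel:
# the scatter of the total grows like sqrt(5) * sigma, not 5 * sigma.
observed_sd = statistics.stdev(totals)
print(f"SD of the 5 mL total: {observed_sd:.4f} mL")
print(f"sqrt(5) * sigma:      {math.sqrt(N_ALIQUOTS) * SIGMA:.4f} mL")
print(f"5 * sigma:            {N_ALIQUOTS * SIGMA:.4f} mL")
```

So the scatter of five aliquots is larger than that of one aliquot in absolute terms, but it grows far more slowly than "error x number of reps" would suggest.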

If your big pipette and your small pipette each scatter by the same absolute amount per dispense (say +/- 0.01 mL), it is worse to use the small pipette 5 times than the big pipette once, but not five times worse (the random-walk scatter grows with the square root of the number of dispenses, so roughly a factor of 2.2), and there is nothing you can do to compensate for this.

Re: Error and variation in simple calibration (theory)

Posted: Tue Aug 29, 2017 1:58 pm
by GOM
lmh correctly said:

Accuracy and precision are two separate things.

Would it help to think about it in terms of firing at a target?

You could repeatedly hit near the same spot on the outer ring: that would be precise, but not accurate.

If you hit the centre of the target, that would be accurate.

Re: Error and variation in simple calibration (theory)

Posted: Tue Aug 29, 2017 6:26 pm
by Peter Apps
Following on from lmh's post: the distance that a random walk departs from its starting point increases with the square root of the number of steps, but because the walks are random, that relationship holds only for the mean behaviour of a large number of random walks.
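That square-root law is also easy to check numerically. A sketch (the step size and walk counts are arbitrary choices for illustration):

```python
import math
import random
import statistics

random.seed(7)  # reproducible run

STEP = 1.0          # step size (e.g. one pipetting's random error, arbitrary units)
N_WALKS = 20_000    # average over many independent random walks

def distance_after(n_steps):
    """RMS distance from the origin after n random +/- STEP steps, over many walks."""
    squared = [
        sum(random.choice((-STEP, STEP)) for _ in range(n_steps)) ** 2
        for _ in range(N_WALKS)
    ]
    return math.sqrt(statistics.mean(squared))

for n in (1, 4, 16, 64):
    # The RMS distance tracks sqrt(n) * STEP, but only on average over
    # many walks; any single walk can end up almost anywhere.
    print(f"n={n:3d}  rms={distance_after(n):5.2f}  sqrt(n)={math.sqrt(n):5.2f}")
```

Note that quadrupling the number of steps only doubles the typical distance from the start, which is why sequential aliquots are not as bad as a naive "error times reps" estimate.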

Peter