- Posts: 2
- Joined: Mon Sep 12, 2016 4:30 pm
Somewhat off topic, but we amateurs don't understand the practical ramifications of the relationship between error and variation in an instrument; please impart the knowledge.
We have a simple adjustable pipette that we periodically calibrate gravimetrically with water, taking a weight of 1.000 g to indicate a dispense volume of 1.000 mL (please ignore density and temperature corrections for the sake of simplicity). We covered accuracy and repeatability.
Afterward, we needed to accurately dispense 5 mL, which I assumed we could do by simply using the pipette for 5 aliquots of 1 mL. My comrade opted for a different method, claiming that it is a bad idea to use an instrument sequentially (the fewer uses the better), since any error would accumulate such that the final volume would be off by the per-dispense error multiplied by the number of repetitions.
But the purpose of calibration is to correct for that error. In theory, calibration minimizes systematic error (improves accuracy), while random variation (precision) remains and is addressed in a different way (technique, etc.). The usual way to compensate for random variation is to increase the number of trials; thus, in general, a higher count of aliquots should tend to average out the random variation, at least relative to the total volume.
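To make that concrete, here is a minimal Monte Carlo sketch of the scenario. The numbers are assumptions, not measurements: calibration is taken to leave zero systematic bias, and each dispense is given a made-up random SD of 0.005 mL. The point it illustrates is that independent random errors add in quadrature, so the spread of the 5 mL total grows like the square root of the number of aliquots, not linearly.

```python
import random
import statistics

random.seed(42)

TARGET = 1.0      # mL per aliquot (calibrated, assumed zero bias)
SD = 0.005        # assumed random error per dispense, mL (illustrative)
N_ALIQUOTS = 5
N_TRIALS = 100_000

# Simulate many runs of "dispense 5 sequential 1 mL aliquots".
totals = [
    sum(random.gauss(TARGET, SD) for _ in range(N_ALIQUOTS))
    for _ in range(N_TRIALS)
]

mean_total = statistics.mean(totals)
sd_total = statistics.stdev(totals)

# Random errors add in quadrature: SD of the total is roughly SD * sqrt(5),
# not SD * 5, and the *relative* spread of the total is smaller than the
# relative spread of a single aliquot.
print(f"mean total: {mean_total:.4f} mL")        # expect ~5.0
print(f"SD of total: {sd_total:.4f} mL")         # expect ~0.005 * sqrt(5)
print(f"relative SD, one aliquot: {SD / TARGET:.4%}")
print(f"relative SD, total:      {sd_total / mean_total:.4%}")
```

If there were also a residual systematic bias per dispense, that part would add linearly with each repetition, which is the kernel of truth in the "fewer uses the better" argument.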
So here are some questions:
1) Assuming the calibration was performed correctly, does increasing the number of sequential aliquots increase or decrease the likelihood of the total volume being inaccurate?
2) Do these likelihoods depend on the calibration method (i.e., would they change if the calibration were performed differently)?
3) If cumulative trials magnify error but reduce variability, should one quantify both and use the instrument in whatever way gives the lowest combined total? Hmm
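For question 3, a back-of-envelope sketch of "quantify both" might look like this. The bias and SD figures below are invented placeholders: a hypothetical residual systematic bias of 0.002 mL per dispense (which accumulates linearly) and a random SD of 0.005 mL per dispense (which accumulates as a square root).

```python
import math

BIAS = 0.002   # mL residual bias per dispense (assumed; adds linearly)
SD = 0.005     # mL random SD per dispense (assumed; adds in quadrature)

def total_error(n_aliquots: int) -> tuple[float, float]:
    """Return (total systematic bias, total random SD) for n sequential aliquots."""
    return n_aliquots * BIAS, SD * math.sqrt(n_aliquots)

for n in (1, 5, 10):
    bias, sd = total_error(n)
    # One common single-number summary: root-sum-square of bias and SD.
    combined = math.hypot(bias, sd)
    print(f"n={n:2d}: bias={bias:.4f} mL, SD={sd:.4f} mL, combined={combined:.4f} mL")
```

The takeaway of the sketch: whichever term dominates for your actual pipette (residual bias vs. random scatter) decides whether many small dispenses or one large one gives the lower total uncertainty.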
Please don't judge us too harshly; we paid attention in chemistry but not in statistics.
Thanks.