by
lmh » Wed Jan 04, 2012 2:43 pm
If you run 6 samples to test reproducibility and one is very wrong, then no matter how you identify it as an outlier, it suggests that if you run 6 unknowns, one of them will also be very wrong; with unknowns, you won't know which one. That is a big problem.
I personally only reject individual runs if I can see that something went wrong before I look at the peak. For example, if the vial is full of broken glass and a crushed vial insert, I have good reason to believe that this injection is unreliable (and a documentable reason too!), independent of the fact that the peak has the wrong area.
Years ago, I came across a lab that (not using a chromatographic method; this was another analytical technique altogether) routinely measured each sample in 6 replicates. I only found out by chatting with their technician after he'd joined a new group and we were comparing notes, since we'd both used the technique. The technique was difficult, so they allowed for the fact that one, two, or even three replicates would fail. I asked what they meant by "fail". He explained that some replicates would give results too far from the mean. I asked what they meant by "too far". He explained that his boss meant more than a couple of standard deviations compared to the other points. It never occurred to him (or the boss) that they were defining the standard deviation of their measurements, not measuring it! It's also no surprise that their data looked better than everyone else's.
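You can see the effect of that lab's rule in a quick simulation. The sketch below (my own illustration, not anything that lab actually ran; the function name and 2-sigma threshold are assumptions matching the story) draws 6 perfectly well-behaved replicates per sample from a normal distribution, rejects any replicate more than two standard deviations from the mean of the other five, and compares the standard deviation you'd report after culling with the honest one:

```python
import random
import statistics

random.seed(0)  # fixed seed so the simulation is repeatable

def trimmed_sd_experiment(n_samples=2000, n_reps=6, true_sd=1.0):
    """Simulate the lab's rule: measure each sample in n_reps replicates,
    reject any replicate more than 2 SD from the mean of the others,
    and report the SD of whatever survives."""
    raw_sds, culled_sds = [], []
    for _ in range(n_samples):
        reps = [random.gauss(0.0, true_sd) for _ in range(n_reps)]
        kept = []
        for i, x in enumerate(reps):
            others = reps[:i] + reps[i + 1:]
            mu = statistics.mean(others)
            sd = statistics.stdev(others)
            if abs(x - mu) <= 2 * sd:  # the lab's acceptance rule
                kept.append(x)
        raw_sds.append(statistics.stdev(reps))
        if len(kept) >= 2:
            culled_sds.append(statistics.stdev(kept))
    return statistics.mean(raw_sds), statistics.mean(culled_sds)

raw, culled = trimmed_sd_experiment()
print(f"mean SD of all 6 replicates:   {raw:.3f}")
print(f"mean SD after 2-sigma culling: {culled:.3f}")
```

Every one of these simulated replicates is valid data, yet the culled SD comes out systematically smaller than the raw one, because the rule throws away the tails of the very distribution it claims to be measuring. That is exactly how their results ended up "better" than everyone else's.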