- Posts: 1890
- Joined: Fri Aug 08, 2008 11:54 am
I'm trying to sort out an approach to setting QC limits for our lab. We aren't expected to work to ISO 9001, but I personally feel we've been more lax than we ought, and I'm tightening up. Nearly all our work uses self-developed methods, or methods taken from the academic literature, where you're lucky if the method is recorded thoroughly enough to repeat, let alone includes any QA information.
We need a balance between being professionally careful and overdoing things. The approach I'm going for is that during method development, we take blank extracts spiked with standard at levels towards the low end of our quantification range, and find the s.d. of the measurements. We then multiply this by 3 or 4 and use that as our "pass" range, on the grounds that roughly 95% of measurements fall within 2 s.d. and about 99.7% within 3 s.d., so a factor of 3 or 4 gives us a little more safety, especially in the case where we run several QCs and a fail of any one fails the entire sequence (endless false failures aren't going to help). For more critical methods I would prefer to use medium- and high-concentration checks too, but realistically there's a time issue, and we may have to stick to one QC in a region most likely to spot a problem.
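For what it's worth, here is a minimal sketch of the arithmetic behind that approach, with entirely made-up replicate values standing in for the spiked-blank measurements. It also shows why the multiplier matters when any single QC failure fails the sequence: the false-failure rate compounds with the number of QCs per run.

```python
# Sketch of setting QC limits from spiked-blank replicates (hypothetical data).
from statistics import mean, stdev

# Replicate measurements of blank extracts spiked near the low end of the
# quantification range, gathered during method development (made-up numbers).
spikes = [9.8, 10.1, 9.9, 10.3, 9.7, 10.0, 10.2, 9.9, 10.1, 10.0]

m = mean(spikes)
s = stdev(spikes)      # sample standard deviation of the replicates
k = 3                  # multiplier: ~99.7% of results fall within +/- 3 s.d.

lower, upper = m - k * s, m + k * s
print(f"pass range: {lower:.2f} to {upper:.2f}")

# If any one failing QC fails the whole sequence, the chance of a *false*
# sequence failure grows with the number of QCs per run:
p_pass_single = 0.997  # per-QC pass probability at k = 3, assuming normality
for n in (1, 3, 5):
    p_fail_seq = 1 - p_pass_single ** n
    print(f"{n} QCs -> {100 * p_fail_seq:.2f}% false sequence-failure rate")
```

At k = 2 the per-QC pass probability drops to about 95%, so three QCs per sequence would already give roughly a 14% false-failure rate, which is why widening to 3 or 4 s.d. is attractive when one failure kills the run.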
Does this sound stupid/over-optimistic/over-lax? What do "real" labs do?
