by lmh » Fri Jan 17, 2020 1:47 pm
If anyone's reading this, there are some good references on it. LC-GC did a very good one (Feb 2009, 22:82-85, probably a Dolan article). There are also some hairier treatments, such as Epshtein's in Pharmaceutical Chemistry J. 38:212-225.
To my mind, it is entirely logical to use the s.d. of the calibration curve to estimate the LOQ (not the LOD), because what you're interested in is the lowest level at which you can report a value with a certain precision or better. So the important thing is the precision of the measurements at that point.
This also makes it clear that the s.d. at the y-intercept is irrelevant. What you need is the s.d. at the point where you're worried about measurement reliability (i.e. the LOQ). It's not really possible to quote a single s.d. for a line because, even assuming linearity, the slope and the intercept may have different errors (which is the same as saying that the error at x=1 may be different from the error at x=10). Some statistician will have to correct me, but I think the s.d. of the curve, as produced by software packages, assumes the line moves up and down while retaining the same slope.
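The point that the error is not constant along the line is easy to demonstrate with ordinary least-squares formulas. A minimal sketch (made-up calibration data, x = concentration, y = peak area; numpy only):

```python
import numpy as np

# Hypothetical calibration data, purely for illustration.
x = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
y = np.array([10.3, 19.8, 51.2, 98.7, 201.4, 498.9])

n = len(x)
slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
s = np.sqrt(np.sum(resid**2) / (n - 2))   # residual s.d. of the fit

def se_pred(x0):
    """Standard error of the fitted line's predicted response at x0."""
    return s * np.sqrt(1/n + (x0 - x.mean())**2 / np.sum((x - x.mean())**2))

# The error is smallest near the mean of the calibration points and grows
# as you move away from it - so a single "s.d. of the line" is a fiction.
print(se_pred(1.0), se_pred(float(x.mean())), se_pred(50.0))
```

Run it and the standard error at the extremes comes out larger than at the centre of the calibration range, which is exactly why the s.d. quoted "for the line" doesn't tell you the s.d. at the LOQ.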
But basically, the key point is: if you use the calibration-curve method, you need to estimate your LOQ from a calibration curve built around the expected LOQ. It doesn't work if you use values a long way away and extrapolate; in that case your estimated s.d. where it matters is likely to be wrong, so the calculated LOQ will be wrong too.
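For the record, the calibration-curve route usually takes the ICH Q2-style form LOD = 3.3·σ/slope and LOQ = 10·σ/slope, with σ taken as the residual s.d. of a low-level curve. A sketch with made-up low-level standards (the crucial part being that the points sit near the expected LOQ, not far above it):

```python
import numpy as np

# Hypothetical low-level standards, near the expected LOQ.
x = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([5.6, 10.1, 20.9, 29.4, 41.2, 50.3])

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
sigma = np.sqrt(np.sum(resid**2) / (len(x) - 2))  # residual s.d. of the fit

lod = 3.3 * sigma / slope   # ICH Q2-style detection limit
loq = 10.0 * sigma / slope  # ICH Q2-style quantitation limit
print(f"LOD = {lod:.2f}, LOQ = {loq:.2f} (same units as x)")
```

Note that this formula bakes in a fixed LOQ/LOD ratio of 10/3.3, which connects to the point about fixed factors below.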
Now, to apply it to the LOD. The thinking here is more woolly, because really you're now calculating an LOD based on the precision with which a peak's area can be measured. So the assumption is, noise makes a peak hard to recognize, and noise makes a peak's area unreliable. Therefore the unreliability of the peak's area (s.d. of the calibration curve at this point) is a measure of the noise which in turn is a measure of whether I'm likely to recognize this as a peak or not.
The logical way to consider whether you have a peak (LOD) is indeed to look at S/N ratio, because you're asking the question: do we have sufficient deviation from the expected background for me to believe that something has happened here that is not background? But there are caveats (see below).
The problem is that people then calculate the LOQ from the same S/N measurement as they used for the LOD, just with a different factor. That's a calculation that's actually quite hard to justify. It also has the side-effect of establishing a fixed relationship (a fixed multiplication factor) between the LOD and the LOQ, which encourages those who have a rational way to calculate their LOQ to derive the LOD the same way, using the factor...
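To make the S/N route concrete, here is a sketch on a synthetic trace (all numbers invented): noise measured from a peak-free baseline region, signal taken as peak height above baseline, with the conventional thresholds S/N ≥ 3 for detection and S/N ≥ 10 for quantitation, which is precisely what silently fixes LOQ/LOD at 10/3.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic trace: flat baseline with Gaussian noise plus one small peak.
t = np.linspace(0, 10, 1000)
noise_sd = 1.0
signal = 8.0 * np.exp(-((t - 5.0) ** 2) / (2 * 0.1 ** 2))
trace = signal + rng.normal(0, noise_sd, t.size)

# Noise from a peak-free region; signal = peak height above mean baseline.
baseline = trace[t < 3.0]
noise = baseline.std(ddof=1)
height = trace[(t > 4.5) & (t < 5.5)].max() - baseline.mean()
sn = height / noise

# Conventional factors: "detected" at S/N >= 3, "quantifiable" at S/N >= 10.
print(f"S/N = {sn:.1f}, detected: {sn >= 3}, quantifiable: {sn >= 10}")
```

Of course, the moment the software zeroes out the baseline (next paragraph), `noise` becomes zero and this whole calculation divides by zero, which is the infinite-S/N trap.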
Now, the caveat on S/N ratio, and it is a serious one that really messes up the use of S/N, especially for the LOQ: there is a real problem where a detector has a signal threshold below which it reads zero. This often happens in MS software, which may discard signals below a threshold because it believes them to be meaningless noise (and therefore file-size bloat). It can also happen physically in some detectors. Whatever the reason, it can crop up in MRMs, for example, where there's a real risk that a peak that is actually very poorly reproducible (+/- 200% errors) nevertheless has an infinite S/N ratio, because there is absolutely no background signal whatsoever within 20 peak-widths either way! In this situation, it's difficult to know how to assess the LOD (because any peak, even one produced by an injection size that would only be detected one run in ten, will produce an infinite S/N ratio). And it's obviously completely impossible to calculate an LOQ.
The bottom line on LODs is that no matter which of the various available methods you use to measure them, you must, must, must go back and make a few injections at the LOD and check that it's really realistic. That's the proof.