What is the stretch of consideration to the LOD and LOQ?
Posted: Mon Nov 09, 2015 8:13 am
by carlmed00
Good day,
We are having a dilemma with our Limit of Detection and Limit of Quantification. We use the formula LOD = 3.3 x SE / slope to estimate the limit of detection.
Even though a certain measurement is found to be below the LOD, we can still see a very prominent peak in our data.
We tried lowering the range (removing the higher concentrations in the regression) but it just makes the LOD higher.
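For reference, here is roughly how we set up the calculation (a minimal sketch in Python with made-up numbers, not our actual data; SE is taken here as the residual standard deviation of the regression, though the standard error of the y-intercept is sometimes used instead):

    # Illustrative only - these concentrations and peak areas are made up.
    import numpy as np

    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])            # standard concentrations (ppm)
    area = np.array([52.0, 101.0, 198.0, 510.0, 1005.0])   # measured peak areas

    # Ordinary least-squares fit: area = slope * conc + intercept
    slope, intercept = np.polyfit(conc, area, 1)

    # Residual standard deviation of the regression (n - 2 degrees of freedom)
    resid = area - (slope * conc + intercept)
    se = np.sqrt(np.sum(resid**2) / (len(conc) - 2))

    lod = 3.3 * se / slope    # limit of detection
    loq = 10.0 * se / slope   # limit of quantification
    print(lod, loq)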
Below is a sample of a peak that is said to be below the LOD, yet the peak is still very prominent. Wouldn't it still be considered detectable?

Re: What is the stretch of consideration to the LOD and LOQ?
Posted: Mon Nov 09, 2015 2:40 pm
by James_Ball
I know that the Instrument Detection Limit is generally considered to be the amount of analyte that will give you a signal-to-noise ratio of 3:1 to 5:1. That peak would be well above that limit.
The limit of detection for the method has to take into account the Instrument Detection Limit along with the extraction efficiency. So if you inject a 1 ppm standard and it gives you a signal-to-noise ratio of 4:1, you can call 1 ppm your Instrument Detection Limit, and if you know that your extraction always gives you 50% recovery, then your Method Detection Limit would be (1 ppm / 0.5) / extraction factor, where the extraction factor is the ratio of initial volume (or weight) to final volume. So if you extract a 1 L sample down to a final volume of 1 mL for injection, your extraction factor is 1000. You would then have (1 ppm / 0.5) / 1000, which makes your method detection limit 0.002 ppm, or 2 ppb.
Of course if you are directly injecting your sample then Instrument Detection Limit is equal to Method Detection Limit.
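Written out as a quick calculation (same illustrative numbers as above):

    # Worked version of the example above (numbers are illustrative).
    idl = 1.0                          # Instrument Detection Limit, ppm (about 4:1 S/N)
    recovery = 0.5                     # extraction recovery (50%)
    extraction_factor = 1000.0 / 1.0   # initial volume (1 L = 1000 mL) / final volume (1 mL)

    mdl = (idl / recovery) / extraction_factor
    print(mdl)                         # 0.002 ppm, i.e. 2 ppb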
That is the practical definition of detection limits; the problem is that most governing agencies base the limits on statistical manipulations of the data. Sometimes I have seen it work out to be nearly the same, and other times I have seen the statistics give a limit of detection that is far below what the instrument could actually see.
Basing your LOQ and LOD on the slope of the curve works well IF your curve is truly linear across the entire range down to zero and you do not have an offset.
Some of the new EPA methods have done away with a minimum detection limit and have switched to using a Minimum Reporting Level, which is defined as the minimum concentration at which you can expect the reported value to be within +/-50% of the true value. It isn't perfect, but I think it is a step in the right direction, at least for that type of routine analysis, and it keeps labs from reporting ridiculously low detection limits just to qualify for contract bids.
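As a rough sketch of what that criterion amounts to (hypothetical replicate results, and only the plain +/-50% window; the methods themselves prescribe a more formal statistical procedure to make the same decision):

    # Hypothetical replicates of a standard spiked at a candidate reporting level.
    true_conc = 0.010                                        # spike level, ppm (illustrative)
    reported = [0.008, 0.012, 0.009, 0.013, 0.011, 0.007]    # measured values, ppm

    # Are all reported values within +/-50% of the true value?
    ok = all(0.5 * true_conc <= x <= 1.5 * true_conc for x in reported)
    print(ok)   # True here, so this level would be a defensible reporting level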
Re: What is the stretch of consideration to the LOD and LOQ?
Posted: Mon Nov 09, 2015 3:15 pm
by lmh
There are several things that can go wrong with a LOD based on the SE of the calibration curve. If your curve was taken over an inappropriate range of concentrations (for example, far higher than the LOD), then the error at the LOD may be very badly over-estimated, in which case the LOD is wrong (too high). As James said, if the response curve over the range you used wasn't linear, the situation is awful, because much of the "error" isn't really random error at all, but a systematic deviation from a straight line.
The idea of basing the LOD on the error of the calibration curve is that this error is your measure of how likely it is that the peak would, through random statistical fluctuations, disappear below the threshold of visibility. If the error isn't random statistical fluctuation, the whole argument goes wrong.
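To make that concrete, here is a toy illustration with purely synthetic data (it assumes the scatter is roughly proportional to concentration, and the function name is just for the example): the same detector calibrated far above the LOD gives a much larger 3.3 x SE / slope than a curve that brackets the low end.

    # Synthetic demonstration: range of the calibration curve vs. apparent LOD.
    import numpy as np

    rng = np.random.default_rng(0)

    def lod_from_curve(conc):
        conc = np.asarray(conc, dtype=float)
        # Simulated response: proportional to concentration with ~5% relative noise
        area = 100.0 * conc * (1.0 + rng.normal(0.0, 0.05, conc.size))
        slope, intercept = np.polyfit(conc, area, 1)
        resid = area - (slope * conc + intercept)
        se = np.sqrt(np.sum(resid**2) / (conc.size - 2))
        return 3.3 * se / slope

    print(lod_from_curve([10, 25, 50, 75, 100]))      # high-range curve: large apparent LOD
    print(lod_from_curve([0.1, 0.25, 0.5, 0.75, 1]))  # low-range curve: much smaller LOD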
I am deeply suspicious of statistical calculations that tell me something that my eyes and common sense tell me is wrong. Nevertheless, I don't think you can ignore your LOD calculation just because it doesn't make sense. Really, you are quite right to treat this as a trigger that something is going on and needs to be understood.
There are, for example, situations where the LOD is telling you the truth: that although you can see the peak in a few samples at low concentration, its area is actually so hugely random that there's insufficient guarantee you'd see it in the next sample.
To take a very extreme situation, imagine an autosampler with a very badly-washed needle and a "sticky" analyte. It's likely that there will be a nice peak clearly present in all injections, but its area will be very much a random number, depending as much on the previous sample as the current one! In this situation S/N tells you that the peak is detectable (even in the blank if you ran one! LOD = 0.00, and a lovely graphic to prove it!), but a LOD based on the SE of the calibration curve is going to be very high, because the calibration curve has vast errors.
Another situation: imagine a mass spec doing a method that causes rapid build-up of source contamination, leading to loss of efficiency. The peak areas of three injections of the lowest level standard are 100, 60 and 30. Even in "30" the peak is very obvious with a nice S/N ratio, but quite rightly, the SE-of-curve method knows that the SE is very big and the area of the next injection could easily be 5, or even -5; with this trend, you can't guarantee to detect the next sample.
I'm pretty sure you'd spot these extremes though!