Calculating detection limits

What is your preferred method for calculating limits of detection and quantitation?

I've heard varying schools of thought on how this is best accomplished.

Some people I know favor the traditional method: build a calibration curve with multiple replicates and calculate the LoD from the regression statistics in Excel's Data Analysis ToolPak, as 3 × the standard error of the y-intercept divided by the slope of the calibration curve.
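For anyone who wants to reproduce that spreadsheet calculation outside Excel, here is a minimal sketch; the calibration data are invented, and SciPy 1.6+ is assumed for intercept_stderr:

```python
# A minimal sketch of the ToolPak calculation (made-up calibration data;
# needs SciPy >= 1.6 for intercept_stderr).
import numpy as np
from scipy.stats import linregress

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])             # standards, ppm
area = np.array([120.0, 260.0, 495.0, 1260.0, 2490.0])  # detector response

fit = linregress(conc, area)
lod = 3 * fit.intercept_stderr / fit.slope    # 3 * SE(y-intercept) / slope
loq = 10 * fit.intercept_stderr / fit.slope   # same logic with a factor of 10
print(f"slope = {fit.slope:.1f}, LoD = {lod:.3f} ppm, LoQ = {loq:.3f} ppm")
```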

Others I know prefer to measure the S/N ratio for each calibrant, plot it as a function of concentration, and extrapolate back to S/N = 3.
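That extrapolation can be sketched the same way, fitting invented S/N readings linearly against concentration and solving for S/N = 3:

```python
# Sketch of the S/N-extrapolation approach (invented S/N values; assumes
# S/N is linear in concentration over this range).
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0])    # calibrant concentrations, ppm
sn   = np.array([4.2, 8.5, 16.8, 41.0])  # measured S/N at each level

slope, intercept = np.polyfit(conc, sn, 1)
lod = (3 - intercept) / slope            # concentration where S/N = 3
print(f"estimated LoD = {lod:.2f} ppm")
```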

And I know others who prefer much more involved approaches: running many, many samples and doing statistics on the noise itself.

Which method do you prefer? Which method do you think is best -- most accurate, most efficient, most statistically sound?
I prefer to let someone else tell me whatever the hell they want :P. Sorry for the cynicism, but there are a lot of opinions surrounding this issue. I got tired of arguing years ago.
If you use the search function, this has been discussed here before.

Use whatever method makes sense to you. I prefer to actually inject decreasing amounts of analyte down to the point where S/N = 3. That way it's not a calculation or an extrapolation.
MMJ88 wrote:
If you use the search function, this has been discussed here before.

Use whatever method makes sense to you. I prefer to actually inject decreasing amounts of analyte down to the point where S/N = 3. That way it's not a calculation or an extrapolation.


To me this is the most accurate approach, since not all compounds respond in a linear fashion. And if there is any kind of extraction involved, you should also factor in your extraction efficiency: just because your instrument can see 1 ppm at S/N = 3 doesn't mean you can see that in your sample if your extraction procedure only recovers 50% of the compound present.

I use this for my non-EPA methods. For the EPA methods I am stuck with the standard deviation of seven replicates times the Student t value for seven replicates, which, if you have very reproducible results, can give you detection limits that are impossible to actually see. But that is the government for you.
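In code, that EPA calculation looks like this (the replicate results are invented; the one-sided 99% t value for six degrees of freedom is 3.143):

```python
# EPA-style MDL: standard deviation of seven replicate spikes times the
# Student t value at 99 % confidence, n - 1 = 6 degrees of freedom.
import numpy as np
from scipy.stats import t

reps = np.array([0.48, 0.52, 0.50, 0.49, 0.51, 0.50, 0.47])  # invented, ppm

t_value = t.ppf(0.99, df=len(reps) - 1)   # 3.143 for seven replicates
mdl = t_value * np.std(reps, ddof=1)
print(f"t = {t_value:.3f}, MDL = {mdl:.3f} ppm")
```

With results this reproducible the MDL comes out around 0.05 ppm, an order of magnitude below the spike level, which is exactly the problem: a detection limit you may never actually see on the chromatogram.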
The past is there to guide us into the future, not to dwell in.
I agree, James. The EPA way of doing it gives you an overly optimistic LOD in my opinion. I do it that way only when I have to.
My worry about S/N is that in some MS^2 instruments running SRMs you can get to a stage where there is no baseline noise whatsoever within a lot of peak-widths of the peak. In this case the S/N ratio is infinite, and it remains infinite as you dilute the standard, until the moment when the peak becomes so small the integrator can no longer find it. The idea of using S/N is based on the idea that it is noise that makes the signal hard to see and hard to quantify. If in fact it's the overall size of the signal that makes it hard to see, then noise is no longer relevant.

An analogy is this: Say you are looking for sailing boats in a lake full of reeds. Your detection limit is set by the ratio of mast-height to reed-height: if the mast doesn't stick out of the reeds, you can't see it. Now say you're looking for a submarine in the lake. Even if there are no reeds at all, you haven't got any mast to look for. There is just a tiny periscope maybe. In these circumstances, the mast:reed ratio is completely irrelevant, because your sensitivity to spot the "boat" depends on factors other than the reeds - it depends on your basic ability to spot an extremely small periscope even on flat water.

This is why I quite like the approach of quantifying peak-area in a calibration curve approaching the expected LOD (or LOQ as appropriate). That way you are looking at your ability to see the peak itself, irrespective of the different factors that make it hard. It's like looking for periscopes in whatever lake you have (including reeds if present!), using a set of ever-smaller submarines, until you know that there is too great a variability in what you're seeing for you to be sure you'd see it every time.
lmh wrote:
My worry about S/N is that in some MS^2 instruments running SRMs you can get to a stage where there is no baseline noise whatsoever within a lot of peak-widths of the peak. [...]


In this situation, which occurs with MRM and sometimes SIM work, I normally set a minimum peak area instead of an S/N criterion. If a minimum area of 1000 gives me a peak at least 5 scans wide with a Gaussian shape, then that is my minimum area, and the concentration that gives me that area consistently is what I would consider the Limit of Detection.

Sometimes I have seen what looks like a single-scan spike, and people call that a detectable peak, but I would never consider that a reliable detection of any compound. No chromatographic peak is going to be that narrow; even when I inject something without a column I see peaks wider than that.
The past is there to guide us into the future, not to dwell in.
I think the methods above only work when you monitor one MRM. Most protocols monitor two MRMs, and the ion ratio must fall within certain limits. In that situation, if you see a peak on the quantifier (T) but no peak on the qualifier (Q), or the ratio is out of limits, you report a negative. So I inject the standard 20 times and calculate the RSD of the ion ratio; the level where the RSD reaches 20% is your LOD. The funny thing is that when the Q is smaller than the T, your LOD comes out higher than your LOQ!
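Here is that check as a sketch, with simulated peak areas standing in for the 20 replicate injections:

```python
# Ion-ratio RSD check: inject a standard ~20 times, compute the RSD of the
# qualifier/quantifier area ratio; the level where it reaches ~20 % is the LOD.
import numpy as np

rng = np.random.default_rng(0)              # simulated injections, not real data
quant = rng.normal(1000.0, 50.0, size=20)   # quantifier (T) peak areas
qual  = rng.normal(300.0, 45.0, size=20)    # qualifier (Q) peak areas

ratio = qual / quant
rsd = 100 * ratio.std(ddof=1) / ratio.mean()
print(f"ion-ratio RSD = {rsd:.1f} %")
```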
Three times noise on a chromatogram, or the smallest peak with an RSD of less than 33%, estimates only the contribution of the instrument to the variation in the result. Good luck to you if you can inject the sample in the form in which it is submitted, because then 3 times noise is a valid estimate of the LOD. But if you have to do sample prep of any kind, including dilutions, then the variation in the sample prep adds to the instrument variation, and the 3-times-noise estimate is too low. The LOD of an analytical method is the lowest content in a sample that gives results, in replicate preparations and analyses, with an RSD less than 33%. One of the big organizations (I'm not going to look it up) likes it this way.
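Expressed as a calculation (hypothetical replicate preparations at three spike levels):

```python
# LOD as the lowest spiked level whose replicate *preparations* (through the
# whole method, not just repeat injections) give an RSD below 33 %.
import numpy as np

results = {  # spike level (ppm) -> replicate results through the full method
    0.1: [0.06, 0.14, 0.11, 0.04, 0.15],
    0.5: [0.42, 0.55, 0.48, 0.51, 0.44],
    1.0: [0.95, 1.04, 0.99, 1.02, 0.97],
}

for level in sorted(results):
    reps = np.asarray(results[level])
    rsd = 100 * reps.std(ddof=1) / reps.mean()
    print(f"{level} ppm: RSD = {rsd:.0f} %")
    if rsd < 33:
        print(f"LOD is approximately {level} ppm")
        break
```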
Peter Apps
Peter Apps wrote:
Three times noise on a chromatogram, or the smallest peak with an RSD of less than 33%, estimates only the contribution of the instrument to the variation in the result. [...]


I was going to say "the first challenge is to get a group to agree on a noise calculation method", but this way removes that problem - I like it.
Thanks,
DR
Completely agreed: in the end, RSD is what matters, because it's directly what interests you for an LOQ (is my error bigger than an acceptable value?) and indirectly what interests you for an LOD (is my value statistically significantly different from zero?).

The point about qualifiers and quantifiers is an interesting one, and it's good someone brought it up. It's true that merely taking the error on the quantified result, which uses only the quantifier, could make it possible to calculate a very low LOQ, but that's not the right approach. The qualifier is the proof that the peak is the correct thing. If you are using qualifiers, you have taken the decision that you do not believe the peak is the right thing unless the qualifier is present at the correct intensity ratio. The consequence is that a tiny peak of the quantifier ion without the correct qualifier is, from your perspective, not the right chemical, and therefore meaningless, and should not be included in the calculation.

When calculating an LOQ based on RSD values from real measurements of dilute standards or samples (ideally spiked control samples, spiked matrix blanks, etc.), the LOQ is always the higher of two values: (1) the concentration at which you get an acceptable RSD; (2) the lowest concentration for which you have real data. Since you cannot have data below the point where you can unequivocally prove detection of the compound based on an acceptable qualifier ion ratio, if the qualifier has a large RSD your LOQ is limited by (2): the lowest concentration where you were able to detect the peak with the right qualifier ion ratio. Thus the LOQ isn't lower than the LOD; instead they become equal (with the side-effect that at the LOQ you actually have a much better RSD than the LOQ was designed to produce).

Think about it this way: an athlete with a failed drugs test wants to know (a) are you sure I had that drug in my urine? and (b) are you sure it exceeded the limit? There is no value in being absolutely sure that the level exceeded the limit if you can't be equally sure that the drug was what you think it was.
This is why I am beginning to like the new approach in the EPA drinking water methods, where the Method Detection Limit (LOD) is no longer calculated, but the Minimum Reporting Limit (LOQ) must be proven.

In the new procedure, you begin by statistically determining the MRL, then you process seven replicates, spiked at or below that level, through the entire procedure, and they must meet upper and lower limits.

5.3.3 Half range for the prediction interval of results
Determine the half range for the prediction interval of results (HRPIR) using Equation 2:

HRPIR = 3.963 × s

where s is the standard deviation of the seven replicate results. Note that the factor 3.963 is specific to the use of seven replicates.

5.3.4 Confirm PIR limits are within QC criteria
Confirm that the upper and lower limits for the prediction interval of the result (PIR) satisfy the upper and lower recovery limits.

The lower PIR limit must be greater than or equal to 50 percent recovery:
PIR_lower = ((Mean - HRPIR) / True Value) × 100%

The upper PIR limit must be less than or equal to 150 percent recovery:
PIR_upper = ((Mean + HRPIR) / True Value) × 100%

The MRL is validated if both the upper and lower PIR limits meet these criteria.
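As a sketch, the whole check for seven replicates spiked at a hypothetical MRL of 0.50 ppm (results invented):

```python
# HRPIR / PIR check from the EPA MRL validation procedure quoted above.
import numpy as np

true_value = 0.50                        # hypothetical MRL, ppm
reps = np.array([0.46, 0.52, 0.49, 0.55, 0.47, 0.51, 0.48])  # invented results

mean = reps.mean()
hrpir = 3.963 * reps.std(ddof=1)         # 3.963 factor is for seven replicates only

pir_lower = (mean - hrpir) / true_value * 100   # must be >= 50 %
pir_upper = (mean + hrpir) / true_value * 100   # must be <= 150 %

print(f"PIR limits: {pir_lower:.0f} % to {pir_upper:.0f} %")
print("MRL validated" if 50 <= pir_lower and pir_upper <= 150 else "MRL not validated")
```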


This gives a fairly reliable limit to what can be quantified, and any result below that must be flagged as an Estimated Value.
The past is there to guide us into the future, not to dwell in.