
MDL Levels

Hi, everyone,

I have a question about Method Detection Limit (MDL) determinations in GC for hydrocarbon analysis, using GC-FID and GC-TCD. We prepare an MDL standard at 1.5-5× the assumed MDL, run ten replicates, and then quantitate the resulting signal on our normally calibrated instrument.

My question is: does the calculated concentration of the MDL standard matter? For example, if your spike level is 1.7 ppm for a 0.5 ppm MDL, does it matter whether the ten replicates consistently calculate to 0.9 ppm, 1.2 ppm, or 2.3 ppm? Is it reasonable to accept a calculated value somewhat lower (or a bit higher) than the expected concentration, as long as the replicates pass the required statistical calculations for the MDL?
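For reference, the statistical calculation I mean is the classic one from 40 CFR Part 136 Appendix B: MDL = s × t, where s is the standard deviation of the replicate results and t is the one-tailed Student's t at 99% confidence for n - 1 degrees of freedom. A minimal Python sketch, with made-up replicate values:

from statistics import stdev
from scipy.stats import t

# Ten hypothetical replicate results (ppm) from a 1.7 ppm spike
replicates = [1.62, 1.48, 1.71, 1.55, 1.66,
              1.59, 1.44, 1.70, 1.52, 1.63]

n = len(replicates)
s = stdev(replicates)            # sample standard deviation
t99 = t.ppf(0.99, df=n - 1)      # one-tailed t, 99% confidence, 9 df
mdl = t99 * s                    # classic Appendix B MDL

print(f"s = {s:.3f} ppm, t = {t99:.3f}, MDL = {mdl:.3f} ppm")

Note that only the spread s enters this calculation, not the mean of the ten results, which is partly why I'm wondering whether the mean matters at all.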

One employer insists that it matters, but a former employer stated that it does not. I can't find a definitive answer on the web.
I'm worried and starting to doubt myself! Anyone care to share an opinion so I can have a little peace of mind? :?

Thanks to all who care to comment.
To me, the detection limit is where you can say - with some statistical confidence - that you can detect the analyte. It's there or it's not. To quantitate the analyte in the sample generally requires a signal that is quite a bit larger than that.

If you can make those low-concentration standards and reliably see a signal in them that is different from a suitable blank, that is all a detection limit tells you. At that level the instrument can say "present" or "absent". Whether it gives you the "right answer" in those lower concentration ranges seems irrelevant to me from a detection-limit standpoint.
Your procedure does not determine the MDL.

If you are quantifying, the peak area you get must be on your calibration curve (you are not allowed to extrapolate). Therefore your lowest standard must correspond to a lower concentration than what you are analysing. Your lowest standard is detectable, so it must be above the detection limit, and your MDL spike is higher than the lowest standard, so by definition it is higher than the MDL.

If you are going to all the trouble of ten replicates, then simply make up test samples at the concentration you want to claim as the MDL and show that the RSD of the peak areas is less than 30%. There is no need to calibrate or to calculate concentrations.
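Something like this is all the arithmetic it takes (the peak areas here are hypothetical, just to show the check):

from statistics import mean, stdev

# Ten hypothetical raw peak areas at the claimed MDL concentration
peak_areas = [1032, 987, 1110, 954, 1065, 1021, 998, 1083, 1009, 1047]

rsd = 100 * stdev(peak_areas) / mean(peak_areas)
print(f"RSD = {rsd:.1f}% -> {'pass' if rsd < 30 else 'fail'} (30% criterion)")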

Peter
Peter Apps
Peter, the MDL replicates we analyse are spiked at a level below our lowest calibration point, so you are right - they are not reliably quantifiable, which is what I felt but wasn't able to get across to my employer. However, the "powers that be" in our QC department want to see those MDL replicates' quantified values at or very near the theoretical spike level.

Thanks for clarifying the glaringly obvious to me!

One other small point: some of our gas hydrocarbon analyses are quantitated against a single calibration point using an average RF (we run three replicates of one level and use the average response), based on ASTM methods. However, again, when validating an instrument I am asked to perform both a linearity check (using diluted gas standards) and an MDL study (diluted gas standards, ten replicates, as in my original post).

Even though hydrocarbons are generally very linear on an FID or TCD, if you're doing a one-point calibration with a 9% gas standard, how can you expect to accurately quantitate a 0.0005% standard? It is required by our QC department, but it just seems overly ambitious to me.
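To make the mismatch concrete, here is a sketch of the average-RF arithmetic with made-up numbers (only the 9% standard level is real; the areas are hypothetical):

# Single-point quantitation with an average response factor
std_conc = 9.0                      # calibration gas, % (real level)
std_areas = [45120, 44890, 45310]   # three reps of the one level (hypothetical)

rf_avg = sum(a / std_conc for a in std_areas) / len(std_areas)

sample_area = 2.5                   # a tiny peak near the 0.0005% level (hypothetical)
conc = sample_area / rf_avg
print(f"RF_avg = {rf_avg:.1f} area per %, C = {conc:.6f} %")

That quantitation sits more than four orders of magnitude below the single calibration point, which is exactly what bothers me.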

Sorry, just a little venting as I'm continually frustrated by these requirements. :o Thanks again to both of you for the replies.
It sounds like your QC department doesn't know what they are doing. The point of an MDL determination is to ascertain the level at which you can reliably measure a response greater than background. If you can accurately quantitate at this level, you are most likely not at the MDL.

A simple fix would be to report your quantitation below your low standard with an estimated error (plus or minus some %). As you approach the MDL, the % error becomes large.
Of course, then the question becomes: how does one estimate the error? Something to put to "the powers that be" in your QC department.
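One way to do it (my own rough assumption, not anything official): treat the standard deviation from the MDL replicates as a constant absolute uncertainty, and watch the relative error grow as the result shrinks:

# Hypothetical: s from the ten MDL replicates, assumed constant with concentration
s_mdl = 0.18    # ppm

for result in (5.0, 1.7, 0.9, 0.5):     # reported concentrations, ppm
    pct = 100 * 1.96 * s_mdl / result   # rough ~95% band
    print(f"{result:4.1f} ppm -> +/- {pct:5.1f} %")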
It sounds like they are confusing the Minimum Detection Limit with the Minimum Quantitation Limit, or LOQ.
The correct way of reporting an MDL result as a quantitative result is "less than X", where X is the lower limit of quantitation.

This is probably in your QC documentation somewhere, so the QC dept cannot argue against it.

Peter
Peter Apps