
Duplicate partially below LLOQ


Hi, a question about quantification:

Let's say one analyses a sample in duplicate, and one measurement yields a concentration just below the LLOQ, the other just above, in such a way that the average is above the LLOQ. There is no option for a re-measurement.

What is the best way to approach this, given that one cannot quantify correctly below the LLOQ, yet the low value is well within the 15% margin around the LLOQ that is normally accepted for a QC?

I am curious what the thoughts are!
My first thought is DAMN! You can report that the analyte was detected but below your quantitation limit, so you cannot give a definite concentration (>DL but <QL). Another sample would have to be obtained and concentrated if an answer is 'really' required.

Why are you working in this range, where the concentration is so low?
If you are running duplicate samples and reporting the means, then you should have determined the LOQ in the same way: run duplicate reference materials and take the mean.

Peter
Peter Apps
Your value above the LLOQ verifies that there is a quantifiable amount; the low value is still within +/-15% of the LLOQ, and the average is above the LLOQ, so I would report at least the average. You would need to document and qualify the result so the end user knows what happened, but you have enough data to prove there is a positive hit for the analyte. Only the exact concentration is in question, and with any test, the closer you come to the LLOQ, the less accurate the result will be.
The past is there to guide us into the future, not to dwell in.
Thanks for the ideas so far;

Let me elaborate a bit: my case is a bioanalytical one, where re-analysis is unfortunately not an option. Calibration standards are prepared in duplicate, as are the QC samples and the real samples. So normally everything is averaged.

I am aware that results do not get more precise closer to the limits of the calibration curve; however, one has to set and accept the limits of a method somewhere. Basically, the limits are simple: ULOQ and LLOQ. So, strictly, anything below the LLOQ is not quantifiable, and therefore there is no number one can use for averaging.

However, reporting the concentration as <LLOQ doesn't 'feel' right either, as the other half of the duplicate is >LLOQ, and so is the average. Again, re-analysis is not an option. Neither is reporting both values: one value needs to be reported.
It sounds as if you are averaging too late in the process - rather than taking the mean of the results for each replicate, take the mean of the peak areas and apply the calibration (based on the means of the two values at each level) to that.

The answers should always be the same, of course.

Peter
Peter Apps
If the LLOQ is based on the average of duplicate numbers then you should have a reportable number.

If the low standard is 1, and you run that standard in duplicate and get 0.9 and 1.1 for the individual analyses but the average is 1, and that average is what is used in the calibration, then the sample should be treated the same way. As Peter said, average the peak areas first, then calculate the reportable number, because that is most likely what the software does when it builds the calibration curve.
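To illustrate, here is a minimal Python sketch; the calibration slope and intercept are hypothetical, chosen so the duplicate areas back-calculate to the 0.9 and 1.1 above. With a linear calibration, averaging the peak areas first and averaging the back-calculated concentrations give the same answer, because taking a mean commutes with any linear transformation.

```python
# Minimal sketch: hypothetical linear calibration, area = slope*conc + intercept
slope, intercept = 1000.0, 50.0

def concentration(area):
    """Back-calculate a concentration from a peak area."""
    return (area - intercept) / slope

areas = [950.0, 1150.0]   # duplicate injections of one sample

# Route 1: average the peak areas, then apply the calibration
route1 = concentration(sum(areas) / len(areas))

# Route 2: back-calculate each replicate, then average
route2 = sum(concentration(a) for a in areas) / len(areas)

print(route1, route2)     # both 1.0 (bar float rounding), as in the 0.9/1.1 example
```

Note that with a non-linear (e.g. quadratic) fit the two routes no longer agree exactly, which is one more reason to know at which stage your software does the averaging.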

If you did three replicate analyses and averaged the results instead of two replicates, would you consider it a non-detect if only two were above the LLOQ? What if you did ten replicates: do all ten have to be above the LLOQ to be considered a positive result?
Neither is reporting both values: one value needs to be reported.
If both of these results had been above the LLOQ, would you report both numbers or the average of the two?
The past is there to guide us into the future, not to dwell in.
The situation is actually inevitable, and just proves that the world is working correctly. If you have a symmetrical error about a mean, then when you assay samples that contain exactly your LLOQ, half of the measurements will fall below the LLOQ.

The point about the LLOQ is that it corresponds to a just-acceptable RSD. If your LLOQ is based on an acceptable RSD of, say, 20%, then if you were to measure a lot of samples at the LLOQ, the answer should be "X" +/- 20%, and if it isn't, then something's wrong.

Therefore Peter is correct: the right thing to do is quote the average.

The only complication, from a strict statistical perspective, comes from the fact that the LLOQ is usually intended to give the RSD on a single measurement. (But if you do everything in duplicate and do all the averaging before finding an LLOQ by a means that uses the calibration curve, rather than by inspection of noise, then of course the LLOQ actually applies, correctly, to the mean of the duplicate measurements.) If your LLOQ was calculated from a calibration curve of individual points, but you now apply it to the mean of two measurements, you are actually being pessimistic, because the mean has a smaller error than an individual measurement: the mean has a "standard error", while the individual measurements have a "standard deviation", and the standard error is the standard deviation divided by the square root of the number of measurements.

Actually, this is all a bit irrelevant and hypothetical, because determining errors and variability from a handful of measurements is extremely error-prone, so it is highly unlikely that the difference between using a standard error and a standard deviation is significant.

Practically, what you should do is quote the mean, as it is. The mean is almost exactly at the LLOQ, which indicates that the expected error on this value is whatever you set as the limit for the LLOQ.
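To put numbers on both points, here is a quick simulation using only the Python standard library; the 20% RSD and the 27-unit value (the same figure as the example in the next post) are illustrative assumptions. Roughly half of the single measurements on a sample sitting exactly at the LLOQ fall below it, and a duplicate mean has a standard error of SD/sqrt(2).

```python
import random, statistics

random.seed(1)
true_value = 27.0              # assume the sample sits exactly at the LLOQ
sd = 0.20 * true_value         # 20% RSD on a single measurement -> SD = 5.4

singles = [random.gauss(true_value, sd) for _ in range(100_000)]
duplos = [(random.gauss(true_value, sd) + random.gauss(true_value, sd)) / 2
          for _ in range(100_000)]

print(sum(x < true_value for x in singles) / len(singles))  # ~0.50: half fall below the LLOQ
print(statistics.stdev(singles))  # ~5.4: SD of a single measurement
print(statistics.stdev(duplos))   # ~3.8: SE of a duplicate mean = 5.4/sqrt(2)
```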
Just a quick follow-up on the danger of threshold values.

Imagine your LLOQ is 27 units, but this assumes an RSD of 20%. The standard deviation is therefore 5.4 units.

Now imagine someone asks you to measure a lot of samples that happen to contain exactly 27 units of stuff. When you assay these, your answers should be 27 +/- 5.4. Say you report all assays below 27 as less than the LLOQ, but report the rest as real values. What happens next?
If you're unlucky, your client is less statistically aware than you, and decides only to use the data that have genuine reported numbers, quite reasonably excluding the rest as unmeasured/unmeasurable (typical situation: he weeds out all the data that are too weak to report, and passes all the reportable data to a third person, such as his boss, who is unaware that there were any unreportable data...).
50% of the data are now gone. The remaining data range from 27 upwards, with 34% of the original data (68% of the data that got used) somewhere between 27 and 32.4, and 16% (32% of the data that got used) somewhere above 32.4.
If your client then averages the data that he has, for all the samples derived from that particular treatment/sample-type, then instead of getting the correct mean of 27, and knowing that the value is +/- 20%, he'll actually get a value quite a lot above 27 (about 31.3, since the mean of the upper half of a normal distribution sits roughly 0.8 standard deviations above the true mean), and a much smaller error if he chooses to calculate it (because he's thrown away half of the normal distribution curve). This means his data look more trustworthy, but are more wrong.

Of course, if the final end user has any sense, they will know that a final mean of about 31 is only marginally different from 27, and if they're seriously worried about values in this region, they need a more sensitive method. But I'm basically suspicious of working procedures that potentially introduce unnecessary bias into measurements. We need thresholds for decisions, but we should be aware of how their misuse can cause errors.
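For anyone who wants to see the bias directly, here is a short sketch in standard-library Python with the same assumed numbers (true value 27 units, SD 5.4):

```python
import random, statistics

random.seed(1)
true_value, sd = 27.0, 5.4
results = [random.gauss(true_value, sd) for _ in range(100_000)]

# The client keeps only the "reportable" values at or above the 27-unit LLOQ
kept = [x for x in results if x >= 27.0]

print(len(kept) / len(results))  # ~0.50: half the data are discarded
print(statistics.mean(kept))     # ~31.3, not 27 (bias = sd*sqrt(2/pi), about 4.3 units)
print(statistics.stdev(kept))    # ~3.3, much smaller than the true 5.4
```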
Good point.

Just shows that the reason we have trained analysts in the lab instead of robots is that the analyst can use reason when the unexpected happens.

A divide-by-zero result will stop a robot cold in its tracks; an analyst just shrugs and goes on with the work.
The past is there to guide us into the future, not to dwell in.
Great post! This is a pretty subtle topic; I sometimes have difficulty discussing it with students, and even faculty, that I report results to.