
How should I correctly report this?

Discussions about HPLC, CE, TLC, SFC, and other "liquid phase" separation techniques.

18 posts Page 1 of 2
Hi,

I have a small issue with reporting results from one of our methods. I'm going to make up the numbers here to make them easier to follow.

LOD = 1
LOQ = 3

Anything below the LOD is reported as "None Detected <1."
Anything below the LOQ is reported as "Detected <3."

Anything above 3 is reported as the actual value.
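As a minimal sketch, the reporting rules above could be written out like this (the function name and the use of Python are my own illustration, using the made-up LOD/LOQ values from the post):

```python
# Hypothetical sketch of the reporting rules described above.
LOD = 1.0
LOQ = 3.0

def report(value):
    """Return the reporting string for a measured (or blank-corrected) value."""
    if value < LOD:
        return f"None Detected <{LOD:g}"
    if value < LOQ:
        return f"Detected <{LOQ:g}"
    return f"{value:g}"

print(report(0.5))  # None Detected <1
print(report(2))    # Detected <3
print(report(7))    # 7
```

The question in this thread is essentially which `value` to feed in: the raw instrument reading or the blank-corrected result.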

We do a blank correction for the test samples and we ran into a small reporting issue with our Quality department the other day.

Say our blank sample gave us a value of 5 and our test sample gave us a value of 7; 7 − 5 = 2.

Our reported value was 2, since the "machine-detected" level was 7 and the blank was 5; both are above the LOD and LOQ, and so valid for this calculation. But our Quality department argues that we cannot report this value because it is below the LOQ. I know it's partly a terminology thing, but does anybody have any guidance on how I can clarify my reporting style to account for this?

Many thanks.
I would agree with Quality on this one. If you have a corrected value that is below the LOQ, I would report the result as "detected, below LOQ", as you can't confidently state a number due to the variation at levels between the LOD and LOQ.

Out of interest, are you doing the blank subtraction manually (i.e. in Excel or similar) or letting the CDS do it? And are you actually evaluating your LOD and LOQ injections after subtracting the blank peak/area?

Blank subtraction can be a sore spot for auditors, so I would advise erring on the side of caution when reporting subtracted results, and noting it in your audit trail etc.
Hi Chromavore. We have Chromeleon do it automatically inside the report, using an Excel formula that cannot be altered by operator-level users; everything is generated via an automatic eWorkflow.

I just can't really get my head around how a sample value of, say, 100 minus a blank value of 98, both far above the LLOQ of 3, cannot be reported as 2, even though there is no question about the validity of the results. The measurements that got you to the value of 2 are orders of magnitude above your LLOQ, i.e. you arrived at that value through data processing/calculations rather than through a physical limitation of the instrument/method.
I think the easiest way to explain it is as a signal-to-noise problem. If you have a peak response of 100 and a blank peak of 1, your S/N = 100, and there is no problem reporting that. If you have a blank peak of 98, your S/N = 1.02, which is actually below the guidelines of 3 for LOD and 10 for LOQ.
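As a quick arithmetic check of the ratios mentioned above (treating the blank peak as the background, as the post does; the function is my own illustration):

```python
# Ratio of sample response to blank response, as used in the post's argument.
def s_over_n(sample_peak, blank_peak):
    return sample_peak / blank_peak

print(s_over_n(100, 1))             # 100.0 -> comfortably above the LOQ guideline of 10
print(round(s_over_n(100, 98), 2))  # 1.02  -> below even the LOD guideline of 3
```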

Chromeleon has a function for visualising this: in the processing method, go to the "General" tab, select "Chromatogram Subtraction", then "Subtract Fixed Injection", and select a blank injection. This subtracts the data points of the blank injection from your sample injection, and you should be able to see that your sample response is <LOQ. Sorry if this is what you are already doing; it is a bit unclear from your response.

Cheers,
Chromavore
Many thanks for the replies.

Unfortunately we don’t use the chromatogram subtraction, as our calibration samples are made from a certified reference material and don’t require blank correction. Our test samples have a slightly different preparation (they are manually derivatised) and require correction with a ‘blank’ containing derivatisation reagents only. If we use chromatogram subtraction, all samples in the sequence are corrected, including the calibration samples, which we don’t want.

So in the report designer in Chromeleon, we have a template set up that subtracts a ‘blank’ sample (run as an unknown) from test samples only.
I just can't really get my head around how a sample value of say 100 minus a blank value of 98, which are both way above the LLOQ of 3, cannot be reported as 2
An error of, say, about 1 unit is only 1 % of 100 or 98, but it is 100 × 1/(100 − 98) = 50 % of your calculated result. An apparently small relative variation in the two peak areas leads to a huge relative variation in their difference, rendering the result highly imprecise. Such a result is outside the range of quantitative analysis. Even a result of 5 originating from the difference 100 − 95 can be unacceptably imprecise.
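The amplification described above can be worked through in a few lines (illustrative numbers only, taken from the post):

```python
# An error of ~1 unit on peaks of 100 and 98 is about 1 % of each peak,
# but the same 1 unit is 50 % of the difference (100 - 98 = 2).
sample, blank, err = 100.0, 98.0, 1.0

rel_err_sample = err / sample          # 0.01 -> 1 % of the sample peak
rel_err_diff = err / (sample - blank)  # 0.5  -> 50 % of the calculated result

print(f"{rel_err_sample:.0%} of the sample, {rel_err_diff:.0%} of the difference")
```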

If the large peak in the blank sample chromatogram is due to the analyte, or to an unknown compound in the blank, you have a bad sample preparation procedure. If the blank peak is a "system" peak (or, again, a peak of an unknown compound), you have bad chromatography.

What would you do if the sample peak area is 4 and the blank peak area is 1 or 2? Would you consider the blank peak area zero?
After subtracting out the "baseline of 98", pretend that the baseline is actually flat (it looks this way if you set the chromatogram output in CM to reflect the subtraction); then you're looking at a peak of 2, which is below LOQ. You have to remember that the baseline value is BS and that you're interested in the net value. Your LOQ is based on the accuracy with which a result can be resolved from other possible results.

PS - anytime I run into baseline subtraction, I suggest checking on a number of things to improve that baseline! A good data system can't fix bad chromatography.
Thanks,
DR
Hi, just to clarify. The ‘blank subtraction’ is really a sample matrix correction for the test samples as the derivatisation agent can contain small amounts of the analyte of interest.

The numbers in my post were completely made up. Our baseline and system health/suitability are extremely good.

In relation to this being a sample matrix correction, if this is still the correct way to go I will amend our reports :)

What would you do if the sample peak area is 4 and the blank peak area is 1 or 2? Would you consider the blank peak area zero?
Regardless of the blank amount, we still subtract it. Would you suggest an alternative approach?

Many thanks all for the input, always appreciative of others sharing their knowledge.
I guess you should base the argument on the uncertainty of the two results, not only on the (single) "mean" values of the matrix blank and the sample.

E.g. if your blank is 98 ± 0.5 and your sample is 100 ± 1, then your sample contains between 0.5 and 3.5 (2 ± 1.5), regardless of the fact that the instrument's LOD is 1. If the two confidence intervals overlap, you cannot report any number.

You need to define the probability level of the uncertainty according to your needs: whether a single standard deviation is sufficient, or whether it needs to be statistically expanded to 95 or 99 %, depending on the number of replicates.

Maybe this link may give further inputs:
Eurachem, Measurement Uncertainty, https://www.eurachem.org/index.php/tskmu
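The uncertainty argument above can be sketched numerically. Note one assumption on my part: I combine the two standard uncertainties in quadrature (the usual propagation for a difference of independent quantities), whereas the post's "2 ± 1.5" adds the intervals linearly, which is more conservative:

```python
import math

# Illustrative numbers from the post: blank 98 +/- 0.5, sample 100 +/- 1.
blank, u_blank = 98.0, 0.5
sample, u_sample = 100.0, 1.0

diff = sample - blank                            # 2.0
u_diff = math.sqrt(u_sample**2 + u_blank**2)     # ~1.12 (quadrature)

# Expanded uncertainty with coverage factor k = 2 (roughly 95 %);
# the appropriate k depends on the number of replicates.
k = 2
print(f"result = {diff:.1f} +/- {k * u_diff:.1f}")
```

With these numbers the expanded interval still spans values below the LOQ of 3, which is the point of the argument.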
When the calibration curve is built, the calibration standards are derivatized the same way as the samples. The signal generated by the derivatization reagent can be assumed to be the same for the samples and standards. There is no need to subtract the derivatization blank from the samples.
Not in this case:
our calibration samples are made from a certified reference material and don’t require blank correction. Our test samples have a slightly different preparation (are manually derivatised) and required correction with a ‘blank’ containing derivatisation reagents only. If we use chromatogram subtraction, all samples in the sequence are corrected, including the calibration samples, which we don’t want.
For example, you can use a commercial standard of DNPH-aldehydes, but you have to derivatise the samples with DNPH.
No: if the samples are derivatized with DNPH, the standards have to be derivatized with DNPH under the same conditions. A ready-made DNPH derivative standard cannot be used to build the calibration curve; the response factors from the two approaches are significantly different.
Obviously, the statement in bold would always apply. So why, then, are the ready-to-use standards sold?
Aldehyde/Ketone-DNPH Stock Standard-13, Supelco, CRM44933
TO11/IP-6A Aldehyde/Ketone-DNPH Mix, Supelco, CRM47285
Acetone-2,4-DNPH solution, Supelco, CRM4M7341
If reference standards of the derivatives are used to build the calibration curves, this rests on the assumption that the derivatization efficiency for the samples is 100 %. That assumption is flawed, to say the least. It is not uncommon to see multiple derivative products from a single derivatization reaction. I have even seen different derivatization efficiencies for (underivatized) standards at different concentrations.
If reference standards of the derivatives are used to build the calibration curves, it is based on the assumption that the derivatization efficiency for the samples is 100%. The assumption is flawed to say the least.
All the assumptions are verified (or disproved and rejected) during the method development.
