
LOQ is smaller than lowest calibration point

Discussions about HPLC, CE, TLC, SFC, and other "liquid phase" separation techniques.

It makes sense that at least one point in the LOQ calculation should be "at or below" the LOQ, because otherwise you're extrapolating in calculating the LOQ (taking standard-deviation measurements from a region of the curve away from where you intend to use them), and extrapolation is always a bad idea.

You asked whether you should take just the three lowest points for calculating your LOQ. The problem is that you're then calculating a standard deviation from only three points, so it will carry a very large error. (I always struggle to get my head around errors-on-errors, but the s.d. value has its own s.d.; here you're trying to estimate the average variability of low measurements, and it's the error on that estimate that makes the calculated LOQ jump around.) Your aim is an LOQ that's reliable and accurate, not a meaningless number that bounces up and down from batch to batch.
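If it helps to see how wobbly a 3-point s.d. really is, here is a quick simulation sketch in Python (pure illustration with invented numbers, not any particular method):

```python
import numpy as np

rng = np.random.default_rng(0)
true_sd = 1.0      # "true" variability of a low-level measurement (arbitrary units)
batches = 10_000   # number of simulated validation batches

for n in (3, 7):
    # estimate the s.d. from n replicate low-level measurements, many times over
    sds = rng.normal(0.0, true_sd, size=(batches, n)).std(axis=1, ddof=1)
    lo, hi = np.percentile(sds, [5, 95])
    print(f"n={n}: 90% of s.d. estimates fall between {lo:.2f} and {hi:.2f}")
```

In runs of this sketch the n=3 estimates typically scatter over something like an eight-fold range, while the n=7 estimates stay within roughly a three-fold range, which is exactly the batch-to-batch jumping described above.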

It is this problem with measuring s.d. at low concentration that prompts regulators to ask for 7 replicates of a point at the LOQ.

My own entirely theoretical and personal viewpoint is that if you have 7 different points, all fairly close to the LOQ and scattered just above and below it, your estimate of the s.d. near the LOQ should be about as good as if you had 7 replicates at a single concentration. The problem isn't the lack of replication of each individual point; it's the lack of replicates overall. Points a very long way from the LOQ are, of course, of less value (and in a regulated environment the personal opinion of lmh is worth nothing: you have to do what the regulators ask).

Of course I couldn't agree more with Peter Apps' point about measuring samples close to the LOQ. Samples are very different to clean standards; a nearby contaminant peak is all it takes to mess up an LOQ completely.
Hello lmh,

Thanks for the discussion. A sample with an analyte concentration close to the LOQ is not that easy to find. If I want to develop a general SOP, what are your suggestions? My current thoughts are:
1. Use the full standard curve (5 points) to find an estimated LOQ value using the ICH approach (a rough sketch of the calculation is below).
2. Confirm the LOQ by running a low-concentration standard 6 replicate times.
What do you think?
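For step 1, a bare-bones sketch of the ICH-style calculation I have in mind (LOQ = 10·σ/S and LOD = 3.3·σ/S, with σ taken as the residual standard deviation of the regression and S the slope; the calibration data here are invented):

```python
import numpy as np

# invented 5-point calibration: concentration (µg/mL) vs. peak area
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([10.3, 19.8, 51.2, 99.5, 201.0])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
# residual standard deviation of the regression (n - 2 degrees of freedom)
sigma = np.sqrt(np.sum(residuals ** 2) / (len(conc) - 2))

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"slope = {slope:.2f}, sigma = {sigma:.2f}, LOD ~ {lod:.2f}, LOQ ~ {loq:.2f} µg/mL")
```

Whether the estimate can actually be claimed would then come down to step 2, i.e. whether that level shows acceptable precision and accuracy.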


1 is OK, but 2 misses the point: you will always get a better (i.e. lower) LOQ if you just run replicate injections of standards than if you do replicate preparations and analyses of real samples.

If you cannot get samples with concentrations close to the LOQ, you need to be asking whether you really need the LOQ to be that low. If you can get clean matrix, you can spike it with known quantities of analyte to do the sample replicates.
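If you do spike clean matrix, the confirmation at the candidate LOQ usually boils down to precision (%RSD) and recovery of the replicate preparations; a trivial sketch with an invented spike level and invented results:

```python
import numpy as np

spike_level = 1.0                                        # µg/mL, candidate LOQ (invented)
found = np.array([0.92, 1.08, 0.97, 1.05, 0.88, 1.11])  # six spiked-matrix preparations

mean = found.mean()
rsd = 100.0 * found.std(ddof=1) / mean   # precision as %RSD
recovery = 100.0 * mean / spike_level    # accuracy as % recovery
print(f"mean = {mean:.2f} µg/mL, RSD = {rsd:.1f}%, recovery = {recovery:.1f}%")
```

Whatever acceptance limits your SOP or regulator sets for RSD and recovery at the LOQ then decide whether the level can be claimed.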

Peter Apps
Thank you both! They are good suggestions!

The question of LOD (limit of detection) and LOQ (limit of quantification) is very tricky and has been around for a very long time. AOAC and USEPA define the LOQ as the level giving a signal-to-noise ratio (S/N) of 10, or 3.3 times the LOD (defined as S/N of 3). No one other than ICH, that I know of, uses the SD of the calibration curve to calculate the LOQ, because you can radically change the LOQ merely by changing the calibration points.

The question then becomes how to define signal-to-noise. AOAC makes the determination visually (I don't like this approach, but it may be the best one). USEPA makes the determination statistically (40 CFR 136 Appendix B) and has been roundly criticized for this approach, since your choice of the low standard concentration you start from changes the LOD.
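As I understand it, the statistical calculation in 40 CFR 136 Appendix B boils down to MDL = t × s, with s the standard deviation of at least 7 replicate analyses of a low-level spiked sample and t the one-sided 99% Student's t value; a rough sketch with invented replicate results:

```python
import numpy as np
from scipy import stats

# invented results from 7 replicate low-level spiked samples (µg/mL)
replicates = np.array([0.95, 1.10, 1.02, 0.88, 1.15, 0.97, 1.05])

s = replicates.std(ddof=1)                       # sample standard deviation
t99 = stats.t.ppf(0.99, df=len(replicates) - 1)  # one-sided 99% Student's t
mdl = t99 * s                                    # EPA-style method detection limit
print(f"s = {s:.3f}, t = {t99:.3f}, MDL ~ {mdl:.3f} µg/mL")
```

The criticism is visible right there: the answer scales with whatever s you happen to measure at the spike level you chose to start from.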

Note that the approach that Peter champions is, in my estimation, the true LOD/LOQ for a specific sample. When you run standards in clean matrices you are really measuring the IDL/IQL, or the instrument detection limit/quantification limit. This is not at all the same as the SDL/SQL, the sample detection limit/quantification limit, since real samples have interferences. Some regulatory programs require you to report both the IDL and the SDL, but don't give you guidance as to how to do this.

In real terms you can get around all of this by using the method of known addition. You analyze the sample and the sample fortified with at least 3 different levels of standard, and then graphically determine the concentration of the analyte in the sample. This always works and you can get around all of the statistical mumbo-jumbo that you get caught up in when you go down the LOD/LOQ path, plus almost all of the regulatory programs will accept known addition results.
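A minimal sketch of the known-addition (standard-addition) arithmetic: fit the response against the amount added, including the unspiked sample at zero added, and the concentration already in the sample falls out of the intercept (all numbers invented):

```python
import numpy as np

# invented data: analyte added to the sample (µg/mL) vs. measured response
added = np.array([0.0, 1.0, 2.0, 3.0])      # 0.0 = unspiked sample
response = np.array([4.1, 6.0, 8.2, 10.1])

slope, intercept = np.polyfit(added, response, 1)
# response ~ slope * (added + Cx), so the intercept equals slope * Cx;
# graphically this is the magnitude of the negative x-intercept
conc_in_sample = intercept / slope
print(f"analyte concentration in the sample ~ {conc_in_sample:.2f} µg/mL")
```

This gives the concentration in that particular sample and matrix, which is why it sidesteps most of the LOD/LOQ argument.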
Mark Krause
Laboratory Director
Krause Analytical
Austin, TX USA
1. At low impurity levels, 1 µg/mL and 1.4 µg/mL are essentially the same level.
2. While the LOQ can be estimated statistically, the final test is whether it has adequate precision and accuracy. If you are not sure whether the 1 µg/mL level has good precision and accuracy, test it.
3. In the current case, where the calibration curve goes down to 1 and the LOQ is estimated at 1.4, it is perfectly reasonable to claim the LOQ is 1.4, or even 1, remembering that the lower the concentration, the larger the uncertainty.
Excel
Thank you for the suggestions!

I guess my question is this: we need to provide an LOD and LOQ to the client to demonstrate our method's capability, which is why we need a clear, always-workable SOP for establishing them. My understanding is that a standard-addition calibration curve can be used to get accurate results for a sample that contains matrix interference, but besides providing accurate results we also need to report the LOD/LOQ of our method. Any thoughts?

