
LOQ vs. Y-intercept of standard curve

Discussions about HPLC, CE, TLC, SFC, and other "liquid phase" separation techniques.

16 posts Page 1 of 2
Hi Everyone,

I've encountered something I don't know how to solve.

Using LC-QqQ, I determined the following for a compound of interest:

1. LOQ (based on S/N =10) is 1 nM. I interpret this to mean that I should be able to quantify anything above 1 nM.

2. The y-intercept of my standard curve is -112 nM. Obviously I did not force the line through zero, given all the warnings about doing that. The R^2 is 0.9999.

The problem is that I get peaks that are clearly higher than the LOQ, but the quantitation based on the standard curve determines the concentration is negative - can be -90, or -80 nM, for example.

This makes no sense to me. I know the compound is there in appreciable amounts, but if I abide by the rules of constructing and using standard curves, I have to report that a negative amount is there?

How do you handle this situation?

Many thanks!
Oh, this is a "can of worms". This is precisely why I don't think an intercept should be allowed (it seems to me that zero concentration, or nondetectable, should mean zero response/noise). Negative intercepts don't make physical sense to me. I'd love to hear/read someone's explanation of how to make sense of them. If your correlation is 0.9999, I'd bet that not allowing an intercept in your regression calculation won't change it a great deal.

In my experience, negative intercepts mean that your data is curved - probably ever so slightly - and concave up. I just described this yesterday (about midway down) at:

viewtopic.php?f=1&t=46281

Sorry. I just don't know what to make of negative intercepts, except that you need to figure out what in your calibration data is causing them. It's hard to deal with the fact that you can measure a good signal-to-noise ratio on your analyte yet can't use your regression coefficients to estimate the concentration.
This is a *very* common problem. To *grossly* oversimplify, the problem results from the fact that ordinary least squares is based on the assumption that the magnitude of absolute errors is essentially constant across the entire range of the data. The catch is, that's not the way chromatographic data behave; the magnitude of *percentage* errors is approximately constant! What that means in practice is that errors at the high end tend to dominate the calculation. If the calibration range is sufficiently narrow (e.g., less than one order of magnitude or so) the problem usually is not too bad but for wide range calibrations it's a real problem.

The most common solution is some form of weighted least squares (e.g., 1/x or 1/x^2) or something like a log-log transform.
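To make the weighting concrete, here is a minimal numpy sketch (synthetic numbers, not the poster's data) comparing an ordinary fit with a 1/x^2-weighted fit on a wide-range calibration where the percentage error is roughly constant. Note that np.polyfit squares the weights you pass it, so w = 1/x produces 1/x^2 weighting:

```python
# Sketch: ordinary vs 1/x^2-weighted straight-line fits on a wide-range
# calibration with ~constant percentage error (all numbers invented).
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])   # nM
# true response factor of 250 area/nM with a fixed +/-3% "error" pattern
err = np.array([1.03, 0.97, 1.02, 0.98, 1.03, 0.97, 1.02])
area = 250.0 * conc * err

slope_ols, icept_ols = np.polyfit(conc, area, 1)
# np.polyfit *squares* the supplied weights, so w = 1/x gives 1/x^2 weighting
slope_w, icept_w = np.polyfit(conc, area, 1, w=1.0 / conc)

print(f"OLS:   slope={slope_ols:.1f}, intercept={icept_ols:.1f}")
print(f"1/x^2: slope={slope_w:.1f}, intercept={icept_w:.1f}")
```

With these numbers the ordinary fit picks up an intercept of several hundred area counts below zero (the high-end points dominate), while the weighted intercept stays near zero, so low-end back-calculations are far less biased.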

As r6banjo suggested, you may also simply have a non-linear response. There are "lack of fit" tests that you can use, but that is waaay outside the scope of a Forum post. Check Sections 22 and 23 of the Coleman & Vanatta series on Statistics in Analytical Chemistry (actually, the whole series is worth downloading and reading!):
http://www.americanlaboratory.com/1403- ... lpi_4346=4
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+ 1 (925) 297-5374
A practical approach:

LOQ (S/N = 10) is the lowest achievable quantification level.
But your method's LOQ is ultimately set by the lowest calibration level. Extrapolating to lower concentrations is risky.
Dr. Markus Laeubli
Manager Marketing Support IC
(retired)
Metrohm AG
9101 Herisau
Switzerland
Nice explanation. In our lab it's common practice to weight chromatographic calibrations 1/x. Is there any risk/pitfall associated with this?

It would be interesting in this discussion to know the range and the points of the original poster's calibration curve. How exactly did you determine/validate the LOQ? There are different methods, and signal-to-noise is, in my experience (correct me if I'm wrong), not the best one for LC-MS/MS data.
Thank you everyone for your input. I'm working through all these ideas and I appreciate the contributions!
From the American Lab series, it appears that the only way to combat the nonzero intercept is to run many standards at the low-concentration end of the required range. Do your best to force a zero intercept.
I'm with r6banjo and Markus Laeubli on this one.

There are two possible reasons why your straight-line fit hasn't gone through the origin: (1) you've fitted a straight line through a set of points that actually follow a curve; (2) the points do, sort of, follow a straight line, but the errors on the high-concentration points have pulled the line a long way from where it should be for the low concentrations. Weighting mitigates the second problem (though it cannot solve it): it will tend to improve the percentage accuracy of the low-concentration points at the expense of the high ones (which start off over-accurate). Weighting does nothing to address the first problem: curvy calibrations.

There are lots of reasons why a calibration curve might not be straight. As an easily understood example, imagine what happens if some component of the glassware you've used (your vial) or of the instrument has a capacity to bind the analyte. If binding sites are limited, it will simply titrate out a portion of the analyte. Say it can remove 4 pmol, and you inject 5, 10, 15, 20 pmol. Your actual calibration curve will be for the remaining 1, 6, 11, 16 pmol, and will hit the x-axis at 4 pmol, therefore giving a negative intercept with the y-axis, and potentially giving negative results.
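That titration example can be put into numbers with a quick sketch (a unit response factor is assumed purely for illustration):

```python
# Binding-site example in numbers: 4 pmol of each injection is lost to
# limited binding sites, so the detector sees (injected - 4) pmol.
# The unit response factor is a hypothetical choice, not real data.
import numpy as np

injected = np.array([5.0, 10.0, 15.0, 20.0])   # pmol put on column
response = 1.0 * (injected - 4.0)              # what actually reaches the detector

slope, intercept = np.polyfit(injected, response, 1)
print(slope, intercept)   # slope 1.0, intercept -4.0: the line crosses the
                          # x-axis at 4 pmol and the y-axis below zero
```

Any sample response near zero then back-calculates against a line that has no real calibration data below 4 pmol, which is how nonsensical low-end results creep in.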

Since fiddling with the calibration curve won't make the problem go away, I think Markus is right: it is not safe to extrapolate a limit of quantification below the lowest point for which real data exist. The functional LOQ is the higher of two values: the lowest injected standard, and the calculated LOQ derived from the existing experimental data.

Incidentally, the S/N approach is a surrogate in which we're really making the following argument:
(1) I would like to quantify to a precision of +/- X%
(2) I believe that precision is related to signal:noise ratio, because noise creates the variation that leads to loss of precision
(3) I believe that a S/N ratio of Y corresponds to a precision of X
(4) Therefore the point at which I can achieve S/N Y is my LOQ

This chain of argument falls apart if the relationship between S/N and concentration differs from the relationship between precision and concentration. Sadly, in some detectors the two are very different.

The most obvious example is a mass spec, where selectivity can be so high that there is no noise. In this case a very small peak will have an infinite S/N ratio, but clearly it hasn't got infinite precision. The S/N method, widely used as it is, was based on noise being present, genuinely measurable, and the major contributor to overall precision - assumptions that are not true in most mass specs. I'd argue that the S/N method should only be used on a signal that is above zero at all times.

To avoid emotive arguments about the relative merits of different approaches to finding a LOQ, I'm quite certain that the best thing to do is treat them iteratively: work out your expected LOQ from your current test-runs, then re-inject closer to the new expected LOQ and check it again, and continue until you arrive at a situation where you are making genuine injections that just exceed your expected precision. We can all disagree about details of how to calculate LOQs, but I think we all agree on the value of real injections to prove it works.

Incidentally, a LOQ depends totally on the matrix, including any coeluting stuff found in real samples.
Thank you, LMH, and everyone for your expertise. Looking at my data, it seems to be the case that I'm getting a nonlinear response, which as LMH suggests, could occur due to binding sites in the instrumental system. I am using LC with QqQ. I don't know if I can get rid of this effect, so I'm working on understanding how to work with nonlinear calibration curves. It seems that people use them, but there also seems to be a discomfort with them.
Check sections 9 & 10 of the Coleman & Vanetta series for a discussion of "lack of fit" tests which can be applied to proposed fitting functions (e.g., to tell whether WLS or a quadratic fit might be better than OLS). They also present more detailed explanations in sections 22 & 23.
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+ 1 (925) 297-5374
LC-QqQ data is almost always non-linear. It can be linear over a narrow calibration range, but in general it fits a polynomial curve much better than a straight line (remember that a linear fit is a special case of the more general polynomial fit).

We routinely work with polynomial fits. The desire to work in linear space comes from old, old data processing (strip-chart recorders), when it was truly difficult to work with non-linear calibration curves. Given the software that is available now, there is no reason at all not to work with polynomial curve fits. I prefer forced-zero fits if my blank is truly blank, since that "anchors" the curve to reality.
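As a sketch of what a polynomial calibration looks like in practice (the slightly saturating, concave-down response here is invented, not from any particular instrument), a quadratic fit plus root-solving for back-calculation:

```python
# Quadratic calibration fit with back-calculation via the quadratic's roots.
# The response function below is synthetic illustration data.
import numpy as np

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])   # nM
area = 250.0 * conc - 0.05 * conc**2                             # saturating response

coeffs = np.polyfit(conc, area, 2)          # a, b, c for a*x^2 + b*x + c

def back_calculate(y, coeffs):
    """Solve a*x^2 + b*x + (c - y) = 0 and keep the smallest non-negative
    real root (the branch below the curve's turning point)."""
    a, b, c = coeffs
    roots = np.roots([a, b, c - y])
    real = roots[np.isreal(roots)].real
    return real[real >= 0].min()

print(back_calculate(24500.0, coeffs))      # recovers ~100 nM
```

The root selection is the part that needs care: a quadratic inverted over the wrong branch will silently return a physically meaningless concentration.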
Mark Krause
Laboratory Director
Krause Analytical
Austin, TX USA
In the new EPA drinking water methods, they have gone to using a Minimum Reporting Level (MRL), and they dictate that the calibration curve must have its lowest concentration at or below that level. The MRL is similar to the LOQ in other work.

They also dictate that once you produce the calibration curve, you must then quantify each standard against it: any point at or below the MRL must be within +/-50% of the true value, and all points above the MRL must be within +/-30% of the true value. They also ignore R^2 values, since weighting is often employed to make the curve fit the data. Having used this approach, I like it much better than trying to use S/N or the intercept to determine the actual MRL (LOQ). Those methods are good for estimating what it could be, but then you actually have to demonstrate accuracy at that level before you can proceed. Also, most instruments now are very sensitive, but that increase in sensitivity has come with a loss of linearity and stability: good for finding that last lowest bit of an analyte that might be present, but terrible for accurate quantification of a large number of samples in a short period of time.
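That acceptance check is easy to automate. Here is a sketch with made-up numbers (the MRL, concentrations, and areas are all hypothetical, not from any EPA method):

```python
# EPA-style calibration acceptance check: quantify each standard against the
# fitted curve and flag points outside +/-50% at/below the MRL, +/-30% above.
# All concentrations and areas below are invented for illustration.
import numpy as np

MRL = 0.5                                                # ug/L, hypothetical
conc = np.array([0.25, 0.5, 1.0, 5.0, 10.0, 20.0])
area = np.array([55.0, 98.0, 210.0, 1010.0, 1985.0, 4050.0])

slope, intercept = np.polyfit(conc, area, 1, w=1.0 / conc)   # 1/x^2 weighting
back_calc = (area - intercept) / slope

for c, bc in zip(conc, back_calc):
    limit = 0.50 if c <= MRL else 0.30
    ok = abs(bc / c - 1.0) <= limit
    print(f"{c:6.2f} ug/L -> {bc:6.2f} ({bc / c:6.1%}) {'PASS' if ok else 'FAIL'}")
```

With these hypothetical points every level passes; in real use, a FAIL at the MRL means the MRL cannot simply be waived, since it is fixed by the regulation.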

For the negative intercept, think about a meter stick where you cut off the 0-1cm portion of it. If you measure from 100cm down you will always run out of measuring stick before you reach true zero. Same thing happens with instruments, once you reach zero signal, or zero signal above the background noise, your measuring stick runs out, even if you are not at zero concentration. And on most mass spec instruments that measuring stick is also curved which makes things worse.
The past is there to guide us into the future, not to dwell in.
I love the measuring-stick analogy!

It's important to understand why a calibration curve deviates from linearity. Yup, there's no logical reason to insist on a straight line if the detector's response is genuinely likely to be curvy (choose a curve of the right shape). But if the response is non-linear because someone's chopped the bottom off the measuring stick, or because the top points have maxed out the detector, then fiddling with different curves is at best concealing the problem rather than fixing it.

In MS, I'm always happier with calibration curves that are convex than concave; ionisation efficiency can drop with increasing concentration. It still has consequences: if the curve gets too flat, precision will be poor.

Yes, I like the idea of insisting on a certain precision above and below a minimum reporting level. I'm guessing that you're allowed to reject standards that are so dilute that you cannot achieve even 50% accuracy?

On the measuring-stick with the end chopped off, and forcing to zero: it is (I think) perfectly OK to have a method that doesn't go through the origin. If you only use the method between a lower limit and an upper limit, and you have evidence that the method is sufficiently precise and sufficiently accurate over the whole of that range, for the samples you're running, what it does outside the range doesn't matter (I don't care if a police speed-camera clocks a snail at 5mph; I care intensely whether it is accurate at the 30mph threshold where it's deciding whether I was speeding).

I am trying to remember: there was, I think, a John Dolan article some years ago about whether it is OK to force calibration curves through the origin. I think the conclusion was that it's OK provided the calibration curve in any case passed within a statistically insignificant distance of the origin (which is a bit pointless because it means you can force it through the origin only if this doesn't make any difference anyway...).
Personally I'll admit I do force through the origin on occasions, but only because I'm in an academic environment where it's hard to explain to senior scientists why they're getting negative numbers. This situation typically happens where someone is quantifying an analyte that is usually present, but there are negative controls to check that it is absent or very low in the absence of whatever treatment was supposed to make it happen. I worry a lot about what I'm doing to these data, which are inevitably lower than my lowest calibration point (because they're essentially zero). I can report them as "not detected", but you can't plot that on a bar graph! I can even estimate the detection limit, but that still isn't helpful on the bar-chart. If you're going to display the positive data on a bar graph, then the negative control has to be there too - but a truly negative number looks too weird for any journal to accept. So I worry a lot, but in the end, I'm worrying about 2 or 3 pixels in the bar-chart, something that won't even be visible in the final printed paper.
The whole issue about negative intercepts is a good example of why reporting numbers below the calibration range without error estimates is misleading. If, when you plotted that -0.4 ppb result, you included the +/- 0.8 ppb, it would be a lot more acceptable to the end user. They might even understand why the duplicate analysis was 30% off. OK, that might be asking too much.
On mass spec data there is also a cutoff for minimum peak area. If you don't account for that it can produce even more misleading results.

When I look at some very low level work I am thinking that a minimum reporting limit should be based on the instrument response (e.g. area) rather than a calculated concentration. My 2 cents worth
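For what it's worth, the error estimate on a back-calculated concentration from an unweighted straight-line fit can be had from the classical calibration formula s_x0 = (s_y/b) * sqrt(1/m + 1/n + (y0 - ybar)^2/(b^2 * Sxx)). A sketch with invented numbers:

```python
# Standard error of a back-calculated concentration from an unweighted
# straight-line calibration (classical formula; all data invented).
import numpy as np

conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])            # ppb
area = np.array([10.8, 19.5, 51.2, 99.0, 201.5])

n = conc.size
b, a = np.polyfit(conc, area, 1)                        # slope, intercept
resid = area - (a + b * conc)
s_y = np.sqrt(np.sum(resid**2) / (n - 2))               # residual std deviation
Sxx = np.sum((conc - conc.mean())**2)

def back_calc_with_se(y0, m_reps=1):
    """Back-calculated concentration and its standard error for a sample
    whose mean response over m_reps injections is y0."""
    x0 = (y0 - a) / b
    s_x0 = (s_y / b) * np.sqrt(1/m_reps + 1/n + (y0 - area.mean())**2 / (b**2 * Sxx))
    return x0, s_x0

x0, se = back_calc_with_se(15.0)
print(f"{x0:.2f} +/- {se:.2f} ppb")     # a low-level result with its uncertainty
```

Reporting the "+/-" alongside any result near (or below) the bottom of the curve makes a negative or near-zero number much less alarming to the end user.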
A lot of our data users would be totally confused by the +/- 0.8 ppb, but then we had one client leave us because we reported lead and copper in the same sample at the exact same concentration to two decimal places in ppm. He said it was impossible to have the same value and that we had to be faking the data :roll:

I normally use either a 2000 or 1000 area count threshold for my work with MS, or if doing SIM work it has to actually make a smooth peak, not just a spot of jagged noise where the peak should be. But that would be more of an art versus science thing probably.

With those new EPA drinking water methods, you would not drop a point because it didn't calculate within 50% at the MRL, since the MRL is set in the regulations and you must hit that accuracy at that point. Below that, you could remove points if they fail, but they still want you to report any blank with a value over 1/3 the MRL as failing, requiring a re-analysis even if the samples are non-detect, which is another crazy hoop to jump through. The good thing, though, is that those methods force you to prove you can see what you claim to see and that your calibration curve is accurate over the range you are calibrating. That would often stay hidden until someone dug deep in an audit situation. It levels the playing field for those who actually care about their results and not just their billing.
The past is there to guide us into the future, not to dwell in.

