
calibration curve forced through zero

Discussions about GC and other "gas phase" separation techniques.

Hello

I have a question about the conditions under which a linear regression calibration curve (y = ax + b) should be forced through zero (y = ax).

Generally, forcing through zero has little effect on the correlation coefficient (r), but it can have a big effect on sample values at the low end of the calibration range (a few times difference between forcing through zero or not). So I wonder whether there are criteria I can use to decide whether or not to force the curve through zero?


Thank you in advance

You should do a standard curve starting a bit above your anticipated concentration and then going toward zero. Once you have that, you can compare r² values w/ force through zero on/off.

If you want to really get into the statistics, there are threads on this board where Tom discusses doing a weighted regression.
Thanks,
DR
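A minimal sketch in Python of the comparison suggested above, using invented peak areas (none of these numbers come from a real method): fit the same standards with and without an intercept, compare r-squared, and back-calculate a low-level sample both ways.

Code:

import numpy as np

# Illustrative calibration standards (conc in ug/mL, peak area in arbitrary units)
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([75.0, 125.0, 235.0, 570.0, 1125.0, 2230.0])  # invented data with a small offset

# Ordinary least squares, y = a*x + b
a_ols, b_ols = np.polyfit(conc, area, 1)

# Least squares forced through zero, y = a*x  ->  a = sum(x*y) / sum(x^2)
a_zero = np.sum(conc * area) / np.sum(conc ** 2)

def r_squared(y, y_fit):
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

print("y = ax + b:", a_ols, b_ols, r_squared(area, a_ols * conc + b_ols))
print("y = ax    :", a_zero, r_squared(area, a_zero * conc))

# Back-calculate a low-level sample both ways to see the difference near the origin
sample_area = 30.0
print("conc (with intercept):", (sample_area - b_ols) / a_ols)
print("conc (forced zero)   :", sample_area / a_zero)

With data like these, both r-squared values come out well above 0.999, yet the back-calculated concentration of the low-level sample differs by roughly a factor of two between the two fits - exactly the effect described in the original question.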

Personally, I always force through zero when I'm doing a method where I'm expecting to analyze something at a very low concentration, e.g., a residue-specific cleaning verification method. Then I assess the LOQ and LOD of the method and if a quantified peak comes in with a value lower than the LOQ I just report as < LOQ and if no peak is detected I report as < LOD.

My rationale is that for the method to be suitable for this type of application, the response must be zero when the concentration is zero. I don't care if it gives a better or worse r-squared value, I just declare in my protocol that I will force through zero and have an acceptance criterion of r-squared NLT 0.997 or something and if the results comply (as I know they will because I've tested it already ;)) I go ahead with it.

For an assay like a w/w assay of an API (potency), on the other hand, I never force through zero because whether the method is linear through zero is immaterial to its suitability to assess the quality of the material, as long as the preparation is within the range of the method. The preparation should of course always be close to a target concentration. I would just define a range for the method (say 50-150% of the nominal concentration of the sample prep - ICH only recommends 80-120% for potency, of course, so that's overkill) and check linearity and range in my validation and that's it.

For a method in support of production where I might for example get column fractions of significantly varying concentration from zero to quite high I would personally take that on a case by case basis. If I expect to get some samples with low concentration and people are actually going to be curious about those numbers, I will try and force through zero and see if the r-squared is OK and if it is I'll go that route - simply because it is not enjoyable to explain to production why some sample apparently has a negative assay (because you're close to the intercept). If it doesn't take away substantially from the efficacy of the method, I'll choose to avoid that scenario.

Personally I never use any kind of weighting, not for any philosophical reason, I just haven't felt a need to and don't know a lot about it, so I think I'll hunt up Tom's article and see what he says!

Stephen

Hm, sdegrace,
if you assume that the response must be zero with a blank, then why do you not assume that an X concentration must give an X response?

Well, I'm not completely pulling it out of my ass, I think :), first of all I'm mandating no interfering peaks detected in the blank for my selectivity, second of all I'm defending the assumption to a reasonable approximation by requiring the r-squared to conform when forced through zero. It's an idealization to say that nothing present should give no response, but I think it is possible to construct a rationale based on the evidence as to whether it is analytically defensible to quantitate based on it in a given case.

That is, if I understand what you're asking correctly... I mean, the idea that response is directly proportionate to concentration is the whole premise behind quantitating based on a linear fit, so trivially it is an assumption that is made every time one goes to quantitate based on it, and certainly the assumption is attendant when you force through zero. The coefficient of determination is examined to ascertain how reliable these assumptions are likely to be.

Stephen

Tests are there to check whether one has this relationship between response and concentration (or whatever). It is exceedingly easy to include a zero standard, that way you don't have to use force.
What I tried to say above is that if you assume that your response is zero for a zero standard you don't have to test whether you have a relationship between response and concentration... you assumed that you have this relationship.
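For what it's worth, including a measured zero standard as an ordinary calibration point is a one-line change to the fit. A minimal Python sketch with invented numbers (the blank area of 3 is made up):

Code:

import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])           # zero standard measured like any other
area = np.array([3.0, 120.0, 236.0, 590.0, 1175.0])   # invented blank area of 3 (noise)

# Ordinary fit with the blank included: the intercept is estimated, not assumed
slope, intercept = np.polyfit(conc, area, 1)
print("fit with zero standard:", slope, intercept)

# Forcing through zero instead assumes the intercept is exactly 0
slope_forced = np.sum(conc * area) / np.sum(conc ** 2)
print("forced through zero   :", slope_forced)

The fitted intercept then tells you something about the blank response instead of assuming it away.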

I would always demonstrate the relationship, for my own sake even if no one was demanding it... even if I make the assumption implicit in forcing through zero, if I run and check a blank every time to look for interference, the assumption is not necessarily invalid. But I think I see your point. It does seem more fundamentally honest to run a zero standard and include it as a point in the fitting.

For me, the choice of whether to force through zero in mathematically quantifying an analyte depends on the need the method is meant to fulfill and on whether I can produce evidence to justify the approach. For low-level samples, weirdness as you approach the LOQ, like negative assays, or assays that approach a positive number as the response approaches zero, just isn't helpful - plus it generates far too many questions and excitement from non-analysts. For stuff like column fractions where the results will be used to predict yields and do mass balance, I'll always force through zero if I have evidence to justify it.

Honestly, even though I see what you're saying I have to confess that while I sometimes force through zero, I never include (0,0) as a point. Partly because if there are no peaks seen in a blank then by definition if a response is there it is below the LOD of the method and so I'm not sure if I feel right including it, and partly, in the spirit of confession, because if I'm forcing through zero I have a specific presentational reason to do it, and when I don't have such a reason, like with a potency assay, I don't care what it does close to the intercept.

Stephen

Well, you will get the fewest questions from superiors, etc., if you tell them what they want to hear (that's easy to figure out).

If you run a blank, i.e. a standard with no analyte, then you have a peak area (hopefully zero) for zero analyte that you can include on your calibration curve. While it is (or should be) true that you will get zero area for zero analyte, it is also true that you will get zero area for non-zero analyte (i.e. a quantity of analyte below the LOD, lost in the noise) - and that is why you should not force the calibration line through zero. Looked at another way, forcing through zero extrapolates beyond the range for which you have data.

Peter
Peter Apps

I would tend to view forcing through zero as just another kind of linear fitting - justified on the same basis as any other kind of linear fitting, e.g., some specification level based on the coefficient of determination. If the specification is met, I see it as a perfectly valid option. It is not so uncommon a choice, either - when you are quantifying against a single standard or a couple of close standards and using the "sensitivity" approach of just dividing the standard response by the concentration to get a sensitivity and then dividing the sample response by the sensitivity to get the sample concentration (or something mathematically equivalent), you are effectively doing a linear fit forced through zero. I see it as a perfectly legitimate tool, even if my reasons for selecting it in some circumstances may be admittedly somewhat cynical (to my shame no doubt).

Stephen
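To make the equivalence Stephen describes concrete: for a single standard, the least-squares slope through the origin, sum(x*y)/sum(x^2), reduces to area/concentration, i.e. exactly the single-point "sensitivity". A minimal Python sketch with invented numbers:

Code:

import numpy as np

# One standard: sensitivity = response / concentration
std_conc, std_area = 5.0, 560.0
sensitivity = std_area / std_conc

# Forced-through-zero least squares on that same single point
conc = np.array([std_conc])
area = np.array([std_area])
slope_forced = np.sum(conc * area) / np.sum(conc ** 2)

print(sensitivity, slope_forced)          # identical: 112.0, 112.0

# Quantify a sample either way - the two give the same answer
sample_area = 230.0
print(sample_area / sensitivity, sample_area / slope_forced)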

I have never seen a mathematical justification for forcing any line through any point in linear fitting. There are mathematical reasons for weighting, but weighting does not do away with the statistics the way forcing through zero does.

The ultimate justification for using any kind of fitting whatsoever is that the r2 or some statistical measure of fitting meets some predefined specification of acceptability, is it not?

"The ultimate justification for using any kind of fitting whatsoever is that the r2 or some statistical measure of fitting meets some predefined specification of acceptability, is it not?"


Sorry, I can't agree with that - this is equivalent to saying that the function of a laboratory is to meet its own QC specs (i.e. an r-squared), which is not necessarily the same as generating accurate, useful results. If you are prepared to add a point for which you have no data in order to improve an r-squared, why not delete a point (or two) that lies off the line in order to achieve the same goal? Given the increasing obsession with regulation I can understand why analysts lose sight of what it is that they are supposed to be doing.
Peter Apps

Furthermore, the weighting with which I am familiar is used to compensate for shortcomings of the normal linear regression math. It shows much more clearly when the low values are off a straight line.
Forcing through zero, on the contrary, may "beautify" bad data.

Personally, I see statistics, as commonly used in HPLC, only as a means to standardize, internationally, statements on accuracy and precision. If one forces through zero the whole world will never know how far off the data were in the low range.
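A minimal illustration of that point, in Python with invented data that have roughly constant relative error: a 1/x^2 weighted fit (one common choice) pays attention to the relative deviation of the low standards, and looking at relative residuals exposes the low-end misfit that a single r-squared from an unweighted, forced-through-zero fit can hide.

Code:

import numpy as np

# Invented data: roughly constant relative error, so the low points wander off the
# line more in relative terms than the high points do in absolute terms
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([68.0, 112.0, 225.0, 585.0, 1090.0, 2240.0])

# Unweighted fit forced through zero
slope_forced = np.sum(conc * area) / np.sum(conc ** 2)

# 1/x^2 weighted fit: numpy's polyfit applies w to the unsquared residual,
# so w = 1/x corresponds to the usual 1/x^2 weighting of the squared residuals
slope_w, intercept_w = np.polyfit(conc, area, 1, w=1.0 / conc)

# Relative residuals (%) make the low-end misfit visible
rel_forced = 100.0 * (area - slope_forced * conc) / area
rel_weight = 100.0 * (area - (slope_w * conc + intercept_w)) / area
print("forced zero, relative residuals %  :", np.round(rel_forced, 1))
print("1/x^2 weighted, relative residuals %:", np.round(rel_weight, 1))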

Another kind of "forcing" that is used to improve r-squared is to put a point high on the right side of the calibration graph, with all the rest of the points clustered low on the left. You see this commonly in application notes from instrument companies because it makes their hardware look better. Taken to extremes it gives what is in effect a two-point calibration, which by simple geometry gives a perfect straight line.

Now consider the opposite case; the points for which there is real empirical data lie high and right on the graph, and you force the line through zero, which lies as low and left as you can get. No surprise that the r-squared looks better.

In both cases a closer look at the real data clustered into the corner of the graph reveals substantial scatter around the line.

Peter
Peter Apps
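A quick numerical illustration of the first scenario Peter describes, with invented numbers: a cluster of scattered points at the low end gives a modest r-squared on its own, but adding a single point high on the right pushes r-squared close to 1 even though the scatter within the cluster is unchanged.

Code:

import numpy as np

def r_squared(x, y, slope, intercept=0.0):
    fit = slope * x + intercept
    ss_res = np.sum((y - fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Scattered points clustered at the low end (invented, with sizeable scatter)
low_conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
low_area = np.array([80.0, 230.0, 260.0, 450.0, 430.0])

# r^2 for the cluster on its own
a, b = np.polyfit(low_conc, low_area, 1)
print("cluster alone, r^2    :", round(r_squared(low_conc, low_area, a, b), 4))

# Add one point high on the right: r^2 jumps, although the cluster's scatter is unchanged
conc = np.append(low_conc, 100.0)
area = np.append(low_area, 10000.0)
a2, b2 = np.polyfit(conc, area, 1)
print("with one high point, r^2:", round(r_squared(conc, area, a2, b2), 4))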