
Weighting in Quantitation in LCMS

Discussions about HPLC, CE, TLC, SFC, and other "liquid phase" separation techniques.

7 posts Page 1 of 1
Tom Jupille mentioned the following in another note:

It concerned a known weakness of non-weighted least squares for characterizing chromatographic data over a wide range: "The standard least-squares fit implicitly assumes that the data set is homoscedastic (the average errors have the same magnitude across the whole range). Chromatographic data tend to be heteroscedastic (the percentage errors have about the same magnitude across the whole range; the absolute errors therefore are larger near the upper end). In effect, errors at the high end tend to dominate the fit."

"For wide-range data (e.g., drug metabolites by LC-MS) it can be a very big problem, and a variety of weighted least squares fits are used to get around it."
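To make that concrete, here is a small sketch (not from the thread; the calibration levels, the 2.0 slope, and the 5% relative error are illustrative assumptions) comparing an unweighted fit with a 1/x^2-weighted fit on data whose error grows in proportion to concentration:

```python
# Sketch: why high-end points dominate an unweighted fit when errors
# are proportional to concentration (heteroscedastic data).
import random

random.seed(1)

# Illustrative calibration levels spanning three orders of magnitude.
x = [1, 2, 5, 10, 50, 100, 500, 1000]
# True response y = 2.0 * x, with ~5% relative (heteroscedastic) noise.
y = [2.0 * xi * (1 + random.gauss(0, 0.05)) for xi in x]

def wls(x, y, w):
    """Weighted least-squares slope and intercept (closed form)."""
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x)))
    return b, ym - b * xm   # slope, intercept

# Unweighted (all weights 1) vs 1/x^2 weighting.
b1, a1 = wls(x, y, [1.0] * len(x))
b2, a2 = wls(x, y, [1.0 / xi**2 for xi in x])

# Back-calculate the lowest standard with each fit; the unweighted
# intercept, dragged around by the high-end points, hits the low end hardest.
print(f"unweighted: slope={b1:.4f} intercept={a1:.3f} "
      f"back-calc of lowest std -> {(y[0] - a1) / b1:.3f}")
print(f"1/x^2:      slope={b2:.4f} intercept={a2:.3f} "
      f"back-calc of lowest std -> {(y[0] - a2) / b2:.3f}")
```

Running it a few times with different seeds shows the pattern Tom describes: the absolute residuals at 500 and 1000 dwarf those at 1 and 2, so the unweighted line is effectively fit to the top of the range.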


He mentioned weighting. Could someone explain to me the difference between 1/x^2 vs 1/x vs 1/y vs 1/y^2? I usually use 1/x^2, which gives me good fits over the range by weighting the lower end. Normally no weighting, or the other schemes, don't improve things.

In what instances would the others be valuable, or do you just try them all and see which works best? I've seen a few articles on quantitative LC-MS where they indicated quadratic fits with 1/y weighting.
Sailor

Usually you use whatever works best. 1/x or 1/x^2 are the most common.

I'm not a statistician (far from it, in fact), but I'd venture a guess that 1/x corrects for heteroscedasticity, while 1/x^2 provides some additional correction for non-linearity. Higher-order fits would simply start fitting to random errors. I've never used the 1/y-type weightings.

Sometimes graphing the results on a log-log plot helps clarify some of the issues. I've stuck an excerpt from our LC/MS/MS course that discusses this up on the LC Resources web site:
http://www.lcresources.com (click on "Resources", then on "Exchange").
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+ 1 (925) 297-5374

If you have data demonstrating that your true intercept is negligible (which you can obtain by running a linear regression with several solutions at all levels, by evaluating linearity at the low end separately, by running several regressions and averaging the intercepts, etc.), then the simplest approach - in my view - would be to run a 'normal' regression and force it through the origin.

This is a simple and painless way to address the problem that Tom mentioned - which is that a linear regression line is largely a function of the points on the higher end of the curve. But it is only valid if you have demonstrated that your true intercept is negligible.
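For reference, forcing the origin reduces the fit to a single-parameter model y = b*x with the closed-form slope b = sum(x*y) / sum(x^2). A minimal sketch with illustrative data (not from the thread):

```python
# Sketch: least-squares fit forced through the origin (y = b*x, no intercept).
def slope_through_origin(x, y):
    """Closed-form slope of the zero-intercept model: b = sum(x*y) / sum(x^2)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Illustrative calibration data whose true intercept is negligible.
conc = [1, 5, 10, 50, 100]
resp = [2.1, 9.8, 20.3, 99.0, 201.5]
b = slope_through_origin(conc, resp)
print(f"slope = {b:.4f}; back-calc of lowest standard = {resp[0] / b:.3f}")
```

As Adam stresses, this is only valid once you have shown the true intercept is negligible. Note also that this unweighted form is still dominated by the high-end points; the zero-intercept model can be combined with weighting (b = sum(w*x*y) / sum(w*x^2)) if both corrections are wanted.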

Adam

I had seen somewhere that 1/y weighted the top end of the curve.
Sailor

I haven't dealt with this in a while. I had hoped that an expert on this would jump in, but this has not happened yet.

Standard regression assumes that the absolute error is equal at the low end of the calibration curve and at the high end. Of course, this is not correct with anything that we are doing in reality.

Since we are injecting for the most part a fixed volume with variable concentrations, the RELATIVE error is (for the most part) the same at the high end and at the low end of the curve. (Just look at your own calibration data and you see this.)

Consequently, we should use a weighted least-squares fit. If my memory serves me correctly, one should use a 1/x^2 for this typical situation described above. Therefore it seems to me that what you are doing is correct.

Other situations may require a different approach. The correct fitting procedure depends on how the error is distributed over the data range.

I never use a weighting of 1/y or 1/y^2; if one calibration point happens to have a strongly deviating response, it would strongly influence your regression line, and that is not what I want. You can validate your method for both 1/x and 1/x^2 and use the weighting factor that gives you the best results in each individual run. In the equations used for linear regression, the square of the nominal concentration appears, so in my view 1/x^2 should compensate best for the effect Tom mentioned.
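Bert's validate-both-and-compare approach can be sketched as follows. The data and the selection criterion (mean absolute relative error of the back-calculated concentrations, a common acceptance metric) are illustrative assumptions, not from the thread:

```python
# Sketch: comparing 1/x and 1/x^2 weighting by back-calculated accuracy.
def wls(x, y, w):
    """Weighted least-squares slope and intercept (closed form)."""
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x)))
    return b, ym - b * xm   # slope, intercept

def mean_rel_error(x, y, b, a):
    """Mean absolute relative error of back-calculated concentrations."""
    return sum(abs((yi - a) / b - xi) / xi for xi, yi in zip(x, y)) / len(x)

# Illustrative wide-range calibration data (true slope ~2, small noise).
conc = [1, 2, 5, 10, 50, 100, 500, 1000]
resp = [2.05, 3.9, 10.3, 19.8, 101.0, 196.0, 1015.0, 1990.0]

for name, w in [("1/x", [1 / c for c in conc]),
                ("1/x^2", [1 / c**2 for c in conc])]:
    b, a = wls(conc, resp, w)
    print(f"{name:6s} slope={b:.4f} intercept={a:.3f} "
          f"mean |rel err| = {mean_rel_error(conc, resp, b, a):.4f}")
```

Whichever weighting gives the smaller mean relative back-calculation error (subject to your acceptance criteria at each level) would be the one to carry forward for that run.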

Regards Bert

I still say forcing the origin is a much easier way to deal with this problem (assuming your true intercept is zero - which is almost always the case in chromatography).

Forcing the origin is far better than any other approach, in terms of setting the low end of the curve where it should be.
