
Method Validation - Linearity

Posted: Tue Apr 05, 2005 12:47 am
by syx
Dear Members,
I have read an article about calibration curves. The article states that the coefficient r is a measure of correlation, not a measure of linearity.
If r isn't a measure of linearity, what parameter is more suitable, especially when used in a validation procedure?
Thank you for any explanation.

Best regards,
SYX

Posted: Tue Apr 05, 2005 6:22 am
by bert
Try back-calculating your calibration samples from the regression line. The difference between the nominal and back-calculated concentrations can be used to evaluate linearity.

hope this helps.
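Bert's back-calculation check can be sketched in a few lines of Python. This is only an illustration (made-up concentrations and responses, plain unweighted fit assumed):

```python
import numpy as np

# Nominal concentrations and measured responses (illustrative values)
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
resp = np.array([1.01, 1.98, 5.05, 10.1, 19.9, 50.2])

# Unweighted linear fit: response = slope * conc + intercept
slope, intercept = np.polyfit(conc, resp, 1)

# Back-calculate each calibration point from its own response
back_calc = (resp - intercept) / slope

# Percent deviation of back-calculated from nominal concentration
pct_dev = 100.0 * (back_calc - conc) / conc
for c, b, d in zip(conc, back_calc, pct_dev):
    print(f"nominal {c:6.2f}  back-calc {b:6.2f}  deviation {d:+5.1f}%")
```

Each point's percent deviation is then compared against whatever acceptance limit the applicable guideline sets.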

Posted: Thu Apr 14, 2005 5:14 pm
by Ricardinio
Bert,
What is the limit for the error of the estimates?

Posted: Fri Apr 15, 2005 1:43 am
by syx
Based on Funk et al. (1992), Qualitätssicherung in der Analytischen Chemie, pp. 5 - 39, the other parameters that should be determined are the residual standard deviation of the regression (Sy), the standard deviation of the procedure (Verfahrensstandardabweichung, Sxo) and the coefficient of variation of the procedure (Verfahrensvariationskoeffizient, Vxo) ... hope I translated them correctly ... :lol:

Sy = sqrt( sum(Yi - Yfit,i)^2 / (N - 2) )
where Yfit,i = a + b*Xi is the value predicted by the regression line

Sxo = Sy / b
Vxo = 100% * Sxo / Xmean

The homogeneity of variances over the linearity range should be proven by running 10 replicate analyses at the highest (N) and 10 at the lowest (I) concentration of the linearity range, and then calculating the test value PW:

PW = S_N^2 / S_I^2
where S^2 = sum(Yi - Ymean)^2 / (n - 1) for each set of replicates

The PW value should be lower than the tabulated F value (f1 = n - 1; f2 = n - 1).
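A minimal Python sketch of the quantities above, with made-up data (the comparison of PW against the tabulated F value is left as a table lookup):

```python
import numpy as np

# Calibration data (illustrative)
x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([4.1, 8.0, 12.2, 15.9, 20.1])
N = len(x)

b, a = np.polyfit(x, y, 1)          # slope b, intercept a
y_fit = a + b * x

# Residual standard deviation Sy (N - 2 degrees of freedom)
Sy = np.sqrt(np.sum((y - y_fit) ** 2) / (N - 2))

# Standard deviation and coefficient of variation of the procedure
Sxo = Sy / b
Vxo = 100.0 * Sxo / np.mean(x)

# Variance homogeneity: 10 replicates at the lowest and highest level
y_low  = np.array([4.0, 4.1, 3.9, 4.2, 4.0, 4.1, 3.9, 4.0, 4.2, 4.1])
y_high = np.array([20.0, 20.3, 19.8, 20.1, 20.2, 19.9, 20.0, 20.4, 19.7, 20.1])
S_low2  = np.var(y_low, ddof=1)
S_high2 = np.var(y_high, ddof=1)

# PW: larger variance over the smaller; compare with tabulated F(f1=9, f2=9)
PW = max(S_low2, S_high2) / min(S_low2, S_high2)
print(f"Sy={Sy:.3f}  Sxo={Sxo:.3f}  Vxo={Vxo:.2f}%  PW={PW:.2f}")
```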

A linearity/calibration curve can be used for quantitative concentration analysis only if it is statistically proven that the lowest concentration (xI) is significantly different from zero. For this reason, it is necessary to prove that the value Xp is less than xI. Xp can be calculated with a formula ... which I'd rather not write here, it is too complex :? .

Is this method used widely now?

Posted: Fri Apr 15, 2005 9:54 am
by bert
To Ricardinio

It depends on what samples you analyze and the guidelines you have to (or want to) follow. According to the FDA 'Guidance for Industry, Bioanalytical Method Validation' (for biological samples such as tissue and plasma), accuracy should be within ±15% of the nominal value, except at the LOQ or below, where it should not deviate by more than ±20%.

regards Bert

Posted: Fri Apr 15, 2005 5:29 pm
by adam
There has always been some rumbling that R or R-squared is not the best way to measure linearity. While it may not be perfect, it is really not a bad way of doing it.

Truthfully, no statistical test is as valuable as your eyes. If you generate a plot and look at it, that's probably the most reliable way to evaluate linearity. It is a little flimsy, though (in a GMP environment), to have a criterion based only on visual evaluation.

Bottom line: in my opinion a pain-free approach that makes a lot of sense is to use a combination of the two. Hence, the SOP that I've written says to evaluate visually that the plot is linear and to evaluate the R-squared value. I have never felt the need to bring residuals into the picture, or to back-calculate the standards against the curve.

Linearity

Posted: Sat Apr 16, 2005 4:52 pm
by Ben Buglio
Syx - r, defined as the coefficient of correlation, is not a very useful measure of linearity. The coefficient of determination, r squared, is a measure of the linear relationship between two variables, and its value should be 0.999x. As supporting data regarding linearity you can report the residual sum of squares.

Posted: Mon Apr 18, 2005 2:19 pm
by adam
It makes no sense to say R is not a measure of linearity while R-squared is. These two parameters are directly related to one another; whatever utility one has can be attributed to the other as well.

Posted: Mon Apr 18, 2005 4:03 pm
by MG
But neither R nor R^2 is a good measure of accuracy near the low end of the curve. You can have a great R^2 and have 50% error in your bottom point(s).
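MG's point is easy to demonstrate numerically: one badly biased bottom point barely moves r^2. A sketch with made-up data (a true slope of 2 and a 50% error deliberately injected at the lowest level):

```python
import numpy as np

x = np.array([1.0, 10.0, 25.0, 50.0, 75.0, 100.0])
y = 2.0 * x            # perfectly linear "true" responses
y[0] *= 1.5            # inject a 50% error at the lowest point

# Ordinary least-squares line and its r^2
b, a = np.polyfit(x, y, 1)
y_fit = a + b * x
ss_res = np.sum((y - y_fit) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Back-calculated error at the lowest point
back = (y - a) / b
low_err = 100.0 * (back[0] - x[0]) / x[0]
print(f"r^2 = {r2:.5f}, back-calculated error at lowest point = {low_err:.0f}%")
```

The r^2 comes out well above 0.999 even though the bottom point is off by tens of percent, which is exactly why back-calculation (or a residual plot) is needed alongside it.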

Posted: Tue Apr 19, 2005 12:57 am
by syx
Based on Barwick (2003) Preparation of Calibration Curves: A Guide to Best Practice:
The correlation coefficient, r (and the related parameters r^2 and adjusted r^2), is a measure of the strength of correlation between the y and x values. It is a measure of correlation, not a measure of linearity. :shock:

r^2 is often used to describe the fraction of the total variance in the data which is contributed by the line that has been fitted.

adjusted r^2 is useful for assessing the effect of adding extra terms to the equation of the fitted line (e.g., if a quadratic fit is used instead of a linear fit). It is the more useful parameter in such cases because it takes account of the reduction in degrees of freedom that occurs each time a term is added, and therefore does not automatically increase when extra terms are added.
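As a rough illustration of r^2 versus adjusted r^2 (made-up, nearly linear data; plain least-squares fits):

```python
import numpy as np

def r2_and_adjusted(x, y, deg):
    """Fit a degree-`deg` polynomial and return (r2, adjusted r2)."""
    coeffs = np.polyfit(x, y, deg)
    y_fit = np.polyval(coeffs, x)
    n, p = len(y), deg + 1              # observations, fitted parameters
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)
    return r2, adj

x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = 3.0 * x + np.array([0.1, -0.2, 0.15, -0.1, 0.2, -0.05])  # linear + noise

for deg in (1, 2):
    r2, adj = r2_and_adjusted(x, y, deg)
    print(f"degree {deg}: r^2 = {r2:.5f}, adjusted r^2 = {adj:.5f}")
```

Plain r^2 can never decrease when the quadratic term is added; the adjusted value penalizes the lost degree of freedom, so it only improves if the extra term genuinely helps.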

Posted: Tue Apr 19, 2005 11:29 am
by putnam
I have a general sense that the r/r^2 determination is not enough to show linearity, especially with a method that covers a range of one or more orders of magnitude. However, I am unable to explain this in statistical terms.

I used to require both r^2 and some maximum deviation of the y-intercept from the origin, expressed relative to the 100% target response. However, recently I have seen some methods which appear to be linear (visually, and with a great r^2 value), yet the y-intercept was more than, say, 5% of the 100% target response.

I then began to calculate response factors relative to the average response factor at 100% and construct response factor plots. This provided a clearly visible picture of the nonlinearity with regard to the lower concentrations. I have been told that this method is more dependent on the slope of the line than just using r^2, though I am not quite sure what this means. I just know that it seems to provide more meaningful information and, I believe, adds weight to the linearity section of my validation package.
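The response-factor check described above can be sketched as follows (hypothetical levels and responses, with some curvature built in at the low end):

```python
import numpy as np

# Concentrations as % of target, and responses (illustrative, slightly
# nonlinear at the low end)
level = np.array([25.0, 50.0, 75.0, 100.0, 125.0])
resp  = np.array([27.5, 51.0, 75.5, 100.0, 124.0])

rf = resp / level                      # response factor at each level
rf_100 = rf[level == 100.0].mean()     # reference RF at the 100% level
rel_rf = 100.0 * rf / rf_100           # relative response factors, %

for lv, r in zip(level, rel_rf):
    print(f"{lv:5.0f}%  relative RF = {r:6.1f}%")
```

Plotting rel_rf against level makes low-end bias obvious: on a truly linear, zero-intercept curve all points sit near 100%, while here the 25% level stands clearly above it.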

I have read that determining the residuals and constructing a residual plot can give me further information about the types of errors in the method which may be contributing to nonlinearity, but I am unsure about this.

I am left to believe that a simple plot and calculation of r^2 is insufficient. Anyone care to comment, more elegantly than I, why this would be so?

One other thing, while I'm at it - I always prepare three stock solutions and, from each, a series of dilutions to generate three sets of linearity solutions (i.e., 3 at 100%, 3 at 75%, etc.). After analysis, I plot all values in one linearity plot and one response factor plot. I have been asked why I do this rather than generating three independent plots of each type. I really don't know, except that this was the way I was trained a while back. Any ideas?

Ciao,

P

Posted: Thu Apr 21, 2005 11:26 pm
by tom jupille
Putnam, my (layman's) understanding of the issue with r^2 is that it is only meaningful if the absolute magnitude of the errors is essentially constant over the entire range (I think the term is "homoscedastic"). In chromatographic calibrations, the percentage error is more likely to be constant, which means that the absolute errors are larger at the upper end of the range. That means the upper-end errors dominate the r^2 value (the lower end is "swamped out").

Posted: Fri Apr 22, 2005 6:55 am
by HW Mueller
Having only skimmed through this thread, I hope that nothing is duplicated here. The reason for my low interest was just mentioned by Tom. An example: we had a low carryover (cortisol in the Rheodyne) which was on the order of the lowest concentration of a dilution series. When I treated the values with Excel's ANOVA I got a super correlation coefficient, etc., even though the lowest value was 100% too high and the next-to-lowest was also far off. Furthermore, the math is oblivious to the fact that this was not a random error, both values being on the high side.
(I have seen something on such trends being recognized by other methods, but it "hurts" to try to understand it all. I thus lean toward Ernest Rutherford's statement: "If your experiment needs statistics, then you ought to have done a better experiment.")

Posted: Fri Apr 22, 2005 7:09 am
by bert
"If your experiment needs statistics, then you ought to have done a better experiment".)
Or better statistics??? :roll:

A weighting factor is often necessary if the calibration curve covers a wide concentration range. This also prevents the upper-end errors from dominating the r^2 value, as Tom mentioned.
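A sketch of what weighting does, assuming the error variance grows with the square of concentration (a common 1/x^2 weighting). Note that np.polyfit's w multiplies the residuals before squaring, so w = 1/x corresponds to 1/x^2 weights on the squared residuals. Data are made up:

```python
import numpy as np

x = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
# Responses with roughly constant *percentage* error (heteroscedastic)
y = 2.0 * x * np.array([1.08, 0.95, 1.03, 0.98, 1.02, 0.99])

# Unweighted fit: dominated by the largest concentrations
b_u, a_u = np.polyfit(x, y, 1)

# 1/x^2 weighted fit (np.polyfit's w multiplies residuals, so pass 1/x)
b_w, a_w = np.polyfit(x, y, 1, w=1.0 / x)

for name, a, b in (("unweighted", a_u, b_u), ("1/x^2 weighted", a_w, b_w)):
    back = (y - a) / b
    err_low = 100.0 * (back[0] - x[0]) / x[0]
    print(f"{name:15s} slope={b:.3f} intercept={a:+.3f} "
          f"low-point error={err_low:+.1f}%")
```

With this kind of data the weighted fit typically back-calculates the lowest standard within a few percent, while the unweighted fit can be off by tens of percent there, which is the behaviour Tom described.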