
Method Validation Linearity

Posted: Sun Apr 21, 2013 10:37 pm
by ashfaq
What is the significance of calculating relative response factor in linearity?
Also Confidence limit.

Re: Method Validation Linearity

Posted: Mon Apr 22, 2013 12:07 am
by tom jupille
The *relative* response factor has nothing to do with linearity. If you mean the *response factor*, that is the slope of the calibration plot.

The subject of confidence limits is far outside the scope of a Forum post. A good place to start would be the first four installments of the excellent series on "Statistics in Analytical Chemistry" by Coleman and Vanatta:
http://www.americanlaboratory.com/913-T ... ry-column/
http://www.americanlaboratory.com/913-T ... mber-line/
http://www.americanlaboratory.com/913-T ... t-squares/
http://www.americanlaboratory.com/913-T ... t-squares/

Re: Method Validation Linearity

Posted: Mon Apr 22, 2013 9:03 am
by aceto_81
The relative response factor can be found by dividing the slope of the regression line of component A by the slope of the regression line of component B.
Other than that, I don't see any use for the relative response factor in linearity method validation.
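To make the slope-ratio definition concrete, here is a minimal sketch in Python (not from the thread; the concentrations and peak areas are invented for illustration):

```python
import numpy as np

# Hypothetical calibration data: same standard levels, two components
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])        # standard concentrations
area_a = np.array([10.2, 20.1, 50.3, 99.8, 201.0])  # peak areas, component A
area_b = np.array([5.0, 10.3, 24.9, 50.2, 99.7])    # peak areas, component B

# Ordinary least-squares slope and intercept for each component
slope_a, intercept_a = np.polyfit(conc, area_a, 1)
slope_b, intercept_b = np.polyfit(conc, area_b, 1)

# Relative response factor of A with respect to B = ratio of the slopes
rrf = slope_a / slope_b
print(f"slope A = {slope_a:.3f}, slope B = {slope_b:.3f}, RRF = {rrf:.3f}")
```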

Confidence intervals for the regression line are used to:
- check whether the line passes through 0 (i.e., the intercept does not differ significantly from 0)
- check whether the slope differs significantly from 0
- check whether 2 regression lines are equal.

HTH

Ace

Re: Method Validation Linearity

Posted: Mon Apr 22, 2013 9:06 am
by lmh
... just a minor addition: if you are using an internal standard method, you are measuring the ratio of the intensities of two signals (one for the analyte, the other for the internal standard), so you are measuring a relative response factor rather than an absolute one. Your calibration curve is the response of the analyte relative to that of the internal standard.

This means that if your internal standard should, in theory, have a response identical to the analyte's, then the relative response factor should be 1. This is the case, for example, if you are using mass spec detection and the internal standard is an isotopically-labelled version of the analyte, or if you are using absorbance detection and the chromophore is completely unchanged (the molecule merely has an extra non-absorbing group somewhere that does not influence the chromophore).

In this case it's a sensible thing to check. If it deviates wildly from 1, maybe a standard has been prepared wrongly, or one of the signals is being measured incorrectly (wrong wavelength, etc.). Just as a bad deviation from linearity can indicate a problem with the assay, a bad deviation from the expected slope can too.
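The check described above can be sketched as follows (a hypothetical example, assuming a labelled internal standard spiked at a fixed level; all numbers are invented):

```python
import numpy as np

analyte_conc = np.array([1.0, 2.0, 5.0, 10.0])
is_conc = 5.0                                    # IS spiked at a fixed level
analyte_area = np.array([102.0, 201.0, 498.0, 1005.0])
is_area = np.array([500.0, 505.0, 495.0, 502.0])

# Calibrate on ratios: response ratio versus concentration ratio
x = analyte_conc / is_conc
y = analyte_area / is_area
slope, intercept = np.polyfit(x, y, 1)

# For an isotopically-labelled IS with identical response, the slope
# (the relative response factor) should be close to 1
print(f"relative response factor ~ {slope:.2f}")
```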