- Posts: 426
- Joined: Mon May 19, 2008 10:09 am
Our SOP states that we need to perform linearity three times in a validation.
Each time, a calibration line is made.
Requirements for each calibration line:
R^2 > 0.99
%residuals between -5% and +5%
and an intercept statistically equal to 0 by t-test (0 should be in the confidence interval)
But now we have a very precise method, with some %residuals as low as 0.005%.
This gives a very narrow confidence interval, which doesn't include 0.
If you calculate the actual deviation of the intercept from 0, we get 0.05%, which is much less than the precision of our method (e.g. if we perform an assay 6 times, the deviations between results are more than 0.05%).
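For anyone wanting to reproduce the intercept t-test, here is a minimal sketch of how the 95% confidence interval on the intercept falls out of ordinary least squares. The calibration data below is made up for illustration; the formulas (standard error of the intercept, df = n - 2) are the standard OLS ones, and the t critical value is hard-coded for this example's degrees of freedom.

```python
import math

# Hypothetical calibration data (concentration, response) -- made-up values,
# deliberately very precise, to mimic the situation in the post.
x = [10, 25, 50, 75, 100]
y = [10.06, 25.03, 50.02, 74.99, 100.05]

n = len(x)
mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx
intercept = my - slope * mx

# Residual standard error with n - 2 degrees of freedom
rss = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
se = math.sqrt(rss / (n - 2))

# Standard error of the intercept
se_int = se * math.sqrt(1 / n + mx ** 2 / sxx)

t_crit = 3.182  # two-sided 95% t critical value for df = 3 (n = 5 points)
ci = (intercept - t_crit * se_int, intercept + t_crit * se_int)
print(f"intercept = {intercept:.4f}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
# The SOP criterion is passed iff ci[0] <= 0 <= ci[1]
```

The point this makes numerically: the interval width scales with the residual standard error, so the better the fit, the tighter the interval, and a tiny but systematic offset (here a few hundredths of a unit) can fail the test even though it is practically negligible.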
How do you perform linearity, or which specifications are you using?
Should 0 have to be in the interval, even one that narrow?
Thanks
Ace