by lmh » Fri Sep 27, 2013 9:49 am
OK, I'm going to take the rare step of half-disagreeing with Tom, and I'll try to answer all questions, but probably not very well.
(1) The argument of the LC-GC article is that if the distance between the y-axis intercept and zero is statistically insignificant, then it's (probably) OK to treat it as zero and force the line through the origin. But more importantly, the point of the article is that if the distance is greater than can be accounted for by random errors, then it is wrong to force the line through the origin.
Summary: you can force the line through the origin if doing so doesn't actually make any difference! In which case why bother....
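For anyone who wants to try this test on their own data, here's a minimal Python sketch; the standards and responses are invented, and it assumes SciPy 1.6 or later (for intercept_stderr):

[code]
import numpy as np
from scipy import stats

# Hypothetical calibration standards (concentration) and responses.
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
resp = np.array([5.9, 10.4, 20.3, 51.2, 100.8, 200.5])

fit = stats.linregress(conc, resp)

# t-test of H0: intercept = 0, with n - 2 degrees of freedom.
t = fit.intercept / fit.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=conc.size - 2)

print(f"intercept = {fit.intercept:.3f} +/- {fit.intercept_stderr:.3f}")
print(f"p-value for intercept = 0: {p:.3f}")
# Large p: the intercept is indistinguishable from zero, so forcing through
# the origin is (probably) harmless. Small p: don't force.
[/code]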
(2) Errors can be positive or negative, so although Tom's right that it's perfectly OK to report a negative intercept, I think he might be wrong to stop you from forcing through the origin in this case. If the absolute value (ignoring sign) of your intercept is less than one standard deviation, it may be accounted for by random errors, and it should make no significant difference whether you force or not; so if you insist on forcing, you're doing no harm.
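Continuing the sketch above: the forced fit is just least squares with no intercept term, and you can check directly how little the fitted values move when the intercept is within one standard deviation of zero:

[code]
# Zero-intercept least-squares slope: minimises sum of (y - b*x)^2.
slope_forced = np.sum(conc * resp) / np.sum(conc**2)

pred_free   = fit.intercept + fit.slope * conc   # ordinary fit
pred_forced = slope_forced * conc                # forced through origin

print("largest shift in fitted response:",
      np.max(np.abs(pred_free - pred_forced)))
[/code]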
(3) On your question about whether the results are being conservatively over-estimated: no, here I believe you're wrong. Whether the results are over-estimated depends on why the intercept is not zero. For example, if you had a calibration curve that is not straight, and this is the reason why your intercept is not zero, then you can get over-estimation or under-estimation depending on concentration: the true curve will deviate from a fitted straight line one way in the middle, and the other way at the ends. (Deviations in both directions will happen whether you force to the origin or not, whichever way the curve bends, concave or convex.)
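This is easy to see with a deliberately curved "true" response; the quadratic term below is an arbitrary choice for the demonstration, not anything from the article:

[code]
import numpy as np

x = np.linspace(1, 20, 8)
y = 10 * x - 0.15 * x**2      # slightly concave "true" curve, no noise

slope, intercept = np.polyfit(x, y, 1)   # best straight line through curved data
residuals = y - (slope * x + intercept)

for xi, r in zip(x, residuals):
    print(f"conc {xi:5.2f}: residual {r:+6.2f}")
# The residuals change sign along the range: the straight line sits below
# the curve in the middle and above it at the ends (for this concave case),
# so the direction of the bias depends on where a sample's concentration falls.
[/code]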
(4) Peter, if the original idea is based on "no statistically significant difference between the y-axis intercept and the origin", then the moment you can demonstrate a significant difference, you have to believe it. The situation is exactly the same as any other experimental measurement: one experiment may leave you with an insignificant result, but the pooled data from 3 experiments may give a result that's significant. What you do about it, though, I have no idea! If you're not forcing to the origin, then you're in the happy situation of not having to worry!
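As a toy illustration of how pooling changes things (simulated numbers, not real data): three small calibration runs may each give an inconclusive intercept test, while the pooled fit, with its smaller standard error on the intercept, picks up a real offset:

[code]
import numpy as np
from scipy import stats

def intercept_p(x, y):
    """Two-sided p-value for H0: intercept = 0."""
    fit = stats.linregress(x, y)
    t = fit.intercept / fit.intercept_stderr
    return 2 * stats.t.sf(abs(t), df=x.size - 2)

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 5.0, 10.0])
# True intercept 2.0, true slope 10.0, noisy responses.
runs = [2.0 + 10.0 * x + rng.normal(0, 2.0, x.size) for _ in range(3)]

for i, y in enumerate(runs, 1):
    print(f"run {i}: intercept p = {intercept_p(x, y):.3f}")

x_all = np.tile(x, 3)
y_all = np.concatenate(runs)
print(f"pooled: intercept p = {intercept_p(x_all, y_all):.3f}")
# Whether each single run crosses p < 0.05 depends on the noise draw, but
# the pooled fit's intercept standard error is smaller, so the real offset
# of 2.0 is more likely to show up as significant.
[/code]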
In the real (non-regulated) world, my impression is: the main motivation for forcing to the origin is that some samples contain vastly less analyte than the lowest point on the calibration curve, and bosses can't cope with being given impossible negative results. They're also sceptical about reported values of "not detectable" because they can't put them in a nice bar-chart for Figure 3 of their paper. And most of us don't want to go back and do a limit of detection study to demonstrate how little of something we could have seen (without doing this, "not detectable" is meaningless), when the result is very clear on Figure 3 anyway, because all the other treatments have hugely beautiful bars, and this is the negative control... therefore we force to the origin so we get a value that is meaningless but boss-friendly, and so close to zero that the bar-chart won't show it, and it doesn't actually affect the data the reader sees, or the conclusion.

Somebody once told me you can waste a lot of time trying to measure something that's not there, and it's true; unless you have a good reason to prove exactly how not-there it is (e.g. you're looking for something horribly toxic in food), sometimes it's not worth worrying too much.