by
lmh » Thu Sep 17, 2015 9:32 am
rb6banjo, sorry, I didn't explain clearly. I don't believe in trying to estimate the error from genuine blank runs. I meant that I like to use an LOD/LOQ based on the error at zero, as estimated from a series of calibration points approaching the lower limit of the method. The idea is that you run a calibration curve at very low concentrations, ideally with replicates of each concentration, so that in all cases there is a real peak to integrate. Obviously, as the peak becomes very small, there will be significant errors in the quantification. You then estimate the vertical error of the calibration curve. Different software packages refer to this quantity by different names, even though it is numerically identical (Excel: standard error of the y-estimate; Chemstation: standard deviation of the residuals).
The point about this vertical error is that it is a very real estimate of error, taken from real measurements, close to the point where you want to know the error. If you can tolerate a 5% error in quantification, and you find that the measured error is actually 0.5 pmoles, then your LOQ is 0.5/0.05 = 10 pmoles.
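To make the arithmetic concrete, here is a minimal sketch of that calculation in Python, using entirely hypothetical calibration data (concentrations in pmoles vs. peak areas): fit the low-concentration curve, compute the standard error of the y-estimate from the residuals, convert it to concentration units via the slope, and divide by the tolerated relative error (5% here).

```python
import numpy as np

# Hypothetical low-concentration calibration points (pmol vs. peak area)
conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([105.0, 198.0, 510.0, 1000.0, 2010.0])

# Ordinary least-squares fit of the calibration line
slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)

# Standard error of the y-estimate (Excel) = standard deviation of the
# residuals (Chemstation): sqrt(sum of squared residuals / (n - 2))
n = len(conc)
s_yx = np.sqrt(np.sum(residuals**2) / (n - 2))

# Express the vertical error in concentration units, then find the
# concentration at which it equals the tolerated 5% relative error
error_in_conc = s_yx / slope
loq = error_in_conc / 0.05
print(f"slope = {slope:.2f}, s(y/x) = {s_yx:.2f}, LOQ = {loq:.2f} pmol")
```

With real data you would of course use your own calibration points and error tolerance; the point is only that the residual standard deviation drops straight out of the fit you already have.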
For references on this and other methods, including S/N, my favourites so far are still Épshtein, Pharmaceutical Chemistry Journal 38: 212-225, and also LC-GC Europe Feb 2009, 22:82-85. The ICH site is http://www.ich.org, and the most relevant section is Q2(R1).