Hello,

I am validating an LC-MS/MS method for several analytes. Because of somewhat poor analyte recovery and significant matrix effects, I will be quantifying by the method of standard additions (i.e., spiking each sample at multiple levels and extrapolating the unspiked concentration from the x-intercept of the regression). Isotopically labeled internal standards are not economical for these analytes.
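
For concreteness: if the fitted line for a given sample is y = m·x + b (y = instrument response, x = added concentration), I take the unspiked concentration as the magnitude of the x-intercept, c0 = b/m.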

When I have validated methods in the past that used a simple external calibration, I relied on standard statistical metrics: RSD_r and RSD_R, the R^2 of the calibration curve, etc. I'm less clear on how to evaluate a method that uses standard additions. One thought I had was to compute the standard error of the slope and/or of the x-intercept (i.e., the unspiked concentration) for each sample's regression. Does it make sense to treat that standard error as a one-standard-deviation confidence interval and normalize it by the parameter value (slope or x-intercept) to get a sort of "RSD" of the regression parameter?
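
In case it helps to make that concrete, here is a rough sketch of the calculation I have in mind (Python/NumPy; the function and variable names are mine, and the standard error of the x-intercept uses the usual textbook propagation formula):

    import numpy as np

    def standard_additions_stats(x_added, response):
        # Fit response = m * x_added + b by ordinary least squares.
        x = np.asarray(x_added, dtype=float)
        y = np.asarray(response, dtype=float)
        n = x.size
        m, b = np.polyfit(x, y, 1)

        # Residual standard deviation of the fit (n - 2 degrees of freedom).
        resid = y - (m * x + b)
        s_yx = np.sqrt(np.sum(resid**2) / (n - 2))

        # Standard error of the slope.
        sxx = np.sum((x - x.mean())**2)
        se_m = s_yx / np.sqrt(sxx)

        # Extrapolated unspiked concentration (magnitude of the x-intercept)
        # and its standard error via the usual propagation formula.
        c0 = b / m
        se_c0 = (s_yx / abs(m)) * np.sqrt(1.0 / n + y.mean()**2 / (m**2 * sxx))

        # "RSD"-style figures: standard error normalized to the parameter value.
        return {"c0": c0, "se_c0": se_c0, "rsd_c0": se_c0 / c0,
                "slope": m, "se_slope": se_m, "rsd_slope": se_m / abs(m)}

    # Example: five spike levels (in concentration units) and their responses.
    x = [0.0, 1.0, 2.0, 3.0, 4.0]
    y = [2.1, 4.0, 5.9, 8.1, 9.8]
    print(standard_additions_stats(x, y))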

What is typical for assessing the accuracy and precision of a method that uses standard additions for quantitation? This would most likely be in a US food regulatory context. Any insights would be appreciated.

Thank you!