My company has traditionally measured standard solution stability by storing a portion of standard and then analysing it the next day against a freshly prepared standard. I don't think this approach is correct, as the freshly prepared standard may itself be at 98-102% of its theoretical concentration. I would prefer to measure the response factor on day 1, then again on day 2, and not determine recovery against new standards. If this was done on the same system, do you think my alternative approach would be acceptable? A rough sketch of the calculation I have in mind is below.
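Something like this (the peak areas and concentration are made-up numbers, purely for illustration; since the same solution is re-injected, the concentration cancels and the RF ratio is just the day-2/day-1 response ratio on the same system):

    # Hypothetical illustration of the RF-ratio approach (invented numbers).
    # Response factor RF = peak area / standard concentration.

    area_day1 = 152340.0   # peak area of the stored standard, day 1
    area_day2 = 150890.0   # peak area of the same solution, day 2
    conc = 0.500           # nominal concentration (mg/mL), same solution both days

    rf_day1 = area_day1 / conc
    rf_day2 = area_day2 / conc

    rf_ratio_pct = 100.0 * rf_day2 / rf_day1
    print(f"RF ratio day2/day1: {rf_ratio_pct:.1f}%")   # ~99.0% in this example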
We also use comparison of stored vs fresh standards as our primary method for long-term storage testing (> 24 hours). For < 24 hours we use comparison against time-zero data. The reason we don't use the time-zero comparison for longer periods is the risk of incorporating day-to-day instrument variation into the calculations.
What level of deviation are you willing to accept in the STDs, and how is it determined?
On a personal note: we have acceptance criteria for reproducibility of STDs within a sequence (CV <= 2% for consecutive replicate injections) and across a sequence (CV <= 3%), and we usually set the stability limit as the sum of these two criteria, so if the day-2/day-1 RF ratio is 95-105% the STD is considered stable. A sketch of how we apply those numbers follows.
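For illustration only (all peak areas below are invented; the logic just shows the two CV checks and the additive 95-105% stability window):

    import statistics

    # Hypothetical peak areas for consecutive replicate injections of the STD.
    replicates = [152100.0, 151800.0, 152600.0, 152300.0, 151900.0]
    cv_replicates = 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
    print(f"Replicate CV: {cv_replicates:.2f}%")        # acceptance: <= 2%

    # Hypothetical bracketing STD injections spread across the sequence.
    sequence_stds = [152100.0, 151200.0, 153400.0, 150800.0]
    cv_sequence = 100.0 * statistics.stdev(sequence_stds) / statistics.mean(sequence_stds)
    print(f"Across-sequence CV: {cv_sequence:.2f}%")    # acceptance: <= 3%

    # Stability limit = sum of the two CV criteria (2% + 3% = 5%),
    # so a day-2/day-1 RF ratio within 95-105% passes.
    rf_ratio_pct = 98.7
    stable = 95.0 <= rf_ratio_pct <= 105.0
    print(f"RF ratio {rf_ratio_pct:.1f}% -> stable: {stable}")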