by lmh » Fri Aug 15, 2014 11:15 am
The two values interact. Logically, your product specification needs to take account of your measurement accuracy. The reasoning should go like this:
(1) For the intended application of this product, a variation of more than (for example) +/- 20% would be bad.
(2) Include a general safety factor if desired. If a 20% deviation has seriously bad consequences, we might prefer to reject any product that deviates by more than 10%.
(3) If we think the analytical precision and accuracy allow a 5% error, then with a specification of +/- 10%, a measured deviation of 10% could in truth be as bad as 15%. So instead we should insist on a tighter acceptance window, such that a passing analysis still guarantees the product is within the safety window we specified in (2). Therefore, if we want to guarantee +/- 10%, we can specify our product as +/- 10% if our analysis is perfect, or perhaps +/- 5% if it's not (see the sketch below)...
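To make the arithmetic concrete, here is a minimal sketch in Python of that "guard-banding" idea: the acceptance limit is the safety limit minus the worst-case analytical error. The function name and the numbers are just illustrations of the example above, not part of any guideline.

```python
def acceptance_limit(safety_limit_pct: float, analytical_error_pct: float) -> float:
    """Tighten the release specification so that a passing measurement
    still guarantees the true value lies within the safety limit.

    Worst case: true deviation = measured deviation + analytical error,
    so we subtract the analytical error from the safety limit.
    """
    if analytical_error_pct >= safety_limit_pct:
        raise ValueError("analytical error swallows the whole safety window; "
                         "the method needs improving first")
    return safety_limit_pct - analytical_error_pct

# The example from the post: we want to guarantee +/- 10% and the
# method is good to +/- 5%, so we must release at +/- 5%.
print(acceptance_limit(10.0, 5.0))  # -> 5.0

# A perfect method (0% error) lets us specify the full +/- 10%.
print(acceptance_limit(10.0, 0.0))  # -> 10.0
```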
What accuracy and precision you get will depend on the instrument, the analyte, the level at which you're analysing, and so on; it has to be good enough for the purpose, but for once I agree with ICH in not specifying a rigid number.