by krickos » Wed Aug 26, 2009 11:23 am
Hi
I do not think there is an answer that fits all cases, but pages 34-35 of this PDF might be a start:
http://www.lcresources.com/resources/ex ... extras.pdf
Depending on the industry etc., different approaches may apply. For instance, in the pharma business we are more or less forced to use a couple of SSTs, typically those recommended by FDA and mandatory in the general chapters of USP/EP.
Then there is the purpose of the method (assay, impurities, traces etc.); somewhat different SSTs may be used. For instance, an LOD or S/N check is usually not relevant to an assay analysis.
As you touched on already, SSTs should be kept in mind during the robustness part of the validation, and indeed throughout the whole validation, so one can extract what is really critical.
Length of the analytical sequence (number of injections) is another thing: FDA likes it when you bracket standards in a longer assay run, while in a longer impurity run resolution/signal-to-noise/LOD may be more interesting/critical to recheck at the end.
Also, some technical aspects may be worthwhile to keep in mind. For instance, the general pharma requirement for LC with regard to RSD is not more than 2.0%, but most applications/instruments today can usually stay under 1.0% without problems.
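As a rough illustration (plain Python, with made-up peak areas), the %RSD check on replicate standard injections is just:

[code]
# %RSD of replicate standard injections (made-up example areas).
from statistics import mean, stdev

areas = [10234, 10198, 10251, 10222, 10240, 10210]  # six replicate injections

rsd = 100 * stdev(areas) / mean(areas)  # sample (n-1) standard deviation
print(f"%RSD = {rsd:.2f}%")  # e.g. pass if <= 2.0%, though <= 1.0% is often achievable
[/code]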
On the other hand, there is no point in setting too harsh an SST demand unless the validation points in that direction, because you want some flexibility (column choices, shifts of instrument brands, transfer to other laboratories, normal variation etc.).
Personally I believe efficiency (N) is a good column/system state indicator together with the tailing factor, but the N limit does not always have to be set as tight as you stated.
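For reference, a minimal sketch of the usual half-height plate count and USP tailing factor formulas (the retention times and widths below are hypothetical numbers):

[code]
# Sketch of plate count (half-height method) and USP tailing factor.
# All times/widths are hypothetical, in minutes.

def plate_count(t_r, w_half):
    # N = 5.54 * (t_r / w_half)**2, half-height method
    return 5.54 * (t_r / w_half) ** 2

def tailing_factor(w_5pct, f):
    # T = W0.05 / (2*f); W0.05 is the peak width at 5% height,
    # f the front half of that width (leading edge to apex)
    return w_5pct / (2 * f)

print(plate_count(t_r=6.2, w_half=0.12))    # ~14800 plates
print(tailing_factor(w_5pct=0.30, f=0.13))  # ~1.15
[/code]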
Generally I would start with the following:
Blank evaluation (not to be forgotten)
Precision
N and Tailing factor
Then add others as necessary depending on application/industry:
Resolution
LOD/signal-to-noise ratio (this and resolution are sketched in the code after this list)
k′ (retention factor) in some cases
Column dead time (gradients)
Correlation coefficient if a standard curve is used
There might be others as well.
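Since resolution and S/N came up in the list above, here is a small sketch of those two calculations too (EP-style definitions, hypothetical numbers):

[code]
# Sketch of resolution (half-height widths) and EP-style S/N.
# All numbers are hypothetical.

def resolution(t1, t2, wh1, wh2):
    # Rs = 1.18 * (t2 - t1) / (wh1 + wh2), widths at half height
    return 1.18 * (t2 - t1) / (wh1 + wh2)

def signal_to_noise(peak_height, noise_pp):
    # S/N = 2*H / h, with h the peak-to-peak baseline noise
    return 2 * peak_height / noise_pp

print(resolution(t1=5.8, t2=6.2, wh1=0.11, wh2=0.12))   # ~2.1
print(signal_to_noise(peak_height=1.5, noise_pp=0.06))  # 50.0
[/code]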
I hope it helped a bit