by krickos » Wed Mar 16, 2011 7:15 am
Hi
I would like to know about the determination of related impurities. Does it vary from system to system (HPLC)?
Yes, bluntly put, I would say any quantitative analysis has day-to-day variation. The question is whether that variation is significant/acceptable or not.
Is it required to perform stability analysis (impurities), from initial to end of shelf life, on the same HPLC system in order to follow a trend in the results?
No.
If this is true, how does the method conform to validation (intermediate precision)? A well-developed method should behave in any system with little or minimal change. What percentage of change is allowed from system to system?
I would not state a general limit, as it depends on the concentration level you compare at and on the individual method. Consider the following example: two different analysts each run 6 injections, on different days, on different instruments, with different columns. They evaluate the largest single unspecified impurity, which turns out to be present at the lower end of the reporting thresholds in, for example, ICH Q3A (0,03-0,05%).
Analyst 1 gets 6 values of 0,03 and analyst 2 gets 6 values of 0,04. That gives a total RSD (n=12) of roughly 15%, which is not really surprising. If the RSD were calculated on values reported to 3 decimal places instead, it might drop significantly.
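To make the arithmetic behind that ~15% concrete, here is a minimal Python sketch. The 2-decimal values are the ones from the example above; the 3-decimal values are purely invented for illustration, just to show how rounding at the reporting threshold can inflate the apparent intermediate precision RSD.

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (sample SD / mean) in percent."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Results as reported to 2 decimal places (the example above)
analyst_1 = [0.03] * 6
analyst_2 = [0.04] * 6
print(f"RSD, 2 decimals (n=12): {rsd_percent(analyst_1 + analyst_2):.1f}%")  # ~14.9%

# Hypothetical 'underlying' results to 3 decimal places (invented numbers,
# only to illustrate the effect of rounding on the calculated RSD)
analyst_1_3dp = [0.034, 0.035, 0.034, 0.035, 0.034, 0.035]
analyst_2_3dp = [0.036, 0.035, 0.036, 0.035, 0.036, 0.035]
print(f"RSD, 3 decimals (n=12): {rsd_percent(analyst_1_3dp + analyst_2_3dp):.1f}%")  # ~2%
```

In other words, the 15% figure is largely a rounding artefact of reporting near the threshold, not necessarily a sign of a poorly behaved method.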
So close to the reporting threshold/LOQ I would generally expect a higher RSD for intermediate precision, but it is crucial to consider the number of analyses per analyst, the number of decimal places reported, and the concentration level evaluated when setting an acceptance criterion.
For very pure substances/products it may sometimes be better to work with sample solutions spiked with impurities at the various specification limits to get a better picture of the intermediate precision.