Relative Response Factor Determination
Posted: Tue Jun 07, 2005 5:58 pm
Does anyone have much experience with impurity determination using an external standard and a relative response factor (RRF)? I have used such a method without issue, but I now understand that in some cases the RRF may be dramatically affected by numerous factors (small changes in detection λ, buffer pH, temperature, and so on). What is the best way to approach this issue if time-of-use preparation of impurity standards is impractical or impossible (due to very limited supply)? Also, it is generally accepted that an RRF of 1 is used for all unknowns, although the actual RRF of an unknown may also be affected by these same external changes.
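For context, the calculation I'm referring to is along these lines (a minimal sketch with hypothetical numbers, not our actual method): the impurity peak area is divided by its RRF to convert it to a parent-equivalent response before quantitation against the external standard.

```python
# Minimal sketch (hypothetical values) of an external-standard impurity
# calculation using a relative response factor (RRF).

def impurity_percent(imp_area, std_area, std_conc, sample_conc, rrf):
    """Impurity level (% w/w) versus a parent-drug external standard.

    RRF = (impurity response / impurity conc.) / (parent response / parent conc.),
    so dividing the impurity area by the RRF converts it to a
    "parent-equivalent" area before quantitation.
    """
    imp_conc = (imp_area / rrf) * (std_conc / std_area)
    return 100.0 * imp_conc / sample_conc

# Hypothetical numbers: a 0.5 mg/mL sample quantified against a
# 0.005 mg/mL parent-drug standard.
print(impurity_percent(imp_area=2400, std_area=15000, std_conc=0.005,
                       sample_conc=0.5, rrf=0.8))   # ~0.20 % with the true RRF
print(impurity_percent(imp_area=2400, std_area=15000, std_conc=0.005,
                       sample_conc=0.5, rrf=1.0))   # ~0.16 % if RRF = 1 is assumed
```

As the two printouts show, any error in the RRF carries straight through to the reported impurity level, which is why drift in the RRF worries me.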
In cases where the RRF is sensitive to external factors, do we simply challenge the RRF during method robustness testing (and during development, of course!), or is there a way to incorporate RRF determination into the method validation itself? And what do we do if we find that a particular impurity has an RRF that is sensitive to these small changes?
FYI: an example of this problem came up during a method validation in which a particular impurity exhibited a change in RRF approaching 50% between two detectors with lamps of different ages. It is unlikely that lamp age caused this in and of itself; more likely, a small shift in the actual detection λ resulting from the lamp age was responsible. Regardless of the actual cause, we need to be confident that the RRF isn't going to change significantly from run to run.
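To illustrate why a few nanometres could matter (a toy model with made-up Gaussian UV bands, not real spectra): if the impurity's absorbance band is steep at the detection wavelength while the parent's is flat, a small λ shift changes the two responses by very different amounts, and the apparent RRF moves with it.

```python
import math

def absorbance(wl, lam_max, width):
    """Toy Gaussian absorbance band (arbitrary units) - purely illustrative."""
    return math.exp(-((wl - lam_max) / width) ** 2)

def relative_response(detect_wl):
    # Hypothetical spectra: impurity band narrow and offset (λmax 240 nm),
    # parent band broad and flat near the 254 nm detection wavelength.
    return absorbance(detect_wl, 240, 12) / absorbance(detect_wl, 254, 30)

nominal = relative_response(254)
for wl in (250, 252, 254, 256, 258):
    # Apparent RRF relative to its value at the nominal 254 nm setting.
    print(wl, round(relative_response(wl) / nominal, 2))
```

With these made-up bands, a ±4 nm shift roughly doubles or halves the apparent RRF, which is at least consistent in magnitude with the ~50% difference we saw between detectors.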
I have found some helpful information posted online from a method development and validation performed by BAS Northwest Laboratory (we know we can trust their work!), but it still doesn't directly answer this question. This seems to be an issue that doesn't get discussed much, yet it is quite important in analytical chemistry. Please share if you have any insights!