
I am developing a method for unknown by-products of an active ingredient in a formulation; it should allow the by-products to be monitored over several years. As UV detection is not sensitive enough, I tried fluorescence detection (successfully). But I am an inexperienced user in this field, and I wonder how to make sure that a certain minimum concentration of by-products can always be detected - that is, with different laboratory assistants (also inexperienced and without time to read lots of manuals!!!) and even in different labs.
As far as I understand, there are some factors that influence sensitivity:
1) The parameter "gain": I must set it to a certain value to assure the LOD and LOQ, and with that setting I will validate the method. But not all detectors set this parameter in the same way (I have seen two different approaches so far, on an Agilent and on an old Shimadzu instrument); they are not comparable, so a mere value such as '17' won't be of much use.
2) The lamp ages, and so do the other functional parts, so I will certainly have to change the gain over time to make sure I can still see very low concentrations of the by-products.
As I see it, the parameter "gain" on its own doesn't help much. I therefore thought of using a control solution at the minimum concentration that must still be quantified, i.e. at the LOQ. That means the gain has to be adjusted to a value that gives an S/N of not less than 10, and this has to be verified each time a sequence is started / the method is run. But if I do so, how does this affect the validation of the method? Do I have to validate over the whole range of the detector? As I haven't found a monograph in the USP, how is this usually done? Please give me some input!
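To make the S/N check concrete, here is a minimal sketch (Python, with made-up numbers; the function name and the example values are only placeholders, not our actual data) of how I picture the verification, based on the peak-to-peak style definition S/N = 2*H/h, where H is the peak height of the control solution above the baseline and h is the peak-to-peak noise of a blank around the expected retention time:

import numpy as np

def signal_to_noise(peak_height, blank_trace):
    # S/N = 2*H / h: H = control-peak height above the baseline,
    # h = peak-to-peak noise of a blank injection around the retention time
    h = np.max(blank_trace) - np.min(blank_trace)
    return 2.0 * peak_height / h

# made-up example values
blank_trace = np.array([0.10, 0.14, 0.08, 0.12, 0.11, 0.09, 0.13])  # blank injection, detector counts
control_peak_height = 0.45                                           # LOQ control injection, counts above baseline

sn = signal_to_noise(control_peak_height, blank_trace)
print(f"S/N = {sn:.1f} -> suitability {'PASSED' if sn >= 10 else 'FAILED'}")

The idea would be that each lab, whatever gain value its particular detector needs, injects the control solution at the start of a sequence and only proceeds if this check passes.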
I would also welcome literature tips - anything about fluorescence.