
How to ensure LOD / LOQ in fluorescence HPLC?

Posted: Thu Apr 16, 2009 7:18 pm
by CurrerBell
Hello everyone, this is my first time on this forum and I think I've got a nice question for you :wink:.

I am developing a method for unknown by-products of an active ingredient in a formulation, which should allow the by-products to be monitored over several years. As UV detection is not sensitive enough, I tried fluorescence detection (successfully). But I am an inexperienced user of this technique, and I wonder how to make sure that a certain minimal concentration of the by-products can always be detected, even with different laboratory assistants (also inexperienced and without time to read lots of manuals!!!) and even in different labs.

As far as I understand, there are a few factors that influence sensitivity:

1) The parameter "gain": I must set it to a certain value to assure the LOD and LOQ, and with that setting I will validate the method. But not all detectors set this parameter the same way (I have seen two different approaches so far, on Agilent and on an old Shimadzu instrument); they are not comparable, so a bare value such as '17' won't be of much use.

2) The lamp ages, as do the other functional parts, so over time I will certainly have to change the "gain" to make sure I still see very low concentrations of the by-products.

As I see it, the parameter "gain" doesn't help much on its own. I therefore thought of using a control solution at the minimal concentration that must be detected (the LOQ). This means the gain must be set to a value that gives an S/N of not less than 10, and this has to be verified each time a sequence / the method is started. But if I do so, how does this affect the validation of the method? Do I have to validate over the whole range of the detector? As I haven't found a monograph in the USP, how is it usually done? Please give me some input!
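To make the idea concrete, this is roughly the check I have in mind, written as a small Python sketch. Everything here is illustrative: the peak height and the baseline readings would come from the data system, the numbers are made up, and I am using the common convention S/N = 2H/h (peak height H above the baseline, peak-to-peak noise h of a blank baseline segment around the expected retention time).

# Sketch of an S/N check on the LOQ control solution (illustrative only).
# Convention assumed: S/N = 2*H/h, with H the by-product peak height and
# h the peak-to-peak noise of a blank baseline segment.

def signal_to_noise(peak_height: float, baseline_segment: list[float]) -> float:
    """Return S/N = 2*H/h from the peak height and a blank baseline segment."""
    h = max(baseline_segment) - min(baseline_segment)  # peak-to-peak noise
    return 2.0 * peak_height / h

# Hypothetical readings from one injection of the control solution:
peak_height = 0.85                                               # detector units
baseline = [0.02, -0.01, 0.03, -0.02, 0.01, 0.00, -0.03, 0.02]  # blank segment

sn = signal_to_noise(peak_height, baseline)
print(f"S/N = {sn:.1f}")  # require >= 10 before the sequence is allowed to run

If that check fails, the gain would have to be raised (or the detector serviced) before any samples are run.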

I would also welcome tips on literature, anything about fluorescence detection.

Posted: Thu Apr 16, 2009 8:26 pm
by tom jupille
The "gain" control is more or less equivalent to the "range" on a UV detector, with the inconvenience that there are no standard units associated with it. As you increase the gain, everything is amplified. As such, it has no direct effect on signal/noise ratio. If it is set to high, then you will lose the high end (equivalent to going out of range on UV in the old days when you had only a strip chart recorder!).

As to making the method "idiot-proof", that's impossible (idiots can be *very* ingenious!). What you *can* do is to flag problems by running replicates as part of your calibration plot. Include replicates at your desired LOQ, and set as part of system suitability that the RSD at that level should be less than a target value. Ditto for the high end of your range. Then let your users set the gain as appropriate to meet those targets.
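To put some numbers on that, here is a rough Python sketch of the kind of check I mean. The peak areas, replicate counts, and RSD targets are all made up; take them from your own calibration design and acceptance criteria.

# Sketch of the system-suitability idea: the %RSD of replicate injections at
# the LOQ (and at the high end of the range) must stay below a target value
# before the run proceeds. All values here are illustrative.
from statistics import mean, stdev

def percent_rsd(areas: list[float]) -> float:
    """Relative standard deviation of replicate peak areas, in percent."""
    return 100.0 * stdev(areas) / mean(areas)

loq_areas  = [1020, 985, 1042, 998, 1011, 974]  # replicates at the LOQ level
high_areas = [98250, 98900, 97810, 98530]       # replicates at the top of the range

for label, areas, target in (("LOQ", loq_areas, 15.0), ("high end", high_areas, 2.0)):
    rsd = percent_rsd(areas)
    status = "pass" if rsd <= target else "FAIL - adjust gain / investigate"
    print(f"{label}: RSD = {rsd:.1f}% (target <= {target}%): {status}")

However the users set the gain, a check like this flags the settings that no longer deliver the required precision at the LOQ.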

Posted: Sat Apr 18, 2009 9:43 am
by CurrerBell
@tom: thanks very much - your suggestion is very helpful :D