Test Method Validation

Posted: Mon Mar 13, 2006 1:12 am
by Consumer Products Guy
My pointy-haired boss wants me to do a test method validation for a non-pharmaceutical ingredient in a product line recently purchased from another company. Their people had "some" documentation on validation, but it was not complete. While gathering my data, I can't help feeling that some of the "robustness" parameters are quite archaic now that computerized HPLCs are everywhere. For example, when mixing aqueous A with organic B, the computer controls the mixing step, flow rate, column temperature, etc., and all of it can be tracked. Yet here I am running 3 degrees below and 3 degrees above my target temperature, 5% above and below on my organic, 12% low and 12% high on my flow rate, reduced acid and extra acid in my mobile phase, and so on. Does anyone have a published source/link stating that such robustness studies are a carryover from non-temperature-controlled systems, when one mixed the organic and aqueous together in a single reservoir? It's a bunch of extra work and documentation, and such variations could likely be rationalized as within "tweaking" parameters anyway. It's like asking why detection limits and quantitation limits are important if the use level is thousands of times greater.
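For concreteness, a one-factor-at-a-time plan like the one described above can be written out as a short run list. This is a minimal sketch: the nominal values (30 °C, 40% B, 1.0 mL/min, 0.1% acid) are hypothetical placeholders, and only the offsets come from the post.

```python
# Minimal sketch of the one-factor-at-a-time robustness plan described
# above. The nominal values are hypothetical placeholders; only the
# offsets (+/-3 deg C, +/-5% organic, +/-12% flow, low/high acid) come
# from the post.

nominal = {
    "column_temp_C": 30.0,  # hypothetical target temperature
    "organic_pct": 40.0,    # hypothetical % organic (B)
    "flow_mL_min": 1.0,     # hypothetical nominal flow rate
    "acid_pct": 0.1,        # hypothetical acid level in the mobile phase
}

offsets = {  # (low, high) absolute changes per parameter
    "column_temp_C": (-3.0, +3.0),
    "organic_pct": (-5.0, +5.0),
    "flow_mL_min": (-0.12, +0.12),  # 12% of the 1.0 mL/min nominal
    "acid_pct": (-0.05, +0.05),     # reduced / extra acid
}

runs = [("nominal", dict(nominal))]
for param, (low, high) in offsets.items():
    for label, delta in (("low", low), ("high", high)):
        cond = dict(nominal)
        cond[param] = round(cond[param] + delta, 3)
        runs.append((f"{param}_{label}", cond))

for name, cond in runs:
    print(name, cond)
```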

Posted: Mon Mar 13, 2006 2:10 am
by tom jupille
I understand the way you feel, but I really disagree with you. Murphy's Law says that pumps will malfunction and columns will crap out sooner or later.
Consumer Products Guy wrote:
such variations could likely be rationalized as within "tweaking" parameters anyway.
But your users need to be able to judge how much and in which direction to tweak when problems arise. If you provide the information as part of the robustness evaluation, they won't have to reinvent that part of the wheel.

The particular values to use for robustness testing aren't written in stone in any case. My usual recommendation is that they should be substantially larger than the random variation in conditions; that makes the effects easier to measure.
Consumer Products Guy wrote:
It's like asking why detection limits and quantitation limits are important if the use level is thousands of times greater.
Based on a quick read of ICH, I don't believe that LOD or LLOQ are required for "high level" assays like potency, content uniformity, etc.

Posted: Mon Mar 13, 2006 8:04 am
by Alex Buske
The point is that any piece of equipment has its own specification: a column thermostat may, for example, be specified to ±2°. Flow rate, mixing reproducibility, wavelength, etc. will likewise have specified ranges. If you look at these parameters, they often have an impact on the results, especially when several effects are overlaid.
I often use a matrix design to check this impact. That's usually done with five series (one at standard conditions and four modified), run in combination with overnight precision; a sketch of such a matrix follows below.
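A minimal sketch of that kind of matrix design, assuming three factors coded at -1/0/+1 levels; the factor names and the half-fraction layout are illustrative assumptions, not from the post.

```python
# Minimal sketch of the matrix design described above: one run at
# standard conditions plus four modified runs in which several factors
# are shifted at once. The three factor names and the 2^(3-1)
# half-fraction layout are illustrative assumptions.

import itertools

factors = ["temp", "organic", "flow"]

# 4-run orthogonal array for 3 factors: vary the first two columns
# fully, set the third to their product (defining relation C = AB).
design = [(a, b, a * b) for a, b in itertools.product((-1, +1), repeat=2)]

print("std :", dict.fromkeys(factors, 0))          # standard conditions
for i, levels in enumerate(design, start=1):
    print(f"mod{i}:", dict(zip(factors, levels)))  # -1 = low, +1 = high
```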

Alex

Posted: Mon Mar 13, 2006 4:54 pm
by Consumer Products Guy
Thank you, Tom, for your input. I will follow your recommendations, as you are considered a very credible and knowledgeable source. Besides, my supervisor and statistician will love all the data.

Posted: Tue Mar 14, 2006 2:19 pm
by DR
[image]

couldn't resist - hope you brought your sense of humor, cpg.

Posted: Wed Mar 15, 2006 3:04 am
by Consumer Products Guy
Good job, DR. I'm still unable to post any pictures; I've had tons of trouble uploading pictures to a web space to link to from here.

Posted: Thu Mar 16, 2006 3:56 pm
by Rob Burgess
I don't want to overstate the usefulness of Drylab, but I wonder if there will come a time when you could prove some of these instrument-setting robustness parameters (e.g., temperature, flow rate, composition, etc.) by modelling them rather than doing the actual "real-life" practical work!
Just a thought...

Posted: Fri Mar 17, 2006 8:13 am
by Alex Buske
Rob,

Even for Drylab you need chromatographic input. In order to simulate the influence of the different factors (temperature, %B, buffer concentration, etc.), you would need a full factorial experimental design (2^n runs). Some robustness parameters you cannot adequately handle with Drylab at all (detection wavelength, influence of different solvent-mixing methods, ...). And finally, I don't believe that any regulatory agency will be satisfied with simulated runs.
On the other hand, you could do robustness testing with a reduced factorial design and include the parameters that have an effect on the separation; the sketch below shows the difference in run count.
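A minimal sketch of the run-count difference, assuming five illustrative factors; the quarter-fraction aliasing (D = AB, E = AC) is one common choice, not something specified in the post.

```python
# Minimal sketch contrasting the full factorial (2^n runs) with a
# reduced (fractional) design. The five factor names are illustrative,
# and the quarter-fraction aliasing (D = AB, E = AC) is an assumption.

import itertools

factors = ["temp", "pct_B", "buffer_conc", "pH", "flow"]

full = list(itertools.product((-1, +1), repeat=len(factors)))
print(f"full factorial: 2^{len(factors)} = {len(full)} runs")

# 2^(5-2) quarter fraction: vary three columns fully, alias the rest.
reduced = [(a, b, c, a * b, a * c)
           for a, b, c in itertools.product((-1, +1), repeat=3)]
print(f"reduced design: {len(reduced)} runs")
for row in reduced:
    print(dict(zip(factors, row)))
```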

Alex

Robustness of computerized systems

Posted: Fri Mar 17, 2006 8:43 am
by Peter Apps
Hi Consumer Products Guy

You obviously have much more faith in automated instruments than I do!

Leaving that thought to one side, the ability of modern instruments to track and record their own operation is exactly what makes robustness studies so useful.

Suppose, for the sake of argument, you run an overnight batch and in the morning you find that the flow rate (or something else) shifted 10% in the middle of the night. If you have validated robustness to 12%, you can get on with your life; if you had not done robustness testing, you would be stressing out checking the performance of the check samples (if you had run any). If you had not run check samples, you would have to do the whole batch over again.

If you did not have a record of the flow rate, you might release a set of data that was rubbish.
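A minimal sketch of that check, assuming a hypothetical 1.0 mL/min nominal flow, a validated ±12% window, and made-up logged values:

```python
# Minimal sketch of the check described above: compare logged flow
# rates against the window covered by the robustness study. The nominal
# flow, the +/-12% window, and the logged values are all illustrative.

NOMINAL_FLOW = 1.0    # mL/min, hypothetical nominal flow rate
VALIDATED_TOL = 0.12  # fractional window shown robust (+/-12%)

overnight_log = [1.00, 1.01, 0.99, 0.90, 1.00]  # made-up logged values

lo = NOMINAL_FLOW * (1 - VALIDATED_TOL)
hi = NOMINAL_FLOW * (1 + VALIDATED_TOL)

out_of_window = [f for f in overnight_log if not lo <= f <= hi]
if out_of_window:
    print("Flow left the validated window; investigate:", out_of_window)
else:
    print(f"All logged flows within {lo:.2f}-{hi:.2f} mL/min; "
          "the overnight shift is covered by the robustness data.")
```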

All method development and validation is really about investing time and trouble now to save stress in the future.

Also, the computerisation and programmability make robustness studies much easier than in the old days of manual mixing: you just programme your mixer pump to give you the required variants on the method. Compare that to manually mixing, filtering and degassing a litre of mobile phase for each composition variant.

Regards Peter

Posted: Fri Mar 17, 2006 9:13 am
by bartjoosen
There is already a way to do robustness testing with Drylab:
http://www.rheodyne.com/downloads/excha ... stness.zip

Made by Tom