- Posts: 292
- Joined: Wed Jan 19, 2005 2:20 pm
Background:
I'm trying to come up with a CLND-compatible related-impurities method for a basic pharmaceutical blend drug product (2° amine functionality amongst other moieties). It also has a large aromatic acidic counter-ion.
I performed 4 modelling runs, as usual with DryLab (short gradient at high and low temperature, and conversely long gradient at high and low temperature). Gradients were 20 min and 60 min, 5-100% B (A = water + 0.1% formic acid; B = MeOH + 0.1% formic acid); temperatures were 30 and 70 °C. This seemed to be OK, although I had to switch the wavelength to 254 nm as the baseline slope at our usual wavelength of interest (220 nm) was just too severe to detect any small impurity peaks! The column I used was a standard Luna C18 (2) (150 x 4.6 mm, 3 µm), flow rate set to 1.0 mL/min. The diluent for the drug product blend was 20% aq. IPA (there was a danger that using MeOH would form methylated impurities, apparently!). Samples were filtered before injection.
I eventually got to the point where I was able to model 9-11 peaks in total (including the main basic peak and its acidic counter-ion). The DryLab modelling seemed to be OK, and the temperature/gradient-time mode appeared to predict a variety of conditions that looked promising.
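For context on what the software is fitting: DryLab's two-gradient-time calibration rests on the linear-solvent-strength (LSS) relationship log k = log kw − S·φ, and gradient tR is then predicted with the standard Snyder/Dolan closed form. A minimal sketch of that prediction (all parameter values are made up for illustration, not fitted to my runs) looks like:

```python
import math

def lss_gradient_tr(log_kw, S, t0, tG, tD, phi0, phi_final):
    """Predict gradient retention time (min) from the linear-solvent-strength
    (LSS) model log k = log_kw - S*phi (phi = volume fraction of organic).

    t0 = column dead time, tG = gradient time, tD = dwell time (all in min).
    Uses the Snyder/Dolan closed form and, importantly, ignores any analyte
    migration during the dwell period before the gradient arrives.
    """
    k0 = 10.0 ** (log_kw - S * phi0)          # retention factor at initial %B
    b = t0 * (phi_final - phi0) * S / tG      # gradient steepness parameter
    return (t0 / b) * math.log10(2.303 * k0 * b + 1.0) + t0 + tD

# Illustrative only: a weakly retained "early" peak vs a well-retained one,
# 5-100% B in 20 min, t0 = 1.5 min, dwell = 1.0 min (hypothetical values).
early = lss_gradient_tr(log_kw=1.0, S=3.0, t0=1.5, tG=20.0, tD=1.0,
                        phi0=0.05, phi_final=1.0)
late = lss_gradient_tr(log_kw=3.0, S=4.0, t0=1.5, tG=20.0, tD=1.0,
                       phi0=0.05, phi_final=1.0)
```

One thing this sketch makes visible: the "+ tD" term assumes the analyte sits still until the gradient reaches it, and that assumption fails worst for weakly retained, early-eluting peaks (small k0), which is one plausible place for an early-peak tR mismatch to creep in.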
The Problem:
I've used DryLab in the past and it has been very useful, but for some reason the accuracy of the tR prediction seems to be rather poor for the early-eluting peaks (the basic compound): more than ±20% off the actual run. However, for the later-eluting peaks (acidic counter-ion etc.) the comparative run was as predicted by DryLab (within ±1-2% on tR). Is there any possible reason for these observations? Have I modelled the runs properly, I wonder? Should I repeat the modelling runs? Could strong-solvent effects be causing a problem (the peak shapes are poor for several of the gradients DryLab predicted that I've tried, although the modelling runs themselves generally looked OK)?
Any ideas on this problem would be much appreciated... thanks in advance... Rob.

