HPLC - Method development for impurities in drug product
Posted: Tue Apr 14, 2015 10:11 am
Hi everyone,
I'm facing a very big challenge in method re-development for impurity determination.
The original method conditions are the following:
- ACE 3 C18 3.0 µm 150 mm x 4.6 mm
- Detector Wavelength : 220 nm
- Column temperature : 20 °C
- Autosampler temperature : 10 °C
- Flow : 0.9 mL/min
- Mobile phase A : MeOH:ACN:20 mM K2HPO4 pH 7.0 (50:17:33, v/v)
- Mobile phase B : MeOH:ACN:20 mM K2HPO4 pH 7.0 (55:35:10, v/v)
- Dissolution mixture : 0.01% TFA in MeOH
- Injection volume : 15 µL
- Run time : 45 min
- Gradient Program :
  Time (min)   % Mobile phase A   % Mobile phase B
  0.0          100                  0
  2.0          100                  0
  20.0          60                 40
  30.0           0                100
  38.0           0                100
  38.1         100                  0
  45.0         100                  0
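In case it helps anyone reading along, here is a small illustrative sketch (not part of the method itself) that linearly interpolates the gradient table above, so you can read off the %B being delivered at the retention time of any problem peak. The function name and structure are my own, just for illustration:

```python
# Gradient program from the post, as (time in min, %B) breakpoints.
GRADIENT = [
    (0.0, 0), (2.0, 0), (20.0, 40), (30.0, 100),
    (38.0, 100), (38.1, 0), (45.0, 0),
]

def percent_b(t: float) -> float:
    """Return %B at time t (min) by linear interpolation of the program."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the 0-45 min run")

# Example: a peak eluting at 11 min sees 20 %B; at 25 min, 70 %B.
```

This can be handy when comparing relative retention times across injections, to see whether a drifting impurity peak sits on the steep part of the ramp.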
The original method was validated (though not fully) and is scientifically sound, but we are seeing increasing problems in stability samples (example attached):
- Variability in the individual content of impurity peaks
- Co-elution of impurity peaks / peak distortion
- Poor reproducibility in relative retention times of impurity peaks
The drug product is a worst case in terms of matrix complexity: it is manufactured by blending the API with additional formulation excipients, giving a more complex excipient matrix, which is then compressed into film-coated tablets.
A significant matrix effect from the excipient portion of the formulation is evident when chromatograms of drug substance and drug product (tablet) solutions are compared: a marked baseline rise is observed.
At other wavelengths the baseline rise is attenuated, but we lose sensitivity for the main peak and impurities.
I would like to ask for help or recommendations based on your experience, in order to optimize the chromatography and ensure sufficient resolution, also in anticipation of a future regulatory audit.
Is anyone interested in discussing this in detail? If so, could you indicate an email address so I can go through the steps and tests already performed (gradient changes, etc.)?
Thank you in advance,
Pedro Serôdio