
How to evaluate robustness in a gradient method


Hi everyone,

In method validation, robustness is assessed by deliberately changing selected parameters and checking whether the separation is affected; for an isocratic method these would typically be the mobile phase composition ratio, the flow rate, and so on. How should we evaluate this for a gradient separation? Should the composition, gradient time, and flow rate be varied? Any information will be appreciated greatly!

Best regards

Austin :lol:
God once had three apples. Adam was tricked into eating the first one in Eden, the second fell from a tree and hit Isaac Newton on the head, so what happened to the third one? Interestingly, it was bitten by Steve Jobs!
In robustness testing I think you should evaluate each parameter according to:

- How high is the risk that the parameter will change during normal operation (probability)
- What impact will a change have (criticality)

Typically, steps that involve manual labour have a high probability of change (e.g. adjustment of pH, premixing of mobile phases etc). Things that are controlled by the system have a very low probability of change (e.g. flow rate, wavelength, gradient time).
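
As a purely illustrative way of formalising that probability x criticality screening (the parameters and scores below are invented, not a recommendation), ranking candidate parameters could look something like this:

```python
# Rough risk ranking of robustness parameters: probability of change x criticality.
# Scores (1 = low, 3 = high) are placeholders for illustration only.
candidates = {
    "mobile phase pH":      {"probability": 3, "criticality": 3},  # manual adjustment
    "% organic (premixed)": {"probability": 3, "criticality": 2},  # manual premixing
    "column batch":         {"probability": 2, "criticality": 3},
    "column temperature":   {"probability": 2, "criticality": 2},
    "flow rate":            {"probability": 1, "criticality": 2},  # system-controlled
    "detection wavelength": {"probability": 1, "criticality": 1},  # system-controlled
}

ranked = sorted(candidates.items(),
                key=lambda kv: kv[1]["probability"] * kv[1]["criticality"],
                reverse=True)

for name, s in ranked:
    print(f"{name:22s} risk score = {s['probability'] * s['criticality']}")
```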

Thus I would not evaluate flow rate and gradient time, since they will simply never change if the system works as specified. Important things to evaluate may be (it all depends on the application, of course):

- different batches of columns
- different LC-systems (especially the column heater design can affect the separation)
- pH of mobile phase
- % organic in mobile phases (in case that you premix the mobile phases)
Mattias has stated the principle very clearly, but I would disagree on some of the details :wink:
flow rate and gradient time, since they will simply never change if the system works as specified.
Actually, that is a *very* dangerous assumption (and violates the point of robustness testing, which is that conditions might change). I'll agree about gradient time, but flow rate *can* change (e.g., due to a leak). In a gradient system -- unlike isocratic -- flow rate and gradient time can both affect peak spacing.

In our method development course, we suggest that gradient time (or flow rate; they are actually equivalent variables in chromatographic terms) be included as part of robustness evaluation. Another important one is dwell volume (aka "gradient delay volume") which can be simulated (and corrected for) as an isocratic hold at the start of the gradient. Temperature of the mobile phase (as implied by Mattias) is another wise selection.
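
A minimal sketch of the dwell-volume correction mentioned above, assuming you know the dwell volumes of the development and target systems (the numbers are invented): the difference is simply converted into an isocratic hold at the start of the gradient.

```python
# Compensate for a smaller dwell (gradient delay) volume on the target system by
# adding an isocratic hold at the start of the gradient. All values are examples.
V_dwell_method = 2.6   # mL, dwell volume of the system the method was developed on
V_dwell_target = 1.1   # mL, dwell volume of the system the method is transferred to
flow_rate      = 1.0   # mL/min

delta_V = V_dwell_method - V_dwell_target
if delta_V >= 0:
    hold_time = delta_V / flow_rate  # min of isocratic hold to add before the gradient
    print(f"Add an isocratic hold of {hold_time:.2f} min at the initial composition.")
else:
    # The target system has the larger dwell volume; a hold cannot remove delay,
    # so delayed injection (if the system supports it) is the usual workaround.
    print(f"Target dwell volume is {-delta_V:.2f} mL larger; consider delayed injection.")
```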

On the detector side, while the nominal wavelength probably will not change, the bandpass (or slit width) may well be different from system to system; depending on the absorbance spectrum, you can see differences in relative response as a consequence, so I would suggest those be looked at as well.
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+ 1 (925) 297-5374
There are so many things that can influence a separation, and in my opinion you must select the ones that are most likely and critical.

I do not agree with including the flow rate, since no analytical method can be considered to work if there is a leak in the system. At least in the pharma business, the flow rate of the pump is qualified on a regular basis - and should be considered as true as gravity :)

The dwell volume is more interesting, and something that should be included (or corrected for) for gradient methods.

Another question is what the output parameter should be. If the method determines several peaks, I usually prefer resolution as the studied parameter. It could be argued that one should look at actual results instead.
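
Since resolution is being suggested as the output parameter, for reference this is the simple calculation it comes down to (the retention times and baseline peak widths below are invented):

```python
# Resolution between two adjacent peaks from retention times and baseline peak widths.
# Rs = 2 * (tR2 - tR1) / (w1 + w2); example numbers only.
tR1, w1 = 6.8, 0.30   # min: retention time and baseline width of the first peak
tR2, w2 = 7.3, 0.32   # min: retention time and baseline width of the second peak

Rs = 2 * (tR2 - tR1) / (w1 + w2)
print(f"Rs = {Rs:.2f}")   # ~1.6; Rs >= 1.5 is usually taken as baseline resolution
```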
Hypothetical scenarios:

A method is not shown to be robust to flow rate. At a routine flow rate check the pump is found to be running 10% fast. Result: all the results since the last flow rate check have to be re-evaluated, and samples probably re-run. Much wailing and gnashing of teeth.

A method is validated to be robust to flow rate deviations of 15%. At a routine check the pump is found to be running 10% fast. Result: no worries; the method is robust and so the results are valid.

Peter
Peter Apps
I see your point Peter, but I must admit that I have never heard of a pump failing the flow rate qualification (and I have worked in a building with 30 LCs for 12 years).

I maintain that there are other parameters that are more important. Since most robustness tests are set up according to experimental designs, each extra parameter increases the number of experiments a lot.
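
To put a rough number on how the experiment count grows with each added parameter (assuming simple two-level designs; the Plackett-Burman sizes are just the usual multiples of four):

```python
# Number of runs needed as robustness factors are added:
# full two-level factorial = 2**k; a Plackett-Burman screening design needs the
# smallest multiple of 4 that is >= k + 1 runs (centre points not counted).
import math

for k in range(3, 9):
    full_factorial = 2 ** k
    plackett_burman = 4 * math.ceil((k + 1) / 4)
    print(f"{k} factors: full factorial {full_factorial:4d} runs, "
          f"Plackett-Burman {plackett_burman:2d} runs")
```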

Flow rate, wavelength and ionic strength are typical parameters that I personally never include in a robustness study (working with reversed-phase and peptides).
Hi Mattias

It is just another aspect of the optimised balancing act: whether it is worth establishing robustness for a parameter depends on how likely it is to deviate from its set point, and that depends on the design, construction and maintenance of the instrument, which depends on (or determines) the costs of purchase and maintenance. The impact of a failed batch depends on batch size, turnaround in relation to production throughput, whether you are making floor polish or drugs, etc.

We might be getting away from the original question, which I interpreted as a request for guidance on what to change to establish robustness for gradient methods.

Peter
Peter Apps
During my ~20 year tenure as a chromatographer I've validated just a few gradient methods, and as a matter of course I do evaluate small changes in flow rate as part of robustness testing. First, it's not that much work. Second, I would argue that one needs to know where the weak points for a method are. Knowing whether or not changes in flow rate, which can vary from system to system, affect resolution is important. On a typical modern LC (not UHPLC), passing specification for 1.00 ml/min could be anywhere between 0.988 and 1.012 ml/min. That seems tight, but when one considers the fact that pretty rapid gradients can occur on short columns packed with small particles, the room for error gets smaller, too.

On the other hand, you also state the following:

"...since no analytical method can be considered to work if there is a leak in the system"

While in gross terms you are correct, I can easily think of a scenario where your presumption falls flat and the problem would not necessarily be detected immediately:

Say your technician changes the filter between your pump and injector, the compression fitting on the outlet of the filter housing leaks just a little, and the technician fails to notice immediately because she has 40 samples to prepare. Typically, one would not need to re-evaluate the flow rate after such a simple operation, yet your flow rate could be slightly OOS, especially if it was on the low side of the spec at the last evaluation. Your flow would still be smooth, your mixing accurate, and your separation & quantification would likely be just fine. Perhaps your tech finds the leak 2 days after changing the filter. What do you do with those data? To go to Peter Apps's point, if you measure the pump flow rate at that point and it's OOS for what it's set at, but within the range that you checked during robustness testing, you're fine. Write a deviation noting the facts of the case, fix the leak, and move on. If you don't have robustness data for that parameter, you're 2 days behind, with nothing but a burning feeling that your data are just dandy. Where do you want to be?
http://the-ghetto-chromatographer.blogspot.com/
Does anyone combine changes in parameters while doing robustness testing? For example, what if the temperature of the column was 5 degrees too low while the flow rate was 10% too high?

I agree that with modern systems, which will not inject if a parameter reads differently from the specified value, much of the robustness testing is of low value. But that's my opinion only, and I'm not QA or the FDA.
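
On the question of combining changes: one common way to do it is to run the corner points of a small factorial design around the nominal set points. A sketch, with levels that are just examples (column temperature +/-5 degrees C and flow +/-10%):

```python
# Corner points of a two-level factorial design for two combined factors.
# Levels are arbitrary examples around nominal set points.
from itertools import product

temperature_levels = [30 - 5, 30 + 5]        # degrees C, nominal 30
flow_levels        = [1.0 * 0.9, 1.0 * 1.1]  # mL/min, nominal 1.0

for temp, flow in product(temperature_levels, flow_levels):
    print(f"Run at {temp} C and {flow:.2f} mL/min")
# Four runs cover the worst-case combinations, e.g. low temperature with high flow.
```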
I agree with Mattias. If a leak occurs, the chromatography is malfunctioning, so the data generated are not reliable or accurate. In my former projects containing gradient methods, we evaluated the gradient time and the mobile phase composition ratio; however, flow rate changes were neglected.

I appreciate you giving such important information, but I would like some guidance on this issue. Is there any literature available on this website? Thanks in advance!

Best wishes

Austin :roll:
God once had three apples. Adam was tricked into eating the first one in Eden, the second fell from a tree and hit Isaac Newton on the head, so what happened to the third one? Interestingly, it was bitten by Steve Jobs!
I agree with Mattias. If a leak occurs, the chromatography is malfunctioning, so the data generated are not reliable or accurate. Austin :roll:
Sorry Austin, I have to disagree with you there. One of the many unfortunate consequences of the overwhelming emphasis on acronym-driven QC is the notion that instruments and methods have to be running perfectly in order to generate good data. Good data are data that are fit for purpose; usually you want them to be precise and accurate, of course, but it is perfectly possible to generate fit-for-purpose data using methods and instruments that by modern QC standards would never be accepted into service. Under these circumstances more people have to have their brains switched on, because there has to be an understanding of both the problem and the tools used to solve it.

Peter
Peter Apps
+1 for Peter.

A simple scenario & question: I have a (retired) 25 year old quaternary pump that likely has a small leak somewhere, but I've not been able to locate it. It consistently and very smoothly delivers 0.974 ml/min when set to 1.00 ml/min, which is just slightly but definitely OOS. The GPV on the pump passes specification perfectly. Do you think this instrument is capable of pushing MP through an HPLC and generating reliable data? I do.

Data need to be evaluated on their own merits. Is the RT within spec? Is resolution sufficient? Is analyte response consistent across the calibration range? Are analyte peaks in samples within the calibration range? Is the s/n (if relevant) sufficient? Did system suitability fall within specification? If the answer is yes to all of these, I don't think it matters much whether your pump is running a little slowly or if your detector's wavelength accuracy is off by 1.5 nm (as long as you're aware of those things) IF you've tested for variations in those parameters.
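
To make that concrete, the data-review questions above boil down to a checklist evaluation along these lines (the criteria and thresholds are placeholders for illustration, not from any pharmacopoeia):

```python
# Evaluate a run against fit-for-purpose criteria; thresholds are illustrative only.
criteria = {
    "RT within window":      lambda r: abs(r["rt"] - r["rt_expected"]) <= 0.2,  # min
    "resolution sufficient": lambda r: r["resolution"] >= 1.5,
    "response consistent":   lambda r: r["calibration_r2"] >= 0.999,
    "within calibration":    lambda r: r["low_cal"] <= r["response"] <= r["high_cal"],
    "signal-to-noise":       lambda r: r["s_n"] >= 10,
}

run = {"rt": 7.25, "rt_expected": 7.30, "resolution": 1.8, "calibration_r2": 0.9995,
       "response": 4.2e5, "low_cal": 1.0e4, "high_cal": 1.0e6, "s_n": 85}

results = {name: check(run) for name, check in criteria.items()}
print(results)
print("Data fit for purpose" if all(results.values()) else "Investigate before reporting")
```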
http://the-ghetto-chromatographer.blogspot.com/
I agree with Mattias. If a leak occurs, the chromatography is malfunctioning, so the data generated are not reliable or accurate. Austin :roll:
Sorry Austin, I have to disagree with you there. One of the many unfortunate consequences of the overwhelming emphasis on acronym-driven QC is the notion that instruments and methods have to be running perfectly in order to generate good data. Good data are data that are fit for purpose; usually you want them to be precise and accurate, of course, but it is perfectly possible to generate fit-for-purpose data using methods and instruments that by modern QC standards would never be accepted into service. Under these circumstances more people have to have their brains switched on, because there has to be an understanding of both the problem and the tools used to solve it.

Peter

This I have to agree with. The methods developed in my company nowadays contain more or less a full validation in the system suitability of each run, in order to be sure that the system is fit for its purpose. There is an S/N test, a resolution test, a repeatability test and sometimes even a linearity test. In these cases, the robustness validation is just a formalised procedure to have in the documentation. You would never trust that the method will work everywhere just because you have succeeded in showing that the method is robust (using maybe 3-4 parameters).
With regard to the specific issue of flow rate (which seems to be the focus of the debate):

A big part of the interest in robustness data is evaluating how much a parameter can be adjusted in order to meet system suitability (if you *do* meet system suitability, then there is nothing to worry about! :wink: )

In an isocratic method, most people would evaluate the effect of mobile phase strength (%B), because that can affect not only retention but selectivity (peak spacing).

In a gradient method, the equivalent to mobile phase strength is controlled by a combination of gradient steepness (time and range), flow rate, and column dimensions (yes, gradient separations *are* more complex than isocratic). Column dimensions are not a convenient adjustment tool ("Would you please pass me the hacksaw.") but flow rate and gradient time certainly *are*. Add that to the fact that flow *can* change accidentally (and just because it was correct last week when you did the PQ does not guarantee that you have not sprung a small leak in the meantime), and I think it is prudent to add flow to the list of robustness parameters. Is it absolutely required? No.
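
For anyone who wants to see why flow rate and gradient time trade off against each other, the usual linear-solvent-strength estimate of the average gradient retention factor makes it explicit. The column and gradient values below are only examples:

```python
# Average gradient retention factor in the linear-solvent-strength model:
#   k* = (tG * F) / (1.15 * Vm * dPhi * S)
# tG: gradient time (min), F: flow (mL/min), Vm: column dead volume (mL),
# dPhi: gradient range as a fraction (e.g. 0.60 for 5-65 %B), S: ~4-5 for small molecules.
def k_star(tG, F, Vm=1.5, dPhi=0.60, S=4.5):
    return (tG * F) / (1.15 * Vm * dPhi * S)

print(k_star(tG=20, F=1.0))   # baseline
print(k_star(tG=10, F=2.0))   # halve gradient time, double flow: same k*
print(k_star(tG=20, F=0.9))   # a 10 % flow error shifts k* (and can shift selectivity)
```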
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+ 1 (925) 297-5374
Tried to post something and lost it all, so sorry if this is a re-post. At the risk of asking another question on top of an existing one, doesn't robustness testing depend on the rest of the process (how the method will be used, and how the data will be analysed)? I don't work in an environment that does much QA, so I'm curious but ignorant.

For example, if a method states that standards will be run, and retention times corrected based on the standards, small variations in retention time caused by variations in flow won't matter much (provided separation is still good). On the other hand I've encountered environments where people simply apply a processing method without ever changing it, and here changes in retention time are disastrous.
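
A minimal sketch of what "correcting retention times based on the standards" can look like in a processing script (entirely schematic; the peak names, times and window are invented):

```python
# Rescale expected retention times by the drift observed for a reference standard,
# then match peaks within a window. All values are invented for illustration.
expected_rt = {"impurity A": 4.2, "main peak": 6.8, "impurity B": 9.1}  # min

std_expected, std_observed = 6.8, 6.5   # reference (main peak) RT, expected vs observed
scale = std_observed / std_expected     # simple proportional RT correction

corrected = {name: rt * scale for name, rt in expected_rt.items()}

observed_peaks = [4.05, 6.50, 8.72]     # min, from the current chromatogram
window = 0.15                           # min, matching tolerance

for name, rt in corrected.items():
    match = next((p for p in observed_peaks if abs(p - rt) <= window), None)
    status = f"matched at {match:.2f} min" if match is not None else "not found"
    print(f"{name}: expect ~{rt:.2f} min -> {status}")
```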

Many years ago I worked near a GC-FID lab, where they had a single method for handling FAMEs. One day (probably following a column change), one analyst started to get the normal pattern of peaks, but about 2 minutes earlier than we'd normally see. As a result, the peaks were misannotated. Several of us tried to explain that the annotations were based only on retention time, so if the retention time is wrong, the annotations are wrong.
"But you said the peaks are in the wrong places. That can't be true," she said. "Look, this peak at X minutes is labelled 18:2, and that's where 18:2 always comes off, so the retention times haven't changed much at all".