Forcing Integration parameters

Hi everyone,

I just want to hear what people in the forum think about using a few of the following integration parameters that come with the data systems:

Force peak (you force your integration at the beginning and at the end of each peak)
Force drop line
Inhibit integration.

I ran into some old (not that old) data where I saw either Force peak or Inhibit integration used on every peak.

Would this be considered data manipulation, even if it is only used on impurities?

Which integration parameters would be considered acceptable (e.g. valley to valley, etc.)?

Thanks

I'm pretty new, so take my opinion with a grain of salt -

My understanding is that inhibit integration is very commonly used, especially before and after your "expected" peaks. I took an Empower Pro course and it was one of the first things they showed us how to do. I've heard of force drop line being used well, but I have never had the need. I don't have an opinion on Force peak.
Kind Regards,
Jade Barker

We have discussed this many times. The basic criterion is that you want to determine the true composition; as a scientist you cannot hide behind statistics, etc. You have to make sure your analysis is sufficient/correct. Very often, if there is overlap of peaks, one is doing guesswork, nothing else.

I'm not in a quality assured environment, so take my comments with a 2nd grain of salt.

My personal feeling is that provided you've validated your method properly, it doesn't really matter what integration parameters you use (but please use them consistently!). If your validation is carefully planned to test all eventualities, then if your integration parameters are bad, the validation will fail. Stopping and starting integration at fixed time-points will only work if retention times are reliable and the baseline between the peaks is genuinely baseline. If this is the case, it might be fine.

Having said the above, (1) I'd want to check that the integration looked sensible, even in the dirtiest and lowest-level samples; (2) I don't see any advantage to the approach of fixing peak starts and ends. A modern automatic integrator algorithm should be better, given sensible parameters.

All are acceptable; it depends on the situation which one to follow. Inhibit integration and valley-to-valley are commonly used, but force peak is used when it is impossible to integrate a peak with normal settings.

This has also been mentioned many times before: a way around the guesswork of these "methods" can often be found in using "peak height" instead of "peak area". One needs to calibrate in peak height when doing this, of course.
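
For illustration, here is a minimal Python/NumPy sketch of what "calibrating in peak height" means in practice. All concentrations and responses below are invented numbers, not from any real method:

Code:
import numpy as np

# Hypothetical standards: concentration vs. measured peak height,
# where "height" is the apex of the baseline-corrected peak.
conc = np.array([1.0, 2.0, 5.0, 10.0])              # e.g. ug/mL
height = np.array([410.0, 825.0, 2040.0, 4110.0])   # detector counts

# Linear calibration in height: height = slope * conc + intercept
slope, intercept = np.polyfit(conc, height, 1)

# Quantify an unknown sample from its measured peak height.
unknown = 1230.0
print((unknown - intercept) / slope)   # ~3 ug/mL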

I'll only comment on the "inhibit integration".
Learn Innovate and Share

Dancho Dikov

HWM is correct that peak heights are more accurate when the resolution is not good. Valley methods usually produce negative integration errors, and should be avoided in most cases, although there are a few exceptions. The "Drop" method is best in most cases, but when the resolution decreases or one peak becomes very small compared to the other, then sometimes the valley and skim methods are more accurate.
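
To put rough numbers on the drop-versus-valley point, here is a quick synthetic sketch in Python/NumPy. The peak parameters are invented, and this is only the geometry of the two baseline constructions, not any vendor's algorithm:

Code:
import numpy as np

def gauss(t, amp, tr, sigma):
    # Gaussian peak with apex at retention time tr.
    return amp * np.exp(-0.5 * ((t - tr) / sigma) ** 2)

dt = 0.001
t = np.arange(1.4, 2.6, dt)

# Synthetic overlapped pair: big peak + small peak, Rs ~ 1.
y1 = gauss(t, 100.0, 1.8, 0.10)
y2 = gauss(t, 20.0, 2.2, 0.10)
y = y1 + y2
true1 = 100.0 * 0.10 * np.sqrt(2.0 * np.pi)   # exact Gaussian areas
true2 = 20.0 * 0.10 * np.sqrt(2.0 * np.pi)

def trap(seg):
    # Trapezoid-rule area of a uniformly sampled segment.
    return dt * (seg[:-1] + seg[1:]).sum() / 2.0

# Valley = minimum of the summed trace between the two apexes.
apex1, apex2 = int(np.argmax(y1)), int(np.argmax(y2))
valley = apex1 + int(np.argmin(y[apex1:apex2]))

# Perpendicular drop: split the total area at the valley time.
drop1, drop2 = trap(y[:valley + 1]), trap(y[valley:])

# Valley-to-valley: baselines drawn from the region edges up to the
# valley point, so everything under those lines is thrown away.
base1 = np.linspace(y[0], y[valley], valley + 1)
base2 = np.linspace(y[valley], y[-1], len(y) - valley)
vv1 = trap(y[:valley + 1] - base1)
vv2 = trap(y[valley:] - base2)

for name, a1, a2 in (("drop", drop1, drop2), ("valley", vv1, vv2)):
    print(f"{name}: peak 1 {100 * (a1 / true1 - 1):+.1f}%, "
          f"peak 2 {100 * (a2 / true2 - 1):+.1f}%")

On these invented peaks, drop comes out within a couple of percent for the large peak and roughly -6% for the small one, while valley-to-valley undercuts both peaks, the small one by well over half.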

Turn your integration off to ignore regions of the chromatogram where you are not integrating anything. This often helps avoid problems caused by baseline fluctuations.
Merlin K. L. Bicking, Ph.D.
ACCTA, Inc.

Peak height is nice in some circumstances, but awful where peak width varies from sample to sample. OK, I know width should be constant, but there again: in an ideal world peaks should also be properly resolved!

But really, all this tangent-skimming and dropping lines from valleys is the most awful fudge, valid in the days of integration-by-scissors-and-balance, but hardly befitting an age of high-power computing.

Some years ago I saw a PhD thesis (online) on the subject of fitting tailed Gauss curves through HPLC data. Unfortunately I've never been able to find it since. Perhaps someone else here can help me?

The idea was that given two poorly-resolved peaks, you assume both are tailed Gauss curves, and find a best fit for the sum of the two curves through the given data. One of the parameters of the Gauss curve is its area, and this is what you then report...

Of course the dangers are (a) that the original peaks may not have been the right shape, and (b) that you are fitting 8 parameters through some data, and almost anything can be fitted if you've got enough parameters to play with. (It's 8 because you've got 2 peaks, each of which has a location (retention time), an area, a width, and a tailing factor).

This approach, which has to be better than messing about with dropped perpendiculars, is never going to happen unless some of the instrument software manufacturers drag themselves kicking and screaming towards the end of the 20th Century.
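
For what it's worth, the fitting idea is easy to prototype today. Here is a rough Python sketch (NumPy/SciPy) that fits the sum of two exponentially modified Gaussians, one common "tailed Gauss" model, to synthetic data and reads the two areas straight off the fitted parameters. Everything here is invented toy data, not a validated integrator:

Code:
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

def emg(t, area, mu, sigma, tau):
    # Exponentially modified Gaussian: a Gaussian (mu, sigma)
    # convolved with an exponential tail (time constant tau).
    # "area" is the true peak area by construction.
    lam = 1.0 / tau
    return (area * lam / 2.0) \
        * np.exp((lam / 2.0) * (2.0 * mu + lam * sigma**2 - 2.0 * t)) \
        * erfc((mu + lam * sigma**2 - t) / (np.sqrt(2.0) * sigma))

def two_peaks(t, a1, m1, s1, tau1, a2, m2, s2, tau2):
    # The 8-parameter model from the post: two tailed peaks.
    return emg(t, a1, m1, s1, tau1) + emg(t, a2, m2, s2, tau2)

# Synthetic, poorly resolved pair plus noise (all values invented).
rng = np.random.default_rng(1)
t = np.linspace(0.5, 5.0, 900)
y = two_peaks(t, 10.0, 2.0, 0.10, 0.20, 2.0, 2.4, 0.10, 0.20)
y = y + rng.normal(0.0, 0.02, t.size)

# Fit all 8 parameters from a rough initial guess; the bounds just
# keep the optimizer in a physically sensible region.
p0 = [8.0, 1.9, 0.10, 0.10, 3.0, 2.5, 0.10, 0.10]
lo = [0.1, 1.5, 0.05, 0.05, 0.1, 1.5, 0.05, 0.05]
hi = [50.0, 3.0, 0.30, 0.50, 50.0, 3.0, 0.30, 0.50]
popt, _ = curve_fit(two_peaks, t, y, p0=p0, bounds=(lo, hi))
print("fitted areas:", popt[0], popt[4])   # should recover ~10 and ~2

And the dangers lmh lists apply exactly here: if the real peaks are not EMG-shaped, the fitted areas will be confidently wrong.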

lmh:
I think most of us would prefer a nice mathematical approach to determining peak area. Unfortunately, one that works well in most situations has not been developed. There have been many papers on the subject, but until we can define a specific equation to model peak shape, it will be a theoretical topic only. I don't blame the data system developers for this. I know many have tried to come up with something better.

Three years ago I published a summary of which integration techniques work best in certain situations. Height was more accurate than area, and drop was less inaccurate than the other options in many cases. But when the resolution became poor, or one peak became very small, there were cases where no method gave an accurate answer, and others where the alternative integration methods gave better results.

The links are:

http://chromatographyonline.findanalyti ... ?id=318543

http://chromatographyonline.findanalyti ... 9&pageID=1

The Conclusion: The only real way to improve your accuracy is to improve your resolution. Your target Rs value should be at least 4.
Merlin K. L. Bicking, Ph.D.
ACCTA, Inc.

I'll paraphrase: you can't compute your way around bad chromatography.

There is a great summary in Dyson's book on Chromatographic Integration Methods, at the end of the chapter on integration algorithms, that makes the point explicit:

". . . errors arising from peak overlap are introduced by the algorithms of perpendicular and tangent separation and cannot be eliminated by anything but better chromatography. Integrators are able to generate a highly precise and totally inaccurate set of results for all of the forgoing examples."

Dyson, Chromatographic Integration Methods, 2nd Ed., p. 67; RSC Monographs (1998)
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+ 1 (925) 297-5374

Tom:
While I generally agree with Dyson's statement, there are a few particular cases where the exponential and Gaussian skim methods, and even valley, will generate an accurate number (while at the same time drop produces worse results). You cannot make the blanket statement that they are always bad when the resolution is poor. V. Meyer published a nice series with lots of examples more than 10 years ago. However, these cases are unique and difficult to implement for the practicing analyst, so in practice the advice is "caveat integrator."
Merlin K. L. Bicking, Ph.D.
ACCTA, Inc.

With narrow enough constraints on calibration method and sample composition, it is possible for almost any integration method to give repeatable, and even accurate, results. Nonetheless, in the real world, where samples vary in their content of both target analytes and interferences, no method that relies on anything except properly resolved peaks (i.e. good chromatography) is likely to be robust or reproducible.

Peter
Peter Apps

Tom and MBicking, thanks for useful comments and links.

Although I also value well-resolved peaks, let me play devil's advocate for a moment.

There are plenty of situations in analysis where we routinely use calculation to get round bad resolution. Obvious examples to me personally include things such as chlorophyll estimation from spectra of a crude extract of leaves (the spectra of the chlorophylls overlap strongly), and non-aqueous fractionation of cellular organelles, which absolutely cannot give pure fractions. No doubt everyone can find a personal list of things that can't be properly resolved, but must nevertheless be measured.

The maths of sorting out these situations, where we want to measure A, B, C but our measurements a, b, c don't contain pure "A" etc., has been around for centuries, and the methods to check that the process hasn't gone wrong are also very well described.
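
As a toy version of that maths: if each measurement channel is a known linear blend of the pure components, the amounts fall out of a least-squares solve. The response matrix below is invented purely for illustration:

Code:
import numpy as np

# Hypothetical mixing model: column j holds pure component j's
# response in each of three measurement channels.
M = np.array([[0.9, 0.3, 0.1],
              [0.2, 0.8, 0.3],
              [0.1, 0.2, 0.7]])

amounts = np.array([5.0, 2.0, 1.0])   # true amounts of A, B, C
measured = M @ amounts                # the overlapped signals a, b, c

# Recover A, B, C by least-squares inversion of the mixing model.
estimate, *_ = np.linalg.lstsq(M, measured, rcond=None)
print(estimate)   # ~[5. 2. 1.]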

Much as I love properly resolved peaks in chromatography, we also have to think about the cost of proper resolution. A method has to be good enough for its purpose, not perfect. If the best method takes twice as long as the quick-and-messy version, I can only look at half as many replicate samples, and overall my experiment may be less reliable because of loss of replication...

Given that we're in the business of generating imperfect-but-adequate methods, at a cost, for imperfect-but-adequate samples, I find it a little lazy to reject computational approaches to improving the result. It's definitely lazy to reason "my resolution is poor, and since there's no way to improve it, I shall be content with an integration approach that I know to be awful/highly subjective". It's also a bit lazy to take the "I've seen this all before" approach: it discourages any improvements in automatic integration/quantification, and neglects the fact that computers even 5 years ago didn't have the power to do some of the more demanding curve-fitting that can be tackled routinely today. Perhaps some older theory could be revisited nowadays?

Ha, got that off my chest! Now I'll go and get separating again, because frankly, I agree that it's a lot easier for a chromatographer to improve his peaks than for an informaticist to untangle them.

Thanks again for your comments and links.