Re: Resolution, method of integration and validation
Posted: Sat Oct 19, 2013 4:11 am
by gtma
Forgive me if this has been discussed elsewhere or in the past. Personally, I don't like the perpendicular drop for impurity methods and prefer the tangent skim approach. Perpendicular drop integration has pros and cons. In general, it overestimates the impurities... a good thing from a regulatory perspective. However, it can lead to unknown impurities (eluting close to the tail of the main analyte) exceeding the ICH identification and qualification limits... which can be an expensive exploration! Also, for peaks eluting close to the tail of the main analyte, it can cause robustness failures if the impurity is well resolved in some runs and not in others. I think it's acceptable to use tangent skim as long as you "consistently" use/specify this approach in the method. Any thoughts or feedback? Thanks in advance!
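As a rough numerical illustration of the overestimation described above, here is a sketch in Python that builds a synthetic tailed main peak with a small rider impurity and splits the fused pair both ways. The peak shapes, the valley search, and the assumed end of the rider peak are all illustrative assumptions, not any data system's actual algorithm.

import numpy as np
from scipy.stats import exponnorm
from scipy.integrate import trapezoid

# Synthetic chromatogram: a strongly tailed main peak (area 100) with a small
# impurity (area 5) riding on its tail. Zero baseline for simplicity.
t = np.linspace(3.0, 8.0, 5000)                          # retention time, min
main = 100 * exponnorm.pdf(t, 4.0, loc=4.0, scale=0.15)  # shape K = tau/sigma
imp = 5 * exponnorm.pdf(t, 1.0, loc=6.0, scale=0.10)
y = main + imp

# Locate the valley between the two apexes.
apex_main, apex_imp = np.argmax(main), np.argmax(imp)
between = np.arange(apex_main, apex_imp)
valley = between[np.argmin(y[between])]

# Perpendicular drop: everything to the right of the valley, above the
# (zero) baseline, is assigned to the impurity.
area_drop = trapezoid(y[valley:], t[valley:])

# Tangent skim: subtract a straight line drawn from the valley to an assumed
# end of the rider peak (here simply taken at t = 6.6 min).
end = np.searchsorted(t, 6.6)
skim = np.interp(t[valley:end], [t[valley], t[end]], [y[valley], y[end]])
area_skim = trapezoid(y[valley:end] - skim, t[valley:end])

print(f"true impurity area:       5.00")
print(f"perpendicular drop gives: {area_drop:.2f}")  # well above 5
print(f"tangent skim gives:       {area_skim:.2f}")  # somewhat below 5

With these invented shapes, the drop line hands most of the main peak's tail to the impurity (roughly doubling it in this example), while the skim result depends heavily on where the rider peak is deemed to end; different assumptions give different numbers, which is rather the point of the discussion that follows.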
Re: Resolution, method of integration and validation
Posted: Sat Oct 19, 2013 8:44 am
by danko
Tangent skim is pure fantasy. Unless, of course, one has the individual compounds and has run them (individually) in order to determine the peak shape each of them produces.
I mean, how would you know otherwise how much the first peak of interest tails and how much the second fronts when you have partial separation? And how is it possible to skim consistently, especially if the peaks differ in area and height through the analytical series and/or from day to day?
Best Regards
Re: Resolution, method of integration and validation
Posted: Sat Oct 19, 2013 3:07 pm
by tom jupille
There is a great statement in Dyson's book on integration:
“. . . errors arising from peak overlap are introduced by the algorithms of perpendicular and tangent separation and cannot be eliminated by anything but better chromatography. Integrators are able to generate a highly precise and totally inaccurate set of results for all the foregoing examples.”
Dyson, Chromatographic Integration Methods, 2nd ed., p. 67; RSC Monographs (1998)
To paraphrase: you cannot compute your way around bad chromatography. If the peaks are not fully resolved, then no matter which way you slice them, the result will be wrong. The best you can hope for is to be wrong the same way every time.
Re: Resolution, method of integration and validation
Posted: Mon Oct 21, 2013 8:17 pm
by gtma
Your point is well taken. I have encountered a few situations where some compromises are needed in the resolution of impurities. I can separate the impurity from the main analyte to show that it is not significant and that it produces results closer to the tangent skim than to the perpendicular drop approach. Unfortunately, other critical-pair impurities then co-elute elsewhere in the chromatogram. You may experience this problem if you have a lot of impurities (including an impurity close to the tail of the main analyte), run a chromatographic robustness study, and evaluate the individual and total impurity results against the nominal condition.
Re: Resolution, method of integration and validation
Posted: Thu Oct 24, 2013 3:26 pm
by lmh
I think I've said this before, but I sort-of have to say it again.
Of course Tom's right, and the Dyson quote is good, but it has induced paralysis on the part of software writers, and an unjustified paralysis at that. There will always be situations where you've done your very best within the available constraints and the two peaks are still not baseline-resolved. There is no option to improve the chromatography, because we've done all we can. The precious sample is now used up, the boss has called a halt to buying better columns, we've optimised gradients as far as possible, this is it: we're up against the wall.
So what do we do now? Because we "ought to improve the chromatography", and Dyson says all integration of improperly separated peaks is evil, instead of doing the best we can with the data we've got, we accept one of a variety of approaches everyone acknowledges to be dreadful. Since we can't be absolutely right, let's be drastically wrong! That's plain silly!
Over the years numerous better approaches have been suggested, including, for example, fitting several (probably two) tailed Gaussian curves through the affected region of data and taking the area under each fitted curve. The main argument against this is the error created by our assumption of the peak shape: (1) is our real peak a tailed Gaussian of the shape our model assumes, or something different? (2) How well can we estimate the parameters of our model, and what error does a bad estimate introduce?
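To make that idea concrete, here is a minimal sketch of such a fit, assuming exponentially modified Gaussians ("tailed" Gaussian peaks) as the model shape and using SciPy's curve_fit. The synthetic chromatogram and every peak parameter are invented purely for illustration; a real implementation would also need sensible peak detection and starting guesses.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import exponnorm

def emg(t, area, rt, sigma, tau):
    # One tailed peak (exponentially modified Gaussian) with the given area;
    # exponnorm's shape parameter is K = tau / sigma and its pdf has unit area.
    return area * exponnorm.pdf(t, tau / sigma, loc=rt, scale=sigma)

def two_peaks(t, a1, rt1, s1, tau1, a2, rt2, s2, tau2):
    # Sum of two overlapping tailed peaks.
    return emg(t, a1, rt1, s1, tau1) + emg(t, a2, rt2, s2, tau2)

# Synthetic data: a large main peak with a small impurity on its tail.
t = np.linspace(9.0, 12.0, 600)                        # retention time, min
true = [100.0, 10.0, 0.05, 0.10,                       # main analyte
        2.0, 10.4, 0.05, 0.05]                         # impurity
rng = np.random.default_rng(0)
y = two_peaks(t, *true) + rng.normal(0.0, 0.2, t.size)

# Fit both components at once; loose bounds keep the fit physically sensible.
p0 = [80, 10.0, 0.05, 0.1, 5, 10.4, 0.05, 0.05]
popt, _ = curve_fit(two_peaks, t, y, p0=p0,
                    bounds=([0, 9, 0.01, 0.01, 0, 9, 0.01, 0.01],
                            [1e3, 12, 1, 1, 1e3, 12, 1, 1]))

print(f"fitted main-peak area: {popt[0]:.1f}  (true {true[0]})")
print(f"fitted impurity area:  {popt[4]:.2f}  (true {true[4]})")

The fitted areas come straight from the model parameters rather than from any baseline construction, which is exactly the trade-off noted above: the answer is only as good as the assumed peak shape and the quality of the parameter estimates.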
There's some truth in the argument, but it's still spurious. It's like saying "OK, we know that if we time a 100m sprinter using a stop-watch instead of a proper Olympic-standard system, we will probably get a pretty inaccurate answer, so let's use an egg-timer instead".
Actually, I suspect the real reasons why this approach doesn't exist in mainstream chromatography data systems are (1) that it would have needed greater computational power than was available when computer-controlled chromatography was at its fastest stage of evolution, and (2) that chromatographers and manufacturers are both change-averse. We sometimes value historical validity over correctness.
Incidentally, this sort of approach isn't as weird as it looks. UV-Vis spectroscopists are quite used to the idea of (spectral) peaks of compounds overlapping, and there are oodles of methods where a set of pigments is measured separately by calculation from measurements at various wavelengths.
One generalised chromatographic equivalent is to create a calibration not by integrating a peak, but by treating every measured point in each standard's chromatogram as an independent measurement that might say something about the chemical present. Subject the whole thing to a partial least squares analysis, with the concentrations in the standards as the variable against which you are regressing, and then treat the time-points as measures of the chemical in proportion to the loading they carry (most points, not being in the peak, will have a loading of nearly zero).
Of course I'm not particularly advocating this, because in its unrefined form it's pretty over-complex, and error-prone because of the large number of completely irrelevant data-points. But the output of this approach is basically a set of equations for calculating concentrations from multipliers of the measurements at each retention time through the peaks, which is an interesting way to look at things. In the end it's arithmetically very close to an integration by more traditional means, but with the weightings chosen experimentally.
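For what it's worth, here is a rough sketch of that whole-chromatogram calibration idea, using scikit-learn's PLSRegression with every time point of the standards' chromatograms as a predictor. The data, peak shapes, number of components, and concentrations are all invented assumptions, there only to show the mechanics.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n_standards, n_points = 12, 400
t = np.linspace(0.0, 1.0, n_points)

# Invented standards: an analyte peak whose size tracks concentration, plus a
# partially overlapping interferent that varies independently, plus noise.
analyte = np.exp(-((t - 0.45) ** 2) / (2 * 0.02 ** 2))
interferent = np.exp(-((t - 0.50) ** 2) / (2 * 0.02 ** 2))
conc = np.linspace(1.0, 12.0, n_standards)               # known concentrations
X = (np.outer(conc, analyte)
     + np.outer(rng.uniform(2.0, 5.0, n_standards), interferent)
     + rng.normal(0.0, 0.05, (n_standards, n_points)))

# Regress concentration on every time point at once. The fitted coefficients
# act as a weight per retention-time point: near zero away from the analyte
# peak, an experimentally chosen weighting through it.
pls = PLSRegression(n_components=2)
pls.fit(X, conc)

# Predict an "unknown" built the same way (true concentration 7.5).
unknown = 7.5 * analyte + 3.0 * interferent + rng.normal(0.0, 0.05, n_points)
print(f"predicted: {pls.predict(unknown.reshape(1, -1)).ravel()[0]:.2f} (true 7.5)")

The regression coefficients it produces are the "set of equations" described above: one multiplier per retention time, near zero outside the peaks.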