Straight skim/Exponential skim Error comparison
Posted: Wed Mar 17, 2010 3:06 pm
by malsonj
Are there any published sources or accepted industry standards for determining which type of tangent skim method gives the least integration error and the most consistent results? In particular, I am interested in the best option (least error) using Agilent ChemStation to quantify poorly resolved peaks (USP resolution approx. 1, with minor peaks <0.5% that follow the major peaks). I know drop integration can have a high positive error here, so its use is not preferred for our application, but how do the skim methods compare?

Posted: Wed Mar 17, 2010 9:03 pm
by tom jupille
I don't think you can generalize, because it will depend on the details of the peak shapes. To put this in perspective, here's a quote from the end of the chapter on integration algorithms in Dyson's "Chromatographic Integration Methods," 2nd ed., RSC Monographs (1998), p. 67:
. . . errors arising from peak overlap are introduced by the algorithms of perpendicular and tangent separation and cannot be eliminated by anything but better chromatography. Integrators are able to generate a highly precise and totally inaccurate set of results for all the foregoing examples.
(the italics are mine).
Posted: Thu Mar 18, 2010 8:57 am
by Csaba
No integration is completely accurate, and the choice of algorithm also depends on the application. If you want to be able to say "the impurity is no more than...", then you should not select straight-line interpolation, since it will always underestimate the riding peak's area. In that case it is better to use the exponential skim and overestimate the area, although the answer may be less accurate when tested with standards.
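A quick numerical sketch of that underestimation (all peak parameters below are invented for illustration): model the main peak's tail as an exponential decay with a small Gaussian rider on it, then subtract a straight chord under the rider, the way a straight skim does. Because the tail is convex, the chord sits above it and part of the rider's area is skimmed away with the baseline:

```python
import numpy as np

# Invented, idealized chromatogram: the exponential tail of a large
# main peak with a small Gaussian "rider" peak sitting on it.
t = np.linspace(5.0, 9.0, 8001)
dt = t[1] - t[0]
tail = 30.0 * np.exp(-(t - 4.0) / 3.0)            # convex main-peak tail
true_area, w, c = 1.0, 0.25, 7.0                  # rider: area, sigma, center
rider = true_area / (w * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((t - c) / w) ** 2)
signal = tail + rider

# Straight skim: a straight line under the rider, drawn between points
# roughly +/- 3 sigma on either side of the small peak.
i0 = np.searchsorted(t, c - 3 * w)
i1 = np.searchsorted(t, c + 3 * w)
chord = np.interp(t[i0:i1], [t[i0], t[i1 - 1]], [signal[i0], signal[i1 - 1]])
skim_area = np.sum(signal[i0:i1] - chord) * dt

# The chord lies above the curved tail, so the skimmed area comes out
# well below the true rider area.
print(f"true rider area: {true_area:.3f}  straight-skim area: {skim_area:.3f}")
```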
Posted: Sun Mar 21, 2010 8:51 pm
by unmgvar
You could have a look at what is, in my view, a good piece of work by Merlin K. L. Bicking, called:
Integration Errors in Chromatographic Analysis
It was published in two parts in LC-GC magazine, in April and June 2006.
Great info for what you are looking to solve.
Posted: Tue Mar 23, 2010 3:51 pm
by mbicking
unmgvar: thank you for the kind comments. You can search the LCGC archives to find the articles.
As others have pointed out, there is no simple answer. The best method depends on the relative peak size and resolution. In general, the "exponential" and "new exponential (Gaussian)" skim methods are better than the straight skim, but skimming is only recommended at higher resolution (that is, when the small peak is further away on the side of the larger peak). At lower resolution, the drop method may actually be more accurate, and height is better than area.
But, as Tom points out, the real solution is better resolution.
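To make the drop method's bias concrete (all numbers below are invented for illustration): put a small Gaussian peak on the tail of a large, tailing main peak, split the signal at the valley minimum, and assign everything right of the valley, down to the true (here zero) baseline, to the small peak. The main peak's remaining tail gets counted in, which is the "high positive error" mentioned in the original question:

```python
import numpy as np

# Invented example of the perpendicular-drop method's positive bias for a
# small peak riding on the tail of a large, tailing main peak (baseline = 0).
t = np.linspace(6.0, 9.0, 6001)
dt = t[1] - t[0]
tail = 30.0 * np.exp(-(t - 4.0))                  # main-peak tail
true_area, w, c = 1.0, 0.25, 7.0                  # rider: area, sigma, center
rider = true_area / (w * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((t - c) / w) ** 2)
signal = tail + rider

# Perpendicular drop: find the valley minimum before the rider's apex,
# then integrate everything to its right down to the baseline.
valley = np.argmin(signal[t < c])
drop_area = np.sum(signal[valley:]) * dt

# The main peak's tail beyond the valley is charged to the small peak,
# so the drop area comes out several times larger than the true area.
print(f"true area: {true_area:.2f}  drop area: {drop_area:.2f}")
```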
Posted: Tue Mar 30, 2010 1:03 pm
by lmh
I still think there's an element of laziness about the approach offered by hplc software manufacturers. No one likes change, but skim-lines and integration aren't the only way.
If you think you have a sum of two tailing Gauss curves, why not just curve-fit two tailing Gauss curves? The equation of the line has eight parameters: RT, area, width and tailing factor of each peak. If you fit the curve, you automatically calculate all the things we like to know about a peak. If you want to test the model, you can check for non-random residual errors between the curve and the measured data.
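A toy illustration of that idea (all numbers invented; the tailing term is dropped for brevity, and a crude grid search stands in for a proper nonlinear optimizer): synthesize two badly overlapping Gaussians, then recover the small peak's area by fitting the model rather than by drawing skim lines. For each candidate set of centers and widths, the two heights follow from linear least squares:

```python
import numpy as np
from itertools import product

t = np.linspace(3.0, 8.0, 1001)

def gauss(t, c, w):
    return np.exp(-0.5 * ((t - c) / w) ** 2)

# Invented "chromatogram": a large peak (height 10) and a small rider
# (height 0.5) that overlap badly. True small-peak area = 0.5*w*sqrt(2*pi).
signal = 10.0 * gauss(t, 5.0, 0.4) + 0.5 * gauss(t, 5.7, 0.4)

# Brute-force fit over the nonlinear shape parameters (two centers, two
# widths); the heights for each candidate come from linear least squares.
best = (np.inf, None)
for c1, w1, c2, w2 in product(np.arange(4.8, 5.21, 0.05),
                              (0.3, 0.35, 0.4, 0.45),
                              np.arange(5.5, 5.91, 0.05),
                              (0.3, 0.35, 0.4, 0.45)):
    X = np.column_stack([gauss(t, c1, w1), gauss(t, c2, w2)])
    heights, *_ = np.linalg.lstsq(X, signal, rcond=None)
    err = np.sum((X @ heights - signal) ** 2)
    if err < best[0]:
        best = (err, (c1, w1, c2, w2, heights[0], heights[1]))

_, (c1, w1, c2, w2, a1, a2) = best
area_small = a2 * w2 * np.sqrt(2 * np.pi)   # area of the fitted small peak
print(f"fitted small-peak area: {area_small:.4f}")
```

With noise-free data the fit recovers the small peak's parameters exactly; the interesting question, as discussed below, is how well this survives real peak shapes and noise.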
Some years ago I saw a PhD thesis that tried this idea with success. I can't remember whose it was; if anyone knows, I'd be very grateful.
I suspect we are stuck with conventional integration because hplc software mushroomed at a time when the curve fitting would have been beyond desktop computers, and now we're too established in the methods we know to consider new approaches. Such is life.
Posted: Tue Mar 30, 2010 2:13 pm
by mbicking
lmh:
I am not a math genius, but I have read a number of the papers on the "exponentially modified Gaussian" (EMG) function, which is supposed to be the best approximation for real peaks. The defining convolution integral has no solution in elementary functions, so various approximations are used.
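For what it's worth, the usual way the EMG is evaluated in practice is via the (non-elementary) complementary error function, which Python's standard library exposes as math.erfc. A minimal sketch with invented parameters (unit area, center 5, sigma 0.3, exponential time constant 0.6), checking that the curve integrates back to the nominal peak area:

```python
from math import erfc, exp, sqrt, pi

def emg(t, area, mu, sigma, tau):
    """Exponentially modified Gaussian: a Gaussian of the given area,
    center and width, convolved with an exponential decay (tau)."""
    z = (mu + sigma * sigma / tau - t) / (sigma * sqrt(2.0))
    return (area / (2.0 * tau)) * exp(
        sigma * sigma / (2.0 * tau * tau) - (t - mu) / tau) * erfc(z)

# Numerically integrate an invented EMG peak over a wide window.
dt = 0.001
ts = [3.0 + i * dt for i in range(int(9.0 / dt))]
area = sum(emg(t, 1.0, 5.0, 0.3, 0.6) for t in ts) * dt
print(f"numerically integrated EMG area: {area:.4f}")   # close to 1
```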
I also know enough about the integration software to realize that the programmers are not dummies. If there were a viable way to curve-fit the peaks, then Agilent, Dionex, or Waters would be using it.
I think the more practical problem is that not all tailing is the same. Residual silanols, extra-column effects, kinetic issues, and stationary phase structure can all produce some type of tailing, but not necessarily in the same way - which means that there isn't a single equation that you could use for curve fitting. I don't have any mathematical basis for saying this; it is just based on my experiences (good and bad).
Posted: Wed Mar 31, 2010 10:12 am
by lmh
Mbicking, thanks, I don't deny I'm painting it simpler than it is! Another difficulty is that you obviously can't fit a (15*4) parameter curve through a complete chromatogram that contains 15 peaks (and you need to know it contains 15 peaks to know where to start, anyway...), so even if you can feel happy about the exact shape of the tailing, you would have to establish which sets of peaks look sufficiently overlapping to deserve grouping in a single curve-fit. It's messy, but I wish I had that thesis!
Also I don't deny the programmers know their stuff (compare integration results from "proper" hplc software with the sort of "add-it-up-and-hope" integration used in some metabolomics peak-finding where processing speed is of the essence). But perhaps sometimes I feel they're a bit limited by a need to keep to approaches we know work; who wants to try to persuade the QA people that some radically new way to look at peaks is still valid? Where's the financial benefit in it? I'm just stirring in the hopes that somewhere, someone will take a fresh look and see if we have really, really found the best approach.