Smoothing settings
Discussions about chromatography data systems, LIMS, controllers, computer issues and related topics.
I have low-S/N HPLC peaks that I would like to smooth (radiochromatography). My options in Chromeleon are moving average and Savitzky-Golay (and some others). The smoothing has a number setting, which I presume is the number of points over which the smoothing is performed on a rolling basis. Can anyone suggest limits on the parameters? Obviously I want enough smoothing to remove noise, but not so much that it removes small impurities or distorts peak shape. This can be done by eye, but I wanted something a little more numerically based. Thanks.
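For reference, in generic signal-processing terms that number is the window width: how many data points each smoothed value is computed from. Below is a minimal NumPy/SciPy sketch of the two options on a synthetic noisy peak; this is only a generic illustration, not Chromeleon's actual implementation.

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 10, 1000)                                       # time axis, min
peak = np.exp(-0.5 * ((t - 5.0) / 0.1) ** 2)                       # a single Gaussian peak
noisy = peak + np.random.default_rng(0).normal(0, 0.2, t.size)     # low-S/N trace

window = 15                                                        # the "number": points per window (odd)

# moving average: each smoothed point is the mean of the surrounding window
mov_avg = np.convolve(noisy, np.ones(window) / window, mode="same")

# Savitzky-Golay: fit a low-order polynomial in each window, keep the centre value
sg = savgol_filter(noisy, window_length=window, polyorder=3)
```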
I would suggest setting up a design-of-experiments test (a rough Python sketch of the sweep follows the list):
- inject a set of calibration samples from your top concentration down to below the approximate detection limit
- define a reasonable range of smoothing parameters by eye, then define combinations of parameters to cover the parameter space
- smooth with one combination of parameters, integrate, build the calibration curve, and calculate linearity (R²), limit of detection, and any other figure of merit you are interested in
- Repeat for each combination
- Find the one giving the best results and use it
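To make the sweep concrete, here is a rough Python (NumPy/SciPy) sketch, assuming the traces have been exported from Chromeleon as arrays. The synthetic data generator and the fixed-window, baseline-corrected sum are stand-ins for the real radiochromatograms and the real integrator, and the 3.3·s/slope LOD estimate is just one common choice, so treat the numbers as illustrative only.

```python
import numpy as np
from itertools import product
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
dt = t[1] - t[0]
concs = np.array([0.5, 1, 2, 5, 10, 25, 50])          # calibration levels (arb. units)

def synthetic_trace(conc):
    """Stand-in data: Gaussian peak proportional to concentration plus detector noise."""
    peak = conc * np.exp(-0.5 * ((t - 5.0) / 0.08) ** 2)
    return peak + rng.normal(0, 1.0, t.size)

traces = [synthetic_trace(c) for c in concs]

def peak_area(trace):
    """Crude stand-in for the integrator: baseline-corrected sum over a fixed window."""
    mask = (t > 4.5) & (t < 5.5)
    baseline = np.median(trace[~mask])
    return np.sum(trace[mask] - baseline) * dt

results = []
for window, order in product([7, 11, 15, 21, 31], [2, 3]):
    areas = np.array([peak_area(savgol_filter(tr, window, order)) for tr in traces])
    slope, intercept = np.polyfit(concs, areas, 1)
    fit = slope * concs + intercept
    r2 = 1 - np.sum((areas - fit) ** 2) / np.sum((areas - areas.mean()) ** 2)
    s_res = np.sqrt(np.sum((areas - fit) ** 2) / (len(concs) - 2))
    lod = 3.3 * s_res / slope                          # one common LOD estimate (ICH-style)
    results.append((window, order, r2, lod))

best = min(results, key=lambda r: r[3])                # e.g. pick the lowest estimated LOD
print("window=%d, order=%d, R2=%.4f, LOD=%.2f" % best)
```

With real exported data you would replace synthetic_trace and peak_area with your actual chromatograms and your integrator's results, and extend the loop to cover moving-average settings as well.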
I strongly recommend Savitzky-Golay rather than moving average, because it won't affect peak shape unless you seriously over-do it. Basically it fits a polynomial curve through a succession of points and takes the fitted value at the middle of the window as the smoothed point. This means it can curve itself round the curvy starts and ends of peaks, where a moving average will smudge the peak and make it wider.
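If you want to see the peak-shape point on a test trace, here is a small comparison, with SciPy's savgol_filter standing in for whatever Chromeleon does internally: smooth the same noisy narrow peak with both methods at the same window width and compare apex height and width.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks, peak_widths

t = np.linspace(0, 4, 800)
dt = t[1] - t[0]
clean = np.exp(-0.5 * ((t - 2.0) / 0.05) ** 2)                    # narrow peak, sigma = 0.05 min
noisy = clean + np.random.default_rng(1).normal(0, 0.05, t.size)

window = 31                                                       # same window width for both methods
mov_avg = np.convolve(noisy, np.ones(window) / window, mode="same")
sg = savgol_filter(noisy, window, polyorder=3)

for name, y in [("true peak", clean), ("moving average", mov_avg), ("Savitzky-Golay", sg)]:
    apex = y.max()
    idx, _ = find_peaks(y, height=0.5 * apex)
    fwhm = peak_widths(y, idx, rel_height=0.5)[0].max() * dt      # full width at half maximum, min
    print(f"{name:16s} apex = {apex:.3f}   FWHM = {fwhm:.4f} min")

# At equal window width the moving average typically lowers the apex and
# broadens the peak noticeably more than Savitzky-Golay does.
```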
Apart from this, it's actually really difficult to recommend ideal parameters, because the smoothing actually interacts with the integrator. The main benefit of the smoothing is that it helps the integrator to find the start and end of the peak more reliably, and not get misled by a dip in the middle that might be mistaken for a valley between two peaks.
Basically I smooth enough that a reasonable selection of low-abundance peaks are integrated "correctly" (i.e. not split), and I don't worry about being too quantitative about assessing how well it's done, though I do then insist that I use the same integrator and smoothing settings for all samples (otherwise, in effect, you're doing manual integration, but at arm's length by pretending that manual changes to integration on individual files aren't actually manual...). It is always difficult to assess the accuracy of inherently inaccurate data (weak peaks are always less accurate!) without a definite correct answer to which to compare the measurements.
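A quick way to see the split-peak issue is to run a simple peak finder on a weak peak before and after smoothing. SciPy's find_peaks below is only a stand-in for the real integrator, and the counts will vary with the noise, so the numbers are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

rng = np.random.default_rng(7)
t = np.linspace(0, 2, 400)
trace = 0.3 * np.exp(-0.5 * ((t - 1.0) / 0.05) ** 2) + rng.normal(0, 0.05, t.size)

raw_peaks, _ = find_peaks(trace, prominence=0.08)                 # "integrator" on the raw trace
smoothed = savgol_filter(trace, window_length=15, polyorder=3)
smoothed_peaks, _ = find_peaks(smoothed, prominence=0.08)         # same settings after smoothing

# The noisy trace often shows several apparent peaks (a split apex); after
# smoothing it usually collapses to the single real peak.
print(len(raw_peaks), "peak(s) before smoothing,", len(smoothed_peaks), "after")
```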