Calibrations

Discussions about chromatography data systems, LIMS, controllers, computer issues and related topics.

2 posts Page 1 of 1
I've just encountered a situation where a particular method has an averaged calibration rather than replacing it each batch. I'm curious as to what effect averaging the calibrations would have versus replacing it each time? To me, it seems that issues could be caused if there was a relatively wide variation in calibration conditions, caused by inlet, flow and column conditions etc.

I must admit to never having used this option before - my calibrations were always replaced to ensure the best possible conditions. Why is this option even a thing?
Simon Wicks
Specialist Chemist
Eurofins Chemtest
(1) Manufacturers include features in software if they're in everyone else's software, and if anyone has ever asked for them. No one wants to lose a sale when the customer says "can your software do..."

(2) But the theoretical answer is this: it depends where your errors are. I assume you are referring to a bracketed situation where you've run the calibration standards maybe 4 times, with three batches (brackets) of samples between the four calibration sets. If you're working in an LC-MS, probably the overall instrument sensitivity (ionisation efficiency) will gradually wander as the spray chamber cleanliness changes. If so, you'd want to use the closest calibration sets to the sample, and not average the whole lot. The largest errors are systematic ones, between brackets.

But if you're using a PDA, it's quite likely there is no systematic error, and the largest error is completely random. In this case, the average of all standards is the best. In fact the main thing the brackets are doing for you is giving you a fall-back position if things go wrong. You can trust your data as far as the last successful standard, so if you just ran a curve at the start, and it went wrong on sample 40, you'd lose the whole lot.
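The trade-off in the two paragraphs above can be illustrated with a toy simulation. This is purely a hypothetical sketch (the 4-set/3-bracket layout, the drift and noise magnitudes, and all function names are my own assumptions, not anything from a real CDS): it compares quantitating each bracket with the mean of its two bracketing calibration sets against quantitating everything with the average of all four sets, once with a drifting response factor and once with a stable one.

```python
import random

random.seed(0)

TRUE_RF = 100.0  # true response factor (area counts per unit concentration)

def run_sequence(drift_per_set, noise_sd):
    """Simulate 4 calibration sets bracketing 3 sample batches.

    The instrument response factor drifts linearly by `drift_per_set`
    between sets, and each calibration measurement carries random noise.
    Returns the measured RFs and the true RF in effect at the midpoint
    of each bracket (where we imagine the samples were run).
    """
    measured = [TRUE_RF + drift_per_set * i + random.gauss(0.0, noise_sd)
                for i in range(4)]
    true_at_samples = [TRUE_RF + drift_per_set * (i + 0.5) for i in range(3)]
    return measured, true_at_samples

def bracket_error(measured, true_at_samples):
    """Quantitate each bracket with the mean of its two bracketing sets."""
    errs = [abs((measured[i] + measured[i + 1]) / 2 - t)
            for i, t in enumerate(true_at_samples)]
    return sum(errs) / len(errs)

def average_all_error(measured, true_at_samples):
    """Quantitate every sample with the average of all four sets."""
    avg = sum(measured) / len(measured)
    errs = [abs(avg - t) for t in true_at_samples]
    return sum(errs) / len(errs)

def mean_errors(drift, noise, n=2000):
    """Average the two strategies' errors over n simulated sequences."""
    b_tot = a_tot = 0.0
    for _ in range(n):
        m, t = run_sequence(drift, noise)
        b_tot += bracket_error(m, t)
        a_tot += average_all_error(m, t)
    return b_tot / n, a_tot / n

# Case 1: drifting sensitivity (the LC-MS-like case), small random noise
b1, a1 = mean_errors(drift=5.0, noise=0.5)
# Case 2: stable detector (the PDA-like case), purely random noise
b2, a2 = mean_errors(drift=0.0, noise=2.0)
print(f"drift-dominated : bracket={b1:.2f}  average-all={a1:.2f}")
print(f"noise-dominated : bracket={b2:.2f}  average-all={a2:.2f}")
```

When drift dominates, the nearest-bracket average tracks the wandering response factor and beats the global average; when the error is purely random, averaging all four sets wins because it pools more measurements of the same underlying value.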

The setting I've never quite understood is the one that uses the first curve for all samples up to the second, then averages the first and second for the samples it encounters before it starts to run the third calibration set, at which point it averages all three for subsequent samples. The only thing I can assume is that this is for people who want their data straight away, calibrated as well as is possible at the time the sample was run.
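Just to pin down the rule being described: "use every calibration set that has already been acquired by the time the sample runs." Here is a minimal sketch of that selection logic (the function name, the sequence-position representation, and the example injection positions are all invented for illustration, not taken from any vendor's software):

```python
def curves_for_sample(sample_position, cal_positions):
    """Return the indices of the calibration sets used to quantitate a
    sample, under the progressive-averaging rule: average every
    calibration set already acquired when the sample was injected.

    `cal_positions` are the injection-sequence positions of the
    calibration sets; `sample_position` is the sample's position.
    """
    return [i for i, pos in enumerate(cal_positions) if pos < sample_position]

# Hypothetical sequence: calibration sets injected at positions 0, 10, 20, 30
cals = [0, 10, 20, 30]
print(curves_for_sample(5, cals))   # first curve only
print(curves_for_sample(15, cals))  # average of first and second
print(curves_for_sample(25, cals))  # average of first three
```

So a sample never "sees" a calibration run after it, which is exactly why the result is available immediately, and also why early samples are quantitated from less data than late ones.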

If you meant averaging from day to day, I think that's probably quite bad practice, but it would depend on whether validation has shown that there is little or no day-to-day variation. If so, and if you believe that individual calibration curves vary, then there might be a case for averaging today's with yesterday's. But if things are that bad, I don't know if I'd trust an individual sample measurement.

