In any sequence of samples, you will have QC samples to check that the method is performing within specifications. For example, if I'm measuring samples at really, really low levels, and the answers I will report are "present"/"absent", it makes sense for me to include low calibration points and assess my LOD in the same sequence as the samples.
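To make that concrete, here's a rough sketch (Python, purely illustrative, with invented numbers) of the kind of in-sequence LOD estimate I mean, using the common 3.3 * sigma / slope rule of thumb:

```python
# Rough sketch of an in-sequence LOD estimate from low calibration points,
# using the 3.3 * sigma / slope rule. All concentrations and responses
# below are invented for illustration only.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # low calibration levels
resp = np.array([11.0, 23.5, 44.0, 112.0, 221.0])  # peak areas from this run

slope, intercept = np.polyfit(conc, resp, 1)             # linear calibration fit
sigma = (resp - (slope * conc + intercept)).std(ddof=2)  # residual sd (n - 2)

lod = 3.3 * sigma / slope
print(f"LOD estimate for this sequence: {lod:.2f}")
```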
What I need to do is make sure that I process my samples in exactly the same way as the LOD standards, QC samples, or whatever else I use to validate my method.
The integration parameters can differ from day to day, or lab to lab, but each day I am choosing the parameters in a defined way, and (most importantly!) testing that what I have chosen will do the job to a certain standard.
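In code terms, that test might look something like this minimal sketch; the +/-15% tolerance is just an example figure, not a universal criterion:

```python
# Minimal sketch of "testing that the chosen parameters do the job": the QC
# results from this sequence must pass an acceptance criterion before
# anything is reported. The 15% tolerance is only an example value.
def qc_passes(measured: float, nominal: float, tolerance: float = 0.15) -> bool:
    """True if a QC result falls within tolerance of its nominal value."""
    return abs(measured - nominal) <= tolerance * nominal

# Example with invented values: a 10-unit QC standard measured at 9.2.
assert qc_passes(measured=9.2, nominal=10.0)
```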
This is why I'm not happy about manually tweaked peaks. If I start tweaking, how can I be sure I don't try just a little bit harder on my LOD standards than on the real samples? So does my LOD really still apply when I report "absent" for a particular sample?
By the way, there's a big difference between manually chosen integration parameters (used on all samples in a sequence) and manual integration (drawing baselines by hand to where they "look right").
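As a toy illustration of the first kind, this is what "parameters chosen once, applied to everything" looks like in a Python sketch; scipy's find_peaks is only a stand-in for a real integrator, and the parameter values are arbitrary:

```python
# Toy illustration of "manually chosen integration parameters": picked once,
# then applied identically to every chromatogram in the sequence, with no
# per-peak hand editing. find_peaks stands in for a real integrator here.
import numpy as np
from scipy.signal import find_peaks

INTEGRATION_PARAMS = {"height": 5.0, "prominence": 3.0, "width": 4}

def integrate(chromatogram: np.ndarray) -> list[float]:
    """Crude peak areas, using the same parameters for every sample."""
    peaks, props = find_peaks(chromatogram, **INTEGRATION_PARAMS)
    return [float(np.trapz(chromatogram[left:right + 1]))
            for left, right in zip(props["left_bases"], props["right_bases"])]
```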
(2) Alex Buske, exactly my point. It seems wrong to be over-concerned about smoothing and then gloss over integration, when the latter is less well-defined and can do much more harm than the former. I'm certainly not advocating throwing away, or hiding, the original unsmoothed data. I'm just advocating well-defined, validated smoothing as part of the process, and as a tool that can help the integrator do a better job and therefore produce a more reliable end result.
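For what I mean by well-defined smoothing, here's a minimal sketch using a fixed Savitzky-Golay filter; the window and order are example values, assumed to have been settled during validation rather than tuned per sample:

```python
# Sketch of smoothing as a defined, validated step: a fixed Savitzky-Golay
# filter applied the same way to every trace, always returning a copy so
# the raw, unsmoothed data stays untouched.
import numpy as np
from scipy.signal import savgol_filter

SMOOTH_WINDOW = 11  # points; example value, fixed during method validation
SMOOTH_ORDER = 3

def smooth(raw: np.ndarray) -> np.ndarray:
    """Smoothed copy of a trace; the original array is never overwritten."""
    return savgol_filter(raw, SMOOTH_WINDOW, SMOOTH_ORDER)
```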
But the smoothing debate will no doubt go on for years.
