-
- Posts: 3
- Joined: Wed Apr 01, 2009 5:50 am
Is manual integration acceptable?
Discussions about chromatography data systems, LIMS, controllers, computer issues and related topics.
I was hoping someone could shed some light on the subject of manual integration. In a cGMP atmosphere, when is it OK to manually integrate (for example) small unknown peaks? Empower 2 has a full audit trail, so it's not as if the manual integration is being "hidden", for lack of a better term, from QA or the FDA. Wouldn't it be preferable to have a consistently manually-integrated peak rather than a peak integrated poorly within the constraints of the software? Thanks for your input!
-
- Posts: 239
- Joined: Tue Nov 09, 2004 3:06 pm
Hi,
I am not aware of any guideline demanding automated integration only. Who said manual integration is unacceptable?
Alex
-
- Posts: 3
- Joined: Wed Apr 01, 2009 5:50 am
My current job says it is a violation in the eyes of the FDA... I tried to tell them that we did it often at my last QC job, and we were audited tons of times with no warnings whatsoever about manual integration. So now I'm wondering: what are the guidelines for manual integration? I have seen some very poorly integrated peaks at my current job (specifically, poor baseline placement) that were deemed acceptable only because they did not want to manually integrate.
-
- Posts: 2175
- Joined: Tue Aug 17, 2004 7:59 pm
A good data system can't fix bad chromatography.
A good data system should be able to render consistent, defensible baselines for an overwhelming majority of injections within a run or method history. For the occasional injection that requires a manual touch, go ahead and do it, but you had better be able to explain why you did it and the result had better look pretty much like the baselines of neighboring injections. Consistency is key.
Of course, if you have SOPs stating that manual integration is prohibited, you're stuck.
If you're seeing dodgy baselines on a consistent basis, either you need to evaluate the differences that correcting them would make in the results (if they're trivial, leave it), or you need to suggest a better way to do it. If nobody will listen to your suggestion, then ask your supervisors "Why do our chromatograms look like this while every single chromatogram in this column catalog looks like this?". Depending on the answer you get, it may be time to update your résumé.
Thanks,
DR

-
- Posts: 586
- Joined: Mon Oct 04, 2004 3:00 am
FDA compliance is a tricky thing; basically, the auditor tells you whether you are compliant. Different auditors have different standards for compliance, so what one auditor says is compliant may not be accepted by a different auditor. I even know of one instance in which an auditor determined that Empower software was not compliant.
-
- Posts: 658
- Joined: Thu Jun 19, 2008 11:07 am
Hi
Agree that different auditors may have different opinions (good thing the FDA now has a "dispute board", or whatever it is called).
As for manual integration not being allowed, well, I agree with the others that it is not so.
I think some people may be mixing this up with whether the software has an audit trail or not. In other words, how do you prove/save the raw data so that the manual integration shows when you reload the data? Or is only the printout "valid"?
So if you have ER/ES-compliant software that saves a new version of the raw data file, which can be reloaded to show the manual integration, you are easily off the hook. If not, you may need manual procedures to establish which data are the true data. But neither case prohibits the use of manual integration.
It is kind of stupid, actually, as your company policy may mean that "poor" peaks at or above the reporting limit of the method are not included in, for example, the total amount of impurities. How bad is that?
As stated above: a good data system cannot fix bad chromatography, nor can it fix a "bad" sample, and trace analysis is just that.
-
- Posts: 3594
- Joined: Mon Aug 30, 2004 11:43 pm
I'd like to offer up that even electronic integration can be manual to some extent, since different operators can adjust integration parameters to get slightly different results from the same electronic raw data. Of course, having extremely sharp, perfectly resolved peaks makes that much less of an issue.
As to what an FDA auditor might accept or not accept: who knows. My own QA department questioned the exact wording in a protocol that I had pasted in from an extremely similar, already-accepted protocol two weeks ago, so go figure.
-
- Posts: 1889
- Joined: Fri Aug 08, 2008 11:54 am
We're not in a QC environment, but even so, I never touch manual integration, for a whole host of reasons.
One of the strongest is the LOD/LOQ question. If you've measured a limit of detection using dilute standards to make a calibration curve close to the LOD, then your LOD depends on you integrating the samples using the same parameters that you used to derive the LOD. Clearly if you carry out manual integration to optimise the LOD standards, then you will get a lower s.d. than if you just integrate them with the standard parameters in the software integrator, and this will give you a lower LOD than actually applies to the samples. So you might declare a sample as less than 1 pmole when in fact you can only really be sure it's less than 2 pmole.
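To make the arithmetic concrete, here is a minimal sketch of the effect (Python, assuming an ICH-style estimate of LOD = 3.3·σ/slope from a low-level calibration curve; all concentrations and peak areas below are invented for illustration, not real data):

```python
# Hypothetical illustration: the same five dilute standards, integrated once
# with the routine automatic parameters and once after manual "touch-up".
# Less scatter in the standards gives a lower *apparent* LOD, even though
# real samples will be integrated with the routine parameters.
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])                    # pmol injected
area_auto = np.array([130.0, 215.0, 480.0, 900.0, 1870.0])    # routine integration
area_manual = np.array([120.0, 232.0, 468.0, 935.0, 1868.0])  # manually optimised

def lod_estimate(conc, area):
    """ICH-style LOD = 3.3 * (s.d. of regression residuals) / slope."""
    slope, intercept = np.polyfit(conc, area, 1)
    residuals = area - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)            # two fitted parameters
    return 3.3 * sigma / slope

print(f"apparent LOD, automatic integration: {lod_estimate(conc, area_auto):.2f} pmol")
print(f"apparent LOD, manual integration:    {lod_estimate(conc, area_manual):.2f} pmol")
```

The only point is that the LOD you claim for samples is tied to how the standards were integrated; tighten up the standards by hand and you quietly overstate what the routine integration can actually see.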
As to the general question of fiddling with integration, whether manual or automatic: if small changes affect the overall result, then the overall result is probably unreliable.
Many automatic integrators are very sensitive to sudden jumps in the signal (they're using slope as a measure of where the peak starts and stops, and a sudden jump is a big slope). I tend to use smoothing routinely, because I'd rather smooth everything than fiddle with slope sensitivities to get the perfect peak. Smoothing is a bit of a naughty issue, but some algorithms (e.g. Savitzky-Golay) do not increase peak width or alter peak shape, and are therefore very safe to use. Others (box-car, many implementations of Gaussian) are travesties of justice that should be stamped out.
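For what it's worth, the difference between the two families of filters is easy to demonstrate. Below is a minimal sketch (Python with numpy/scipy, a synthetic Gaussian peak and an invented 21-point window) comparing a Savitzky-Golay filter with a box-car of the same width; the box-car visibly knocks down the apex, while the Savitzky-Golay filter barely touches it:

```python
# Compare Savitzky-Golay and box-car smoothing on a noisy synthetic peak.
# All peak and noise parameters are made up for illustration.
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0.0, 2.0, 1001)                          # minutes
peak = np.exp(-0.5 * ((t - 1.0) / 0.02) ** 2)            # narrow Gaussian peak
rng = np.random.default_rng(0)
signal = peak + rng.normal(scale=0.02, size=t.size)      # add baseline noise

# Savitzky-Golay: local polynomial fit, largely preserves height and width
sg = savgol_filter(signal, window_length=21, polyorder=3)

# Box-car: running mean over the same window, broadens and flattens the peak
boxcar = np.convolve(signal, np.ones(21) / 21, mode="same")

for name, y in (("raw", signal), ("Savitzky-Golay", sg), ("box-car", boxcar)):
    print(f"{name:15s} apex height = {y.max():.3f}")
```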
-
- Posts: 154
- Joined: Tue Jan 16, 2007 10:44 am
As far as I can remember, any manipulation of raw data (smoothing) is not allowed in a GMP environment. Maybe the situation has changed, but I don't believe it.
"We're not in a QC environment, ...... I tend to use smoothing routinely, because I'd rather smooth everything than fiddle with slope sensitivities to get the perfect peak. Smoothing is a bit of a naughty issue, but some algorithms (e.g. Savitzky-Golay) do not increase peak width or alter peak shape, and are therefore very safe to use. Others (box-car, many implementations of Gaussian) are travesties of justice that should be stamped out."
We are working in GMP, and manual integration fixes are considered "dirty"; it is usually better to adjust the method integration parameters.
For "irregular" samples (during the production process and method development) it is possible to use manual integration.
-
- Posts: 1889
- Joined: Fri Aug 08, 2008 11:54 am
I'm afraid this is going to be another rant...
Yes, even I'm aware that the regulators get upset about "manipulation of the raw data", and once again I can't help but feel that if they took the trouble to engage both brain-cells at the same time, and perhaps read up on their subject, they might feel differently.
Firstly, what makes these particular "raw data" raw? They're not the voltages or currents measured at the actual electronics of the detector. They are merely one particular stage along a data-processing route that begins with a current at a detector and ends with a value in AU (some would argue, ends with a value in mg or micromoles).
In fact these raw data are a watershed between the instrument method and the processing method. Up to this point, we influence the numbers by how we set up the detector. After this point, we influence the numbers by how we post-process. The numbers are important because, if we know how we post-process, we can always redo it differently, if, and only if, we kept the "raw data".
But there's nothing else holy about these numbers. They have already been manipulated in all sorts of poorly documented ways before we see them.
Secondly, the regulators completely misunderstand the process of smoothing. Smoothing is already built into the system. Ignoring the fact that many manufacturers will smooth their signal electronically before they digitise it (something which is a lot less well defined than a numerical smoothing applied later), a flow cell is merely a physical box-car smoothing device. It automatically gives us a running average of the volume of sample that fits in the flow cell. We naturally choose a flow-cell volume to suit our application. If you swap your short flow cell for a longer one when you use a high-flow application (giving better S/N but sacrificing resolution, which in this case doesn't matter), you are doing very much the same as using a short flow cell and box-car smoothing the result. There are parallels in other detectors too. Many detectors are basically integrating a signal over a period of time that the analyst specifies between detection events.
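As a back-of-the-envelope illustration of that equivalence (cell volume, flow rate and data rate below are invented, typical-looking numbers, not any specific instrument):

```python
# Back-of-envelope sketch: the residence time of a flow cell expressed as an
# equivalent box-car window in data points.
cell_volume_uL = 10.0          # flow cell volume (assumed)
flow_rate_mL_min = 1.0         # pump flow rate (assumed)
sampling_rate_hz = 10.0        # detector data rate (assumed)

residence_time_s = cell_volume_uL / (flow_rate_mL_min * 1000.0 / 60.0)
window_points = residence_time_s * sampling_rate_hz
print(f"flow cell averages over ~{residence_time_s:.2f} s "
      f"= ~{window_points:.0f} data points (box-car equivalent)")
```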
Thirdly, the nature of the "manipulation". At some point, an integrator has to decide where a peak starts. This is an entirely numerical decision based on some manipulation of the data. Most integrators look for a situation where the slope of the chromatogram exceeds some value (or in some cases the 2nd derivative passes through zero). Measuring slope (or second derivative) requires a combination of more than one point in the data. If you measure slope using more than two points, you are already smoothing (albeit for a specific purpose). Even the integration process itself isn't (or shouldn't be) merely a matter of totting up the actual intensities measured between start and end of peak. Since the peak is known to curve, not be pointy, good integration routines allow for the curviness between the known measurements. Again, this is a process of looking at a number of points to find a good approximation to the shape. We tend to take this for granted.
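A minimal sketch of what such an integrator is doing under the hood (Python, synthetic peak, invented threshold; not any particular vendor's algorithm): the "slope" is a least-squares fit over a small window of points, which is itself a smoothing operation, and the area is taken with Simpson's rule, which allows for curvature between the sampled points:

```python
# Slope-threshold peak detection plus Simpson integration on a synthetic peak.
# Threshold, peak height and width are invented for illustration.
import numpy as np
from scipy.integrate import simpson

t = np.linspace(0.0, 2.0, 2001)                      # minutes
y = 50.0 * np.exp(-0.5 * ((t - 1.0) / 0.05) ** 2)    # synthetic peak, mAU

def local_slope(y, t, i, half_window=5):
    """Least-squares slope over 2*half_window + 1 points centred on index i."""
    s = slice(max(i - half_window, 0), i + half_window + 1)
    return np.polyfit(t[s], y[s], 1)[0]

threshold = 20.0                                     # mAU/min, invented value

# Peak start: first point where the multi-point slope exceeds the threshold.
start = next(i for i in range(len(t)) if local_slope(y, t, i) > threshold)
# Peak end: first point after the apex where the slope flattens out again.
apex = start + int(np.argmax(y[start:]))
end = next(i for i in range(apex, len(t)) if abs(local_slope(y, t, i)) < threshold)

# Simpson's rule allows for the curvature between the sampled points.
area = simpson(y[start:end + 1], x=t[start:end + 1])
print(f"start {t[start]:.3f} min, end {t[end]:.3f} min, area {area:.1f} mAU*min")
```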
The very manipulations many regulators abhor are already deeply embedded in what is going on, and are embedded for very sound reasons.
I'd be far more concerned by manipulations of changing individual integrations of lone files (the subject of this thread). They may be traceable, but faced with 100 samples, who is going to look to see which have been fiddled with, why, and what they would have looked like if no one had fiddled? Remember why Babbage invented the numerical calculator!
Rant over, sorry folks!
-
- Posts: 5433
- Joined: Thu Oct 13, 2005 2:29 pm
If integrating by different methods, or with (sensible) small changes in the automatic integration parameters, makes significant differences to the result, I would think that the overall method is failing an important robustness test.
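A rough illustration of that kind of robustness check (hypothetical peak areas; in practice you would reprocess the same raw data with two sensible sets of integration parameters and compare the reported results):

```python
# Hypothetical robustness check: areas of the same peak reprocessed with two
# sensible sets of integration parameters. A large relative difference
# suggests the result, not just the integration, is the problem.
areas_set_a = [1520.0, 1498.0, 1512.0]     # e.g. default slope sensitivity
areas_set_b = [1505.0, 1490.0, 1500.0]     # e.g. slightly relaxed threshold

mean_a = sum(areas_set_a) / len(areas_set_a)
mean_b = sum(areas_set_b) / len(areas_set_b)
rel_diff_pct = 100.0 * abs(mean_a - mean_b) / mean_a

limit_pct = 2.0                            # invented acceptance criterion
print(f"relative difference: {rel_diff_pct:.2f}% "
      f"({'OK' if rel_diff_pct <= limit_pct else 'robustness concern'})")
```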
Peter
Peter Apps
-
- Posts: 1889
- Joined: Fri Aug 08, 2008 11:54 am
Peter Apps, Entirely agreed!
-
- Posts: 5433
- Joined: Thu Oct 13, 2005 2:29 pm
Thanks lmh. A question for the regulation experts - is robustness to integration parameters covered in the sacred texts? If not, maybe it should be.
Peter
Peter Apps
-
- Posts: 2846
- Joined: Mon Aug 30, 2004 7:17 am
lmh's rant (to use his own description) is precisely the reason I never used any smoothing method after some initial tests: too much electronics, too little chemistry/physics. Also, I didn't see any smoothing, including S-G, that couldn't severely change peak parameters. HP integrators of some 30 years ago decided, after a certain number of points, whether there was really a slope; if there was, they added all of this to the peak, without changing the points in any way. All the integrators I have used since were based on similar simplicity. I certainly want to know how the signal looks with a minimum of manipulation.
One should not forget that HPLC is a method that doesn't work without calibration, and those calibrations showed that my manual integration parameters were more robust than what the programmers put into the software. Of course, I never had to satisfy regulations; I was always subject only to scientific soundness.
-
- Posts: 239
- Joined: Tue Nov 09, 2004 3:06 pm
On the LOD/LOQ criterion:
Even if you stay on the same system, lamps age and water quality might change, as do solvents - so the idea of keeping integration parameters constant over the whole of the method's life cycle is not realistic. And what about method transfer to other labs with other software?
Robustness to integration parameters is not covered by the validation guidelines; in my view, neither is any other integration topic (except the definition of S/N and so on).
Correct integration is the basis of any meaningful chromatographic analysis, and there are reasons why, in some organizations, every result is double-checked.
Alex