
Posted: Sun Nov 01, 2009 3:19 pm
by Uwe Neue
John, let me defend the EP thinking, although I think it is overly pedantic.

We know that the height of the signal can be clearly defined.

We know that we can clearly define the noise.

The signal arises above the averaged baseline, which is in the center of the noise.

Thus, if I want a measure of how high my signal is relative to the height of the noise above the baseline, I need to divide the signal by half the noise, which is what the EP is doing.

Posted: Sun Nov 01, 2009 6:16 pm
by HW Mueller
So the EP is saying that the noise is 1/2 the noise × 2 (N = ½ N × 2).

Posted: Sun Nov 01, 2009 7:22 pm
by Uwe Neue
Hans:
The EP is saying (correctly) that the baseline runs through the center of the noise. Thus half the noise is above the baseline. The signal is also above the baseline.

If I want to compare the signal to the noise that interferes with the measurement of the signal, I need to divide the signal by half the noise.

(The relevant noise is half the noise. Therefore the correct measure should not be signal-to-noise, but signal-to-half-the-noise.)
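
For anyone who wants to see the arithmetic, here is a minimal sketch in Python of the signal-to-half-the-noise idea (the Ph Eur convention usually written as S/N = 2H/h); the peak height and noise values are invented purely for illustration:

Code:

    # Sketch of the EP-style calculation discussed above: the signal is
    # divided by half the peak-to-peak noise, i.e. S/N = 2*H/h.
    # The numbers below are invented purely for illustration.

    def sn_ep(peak_height, peak_to_peak_noise):
        """Signal divided by half the noise (EP convention)."""
        return 2.0 * peak_height / peak_to_peak_noise

    H = 0.0150   # peak height above the extrapolated baseline (AU)
    h = 0.0006   # peak-to-peak noise of a representative blank region (AU)

    print(f"S/N (EP) = {sn_ep(H, h):.0f}")   # -> 50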

Are we having fun yet?...

Posted: Sun Nov 01, 2009 9:25 pm
by danko
Yes, but the signal/peak doesn’t necessarily start from the center of the baseline noise. It can also start from its lowest point, or from its highest. It is a question of probability whether the absorbance reading is below, above, or right at the middle of the noise band when the substance that causes the actual signal/peak enters the flow cell.

And yes, we are having fun :D

Best Regards

Re: S/N calculation per EP and USP

Posted: Mon Nov 02, 2009 9:21 am
by krickos
John wrote:
"Certainly this will affect about a gizillion methods that have S/N as part of system suitability."

Hi John

I actually do not think this potential change mentioned in PF 35(6) will have that great an impact, because:

USP did not previously have an S/N definition in chapter <621>, nor a general S/N SST requirement for related substances as the Ph Eur does. So if you only have to comply with USP, just a few individual monographs with a specified S/N test may be affected.

If you have submitted a non-USP method, then usually the S/N calculation described in that method is still valid, i.e. no change.

Or am I missing something?

Posted: Mon Nov 02, 2009 11:21 am
by lmh
Perhaps it's time that S/N was allowed to slide into an honourable retirement in HPLC applications, and removed altogether from pharmacopoeia usage.

What "noise" are you measuring anyway? All chromatography detectors have some sort of damping (even if it's merely the length of time over which a signal is accumulated before a digital value is generated in the data file). Since the noise is greatly affected by time-constants, hidden electronic smoothings going on before you see the data, and such-like, is it a value worth measuring? A detector isn't necessarily more sensitive just because it's doing more smoothing.

Not to mention the problem of measuring noise on a sloping baseline.

We have to think about why noise is important. Noise generates uncertainty about where the true value lies. Everyone knows that some software handles noise better than others: some software draws baselines accurately through the middle of the noise, while other software can be hoodwinked by a couple of low values and draws a baseline no human would choose. If the software makes as much difference to the integrated peak areas as the noise itself does, then we need to validate our method in a way that also validates the software's handling of noise. This is where methods based on the s.d. of the calibration curve become valuable: they test the entire procedure, including integration as well as measurement.
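
As a rough illustration of the calibration-curve route (ICH Q2 style estimates LOD ≈ 3.3·σ/slope and LOQ ≈ 10·σ/slope, with σ the residual standard deviation of the regression), here is a sketch in Python with invented data:

Code:

    # Rough sketch of LOD/LOQ from the s.d. of the calibration curve
    # (ICH Q2 style: LOD ~ 3.3*sigma/slope, LOQ ~ 10*sigma/slope).
    # The concentrations and peak areas below are invented.
    import numpy as np

    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])                  # e.g. ug/mL
    area = np.array([1020.0, 2110.0, 3980.0, 10150.0, 19900.0])

    slope, intercept = np.polyfit(conc, area, 1)
    residuals = area - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)     # s.d. about the regression line (n - 2 dof)

    print(f"LOD ~ {3.3 * sigma / slope:.2f} ug/mL")
    print(f"LOQ ~ {10.0 * sigma / slope:.2f} ug/mL")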

If we agree that S/N is not the best way to calculate LOD and LOQ values, and there are better system suitability tests built into all modern software, why bother defining noise?

Posted: Mon Nov 02, 2009 12:04 pm
by danko
I’m not sure S/N is as valueless as you make it out to be.

Firstly, I’ve never seen a noiseless baseline, regardless of the electronic and software-related manipulations applied. And if some of these manipulations are beneficial, then it must be appropriate to confirm that by calculating the S/N.

As for the applicability of the test, I can mention a couple of fine examples just for the sake of the current discussion. The first is the day-to-day check of sensitivity – even though many people believe that once a method is validated (LOQ, LOD, linearity/proportionality etc. demonstrated) it will keep producing valid results for the rest of its life, no matter whether the equipment is the same as before or things have happened since the validation was completed (long story).

The second is S/N as a troubleshooting tool. It’s easy and fast to apply at different places on the chromatogram (i.e. not necessarily in the same context as originally intended), assessing different peaks etc.

The problem here is the bureaucratic approach to the matter. If one just divides these two numbers, just as we’ve always done, and uses the result in combination with a good portion of common sense, then everything will be alright and there will be time for other important chromatographic matters.
I’ve seen perfect chromatographic methods (in a bureaucratic sense), documented by means of very complicated statistics etc., and when they’re supposed to produce some reasonable results they fail - with unbelievable clarity - to do so.

Best Regards

Posted: Mon Nov 02, 2009 4:26 pm
by HW Mueller
lmh: Could you clarify the apparent contradiction? In your second paragraph you seem to say that electronic measures don't do any good, and then comes the statement, "Everyone knows that some software handles noise better than others: . . . ."

I don't just wonder whether S/N might have some use or not. I will certainly miss it if it should go (I am even willing to do this 1/2 × 2 stuff). The S/N immediately gives a picture, in one's head, of how the noise compares to the peak. With the other methods I have to get a book on statistics before I remember what might have been meant.

Also, the S/N determined on a real sample includes all the steps in the analysis.

Posted: Wed Nov 04, 2009 5:10 pm
by Rob Burgess
Just a couple of points on this.

The EP S/N value is determined from peak-to-peak measurements of the noise in the vicinity of the signal peak.

I would rather all of the pharmacopoeias came together and stipulated measuring S/N by root mean square (RMS), as advocated by the ICH guidance. This makes much more sense, as most chromatography data systems (CDS) can cope with measuring RMS noise rather than peak-to-peak noise measurements (except Empower, that is ;-) ).
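
To make the difference between the two conventions concrete, here is a rough sketch on a simulated, drift-free Gaussian baseline; the factor quoted in the comment is only a rule of thumb, not a pharmacopoeial value:

Code:

    # Contrast of the two noise measures on a simulated blank baseline
    # segment (pure Gaussian noise, no drift); values are arbitrary units.
    import numpy as np

    rng = np.random.default_rng(0)
    baseline = rng.normal(loc=0.0, scale=1.0, size=2000)

    rms_noise = baseline.std()                    # root-mean-square noise
    p2p_noise = baseline.max() - baseline.min()   # peak-to-peak noise

    print(f"RMS noise          : {rms_noise:.2f}")
    print(f"Peak-to-peak noise : {p2p_noise:.2f}")
    # For Gaussian noise the peak-to-peak figure typically comes out several
    # (roughly 5-7) times the RMS figure, which is why the two conventions
    # give very different S/N numbers for the same chromatogram.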

Posted: Wed Nov 04, 2009 8:27 pm
by krickos
Hi

Well, some brief points on my behalf.
From a validation point of view for LOD/LOQ in the pharma business, with regard to the ICH guidelines (Q2R1), using S/N or other approaches is already provided for, so :wink:

On the other hand, when it comes to system suitability, troubleshooting etc., I agree with some of the other speakers, i.e. S/N is not that bad.
If you start to look at the alternatives and also consider other "necessary" SSTs, you soon find that you may end up with an excessive number of standard/blank etc. injections (for example both calibration curves and precision +) long before you start injecting samples.
From that point of view S/N is quite tempting and perhaps good enough for many applications (imagine ~10 injections at 60 min each for old LC monographs before samples are injected).

Both USP and Ph Eur have discussed this at length, as I recall, and S/N was considered the best compromise, at least for SSTs (though it seems USP has still postponed any mandatory sensitivity SST such as S/N).

Posted: Thu Nov 05, 2009 2:00 pm
by lmh
(1) HW Mueller, yup, I didn't express myself too clearly, and I suspect you know the subject vastly better than I ever will, so this is very much merely an opinion:

What I meant by different software packages handling noise worse/better is that given a noisy baseline, some software finds quite good starting and stopping points for a peak, and draws quite a realistic baseline somewhere half-way up the noise pattern. Other software draws a baseline from some random low spike before the peak to a random spike after the peak, thereby increasing the peak area. Both packages would do fine on smoothed data, but one does badly on real, unsmoothed data, and its badness would translate to errors at the low end of the calibration curve.

I'm also not against electronic smoothing in the detector. In fact I'm a big fan of smoothing (because it really, really helps the integrator!). The way I see it, smoothing is a good thing, but it is always a compromise, because you don't want to smooth away the chromatographic peaks! A detector with lots of smoothing isn't more sensitive, but it will make it easier for you to see peaks, at the expense of not being suitable for measuring very narrow peaks (UPLC, CE). Smoothing is a matter of understanding that noise peaks are very narrow (very high frequency), while chromatography peaks are broad and gently curved (low frequency), and the smoothing has to sit somewhere appropriate in the middle.
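
As a toy illustration of that compromise (not any particular detector's firmware), here is a sketch with a simulated noisy Gaussian peak and a simple moving average; all numbers are invented:

Code:

    # Moving-average smoothing of a simulated noisy peak: a window much
    # narrower than the peak removes noise without hurting the peak, while
    # a window comparable to the peak width starts to clip it.
    import numpy as np

    t = np.arange(0.0, 60.0, 0.1)                          # time axis (s)
    peak = np.exp(-0.5 * ((t - 30.0) / 2.0) ** 2)          # Gaussian peak, sigma = 2 s
    rng = np.random.default_rng(1)
    signal = peak + rng.normal(scale=0.05, size=t.size)    # add baseline noise

    def moving_average(y, window):
        kernel = np.ones(window) / window
        return np.convolve(y, kernel, mode="same")

    for window in (5, 21, 101):              # ~0.5 s, ~2 s and ~10 s windows
        smoothed = moving_average(signal, window)
        print(f"window {window:3d} points: apparent peak height ~ {smoothed.max():.2f}")
    # The widest window visibly reduces the 1.00 peak height; that is the
    # compromise described above.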

What I meant, when I said that electronic smoothing doesn't necessarily increase the sensitivity of the detector, is that in some cases at least, the same detector with less electronic smoothing would have given the same results given a bit of software smoothing afterwards. Think of a scintillation detector for radioactive samples: the number of decays per second is a statistical matter, and if I count individual seconds in the detector and smooth over 5 points, it's not dissimilar to counting 5 seconds in the detector.

(2) Danko, sort of related to your baseline comments: I've seen ion trap SRM chromatograms where most points are zero (probably determined by the threshold at which the instrument saves data). In a data set like this, N is so badly measured that S/N is problematic. In the worst case, S/N over a set number of peak widths can become infinite, but the LOD is clearly not infinite!
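
A tiny sketch of that failure mode (hypothetical numbers): if the stored trace is mostly zeros, the measured noise can be exactly zero and a peak-to-peak S/N blows up:

Code:

    # If a thresholded SRM trace stores only zeros for the baseline, the
    # peak-to-peak noise estimate is zero and S/N is undefined/infinite.
    import numpy as np

    baseline = np.zeros(500)          # thresholded trace: no noise was stored
    peak_height = 1234.0              # some real peak on top of it (arbitrary)

    noise = baseline.max() - baseline.min()
    sn = np.inf if noise == 0 else 2 * peak_height / noise
    print(sn)                         # -> inf, although the LOD is clearly finite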

(3) Everyone: OK, OK, I'll happily concede that S/N is quite a good instant check for disasters in the system, and I'm no more keen than anyone on having to run 58 low-level standards and blanks before I get to my 3 samples, just to prove the system's working...

But for this application, who cares how it's defined? All we want is to know that the method today (in this lab) is not significantly worse than it was when it was validated (in this lab).

Posted: Thu Nov 05, 2009 3:43 pm
by HW Mueller
If a device puts out zeroes for the baseline even though this is not correct, then sensitivity via statistics doesn't do you any good either.

If you do smoothing to the point of starting to obliterate peaks, you have simply done the smoothing wrong. What I really appreciate is the smoothing that comes from preventing electronic noise. For instance, some 20 years ago we built a detector for gamma radiation, using an amplifier which was at least 10 years old at that time. The noise was at least 10 times higher than in present apparatus. In my thinking this type of preventive smoothing is among the greatest advances made in electronic analytical devices. S/N shows this in a way which is immediately comprehensible.

Posted: Mon Nov 09, 2009 11:16 am
by lmh
HW Mueller, I'm not sure you're entirely on the same wavelength as me about the sensitivity via statistics and thresholding issue, but I'm a bit fuzzy on these things.

Thresholding should have a minimal effect on the peak area, because all it does is cut off the two wings of the peak where they fall below the threshold. Yes, this will underestimate the area, and the error will be proportionally bigger for smaller peaks. The result is that the calibration curve could, theoretically, be deformed at the lowest points, but I've never seen this in practice, because the noise on these lower points is already the larger component of any deviation from a straight calibration curve. Thresholding doesn't mess up the baseline over the region of the peak itself.

Because the statistical approach works on the peak, and not the baseline either side of the peak, it is using (mostly!) measurements that haven't been cut off, so the threshold is (mostly!) irrelevant. It should be fairly safe. In fact, this is why I like it compared to theoretical discussions about the standard deviation of a zero measurement: how on earth can I quantify an HPLC peak that isn't there?

Nearly all MS systems seem to use some thresholding, so presumably it must be reasonably safe?

Posted: Mon Nov 09, 2009 11:25 am
by lmh
I'm not sure I entirely agree about the statistical approach not working if the data are thresholded.

Thresholding does, theoretically, cut off the wings of a peak, which will reduce the area, and will affect small peaks proportionally more than big peaks, but the distortion this causes to a calibration curve should never be visible, because at calibration points low enough for it to be a problem, the noise component of the error is probably already vast. The point about choosing a threshold is to make it low enough that the data it throws away were, in any case, meaningless.

Given that the threshold barely affects the peak area, and when it does, increases the apparent error in the calibration curve (in a systematic manner), the statistical approach based on peak area should be safe, and if anything should be perhaps a little overcautious.
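
For what it's worth, a crude sketch of that reasoning (all values invented): the threshold only trims the wings, so the fractional loss of area is small and grows as the peak shrinks:

Code:

    # Effect of a fixed threshold on Gaussian peak areas of different heights:
    # only the wings below the cut-off are lost, so the relative loss is
    # small and proportionally larger for smaller peaks.
    import numpy as np

    t = np.arange(0.0, 20.0, 0.01)
    threshold = 0.002                                      # detector-style cut-off

    def peak_area(height, sigma=0.5, center=10.0, cutoff=0.0):
        y = height * np.exp(-0.5 * ((t - center) / sigma) ** 2)
        y = np.where(y < cutoff, 0.0, y)                   # apply the threshold
        return np.sum(y) * (t[1] - t[0])                   # rectangle-rule area

    for height in (1.0, 0.05, 0.01):
        full = peak_area(height)
        clipped = peak_area(height, cutoff=threshold)
        print(f"peak height {height:5.2f}: area lost {100 * (1 - clipped / full):.2f} %")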

But I am a bit fuzzy in my understanding of these things.

Posted: Mon Nov 09, 2009 11:27 am
by lmh
Sorry about the near-duplicate postings; my computer was playing up today and gave me a silly error on the first go, without apparently posting anything. I should develop computer literacy before being allowed out.