standard deviation smaller than significant digit

Discussions about GC-MS, LC-MS, LC-FTIR, and other "coupled" analytical techniques.

8 posts Page 1 of 1
Hi there,

Wanted to discuss the following. Let's say that for a targeted LC-MS analysis you use a control chart to guard quality. The chart checks whether your QC is within certain limits, based on the RSD determined during validation.

Because samples are prepared from a solution described with two significant digits, your result should also contain two significant digits, just like the QC data in the chart.

Now, what happens when your 1S (standard deviation) is smaller than 1, say 0.7, and your true value is 15? The 1S limits are then 14.3 and 15.7.

So when a QC result is 15.5, it must be rounded to 16 and will fall outside the limit, even though the raw value is actually within it, giving a false alarm.
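The scenario can be sketched in a few lines (a minimal illustration using the numbers above; the limits and QC value are taken directly from the post, everything else is assumed):

```python
# The raw QC result 15.5 lies inside the 1S limits (14.3, 15.7), but
# rounding it to two significant figures (an integer for a value near 15)
# pushes it to 16, outside the limit: a false alarm.
lower, upper = 14.3, 15.7

raw = 15.5
rounded = round(raw)  # Python rounds half to even: 15.5 -> 16

print(lower <= raw <= upper)      # True: the raw value passes
print(lower <= rounded <= upper)  # False: the rounded value fails
```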

What would be the right way to go: accept the false alarm, or make better use of the data?
Can you explain the logic behind this bit please;

"Because samples are created from a solution described in two significant digits, your result should also contain two significant digits, ...."

Peter
Peter Apps
If the calibration solutions begin at 10, then all of the calibration concentrations would be whole numbers without decimals when using only two significant digits. If he follows that throughout the process, then he has to round his check standard to 16, which is above the 15.7 limit.

Of course this also means that the two significant digits are not being applied to the control limits. If you are going to round consistently, the control limits should also be rounded, to 14 and 16 instead of 14.3 and 15.7. That would not be exactly what the 1S limit suggests, but it would follow the process applied to the calibration and sample results.
The past is there to guide us into the future, not to dwell in.
The whole thing degenerates into bad practice. The precision with which we write something down should always be greater than the precision with which we measure, which in turn should be greater than the precision with which we need to know the answer.

This is just a matter of basic numerical safety, a sort of calculating hygiene. It's important to make sure that the way we handle numbers doesn't affect the outcome of what we're doing. It's analogous to a programmer implementing an algorithm: he or she has to select number formats big enough that the precision of the calculating machine doesn't affect the result (too big is merely inefficient; too small produces the wrong result).

We need to work in a way that prevents our results from being contaminated by rounding errors. You could view these as errors of quantisation: we take a continuously-distributed variable and force it into a set of boxes. If we're worried about how the variable is distributed, which is what we're doing when we look at standard deviations and errors, then we mustn't change the distribution of the variable by our approach to rounding. If we do, our standard deviations become merely a product of our method rather than of our data.

If an error of 0.7 matters to you, there is absolutely no logic in rounding to the nearest integer, and all values should be quoted to at least one decimal place, and probably more.
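The quantisation point can be demonstrated with a short sketch (the replicate values are invented for illustration):

```python
# Rounding replicate results before computing the standard deviation can
# make the SD a product of the rounding rather than of the data: here
# four distinct raw values all collapse into the same integer "box".
import statistics

raw = [15.1, 15.3, 14.8, 15.2]
rounded = [round(x) for x in raw]  # all four become 15

print(statistics.stdev(raw))      # ~0.216, the real spread
print(statistics.stdev(rounded))  # 0.0, the spread has been rounded away
```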
^^^ In other words DO NOT round anything to your 2 digit reporting requirement until you're actually reporting the FINAL number. You will see most of these problems go away if you do this.

Now - on to what I thought this post was going to be about, and how I handled it.

I had triplicate results that all matched to their last digit. Instead of reporting a %RSD of zero, I chose to alter the last digit of one result by 1, recalculate the %RSD, and report the RSD as less than that result.

Thoughts on this practice?
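For concreteness, the workaround described above looks something like this (the replicate values are invented; only the procedure comes from the post):

```python
# Three replicates that agree to the last displayed digit give %RSD = 0,
# so the last digit of one result is altered by 1 and the %RSD is
# reported as "less than" the recalculated value.
import statistics

def pct_rsd(values):
    return 100 * statistics.stdev(values) / statistics.mean(values)

reps = [15.2, 15.2, 15.2]       # identical to the last digit
perturbed = [15.3, 15.2, 15.2]  # last digit of one result bumped by 1

print(pct_rsd(reps))                       # 0.0
print(f"%RSD < {pct_rsd(perturbed):.2f}")  # %RSD < 0.38
```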
Thanks,
DR
Same issue that lmh just addressed; you're rounding data when you shouldn't. Use raw values to calculate your QA limits and data, and then round the final values. If you use raw values you almost certainly will not have triplicate injections that match to the last digit.
Mark Krause
Laboratory Director
Krause Analytical
Austin, TX USA
Well, this actually happened for a UV assay - they did all match.
Thanks,
DR
DR wrote:
Well, this actually happened for a UV assay - they did all match.

Wow! How many digits did you have?

If I recall correctly from my days in metrology, the proper way to handle this once-in-a-lifetime event is to use the readability uncertainty: the standard deviation of a rectangular distribution whose width is the smallest readable difference, in your case the last digit of your number. In metrology, where measurements are mind-bogglingly precise, it is routine to put readability into uncertainty budgets.
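As a sketch (assuming the standard d/sqrt(12) rule for a rectangular distribution of width d; the 0.1 resolution is an invented example):

```python
# Readability uncertainty: the standard deviation of a rectangular
# (uniform) distribution of width `resolution`, where `resolution` is
# the smallest readable difference, i.e. one unit in the last digit.
import math

def readability_uncertainty(resolution):
    return resolution / math.sqrt(12)

# For a readout showing one decimal place (resolution 0.1):
print(readability_uncertainty(0.1))  # ~0.0289
```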

Peter
Peter Apps
