by lmh » Thu Feb 02, 2017 12:41 pm
Just a quick follow-up on the danger of threshold values.
Imagine your LLOQ is 27 units, defined as the concentration at which the RSD is 20%. The standard deviation at the LLOQ is therefore 5.4 units.
Now imagine someone asks you to measure a lot of samples that happen to contain exactly 27 units of stuff. When you assay them, your answers will scatter around 27 with an SD of 5.4. If you report every assay below 27 as "< LLOQ" but report the rest as real values, what happens next?
If you're unlucky, your client is less statistically aware than you and decides to use only the data with genuine reported numbers, quite reasonably excluding the rest as unmeasured/unmeasurable (a typical situation: he weeds out all the data that are too weak to report and passes the reportable values to a third person, such as his boss, who never learns that there were any unreportable data...).
50% of the data are now gone. The remaining data range from 27 upwards, with 34% of the original data (68% of what got used) somewhere between 27 and 32.4, and 16% (32% of what got used) above 32.4.
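For anyone who wants to check those fractions, here's a minimal Python sketch (the 27 and 5.4 are just the example numbers above, and it assumes normally distributed results):

[code]
from scipy.stats import norm

mu, sd = 27.0, 5.4
retained = norm.sf(mu, mu, sd)                        # P(X > 27) = 0.50
within_1sd = norm.cdf(mu + sd, mu, sd) - norm.cdf(mu, mu, sd)

print(f"fraction retained:        {retained:.2f}")                         # 0.50
print(f"27 to 32.4, of all data:  {within_1sd:.2f}")                       # 0.34
print(f"27 to 32.4, of retained:  {within_1sd / retained:.2f}")            # 0.68
print(f"above 32.4, of retained:  {norm.sf(mu + sd, mu, sd) / retained:.2f}")  # 0.32
[/code]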
If your client then averages the data he has for all the samples from that particular treatment/sample-type, instead of getting the correct mean of 27 (and knowing that the value is +/- 20%), he'll actually get a value well above 27: the mean of the retained half of the distribution is 27 + 5.4×√(2/π) ≈ 31.3. He'll also get a much smaller error if he chooses to calculate it (an SD of about 3.3 instead of 5.4, because he's thrown away half the normal distribution curve). This means his data look more trustworthy, but are more wrong.
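If it helps to see the effect directly, here's a quick Monte Carlo sketch of what the client ends up averaging (again assuming normally distributed results, with the example values from this thread):

[code]
import numpy as np

rng = np.random.default_rng(1)
true_mean, sd, lloq = 27.0, 5.4, 27.0

assays = rng.normal(true_mean, sd, size=100_000)
reported = assays[assays >= lloq]   # values below the LLOQ never reach the boss

print(f"fraction reported: {reported.size / assays.size:.2f}")  # ~0.50
print(f"mean of reported:  {reported.mean():.1f}")              # ~31.3, not 27
print(f"SD of reported:    {reported.std():.1f}")               # ~3.3, not 5.4
[/code]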
Of course, if the final end user has any sense, they will know that a final mean of ~31.3 is only marginally different from 27, and if they're seriously worried about values in this region, they need a more sensitive method. But I'm basically suspicious of working procedures that can introduce unnecessary bias into measurements. We need thresholds for decisions, but we should be aware of how their misuse can cause errors.