by
lmh » Thu Sep 22, 2016 3:30 pm
You might want to talk with your client about this as you have two levels of replication in the experiment. The simple answer is this:
If you are routinely measuring each sample twice and taking the average, then you are reducing the analytical variability and producing a higher-quality result. Your client does not need to know how you improve your precision; all they need to worry about is the variability of the data you give them compared to the amounts, or changes in amounts, that they want to be able to discern.
If your client is providing a group of samples that are all treated the same way, and is interested in the average across those samples, but the variation between their samples is much greater than the analytical variability, then you are indeed wasting their time and money by running each sample twice. The between-samples variability overwhelms the between-duplicate-runs variability.
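To put rough numbers on that argument, here is a minimal back-of-envelope sketch (the two standard deviations are entirely hypothetical) showing why duplicate runs barely help when between-sample variability dominates:

```python
# Hypothetical standard deviations for the two levels of replication:
SIGMA_SAMPLE = 5.0       # between-sample SD (the client's variability)
SIGMA_ANALYTICAL = 0.5   # analytical SD (run-to-run on one sample)

# Variance of one reported result when each sample is run once...
var_single_run = SIGMA_SAMPLE**2 + SIGMA_ANALYTICAL**2

# ...and when each sample is run twice and the duplicates averaged
# (averaging n duplicate runs divides only the analytical variance by n).
var_duplicate_mean = SIGMA_SAMPLE**2 + SIGMA_ANALYTICAL**2 / 2

print(var_single_run)      # 25.25
print(var_duplicate_mean)  # 25.125
```

With these made-up numbers, duplicates shave the total variance from 25.25 to 25.125, which the client will never notice; if the ratio were reversed (analytical SD much larger than between-sample SD), the duplicates would pay for themselves.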
Of course, if you're in a situation where you believe that, no matter how hard you try, there is a tiny risk of mixing up samples, and this risk has unacceptable consequences, then running each sample twice makes very good sense.
Statistically, if you're collecting data with multiple levels of replication, each with a different variability stemming from a different source, and you want to do things really properly, then you could consider ANOVA or a mixed-model (variance-components) analysis to tease apart the different sources of variability. Find a friendly statistician and a decent stats package. This is where I'd recommend teaming up with your client, because someone, somewhere needs to be aware of the entire experimental design: both the part that lies in their choice of samples and the part that lies in your choice of replication.
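For the simple balanced case (every sample run the same number of times), the variance components fall out of a one-way ANOVA by hand. A sketch with invented numbers (three client samples, two runs each), using only the Python standard library:

```python
from statistics import mean

# Hypothetical duplicate runs on three client samples
data = {
    "A": [10.1, 10.3],
    "B": [12.0, 11.8],
    "C": [9.5, 9.7],
}

k = len(data)   # number of samples
n = 2           # duplicate runs per sample
grand = mean(x for runs in data.values() for x in runs)

ss_between = n * sum((mean(runs) - grand) ** 2 for runs in data.values())
ss_within = sum((x - mean(runs)) ** 2 for runs in data.values() for x in runs)

ms_between = ss_between / (k - 1)       # df = k - 1
ms_within = ss_within / (k * (n - 1))   # df = k(n - 1)

# Expected mean squares: E[MS_within]  = var_analytical
#                        E[MS_between] = var_analytical + n * var_sample
var_analytical = ms_within
var_sample = max((ms_between - ms_within) / n, 0.0)

print(var_analytical)  # analytical (duplicate-run) variance component
print(var_sample)      # between-sample variance component
```

On these invented numbers the between-sample component comes out far larger than the analytical one, which is exactly the situation where extra duplicates buy the client very little.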
What you mustn't do (and from the way you write, I'm quite sure you wouldn't think of it; this is for posterity, and I'm not going to name names of major analytical groups who might just possibly have done this) is take 3 samples run twice each and call it 6 replicates in a t-test or a calculation of standard deviation. That's naughty: the duplicate runs are not independent observations.
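To see the naughtiness in numbers, here's a quick sketch (invented values again) comparing the standard error you'd claim by pretending n = 6 against the honest n = 3 on the per-sample means:

```python
from math import sqrt
from statistics import mean, stdev

# Three samples, each injected twice (hypothetical values)
runs = {"A": [10.1, 10.3], "B": [12.0, 11.8], "C": [9.5, 9.7]}

# Naughty: pretend the 6 runs are 6 independent replicates
flat = [x for pair in runs.values() for x in pair]
se_naughty = stdev(flat) / sqrt(len(flat))   # n = 6

# Honest: average the duplicates first, then use the n = 3 sample means
means = [mean(pair) for pair in runs.values()]
se_honest = stdev(means) / sqrt(len(means))  # n = 3

print(se_naughty)  # smaller: overstates your confidence
print(se_honest)   # larger: the defensible standard error
```

The pretend n = 6 gives a noticeably smaller standard error (and more degrees of freedom in a t-test) than the data actually support, which is exactly why it's naughty.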