
Replicate (duplicate) sample max allowed % difference

Posted: Fri Jul 19, 2013 9:15 am
by Pepter
Hello guys.
Another noob question :( for the specialists who analyze pesticides in food samples.
Is there a document I should follow, or should I base it on validation data, when setting the maximum % difference allowed between replicate (duplicate) samples?
I googled and found 35% in one of the EPA validation documents.

p.s.
Sorry for my English.

Re: Replicate (duplicate) sample max allowed % difference

Posted: Mon Jul 22, 2013 8:42 pm
by BHolmes
The level of precision should match the task at hand; great expectations can be a necessary motivator or an unnecessary burden.

So I would suggest that you test YOUR method to see what it is capable of: set a temporary maximum % difference and analyze a set of calibration standards 3 times on Day 1, then once each day for the next 4 days. You can make fresh standards each day or, if they are stable enough, use the same standards for all testing days. This way you test not only intraday precision (same day) but interday precision (over time) as well. If your method doesn't pass, it may be time to redevelop the method or set a higher % difference. Remember, your method is unique - we do not all have the same equipment, chemists, climate, chemicals, etc.
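The precision test above boils down to computing %RSD for the Day-1 injections (intraday) and for the once-a-day injections (interday). A minimal sketch, with hypothetical measured concentrations standing in for real data:

```python
# Precision-test sketch for the 3-injections-Day-1, then 1-per-day design.
# The concentration values below are made up for illustration.
import statistics

def pct_rsd(values):
    """Relative standard deviation as a percentage of the mean."""
    return statistics.stdev(values) / statistics.mean(values) * 100

# Measured concentrations (mg/kg) for one calibration standard.
intraday = [0.102, 0.098, 0.101]                 # 3 injections on Day 1
interday = [0.100, 0.097, 0.104, 0.099, 0.101]   # 1 injection/day, Days 1-5

print(f"intraday %RSD: {pct_rsd(intraday):.1f}")
print(f"interday %RSD: {pct_rsd(interday):.1f}")
```

Compare each %RSD against your temporary maximum; if either fails, redevelop or loosen the limit as described above.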

Most of the set limits you will find published in articles/documents were based on tests just like the one described above. But don't let the perfect (i.e. a max of 5% difference) be the enemy of the good; be an analytical chemist and not an anal chemist :lol:

I also work with pesticides and our screens are large with about 300 compounds total between two instruments. Whenever we start a new project I use the following questions to help me define the purpose of the analysis:
1. What is the need for the data?
This is especially important for pesticide analysis: prioritize the analytes! You are not going to get 100% recovery for every pesticide; there will always be a "problem child".
2. How much is the data worth and/or what is its impact?
Will it be used to enforce or set regulations or is it simply exploratory (is it there or not)?
3. What is the cost to get the final result?
By cost I mean the sum of: analysis time + chemist time + resources (lab tech time, chemicals, supplies).

Hope this helps. There is no wrong or right way, so long as you document what YOU do... To paraphrase Adam Savage of MythBusters: "Remember kids, it's not science unless you write it down!!"

Re: Replicate (duplicate) sample max allowed % difference

Posted: Thu Jul 25, 2013 8:32 pm
by Pepter
Thank you for the reply, BHolmes.
I found the info I was looking for in the SANCO document:

84. Where two or more test portions have been analysed, the arithmetic mean of the most accurate results obtained from each portion should be reported. Where good comminution and/or mixing of samples has been undertaken, the RSD of results between test portions should not exceed 30% for residues significantly above the LOQ. Close to the LOQ, the variation may be higher and additional caution is required in deciding whether or not a limit has been exceeded. Alternatively, the limits for repeatability, or reproducibility, given in Annex VI to Directive 91/414/EEC, may be applied, although these do not incorporate sub-sampling error (which is particularly important when undertaking dithiocarbamate or fumigant analyses).

And in Annex VI:
(iii) the repeatability must be less than the following values for residues in foodstuffs:

Residue level (mg/kg)    Difference in %
0,01                     50
0,1                      25
1                        12,5

The method I am working on has been initially tested and most of the replicate samples stay within <10% difference, but I still needed a document I can base it on and show to someone who will ask me ... why? where? show me, please.
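Those Annex VI values are tiered by residue level, so a duplicate check has to pick the right limit for the concentration at hand. A minimal sketch; the tier-selection rule (use the limit of the highest tabulated level not exceeding the duplicate mean) is my assumption, not spelled out in the quoted text:

```python
# Tiered repeatability check based on the Annex VI values quoted above.
# Tier selection below the lowest tabulated level is an assumption.

ANNEX_VI_LIMITS = [        # (residue level mg/kg, max % difference)
    (1.0, 12.5),
    (0.1, 25.0),
    (0.01, 50.0),
]

def allowed_pct_difference(level):
    """Return the max % difference for a given residue level (mg/kg)."""
    for threshold, limit in ANNEX_VI_LIMITS:
        if level >= threshold:
            return limit
    return 50.0  # below 0.01 mg/kg: apply the widest tabulated limit

def duplicate_pct_difference(a, b):
    """% difference between duplicates, relative to their mean."""
    mean = (a + b) / 2
    return abs(a - b) / mean * 100

a, b = 0.095, 0.105  # duplicate results, mg/kg (hypothetical)
diff = duplicate_pct_difference(a, b)
limit = allowed_pct_difference((a + b) / 2)
print(f"%difference {diff:.1f} vs limit {limit} -> "
      f"{'pass' if diff <= limit else 'fail'}")
```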

Re: Replicate (duplicate) sample max allowed % difference

Posted: Mon Jul 29, 2013 2:03 am
by mckrause
A large share of this is in the detector. If you're using UV, ECD, FPD, etc., then you can expect significantly better %RSD between replicates than if you are using LC/MS/MS. Our SOPs are written to be instrument/detector specific, as well as concentration dependent. For obvious reasons, the replication of analytical data at 1 part per million is significantly different than at 5 parts per billion.

While %RSD is required for validation, the value you really need to be concerned about is %recovery. You need to set a reasonable range for acceptable %recovery of the analyte. We spike every single food sample that comes through our door because the matrix effects in food are severe. We use a default recovery range of 70-130% for most of our methods (almost all are MS-based); if we get recoveries out of this range then we start looking for reasons.
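The matrix-spike check described above can be sketched as a simple recovery calculation against the 70-130% window; the sample values and the native-residue subtraction step here are my assumptions:

```python
# Matrix-spike %recovery check with a 70-130% acceptance window.
# Numbers are hypothetical; native residue is subtracted before
# computing recovery of the added spike.

def pct_recovery(spiked_result, unspiked_result, spike_added):
    """Recovery (%) of the spike after subtracting the native residue."""
    return (spiked_result - unspiked_result) / spike_added * 100

LOW, HIGH = 70.0, 130.0  # default acceptance window

rec = pct_recovery(spiked_result=0.185,    # mg/kg found in spiked sample
                   unspiked_result=0.012,  # mg/kg native residue
                   spike_added=0.200)      # mg/kg spiked in
verdict = "accept" if LOW <= rec <= HIGH else "investigate"
print(f"recovery {rec:.1f}% -> {verdict}")
```

Results outside the window would trigger the kind of root-cause look described above (matrix effects, degradation, instrument drift, etc.).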