The level of precision should match the task at hand; great expectations can be a necessary motivator or an unnecessary burden.
So I would suggest that you test YOUR method to see what it is capable of: set a temporary maximum % difference, analyze a set of calibration standards 3 times on Day 1, then 1 time each day for the next 4 days. You can make fresh standards each day or, if they are stable enough, use the same standards for all testing days. This way you will test not only intraday precision (same day) but interday precision (over time) as well. If your method doesn't pass, then it may be time to redevelop the method or set a higher % difference. Remember, your method is unique - we do not all have the same equipment, chemists, climate, chemicals, etc...
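If it helps to see the arithmetic, here is a minimal sketch of how you might crunch the numbers from that exercise. It expresses precision as %RSD (relative standard deviation), which is one common way to put a figure on the "% difference" idea; the 5% limit, the function name, and all of the responses below are made-up placeholders, not values from any real method.

```python
import statistics

def percent_rsd(values):
    """Relative standard deviation as a percentage: 100 * stdev / mean."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical instrument responses (e.g. peak areas) for one calibration level.
# Day 1 has three replicates (intraday); days 2-5 have one injection each.
day1_replicates = [10520, 10485, 10610]
daily_means = [statistics.mean(day1_replicates), 10390, 10755, 10480, 10630]  # days 1-5

intraday_rsd = percent_rsd(day1_replicates)   # same-day precision
interday_rsd = percent_rsd(daily_means)       # day-to-day precision

max_allowed = 5.0  # temporary acceptance limit in %; tighten or loosen to suit the task
print(f"Intraday %RSD: {intraday_rsd:.2f}")
print(f"Interday %RSD: {interday_rsd:.2f}")
print("Pass" if max(intraday_rsd, interday_rsd) <= max_allowed
      else "Fail - redevelop the method or reconsider the limit")
```

Run the same calculation for each calibration level and you have your intraday and interday precision in one shot.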
Most of the set limits you will find published in articles/documents were based on tests just like the one I described above. But don't let the perfect (i.e. a max of 5% difference) be the enemy of the good; be an analytical chemist, not an anal chemist.
I also work with pesticides, and our screens are large, with about 300 compounds total between two instruments. Whenever we start a new project, I use the following questions to help me define the purpose of the analysis:
1. What is the need for the data?
Now, this is especially important for pesticide analysis: prioritize the analytes! You are not going to get 100% recovery for every pesticide; there will always be a "problem child".
2. How much is the data worth and/or what is its impact?
Will it be used to enforce or set regulations, or is it simply exploratory (is it there or not)?
3. What is the cost to get the final result?
When I say cost, I mean the sum of: analysis time + chemist time + resources (lab tech time, chemicals, supplies)
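Purely as an illustration of that sum (every rate and hour below is a made-up placeholder, not a figure from my lab):

```python
# Rough cost-of-result sketch; all rates, hours, and consumable costs are hypothetical.
instrument_time_h = 6.0   # analysis/run time on the instrument
chemist_time_h   = 3.0    # method setup, data review, reporting
lab_tech_time_h  = 2.0    # sample prep

instrument_rate = 50.0    # $/h, placeholder
chemist_rate    = 80.0    # $/h, placeholder
tech_rate       = 40.0    # $/h, placeholder
consumables     = 120.0   # chemicals, vials, columns, etc.

total_cost = (instrument_time_h * instrument_rate
              + chemist_time_h * chemist_rate
              + lab_tech_time_h * tech_rate
              + consumables)
print(f"Estimated cost per final result: ${total_cost:.2f}")
```

Weigh that number against the answer to question 2 and the scope of the project usually becomes obvious.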
Hope this helps. There is no right or wrong way, so long as you document what YOU do... to paraphrase Adam Savage of the MythBusters:
"Remember kids, it's not science unless you write it down!!"