- Posts: 15
- Joined: Wed Oct 10, 2012 7:44 pm
I have created a calibration curve from 1–200 µg/mL and run it in triplicate. The results are linear, but if I then run the same standards again as unknown samples, the %recovery at the lower end is way off.
However, if I use an exponential fit as the line of best fit instead, the %error drops for all points and the recovery is much more acceptable.
When modelling calibration curves and looking at the %error plots, how do you decide whether the problem is sample prep or some other systematic issue, versus it simply being OK to use a different model? My error plot looks like a very broad peak: the high end has low error, the error increases toward the middle, and then it drops to a large negative %error at the low-concentration end. I assume this means I am losing analyte as I go down the curve, compounded by greater relative variation at the smaller concentrations?
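To make the shape of the problem concrete, here is a rough numpy sketch (simulated numbers, not my actual data) of what I mean by back-calculating the standards as unknowns against an unweighted linear fit; the constant-absolute-noise assumption is just for illustration:

```python
import numpy as np

# Hypothetical triplicate calibration data: nominal conc in µg/mL,
# response = 1000*conc plus a constant absolute noise term.
rng = np.random.default_rng(0)
conc = np.repeat([1.0, 5.0, 10.0, 50.0, 100.0, 200.0], 3)
area = 1000.0 * conc + rng.normal(0.0, 300.0, conc.size)

# Ordinary (unweighted) least-squares line: area = slope*conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Back-calculate each standard as if it were an unknown, then get %error
back = (area - intercept) / slope
pct_error = 100.0 * (back - conc) / conc
for c, e in zip(conc, pct_error):
    print(f"{c:6.1f} µg/mL  %error = {e:+7.2f}")
```

With noise like this the fit still looks perfectly linear, but the relative error balloons toward the 1 µg/mL end, which is roughly the kind of low-end distortion I'm describing.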
BTW, I perform the serial dilutions with micropipettes in steps of up to 300 µL each, because of the volume and cost of the standards.
Thanks