Posted: Fri Feb 20, 2009 4:27 am
by sextonrick
All excellent food for thought.
The provider of the reference sample only specifies a minimum purity - I suspect it may be higher. I'm going to try a bunch of things and then see what I come up with.
Thanks for all the posts
Posted: Sun Feb 22, 2009 7:04 am
by Bruce Hamilton
Peter and Co have identified common concerns and offered suggestions; however, I would offer a couple of alternatives.
1. Add an internal standard to your method; this will correct for many sample and injector variables.
2. Dilute your sample 1:1 and spike it with an accurate quantity of your standard, and check the recovery of the spike.
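The arithmetic behind those two checks is simple; here is a minimal Python sketch with entirely made-up peak areas and concentrations (none of these numbers come from the thread):

```python
# Sketch of the two checks above, with hypothetical numbers.

def is_corrected_conc(analyte_area, istd_area, rrf, istd_conc):
    """Internal-standard quantitation: analyte concentration from the
    analyte/IS peak-area ratio, a relative response factor (RRF), and
    the known internal-standard concentration."""
    return (analyte_area / istd_area) / rrf * istd_conc

def spike_recovery(measured_spiked, measured_unspiked, spike_added):
    """Percent recovery of a known spike: (found - native) / added * 100."""
    return (measured_spiked - measured_unspiked) / spike_added * 100.0

# Example values (illustrative only)
conc = is_corrected_conc(analyte_area=15230, istd_area=14890,
                         rrf=1.02, istd_conc=50.0)   # result in mg/L
rec = spike_recovery(measured_spiked=74.6, measured_unspiked=25.1,
                     spike_added=50.0)
print(f"IS-corrected conc: {conc:.1f} mg/L, spike recovery: {rec:.1f}%")
```

A recovery well away from 100% would point to a sample-handling or matrix problem rather than the standard's purity.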
Please keep having fun,
Bruce Hamilton
Posted: Tue Mar 03, 2009 3:34 pm
by mbicking
One more thought:
You say that your calibration is "good," but how do you determine that? The r² value is only a fair indicator of calibration quality. Even "very good" calibration curves can generate large errors in some cases.
As someone else said, the devil is in the details. If your samples are near the extreme of your calibration range (especially on the low side), these errors can easily exceed 5%. If your sample is near the middle, then maybe this is not the problem. But try to get some better statistics on your calibration: std error, analysis of residuals, etc. Look at how far the actual calibrated points are away from the curve.
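Those diagnostics are easy to compute by hand. A minimal sketch in plain Python (the calibration points are invented for illustration, not taken from this thread):

```python
# Least-squares calibration diagnostics: r-squared, standard error of
# the regression, and residuals at each standard.

def calibration_stats(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    ss_res = sum(r ** 2 for r in residuals)
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    s_yx = (ss_res / (n - 2)) ** 0.5   # standard error of the regression
    return slope, intercept, r2, s_yx, residuals

# Hypothetical 5-level calibration (concentration vs. peak area)
xs = [1, 5, 10, 50, 100]
ys = [105, 498, 1010, 5005, 9990]
slope, intercept, r2, s_yx, residuals = calibration_stats(xs, ys)
print(f"r2 = {r2:.5f}, std error = {s_yx:.2f}")
for x, r in zip(xs, residuals):
    print(f"  residual at x={x}: {r:+.1f}")
```

Note that a constant standard error translates into a much larger *relative* error at the low end of the range, which is exactly the low-concentration problem described above.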
If all this is OK, you may have multiple other issues. As others have said, the "certified" purity is often dependent on how it was determined. If you add in small errors in preparation, evaporation of volatile solvents, losses from the vials while waiting for analysis, and normal chromatographic variability, it isn't hard to get to a few percent difference.
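As a back-of-envelope illustration of that last point, several small independent random errors combine in quadrature to a few percent overall. The error estimates below are hypothetical, just to show the arithmetic:

```python
# Combine independent relative errors in quadrature (root-sum-square).
# All percentages are hypothetical illustrations.
errors_pct = {
    "volumetric prep": 0.5,
    "solvent evaporation": 2.0,
    "vial losses while waiting": 1.0,
    "chromatographic variability": 1.5,
}
combined = sum(e ** 2 for e in errors_pct.values()) ** 0.5
print(f"combined relative error: {combined:.1f}%")  # about 2.7%
```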