I have been racking my brain over the use of internal standards in quantifying some of my samples. First I'll describe the sample preparation; bear with me, I'll get to the problem. I added 1 uL of an internal standard solution containing 18.75 uM 18O-labeled nicotinamide to 20 uL of milk. I then acidified the milk, incubated it on ice for some time, centrifuged to clarify the supernatant, and dried the sample down. I resuspended the residue in 25 uL.

I then set up a standard curve for nicotinamide from 0.05 uM to 30 uM. I created a 5X stock of each standard and added 5 uL of it to 20 uL containing 1 uL of the same internal standard mixture used above to prepare my samples, for 25 uL total. Therefore, the 5 uL of the 150 uM stock should produce a final concentration of 30 uM. I plotted the analyte-to-internal-standard ratio on the y-axis and the standard concentrations on the x-axis, then quantified the nicotinamide in the milk products by dividing each sample's analyte-to-internal-standard ratio by the slope of the line of best fit.

Now, the problem is this: when I quantify the nicotinamide in my sample, the concentration comes out at about 1 uM, but I know from the literature that it should be 9.8 uM. Am I making a simple math error here? I thought that once an internal standard is added, there is no need for dilution-factor corrections as long as the internal standard is at the same concentration in the samples and the standards. Can anyone see a problem in my workflow?
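To make the arithmetic explicit, here is a quick Python sketch of the dilutions as I have described them (assuming 100% recovery through the precipitation and dry-down steps; the variable names are just for illustration):

```python
# Sanity check of the dilution arithmetic, assuming full recovery
# through the acid precipitation and dry-down steps.

# --- Internal standard (IS) ---
is_stock_uM = 18.75   # concentration of the IS spiking solution
is_vol_uL = 1.0       # volume of IS added

# Sample: 1 uL IS + 20 uL milk, dried down, resuspended in 25 uL
sample_final_vol_uL = 25.0
is_in_sample_uM = is_stock_uM * is_vol_uL / sample_final_vol_uL  # 0.75 uM

# Standards: 5 uL of 5X stock + 20 uL (containing 1 uL IS) = 25 uL total
std_final_vol_uL = 25.0
is_in_std_uM = is_stock_uM * is_vol_uL / std_final_vol_uL        # 0.75 uM

print(f"IS in sample:    {is_in_sample_uM:.3f} uM")
print(f"IS in standards: {is_in_std_uM:.3f} uM")  # matches, as intended

# --- Analyte back-calculation ---
# The curve (ratio vs. concentration) reports the nicotinamide
# concentration in the 25 uL resuspension, not in the original milk.
# 20 uL of milk ends up in 25 uL, so milk conc. = measured * 25/20.
milk_vol_uL = 20.0
measured_uM = 1.0     # roughly what the curve is returning for my samples
milk_uM = measured_uM * std_final_vol_uL / milk_vol_uL
print(f"Back-calculated milk concentration: {milk_uM:.2f} uM")   # 1.25 uM
```

So as far as I can tell, the internal standard concentration really is the same in samples and standards (0.75 uM in both), and the only volume correction is the 20/25 factor, which turns ~1 uM into ~1.25 uM, nowhere near the ~10x discrepancy I am seeing.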