Is this an appropriate way to use an internal standard in HPLC?
Posted: Thu Jun 30, 2016 8:14 pm
The sample preparation protocol I use for HPLC is rather simple and straightforward, with good recovery and reproducibility. However, it involves a dilution step, and this dilution step can sometimes be a source of error: the solvent is an alcohol and can be difficult to pipette accurately.
Regardless, I've run some experiments to see whether an internal standard can be used to 1) flag when this error occurs and 2) correct for it. Normally the analytes are quantified against an external standard, and there is no real reason I need an internal standard for that sort of quantification.
The internal standard (IS) is added to the extraction solvent at a known concentration. If I compare a set of samples (n = 10) prepared with and without the dilution step, the %RSD of the IS peak area is 2.4% for the diluted samples versus 0.9% for the undiluted stock samples. This suggests the dilution step introduces some variation. In larger sample sets I've seen the %RSD of the IS peak area go up to about 6%.
I'm trying to think of the best way to develop a correction factor for the dilution error. The internal standard is in the solvent at 100 µg/mL. One thing I've tried: for every set of samples, generate a response curve for the internal standard in that batch of solvent over a range covering the undiluted and diluted samples (10–100 µg/mL), and fit a linear regression through the origin, y = bx. If I made a 5x dilution I should have 20 µg/mL IS, so from the regression I calculate the expected IS peak area at exactly 20 µg/mL. I then divide that expected peak area by the actual IS peak area in the sample to obtain a correction factor, and use this factor to correct the analyte peak areas.
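To make the arithmetic concrete, here is a quick Python sketch of what I mean; the calibration points, peak areas, and concentrations are just placeholder numbers for illustration, not real data.

Code:
import numpy as np

# IS calibration points measured in this batch of solvent (placeholder data)
is_conc = np.array([10, 20, 40, 60, 80, 100], dtype=float)            # ug/mL
is_area = np.array([1520, 3080, 6110, 9150, 12230, 15310], dtype=float)

# Fit a regression through the origin: area = b * conc
b = np.sum(is_conc * is_area) / np.sum(is_conc ** 2)

# Expected IS area for a nominal 5x dilution of the 100 ug/mL IS solvent
expected_area = b * 20.0

# Actual IS area measured in a given diluted sample (placeholder value)
actual_area = 2950.0

# Correction factor applied to the analyte peak areas of that sample
correction = expected_area / actual_area
corrected_analyte_area = 41800.0 * correction    # placeholder analyte area
print(correction, corrected_analyte_area)

The idea is that if the dilution is slightly off, the IS area moves with it, so the same factor should apply to the analyte areas in that sample.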
Does this make sense? How can I prove whether or not using the internal standard is more accurate than not using it? Is there a simpler way?
Thanks for any help anyone can provide.