Why internal standards?

So, this is a student question. A student asked it and pinned me into a corner of the lab, leaving me with a stupid expression on my face and no idea what to say. This is a rough summary of the conversation:
Me: "... so we use internal standards to compensate for losses during extraction and for changes in ionisation efficiency. But because the ionisation efficiency of a target analyte can be affected by other things that coelute, it's really important that our internal standard coelutes with the analyte. If it elutes somewhere else, it can't compensate for interfering things that elute with the analyte. That's why an isotopically-labelled version of the analyte is the best internal standard. We put the internal standard in all the samples, and the same amount in all the standards we use for the calibration curve, and we plot the ratio of peak areas instead of just the peak area of the target analyte."
Student: "why?"
Me: .... starts explanation again, wishing he hadn't gone so fast.
Student: "No, why do you do the calibration curve?"
Me: "because you need a calibration curve to know how much stuff you've got. Different analytes will have different calibration curves."
Student: "no they won't. If you use an isotopically labelled internal standard identical to each analyte, then it'll ionise exactly the same. So the ratio of peak areas will be exactly the same as the ratio of amounts. All your calibration curves will be perfectly linear with a gradient of 1"
Student and I look at a couple of calibration curves. They are perfectly linear with gradients ranging from about 0.95 to about 1.05, even when the corresponding external calibration curves are very curvy.
Student: "you see, all you're measuring is how accurately you made up the internal standard and the analyte stock! What's the point of wasting all those runs to make a calibration curve?"

I rather feel the student has a point. I understand that if the internal standard isn't chemically identical to the target analyte, we probably need to do a calibration curve, as we can't guarantee they'll have the same response. But why am I doing internal standard calibration curves when I went to the trouble of getting a deuterated version of my target analyte? Surely isotope dilution on its own is enough?
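For what it's worth, here is a minimal sketch, with made-up numbers, of the single-point calculation the student is effectively proposing (the function name and values are purely illustrative, and it assumes the labelled and unlabelled forms give exactly the same response):

```python
# Hypothetical single-point isotope-dilution calculation (illustrative only):
# assumes the labelled internal standard and the native analyte respond
# identically, i.e. the response ratio is exactly 1.

def isotope_dilution_conc(area_analyte, area_istd, conc_istd, response_ratio=1.0):
    """C_analyte = C_istd * (A_analyte / A_istd) / response_ratio"""
    return conc_istd * (area_analyte / area_istd) / response_ratio

# e.g. 50 ng/mL of the deuterated standard spiked into the sample:
print(isotope_dilution_conc(area_analyte=8000, area_istd=800, conc_istd=50.0))
# -> 500.0 ng/mL, with no calibration curve involved
```
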
If the internal standard (the isotopically labeled version) and the native analyte are at the same concentration, they will correlate accurately. If both the internal standard and the target analyte respond perfectly linearly at the detector, then you will have a near-1 r^2 relationship. But what if the analyte has a quadratic response at the detector, as some do, especially those that are reactive or thermally labile? If the target analyte responds in a quadratic fashion, then the curve will only be a nice straight line if the internal standard concentration varies along with it. If the internal standard is held constant and the target response varies as the concentration increases, then the relationship is no longer 1 to 1 but follows some type of curve fit. This cannot be compensated for with a single-point calibration, even when using an isotopically labeled internal standard.
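As a rough numerical sketch of that point (all the response coefficients below are invented): hold the internal standard constant, give the analyte a quadratic detector response, and watch the area ratio stop tracking the concentration ratio.

```python
import numpy as np

conc = np.array([0.5, 5.0, 20.0, 50.0, 100.0, 200.0])   # analyte levels, ppb
istd_conc = 50.0                                          # internal standard, ppb, held constant

def detector_response(c, a=100.0, b=-0.2):                # invented quadratic response
    return a * c + b * c**2

area_analyte = detector_response(conc)
area_istd = detector_response(istd_conc)                  # same response curve, fixed level

# If the student were right, ratio / (conc / istd_conc) would be 1.0 everywhere.
print((area_analyte / area_istd) / (conc / istd_conc))
# Instead it drifts from about 1.11 down to about 0.67 across the range,
# so a single-point factor (or a forced straight line) would be wrong.
```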

If you are analyzing something of unknown concentration, then how would you know to vary the concentration of the internal standard to match the unknown amount in the sample? That would be the only way to make it accurate without doing a calibration curve.

I know that if I do a calibration for, let's say, chlorobenzene and use chlorobenzene-d5 as my internal standard, I can sometimes see a variation in the average response factor across a calibration from 0.5 ppb to 200 ppb that approaches the 15% RSD limit, at which point I must use a linear curve fit; but that in turn may fail the r^2 > 0.995 limit, at which point I must switch to a quadratic fit. None of the results would be accurate if I used a single point with both the internal standard and the target analyte at 50 ppb, unless the sample is also near 50 ppb.
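A sketch of that kind of calibration check, with invented area ratios, would look something like this (the acceptance limits are the ones mentioned above):

```python
import numpy as np

conc = np.array([0.5, 1.0, 5.0, 20.0, 50.0, 100.0, 200.0])   # ppb, chlorobenzene
istd_conc = 50.0                                               # ppb, chlorobenzene-d5
area_ratio = np.array([0.011, 0.022, 0.108, 0.42, 1.0, 1.92, 3.6])  # analyte/ISTD, invented

rf = area_ratio * istd_conc / conc            # response factor at each level
rsd = 100.0 * rf.std(ddof=1) / rf.mean()

if rsd <= 15.0:                               # average RF acceptable
    print(f"mean RF = {rf.mean():.3f}, RSD = {rsd:.1f}%")
else:
    slope, intercept = np.polyfit(conc, area_ratio, 1)
    r2 = np.corrcoef(conc, area_ratio)[0, 1] ** 2
    if r2 >= 0.995:                           # fall back to a linear fit
        print(f"linear fit, r^2 = {r2:.4f}")
    else:                                     # last resort: quadratic fit
        print("quadratic fit coefficients:", np.polyfit(conc, area_ratio, 2))
```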

If you work with analytes that are very stable and respond to the detector in a very linear fashion over the range of expected concentrations (usually a narrow range), then you can expect what the student said to hold true. Some of my calibration curves actually look like curves, and if I apply a linear equation to them, the plot looks like a bow with a bowstring stretched between the ends, just because that is how the analyte responds to the detector. In school, with ideal situations, I too believed in the best-case scenario; once in the real world, especially when you may be trying to analyze over 100 analytes in the same run, you soon realize things don't always behave in an ideal way. :)
Thanks, your reply has been really useful.

Having thought about what you wrote, Student and I were failing to distinguish between chemical causes of non-linearity, and detector causes. I tend to assume that non-linearity in a mass spec is caused by a non-linear response of ionisation efficiency to increasing concentration, which is perfectly compensated by an isotopically-labelled internal standard (because the ionisation efficiency depends on the total concentration of analyte and has no idea whether it's labelled or not). But I forget that it can be non-linearity of the ion detection system, and as your reply said, the internal standard and the target analyte could be on totally different parts of that curve.
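A toy way to see the difference (everything below is invented): suppression that depends on the total amount of labelled-plus-unlabelled analyte entering the source cancels in the area ratio, whereas a non-linearity applied to each ion's signal at the detector does not, because the internal standard and the analyte sit at different points on that curve.

```python
import numpy as np

analyte = np.array([1.0, 5.0, 10.0, 50.0])   # pmol on column
istd = 5.0                                    # pmol, the same in every standard

def ionisation_eff(total):                    # invented: efficiency drops with total load
    return 1.0 / (1.0 + 0.02 * total)

def detector(ions):                           # invented: saturating detector response
    return ions / (1.0 + 0.0001 * ions)

# Ionisation suppression only: both species see the same efficiency,
# so it cancels and the area ratio equals the molar ratio exactly.
eff = ionisation_eff(analyte + istd)
print((analyte * eff) / (istd * eff), analyte / istd)

# Detector non-linearity only: the two signals sit on different parts
# of the saturation curve, so the ratio drifts away from the molar ratio.
print(detector(1000.0 * analyte) / detector(1000.0 * istd), analyte / istd)
```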

It makes me realise that the method of isotope dilution (which is basically what my student has reinvented) only works if you're utterly certain that the measured isotope ratios are accurate, i.e. that the detector is linear from zero up to the highest peak of interest.

I looked back at the situations where the student and I noticed that curvy external standard curves had gone utterly linear, gradient = 1, on using internal standards, and they were all situations where I suspect it was ionisation issues, not a fundamental lack of linearity in the detector. But it's quite hard to distinguish between the two! I'm now trying to think how I would...

Thanks again!

Even with ionization, if the internal standard is at 4x as much mass as the unknown, or 4x less than the unknown concentration, wouldn't it be the same problem, since the internal standard and the target would ionize at different rates due to their difference in mass in the detector? Also, is it just the total mass of the two together that matters? If the internal standard is 1 ng and the target is 10 ng, then the ionization would be different than if the target were 1 ng, since one sample would have 11 ng total and the other 2 ng total.
My understanding is that if we're talking about deuterated-X and unlabelled-X, the ionisation efficiency depends on the sum of the two, i.e. the total amount of X hitting the spray chamber irrespective of the proportion that's labelled.

So 1pmol internal standard + 1pmol unlabelled analyte will have a higher ionisation efficiency than 1pmol internal standard + 10pmol unlabelled analyte.

But it works, provided the ionisation efficiency is the same for the internal standard and the unlabelled analyte. The first case might give 1000 counts of internal standard and 1000 counts of unlabelled; the second might give 800 counts of internal standard, but should then give 8000 counts of analyte. In the first case the efficiency is 1000 counts per pmol, in the second 800 counts per pmol.
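Just to spell out the arithmetic already stated above (same made-up numbers):

```python
# The efficiency changes between runs, but within each run it is the same for
# the labelled and unlabelled forms, so the ratio still gives the right answer.
runs = [
    {"istd_pmol": 1, "analyte_pmol": 1,  "istd_counts": 1000, "analyte_counts": 1000},
    {"istd_pmol": 1, "analyte_pmol": 10, "istd_counts": 800,  "analyte_counts": 8000},
]

for r in runs:
    eff = r["istd_counts"] / r["istd_pmol"]                        # counts per pmol in this run
    back_calc = r["analyte_counts"] / r["istd_counts"] * r["istd_pmol"]
    print(f"{eff:.0f} counts/pmol, back-calculated analyte = {back_calc:.0f} pmol")
```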

This definitely happens. I almost always find that my internal standard peak area decreases as I go up my calibration curve, even though I used the same amount of internal standard in all the calibration standards (i.e. unlabelled X definitely competes with labelled X for ionisation).

Of course I can't swear that the ionisation efficiency of deuterated-X is definitely the same as unlabelled-X, but for it to be different there would need to be an isotope effect on the chemistry. Such effects do exist, because bond vibrational energies are affected by the masses of the atoms at the ends of the bonds, and that's why, in the back of my mind, I have a bias towards 13C and 15N rather than 2H (a small fractional change in mass means a small change in chemistry). But I mostly just hope that the deuteration sites aren't in chemically exciting places. I'm not a great chemist, though, and this all gets a bit scary for me. Organic chemistry mechanisms books use isotope effects as a way to determine mechanisms, so it does seem a bit alarming that I blithely assume a deuterated internal standard will behave just like its 1H sibling...
Another thing to take into consideration is that sometimes the labeled and unlabeled compounds will be separated by the column, even if only slightly, which could have an effect on the total concentration in the spray (for LCMS) at any given point in time.

On GC/MS the labeled analyte is separated from the unlabeled one most of the time, since the labeling causes changes in boiling point and possibly other interactions with the stationary phase.
Ah, you're better at chromatography than me! Strictly LC here, and usually wishing I had more resolution...
Thanks for all the helpful discussion on this; I've learned a lot thinking about the points you raised.
Well, I am still learning on the LCMS; most of my time has been spent on GC/MS, along with some ICP/MS. I have worked with LCMS for about 10 years, but it has been simple things in EPA methods, mostly off and on. More practical, not much theory.