
Purge & Trap GC-MS internal standard increase, new observations


Subject line isn't really long enough to fully explain the topic but here it is.

There have been discussions here in the past about the problem we have all experienced at some time, where the internal standard response increases along with the calibration standard levels. Usually it is most prominent with 1,4-Dichlorobenzene-d4 and the later eluting analytes. The problem is that as we increase the concentration of the calibration standards in the curve, the internal standard response increases instead of remaining steady as it should. For a long time it has been postulated that this increase comes from the amount of methanol added to the standards, and that with more methanol added you get an increased response for the internal standards. EPA and others have suggested that to combat the problem you use no more than 20 ul of methanol per 100 ml of water when making a calibration curve, which means you need multiple stock standards to prepare a curve over the normal calibration range, and that in itself can introduce more points of variability into the calibration.

Over the years I have tried everything, from using multiple stock standards, to adding extra methanol so that each standard has the same amount of methanol, to using higher split ratios to see if it was water causing the problem, to lowering the EM volts as one paper suggested. Some days it would seem successful, then the next day the problem would return while doing exactly the same thing.

Recently I have been working on a SIM method based on EPA 8260 to try to calibrate down to 0.05 ppb for a project we have. To do this I was working on a calibration curve covering 0.05, 0.25, 0.5, 1.0, 2.5, 5.0, 10, 20 and 40 ppb with a 5 ml purge volume and internal standard/surrogate concentrations of 5 ppb. Through about 1.0 ppb the internals would all have consistent responses, but above that the internal standards would rapidly increase in response, with the 1,4-Dichlorobenzene-d4 reaching nearly 4x the area counts in the 40 ppb standard that it had in the 0.05 ppb standard. I tried using multiple stock standard concentrations but had exactly the same results as with a single low-concentration standard. The single low stock required 1 ul - 800 ul per 100 ml across the curve, while the multiple stocks required 1 ul per 100 ml for the low calibrator through 60 ul for the high calibrator, and both gave similar results. That would seem to rule out methanol as the culprit, so I also ran some blanks spiked with 800 ul of methanol per 100 ml and they gave the same internal standard responses as the blanks without methanol that ran before the calibration standards, ruling out methanol altogether.
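(The spike volumes follow directly from the 800x concentration range: if the 0.05 ppb standard takes 1 ul of the single stock per 100 ml, the 40 ppb standard needs 800 times that, i.e. 800 ul per 100 ml.)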

This got me thinking: what is the only other major variable in the standards? The concentration of the analytes! In this calibration the total analyte concentration of the 0.05 ppb standard is 4.35 ppb and the total analyte concentration of the 40 ppb standard is 3480 ppb (87 analytes x concentration level). Now, if you have 7 internal standard/surrogate compounds at a constant concentration of 5 ppb, that gives you total concentrations ranging from 39.35 ppb for the 0.05 standard to 3515 ppb for the 40 standard. That is almost a 9000% difference in total concentration over the curve. If you increase the internal standard concentration to 50 ppb, then you have total concentrations ranging from 354.35 ppb for the 0.05 standard to 3830 ppb for the 40 standard, an increase of only about 1080% in total concentration across the entire calibration range. With this change, the increase in the 1,4-Dichlorobenzene-d4 response across the calibration range was only 30%, instead of 200% as it was before.
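Nothing instrument-specific here; this is just the arithmetic above written out as a small Python script so anyone can check it:

levels = [0.05, 0.25, 0.5, 1.0, 2.5, 5.0, 10, 20, 40]  # calibration levels, ppb
n_analytes = 87                                         # target compounds in the mix
n_is = 7                                                # IS + surrogate compounds

for is_conc in (5, 50):                                 # IS/surrogate concentration, ppb
    totals = [n_analytes * c + n_is * is_conc for c in levels]
    spread = totals[-1] / totals[0]
    print(f"IS at {is_conc} ppb: {totals[0]:.2f} to {totals[-1]:.0f} ppb total, about {spread:.0f}x spread")
# IS at 5 ppb: 39.35 to 3515 ppb total, about 89x spread
# IS at 50 ppb: 354.35 to 3830 ppb total, about 11x spread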

I am going to run this a few more times, try to write up the findings a little more formally, and see if it holds true and is not a one-time fluke. The question now becomes: what causes this effect? Since the analytes elute at different times, the mass spec is not seeing them all at once and should not be affected by their total mass, so the concentrations of the standard analytes should not affect the response of an internal standard that is at a constant concentration and elutes separately from the other compounds. If it is not the MS, then is it the column, the inlet, the transfer line, the trap, the sparge tube, or a combination of all of the above?

Currently the fix would be to calibrate only from 0.05 ppb to 1.0 ppb, which would mean diluting most of the samples I run, because many analytes will have concentrations above 1.0 ppb while I still need the low detection limit for the others. The other fix is to use a very high concentration of internal standard that does not match up with the concentrations of the analytes quantified against it, which really isn't how good methods should work.

So, does anyone out there have any good theories as to what is actually going on with these purge and trap systems?

It was only a minor annoyance before when the shift of response was maybe 20-30% over a calibration curve that was using higher concentrations, but this ultra trace work seems to really amplify the problem.
The past is there to guide us into the future, not to dwell in.
My guess would be that there is a loss of analyte and i.s. to irreversible adsorption, with both the proportional loss and the absolute loss dependent on the concentrations of both the individual compounds and all the compounds together.

Would I be right to predict that if you plot peak area (not as a ratio to the i.s.) vs concentration you see a convex curve?
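Something like this would show it quickly (a minimal Python sketch; the CSV file name and column names are just placeholders for however you export the raw areas):

import csv
import numpy as np

# One row per calibration level: concentration and raw (un-ratioed) peak area
conc, area = [], []
with open("raw_areas.csv") as f:
    for row in csv.DictReader(f):
        conc.append(float(row["conc_ppb"]))
        area.append(float(row["area"]))

# Fit area = a*c**2 + b*c + d. A clearly non-zero quadratic term means the raw
# area vs concentration plot is curved: a < 0 flattens off towards the top of
# the curve, a > 0 bends upward.
a, b, d = np.polyfit(conc, area, 2)
print("quadratic coefficient:", a)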

Peter
Peter Apps
It seems to be analyte dependent. Some analytes will give an almost perfectly linear curve, either against the internal standard or alone. Others will be convex, and some will be concave because they seem to have lower recovery as concentration increases.

Given the way it behaves, where compounds like vinyl chloride and bromomethane have good recoveries and linearity while compounds like dichlorobenzene and trichlorobenzene are very non-linear and increase with concentration, I have wondered whether something slightly retains the compounds and acts like a chromatographic column, letting the less retained compounds pass through while the more retained compounds do not get through until you overload the retentive surface residue. The effect seems to get worse with time, and on older instruments it gets really bad, then goes away if you have the purge and trap refurbished and the flow path replaced with new tubing.

Is there a possibility that char can form from organic residues that are heated to 150C in an inert atmosphere? It seems to get worse more quickly if you have the heated zones at 150C instead of 110C.
The past is there to guide us into the future, not to dwell in.
From those other symptoms it certainly sounds like irreversible adsorption - or adsorption where the kinetics are so slow that it might just as well be irreversible. I wouldn't have thought that 150C would make char, but whatever is there sucks up aromatics, so probably has aromatic rings in it, even if it is not actually graphite.

Peter
Peter Apps
I have always thought this was due to activity in the flow path - essentially Peter's explanation.
That is essentially what it seems to be, but the question becomes: what is causing the activity? This is happening even on a brand new Encon Evolution/Centurion combo connected to a brand new Agilent 7000 QQQ. The flow path is all Silcosteel tubing, and I even changed out the normal 1/16" transfer line for a water-deactivated 0.53 mm Silcosteel guard column to see if that could be the problem, with no change in the symptoms.

Since most people I know who run volatiles use Agilent systems, I don't know whether it affects other brands of mass specs, so I can't completely rule out the MS or detector. I have not done a well-documented study on electron multipliers, so this is anecdotal, but I have noticed that the instruments where we replaced the K&M horn with an ETP ribbon-type multiplier show the problem to a lesser degree; that could just be a coincidence.

It really wasn't that evident until we started doing ultra-trace volatiles analysis, pushing detection limits down to ppt levels. The lower we push the detection limits, the worse the effect becomes. I will be doing the same thing soon with our 8270 semivolatiles analysis, looking at ppt levels of PAH compounds, so if it appears there then I will have to look more closely at the MS side as the problem. It makes me wish for the days of megabore columns and jet separators again, well, only just a little :) Back then I could calibrate from 1 ppb to 1000 ppb on a 5970 and everything but the ketones would be dead linear, with less than 10% RSD on average response factors and internal standard areas varying by less than 10%. I haven't seen that since we went to using the split inlet and 5973s.
The past is there to guide us into the future, not to dwell in.
I vote for an active site somewhere; it could even be your injection port (do you ever open it when it is above 70 C?).
I use an Agilent 6890/5973, Tekmar Stratum Vocarb 3000 trap.

I doubt it is the MeOH. I do 524.2, calibrating 58 compounds, 2 surrogates and an IS. I spike the standards with a 50 ppm working stock and calibrate from 0.5 ppb to 50 ppb. The low standard contains about 0.25 ul of methanol in the sparge vessel, and the 50 ppb standard about 25 ul. Each run adds 5 ul containing the IS and surrogates. So in short my low calibrator has 5.25 ul and the high has 30 ul of total MeOH. By my calculations my high standard contains 0.12% MeOH. I remember hearing not to exceed 1%.
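(Assuming a 25 ml purge volume, the arithmetic works out: 0.5 ppb x 25 ml of water needs 0.25 ul of a 50 ppm stock, and 30 ul of methanol in 25,000 ul of water is 30/25,000, or about 0.12%.)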

From my last calibration all RSDs were <10% except for trans-1,3-dichloropropene (10.5%).

The calibration area range for BFB was 89.5K - 91.9K counts;
for 1,2-DCB-d4 it was 78.1K - 83.1K (the high standard yielded 81.5K).
I have always been suspicious of the alternate flow paths in modern P&Ts - the flow from the sparger goes to a tee - main path to the trap, alternate paths to valves that open for dry purge, bake bypass, backflush desorb - stuff like that.

These all have some length of tubing before they reach the valves - I have always been suspicious of what is happening in that bit of dead volume - is there an effect, but we don't notice it under pristine conditions at 1 to 1000 ppb levels? When a system is damaged, the normal flow path is often replaced, but these dead zones are normally left alone. Maybe that means they do not matter, but I am always suspicious.

Perhaps modern materials (coated mounts, tubing, etc.) are good enough for these sub-ppb levels, but we are normally purging HCl all through these pathways. Even well-behaved standards and samples must degrade things quickly. This degradation I assume is not noticeable at normal working levels, but it bugs me (though I recall that purging with an HCl solution overnight used to be needed to deactivate nickel tubing in its heyday, so this can go both ways).
I think this is especially true when the waters are purged in the sparger on the P&T and the soils are purged as a needle purge on the autosampler and a Tee is used to combine the two flow paths to the trap.

I haven't tried it yet on ours, but EST has come out with an option on the new Centurion that allows it to draw a 5 ml sample from a sample vial, spike it with internal standard, and put it into an empty vial that is then sparged on the soil needle sparger. That allows you to do one calibration that works for both waters and soils, since everything flows through the same flow path. Of course EPA and NELAC would require two separate calibrations, because you must have sand or another blank matrix in there for the soil calibration, which is totally bogus scientifically speaking since neither sand nor glass beads nor boiling chips would alter the purgeability of the sample in any way.
The past is there to guide us into the future, not to dwell in.
You also have to take into account that Henry's law partitioning is concentration dependent. Your purging efficiencies will change with increasing concentration, and will depend on both the total concentration of the dissolved analytes and the concentrations of the individual components. One potential solution is to swamp the matrix with salt plus an addition of methanol that is significantly higher than that used to spike the standards. You get the benefit of matrix matching plus the added benefit of increased purging efficiency from the salt addition (salting out).
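As a rough illustration (not a model of any particular purge-and-trap, and the constants below are made-up placeholders), a simple equilibrium gas-stripping picture in Python shows how raising the effective Henry's constant by salting out improves purge recovery:

import math

def fraction_purged(H, gas_volume_ml, water_volume_ml):
    # Idealized equilibrium stripping: fraction removed after sweeping the
    # given gas volume through the sample is 1 - exp(-H * Vgas / Vwater).
    return 1.0 - math.exp(-H * gas_volume_ml / water_volume_ml)

H = 0.01      # dimensionless (gas/water) Henry's constant - placeholder for a poorly purged compound
Ks = 0.2      # Setschenow (salting-out) constant, L/mol - placeholder
Csalt = 3.0   # mol/L NaCl, a heavily salted sample - placeholder

H_salted = H * 10 ** (Ks * Csalt)           # salting out raises the effective H
print(fraction_purged(H, 440, 5))           # ~0.59 for 440 ml of purge gas through 5 ml of sample
print(fraction_purged(H_salted, 440, 5))    # ~0.97 with the salted-out H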
Mark Krause
Laboratory Director
Krause Analytical
Austin, TX USA
I can't really do the salting out since opening the vials is considered compromising the sample integrity under method 5035.

I have tried using the extra methanol before, but it doesn't seem to help, even at something like 1 ml per 40 ml of sample. I think the problem is that methanol is highly soluble in water, and the mixture increases the solubility of some organics in the water matrix above what it would be in pure water. It is like how 100 ul of methylene chloride in 1 L of water stays as a separate bead, but add 10 ml of methanol to the water and the methylene chloride will mix in.

I think you hit on the key there with Henry's law: at concentrations as low as what I am working with currently, most of these organic compounds are fully soluble in the water. The smaller molecules purge in a roughly linear fashion across the range I am working with, but the larger substituted aromatics are what give problems, and they purge better as the total organic load increases. My next try will be to double the concentration of my internal standards, which will make the total organic concentration more nearly constant across the calibration range, and see if that helps. It will be odd to have internal standards at 100 ppb and analytes ranging from 0.05 ppb to 40 ppb, but if that works, it is what I will run with.
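(Running the same arithmetic as before: with the internals at 100 ppb the totals go from 87 x 0.05 + 7 x 100 = 704.35 ppb for the low standard to 87 x 40 + 7 x 100 = 4180 ppb for the high one, so the total organic load varies only about 6x across the curve, versus roughly 11x with the internals at 50 ppb.)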
The past is there to guide us into the future, not to dwell in.
Hi,
we have the same problem with an Atomx and a 7890/5977 MS with the extractor lens.
Until now, the only thing that solves the problem - temporarily - is cleaning the source, even if it seems clean by the tune.
On this machine the frequency of source cleaning is very high compared to the older 5973 and 5975 without the EX lens.
I would like to know whether others have the same feedback on this MS...
Thank you
------------------
daniele
------------------
Regarding source cleaning - this again seems related to the source adsorbing some of the analytes as it gets dirty (or activated by water?). Higher-concentration standards would be less affected.
I am thinking it isn't just a single point in the system that is causing the problem. Years ago, when most systems were calibrated from 10 ppb up to 200 ppb for normal wastewater/soil samples and from 1 ppb up to 100 ppb for drinking waters, and we were using the 105 m 0.53 mm columns with jet separators, this was never a problem. Now that we are trying to calibrate into the sub-ppb range, plus accommodate the fact that many samples will have hits in the hundreds of ppb, the problem is much more evident, all while using the split injection port, which puts even less of the analytes through to the detector.

I first ran into it in a major way when setting up for the UCMR3 drinking water project, where they required volatiles detection limits down to 0.030 ppb for some analytes, and now I have the state here requiring 0.05 ppb for EDB and 0.1 ppb for DBCP in the landfill monitoring wells. I have a feeling the regulatory agencies are only going to make our work more difficult as they continue to push these limits lower while still holding us to methods that are decades old and were never meant for such work.
The past is there to guide us into the future, not to dwell in.
We calibrate from 0.1-50 ppb and with a few exceptions (the ketones start at 1.0) the curves look very good. We use an OI system with a 30-second desorb and a 1:100 split on 5975 systems. We have learned to clean the OI every other week and replace some parts every 1-3 months to make it work. Our systems run around the clock, but we do not clean the source very often.