Recommended Calibration Method


I performed the propagation-of-error calculation on the unknown concentration using the method-of-standard-additions equation derived in my previous post.

https://1drv.ms/i/s!AkH-uI0tnY5Ler-wfLLcXQVtYhc

If R1 is too close to R0, the error in the measurement goes through the roof!
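To make that blow-up concrete, here is a minimal Python sketch of the same propagation of error, assuming the simple single-addition form C_x = C_add * R0 / (R1 - R0); the actual derived equation is in the image linked above, and the numbers here are purely illustrative:

```python
import math

def std_addition_conc(c_add, r0, r1):
    """Unknown concentration from one standard addition: C_x = C_add * R0 / (R1 - R0)."""
    return c_add * r0 / (r1 - r0)

def std_addition_sigma(c_add, r0, r1, s_r0, s_r1):
    """First-order propagation of error in C_x from the responses R0 and R1."""
    d = r1 - r0
    df_dr0 = c_add * r1 / d**2    # partial derivative of C_x with respect to R0
    df_dr1 = -c_add * r0 / d**2   # partial derivative of C_x with respect to R1
    return math.hypot(df_dr0 * s_r0, df_dr1 * s_r1)

# The closer R1 gets to R0, the worse the relative error becomes:
for r1 in (2.0, 1.5, 1.2, 1.05):
    cx = std_addition_conc(1.0, 1.0, r1)
    s = std_addition_sigma(1.0, 1.0, r1, 0.01, 0.01)
    print(f"R1 = {r1}: C_x = {cx:.2f} +/- {s:.3f} ({100 * s / cx:.0f}% relative)")
```

With a constant 1% uncertainty on each response, the relative error in C_x grows from about 2% at R1 = 2*R0 to roughly 30% at R1 = 1.05*R0.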
chemengineerd wrote:
James_Ball wrote:
With either calibration method, the mole% range you are looking at is near the degree of uncertainty of the measuring instrument itself when the level is that high. If you take 100 mole% halon and inject it three times into the instrument, you will get a different area count under the peak each time; the spread in those areas is the lowest uncertainty possible for the analysis method (for the true uncertainty you should really use three times the standard deviation of the readings).

If you look at something that is 0.2 mole% and you have a 10% uncertainty window, then your numbers would vary from 0.18 to 0.22 mole%, which when subtracted from 100% gives a much smaller total uncertainty.

However, yes, you would need to calibrate for the impurities separately for the most accurate determination of purity. I know some places that will just inject an approximately pure substance, take the total area of all peaks found, subtract the areas of all unwanted peaks, and divide the remaining area by the total area to get an Area% number, which they assume is equal to % purity. This is the quick-and-dirty way of assessing purity, and many places use it. If tenths of a mole% are crucial, I would look at calibrating for and quantifying each impurity, if they are known compounds.


The most accurate determination of purity is crucial to us. So it seems like I will be moving forward with calibrating for the impurities.

Would you prefer one method over the other for daily analysis? I'd prefer not to calibrate every single day... or for every single sample. I'm assuming I can run a known amount of an impurity several times, calculate a response factor from the average peak area and the concentration of the injected impurity, and then repeat this calibration every few weeks.
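Something like this back-of-the-envelope sketch (Python, all numbers made up) is what I have in mind, combining the response factor from replicate injections with the 3x-standard-deviation uncertainty floor mentioned above:

```python
import statistics

# Replicate peak areas from injecting the same impurity standard (hypothetical values)
areas = [152340.0, 151875.0, 153102.0, 152660.0, 152011.0]
conc_ppm = 1000.0  # known concentration of the injected impurity standard

rf = statistics.mean(areas) / conc_ppm   # response factor: area counts per ppm
sigma = statistics.stdev(areas)          # injection-to-injection scatter
floor_ppm = 3 * sigma / rf               # ~3-sigma uncertainty floor in concentration units

print(f"RF = {rf:.1f} area/ppm; 3-sigma floor ~ {floor_ppm:.1f} ppm "
      f"({100 * floor_ppm / conc_ppm:.2f}% relative)")
```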


We would normally find the average concentration of a target analyte, and the approximate high and low around that concentration if it varies a lot; if it is fairly constant, just use maybe a 30% window. Make at least three standards for each target: one at the center (average) concentration, one low, and one high. This gives you a three-point linear calibration. Then, before each shift or each day, inject the middle calibration standard as a check of instrument stability; if it is close enough to the known value to satisfy your needs, consider the instrument still in calibration and continue injecting samples. After every 5-10 samples, reanalyze the middle standard and reverify the calibration. This way you have check standards bracketing your samples to verify the analysis.
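As a rough illustration, the daily check can be scripted (a minimal Python sketch; the standard levels, areas, and the +/-10% acceptance window are all hypothetical - use whatever window satisfies your needs):

```python
import numpy as np

# Three-point linear calibration: low / mid / high standards bracketing the expected range
conc = np.array([500.0, 1000.0, 1500.0])        # ppm, hypothetical impurity levels
area = np.array([76000.0, 151000.0, 228000.0])  # corresponding peak areas

slope, intercept = np.polyfit(conc, area, 1)    # least-squares line: area = slope*conc + intercept

def quantify(peak_area):
    """Back-calculate concentration from a measured peak area."""
    return (peak_area - intercept) / slope

# Before each shift (and after every 5-10 samples), re-inject the middle standard
check_area = 148500.0
recovery = quantify(check_area) / conc[1]
print(f"Mid-standard recovery: {recovery:.1%}")
if not 0.90 <= recovery <= 1.10:
    print("Check failed - recalibrate before continuing with samples")
```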
The past is there to guide us into the future, not to dwell in.
chemengineerd wrote:
Goal: Determine the response factor of my analyte gas to determine mole % in samples.

Proposed methods: external calibration or Standard Addition.

Short background: The gas is not readily available at ≥99.95 mole % from any vendor. We purchase this gas from random sellers, process it to clean it up, and send it to a 3rd-party vendor to certify its purity. Sometimes it is useful to know the purity mid-processing, and I would like to calibrate our GC-MS properly to do this. The range of mole % we typically see is between 98.00 and 99.95, and 99.60 is the minimum we can send to production.

My initial thought is to take several of the certified samples we already sent to our 3rd-party vendor, run them through our GC-MS, and create a calibration curve from those. I'm skeptical of that idea because I know I can send two samples of gas from the same batch, on the same day, and receive a variation in purity (+/- 0.05 mole % typically), but that might just be the nature of the beast... and I also feel like I leave a lot of room for error because I don't know whether all of those samples were run under the same conditions, on the same GC, etc.

The standard additions method would still require me to use a high purity sample that has been certified from our 3rd party vendor.

All comments, questions, and recommendations are gladly welcomed!


In the case under discussion, the most accurate results can be obtained with the internal standard method; here, the main component itself can be selected as the internal standard.
[Images: equations for the internal-standard calculation, not reproduced]

With best regards,
Siarhei
It seems to me that you have to know the RRF for each impurity anyway in this treatment. That means you'd have to calibrate for all of the impurities (someone said that above and I agree with them). The way I understand it, the original poster wanted to be able to get at the Halon 1301 without calibrating for the impurities. Even your treatment says you can't do it.

There's no easy way to determine the purity of a substance without determining the impurities.
CharapitsaS wrote:

In the case under discussion, the most accurate results can be obtained with the internal standard method; here, the main component itself can be selected as the internal standard.


The main problems with this are: the assumption that response factors are the same for everything, and the near certainty that the linear range of the analysis will not be wide enough to cover both the main component and the impurities - a problem that becomes more acute as the purity of the sample rises.

Peter
Peter Apps
Assuming a frictionless surface, in a vacuum and at 0 °C, the response factor will always be the same.

:) At least that is how all the basic physics class problems begin.

In the real world, response factors can change daily for a single analyte and sometimes even hourly as the instrument runs and the building temperature changes slightly.

If the impurities are in the 0.01-0.25% range, then they are at concentrations of 100-2500 ppm. As with most GC methods, we would normally state accuracy to +/-5% (or as high as +/-20%) around the determined value. A value determined to be 100 ppm +/- 5 ppm, or 2500 ppm +/- 125 ppm, would give more accurate results than measuring the Halon at 99.75% purity +/- 49,875 ppm (at 10,000 ppm per 1%), or at least so it seems to me.
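Putting rough numbers on that comparison (a minimal Python sketch; I have combined the impurity uncertainties in quadrature, which is one common convention - a straight sum would not change the conclusion):

```python
rel_u = 0.05  # assumed +/-5% relative instrument uncertainty

# Direct measurement of the main component at 99.75% purity
main_ppm = 997500.0
print(f"Direct:        {main_ppm:.0f} +/- {rel_u * main_ppm:.0f} ppm")

# Purity by difference: measure the impurities and subtract their total from 100%
impurities_ppm = [100.0, 2400.0]  # hypothetical impurity levels
u_total = sum((rel_u * c) ** 2 for c in impurities_ppm) ** 0.5  # combine in quadrature
purity_ppm = 1e6 - sum(impurities_ppm)
print(f"By difference: {purity_ppm:.0f} +/- {u_total:.0f} ppm")
```

The same 5% relative uncertainty gives +/-49,875 ppm on the direct measurement, but only about +/-120 ppm when the impurities are measured and subtracted.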
The past is there to guide us into the future, not to dwell in.
James_Ball wrote:
(...)

If the impurities are in the 0.01-0.25% range, then they are at concentrations of 100-2500 ppm. As with most GC methods, we would normally state accuracy to +/-5% (or as high as +/-20%) around the determined value. A value determined to be 100 ppm +/- 5 ppm, or 2500 ppm +/- 125 ppm, would give more accurate results than measuring the Halon at 99.75% purity +/- 49,875 ppm (at 10,000 ppm per 1%), or at least so it seems to me.


[Image: uncertainty calculation for the internal-standard method, not reproduced]

Siarhei
CharapitsaS wrote:
[Image: uncertainty calculation, as above]

Siarhei


If the concentration of the main component, which is near 100%, is unknown, and the concentration of the impurity is also unknown, can the main component still be used as an internal standard, since by definition an internal standard should have a known concentration?

In my example, the 2,500 ppm and the 997,500 ppm are estimates of a concentration, but their true values are unknown. If they are unknown, then the uncertainty can't be calculated. Also, if the instrument has a built-in uncertainty of +/-5% around any determined value, how can we have an uncertainty of 35 ppm, which for a component at 997,500 ppm would be within +/-0.0035%? If we determine the value of the main component by measuring an impurity at 2,500 ppm, then an uncertainty of 35 ppm is still 1.4%, which would still be less than the inherent uncertainty of the instrument.
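To put numbers on that objection, here is the simple arithmetic (Python):

```python
inst_rel_u = 0.05   # assumed +/-5% instrument uncertainty
imp = 2500.0        # ppm, estimated impurity level
main = 997500.0     # ppm, estimated main-component level

print(f"+/-5% of the impurity:            {inst_rel_u * imp:.0f} ppm")  # 125 ppm
print(f"35 ppm relative to the impurity:  {35 / imp:.1%}")              # 1.4%
print(f"35 ppm relative to the main peak: {35 / main:.4%}")             # 0.0035%
```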
The past is there to guide us into the future, not to dwell in.
Is your project super secret? Can you share what you're actually trying to determine and how (the mechanics of your sampling, etc.)? I'm not completely clear on what you're saying here, but I have a feeling that you might be expecting too much from your analysis.

It was described on the previous page and those were my words exactly (2nd post).
James_Ball wrote:
If the concentration of the main component, which is near 100%, is unknown, and the concentration of the impurity is also unknown, can the main component still be used as an internal standard, since by definition an internal standard should have a known concentration?

In my example, the 2,500 ppm and the 997,500 ppm are estimates of a concentration, but their true values are unknown. If they are unknown, then the uncertainty can't be calculated. Also, if the instrument has a built-in uncertainty of +/-5% around any determined value, how can we have an uncertainty of 35 ppm, which for a component at 997,500 ppm would be within +/-0.0035%? If we determine the value of the main component by measuring an impurity at 2,500 ppm, then an uncertainty of 35 ppm is still 1.4%, which would still be less than the inherent uncertainty of the instrument.


[Image: Siarhei's response, not reproduced]
“The world of chromatography is changing; laboratory scientists are moving away from being chromatographers to being chromatography users, and therefore there is a greater onus on the developers of chromatographic methods to make them more robust or at least have a greater understanding of what could cause them to fail” (Chromatography Today, Jun 14 2017).

With best regards,
Siarhei
I understand your comment, but I would argue that the posts and replies on this forum just reinforce the need for "chromatographers" and their experience - and always will - no matter how good the method protocols are.

Perhaps a discussion topic more suited to "Around the Water Cooler"?
Regards

Ralph
"RRF is the relative response factor of the detector to the i-th analyzed compound relatively to the IS substance"

This is an assumption that the relative response factor between two compounds (IS and unknown) remains constant for a given detector.

Theoretically this would be true, but in practice it is rarely the case. The more complex the instrument, the more variation over time you will encounter. A simple direct-injection port, capillary column, and FID would probably be the most stable and least prone to changes in RRF. Once you advance to something like purge-and-trap, a split/splitless injection port, capillary column, and a mass spectrometer, the changes in RRF become greater. We monitor changes in the RRF between internal standards and target analytes every 12 hours, and sometimes they can drift by more than the 20% allowed by the methods, depending on the types of samples being analyzed and variables such as room-temperature fluctuations, gas pressures, etc.
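As a concrete illustration of that 12-hour check, here is a minimal Python sketch of the RRF calculation and drift test (the 20% limit is from the discussion above; the areas and concentrations are made up):

```python
def rrf(area_analyte, conc_analyte, area_is, conc_is):
    """Relative response factor of an analyte against the internal standard."""
    return (area_analyte * conc_is) / (area_is * conc_analyte)

# RRF at initial calibration vs. a later 12-hour continuing-calibration check
rrf_cal = rrf(area_analyte=82000.0, conc_analyte=50.0, area_is=95000.0, conc_is=50.0)
rrf_now = rrf(area_analyte=61000.0, conc_analyte=50.0, area_is=94000.0, conc_is=50.0)

drift = abs(rrf_now - rrf_cal) / rrf_cal
print(f"RRF drift since calibration: {drift:.1%}")
if drift > 0.20:  # the 20% limit allowed by the methods
    print("Drift exceeds the method limit - investigate and recalibrate")
```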

Statistical calculations give us a beginning to work from in developing a method of analysis, but often do not account for the unknown variables we encounter in the practice of chromatography on a daily basis.

Sometimes it reminds me of when a former QA officer developed correction factors for our thermometers that corrected to 4 decimal places, and we had to apply them to thermometers that only read to 2 decimal places (some only read to 1). The last two (or three) digits past the decimal in the recorded reading could not be shown to be accurate, at best.
The past is there to guide us into the future, not to dwell in.
It's been a few days since I checked on this thread because I got busy with another project I'm working on, but wow - there is a lot of information and input going on here. I really appreciate everyone's comments so far.
I work in the pharmaceutical industry. When we do assay, impurity, or residual solvent testing, we use standard comparison. Of course there is a difference, but it will be under 5% from the true value established by the calibration. I think you could try a method validation and look at the %y-intercept. This could give you an idea and save you from the tedious standard preparation.
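For example, the %y-intercept from a linearity study can be pulled out of a simple fit (a minimal Python sketch with made-up data; acceptance criteria vary by method):

```python
import numpy as np

# Linearity standards as % of the target level, and their responses (hypothetical)
level = np.array([50.0, 75.0, 100.0, 125.0, 150.0])
resp = np.array([5020.0, 7480.0, 10010.0, 12390.0, 15050.0])

slope, intercept = np.polyfit(level, resp, 1)
resp_at_100 = slope * 100.0 + intercept
pct_y_intercept = 100.0 * intercept / resp_at_100  # y-intercept as % of the 100%-level response

print(f"%y-intercept: {pct_y_intercept:.2f}%")  # a small value supports simpler daily calibration
```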