- Posts: 3
- Joined: Wed Dec 28, 2011 11:47 am
In order to evaluate the efficiency of some catalysts, I used to perform GC-FID analyses almost daily. Normally I added a solution of an internal standard to the sample withdrawn from the reaction vessel (making a new solution of equal volumes of the internal-standard solution and the sample solution collected from the reaction vessel). In effect, I diluted the real sample 1:2. Naturally, I prepared the calibration curves in the same way, that is, by preparing different solutions of the analyte (A + B + C) and adding to each, before injection, the same quantity of the standard solution (D); so in practice I injected new solutions (A’ + B’ + C’).
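For what it's worth, the arithmetic behind this procedure can be sketched roughly as follows. This is only an illustrative sketch of single-point internal-standard quantification; the function names and all the numbers are my own assumptions, not values from the post, and a real calibration would of course use a multi-point curve.

```python
# Minimal sketch of internal-standard (IS) quantification for GC-FID.
# Assumes a linear response; all names and numbers are illustrative.

def response_factor(area_analyte, area_is, conc_analyte, conc_is):
    # RF = (A_analyte / A_IS) / (C_analyte / C_IS), taken from a
    # calibration solution that contains both analyte and IS
    # (the A' + D mixtures described above).
    return (area_analyte / area_is) / (conc_analyte / conc_is)

def quantify(area_analyte, area_is, conc_is, rf):
    # Because analyte and IS are diluted together 1:2, the area
    # ratio is unaffected by the dilution, provided the calibration
    # solutions were prepared the same way.
    return (area_analyte / area_is) * conc_is / rf

# Hypothetical calibration point and unknown sample:
rf = response_factor(area_analyte=1500.0, area_is=1000.0,
                     conc_analyte=3.0, conc_is=2.0)   # -> 1.0
c = quantify(area_analyte=900.0, area_is=1000.0,
             conc_is=2.0, rf=rf)                      # -> 1.8
print(rf, c)
```

The key point is that only the area *ratio* enters the result, which is why preparing the calibration standards exactly like the samples makes the dilution drop out.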
In other research groups I saw a different procedure:
They simply add a known amount of internal standard directly to the reaction vessel at the very beginning, along with the reaction mixture. This way they avoid adding the standard solution to each collected sample (since the collected sample already contains the standard) and inject it directly into the GC. Naturally, they prepare their calibration curves according to the same procedure, i.e. with the standard solution and the analyte solution in the same flask...
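As I understand it, the second procedure amounts to an amount-based rather than a concentration-based calculation, since the total moles of standard in the vessel are known. A hedged sketch, with assumed names and numbers of my own:

```python
# Sketch of the second procedure: a known amount of IS is charged to
# the reaction vessel, so the moles of analyte follow directly from
# the area ratio. Illustrative values only, not from the post.

def moles_analyte(area_analyte, area_is, moles_is, rf):
    # n_analyte = (A_analyte / A_IS) * n_IS / RF. No dilution
    # bookkeeping is needed, because analyte and IS sit in the same
    # mixture and are withdrawn and injected together.
    return (area_analyte / area_is) * moles_is / rf

n = moles_analyte(area_analyte=1200.0, area_is=800.0,
                  moles_is=0.010, rf=1.0)   # -> 0.015 mol
print(n)
```

One practical caveat either way: this only works if the internal standard is inert under the reaction conditions, which is presumably why some groups prefer adding it after sampling.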
I would like to know your opinion about these two methods of quantification… which one do you think is more correct?
Many thanks
jurty
