I am doing method development for a 105 m Rtx-5 column, starting from a method for a 30 m HP-5 column, to analyze the same kinds of samples (DCPD streams, C5–C20). We have GCs set up with each column, and to validate the method I developed for the 105 m we're looking for consistent area %'s between the two systems for a couple of specific peaks (e.g., if peaks A and B are 49.06% and 50.04%, respectively, on the 30 m, we're looking for those same numbers on the 105 m...within reason, whatever that is).
Thus far, the results of our preliminary tests have not been good: in general, area %'s have been off by a full 1-2% per peak between the two columns. Some peaks show a larger area % difference than others, and some are pretty close.
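For reference, the comparison I'm making boils down to normalizing raw peak areas to area % on each system and looking at the per-peak deltas. A minimal sketch (the peak areas here are made up, not our actual data):

```python
def area_percent(areas):
    """Convert raw peak areas to area % of the total integrated area."""
    total = sum(areas)
    return [100.0 * a / total for a in areas]

# Hypothetical raw areas for the same matched peaks on each system
areas_30m = [12050, 12290]   # 30 m HP-5
areas_105m = [11800, 12700]  # 105 m Rtx-5

pct_30m = area_percent(areas_30m)
pct_105m = area_percent(areas_105m)

# Per-peak area % difference between the two columns
for name, p30, p105 in zip(("A", "B"), pct_30m, pct_105m):
    print(f"Peak {name}: 30 m = {p30:.2f}%, 105 m = {p105:.2f}%, "
          f"delta = {p105 - p30:+.2f}%")
```

Since area % is normalized to the total, a response change for one peak (different phase selectivity, detector response, or integration baseline) shifts the apparent % of every other peak, which may be part of what we're seeing.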
My question is: what can cause this discrepancy, and what is an acceptable difference between columns? Our samples are pretty complex, so I'm not keen on the idea of running internal standards, but I'm thinking I might have to. What are my other options?
Thanks again, ChromForum Community.
