- Posts: 52
- Joined: Fri Sep 30, 2011 2:24 pm
I'm currently investigating a method we use in our QC lab. The method is used to quantify glycerine in an eyedrop formulation.
The problem is basically that the Agilent 6890N seems to give a high sample bias compared to the 7890A.
The 6890N gave sample results of 1.69% w/v for the first injection, then 1.47% for the second, 1.42% for the third, and 1.38% for the fourth, so the results show a downward trend from the first injection to the last.
However, the same samples run on the 7890A give consistent results at ~1.2% for all of the same injections. Why is this?
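To put a number on the drift, here is a quick sketch using only the injection values quoted above (the ~1.2% for the 7890A is treated as a single representative value):

```python
# Quantify the injection-to-injection drift on the 6890N versus the
# approximately flat 7890A result (values taken from the post above).
from statistics import mean, stdev

results_6890n = [1.69, 1.47, 1.42, 1.38]  # % w/v, injections 1-4
result_7890a = 1.2                        # % w/v, roughly constant

avg = mean(results_6890n)
rsd = stdev(results_6890n) / avg * 100    # %RSD using the sample std dev

print(f"6890N mean: {avg:.2f}% w/v, %RSD: {rsd:.1f}%")
print(f"First-injection bias vs 7890A: {results_6890n[0] - result_7890a:+.2f}% w/v")
```

This gives a %RSD of roughly 9% for the 6890N samples, well outside the <2% seen for standards, which is what makes the drift look systematic rather than random.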
I set up both systems with exactly the same parameters and column, and both went through the same equilibration before analysis.
Method: splitless; inlet: 250 °C; detector: 300 °C
Temp program: initial 100 °C, hold 4.5 min;
20 °C/min to 170 °C, hold 1 min;
20 °C/min to 200 °C, hold 7.5 min
Flow: 12 mL/min, constant flow
Purge flow: 120 mL/min at 0.5 min; purge off at 10 min
Wash solvent: methanol
The sample is made up in methanol: 1 g of sample is weighed and diluted to 50 mL with methanol, giving a glycerine concentration of ~0.02% w/v.
The column is a DB-WAX, 30 m x 0.53 mm i.d. x 1 µm film thickness, with a 10 m guard column.
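As a sanity check on that dilution, here is the arithmetic, assuming the formulation contains roughly 1.2% w/w glycerine (consistent with the 7890A results; the exact assay value is an assumption here):

```python
# Back-of-envelope dilution check (assumed values, not from the SOP):
sample_mass_g = 1.0        # 1 g of eyedrop formulation weighed out
glycerine_frac = 0.012     # assume ~1.2% w/w glycerine, per the 7890A result
dilution_volume_ml = 50.0  # diluted to 50 mL with methanol

# % w/v = grams of glycerine per 100 mL of final solution
conc_w_v = sample_mass_g * glycerine_frac / dilution_volume_ml * 100
print(f"Glycerine concentration: {conc_w_v:.3f}% w/v")
```

That works out to ~0.024% w/v, i.e. the ~0.02% w/v stated for the working solution.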
The liner is a split liner, part number 5190-2294 (glass wool in the middle). I know this is a split liner and the method is splitless, but unfortunately this was written into the method (don't ask me how or why!).
Standard injections stay very consistent (%RSD < 2%), but sample injections on the 6890N do not. Could the sample matrix be having an effect? Could the guard column also have an effect? I noticed that one of the guard columns has a really shiny appearance, while the other looks the same as the main column.
It's probably quite easy to pick holes in this method (something I have no control over), but any ideas as to why the sample response starts off very high and then decreases from injection to injection would be great.