For an LC-MS method of mine, I use standards of the four analytes and include an external calibration curve during each run (plus periodic check standards throughout). The calibration curve is made by diluting our stock standard (which is kept in the freezer in amber glass, wrapped in foil and parafilm) and vialling a series of concentrations from 1 to 100 ppb. This is done fresh for every run.
I have tabulated the MS detector area response to the 100 ppb standard for all runs going back about a year, and the counts for each of the four analytes show significant drift over a period of weeks to months, in some cases as much as doubling. What is most puzzling to me is that the drift is generally *upward* rather than downward, and it is a fairly stable, almost linear, drift over time. I could understand a *downward* drift, e.g. if the standard were unstable, but an upward drift is hard for me to explain. Again, these are standards that are re-diluted and re-vialled fresh for every run. The calibration curves themselves are very nicely linear, for what that's worth.
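For what it's worth, here is roughly how I quantified the drift: a simple linear fit of the 100 ppb area counts against run date. (This is just a minimal sketch of the calculation; the file name "area_log.csv" and the column names below are placeholders, not my actual log.)

```python
# Minimal sketch: trend the 100 ppb standard response over time to put a
# number on the drift. Assumes a hypothetical CSV "area_log.csv" with a
# run_date column (YYYY-MM-DD) and one area column per analyte.
import pandas as pd
import numpy as np

df = pd.read_csv("area_log.csv", parse_dates=["run_date"]).sort_values("run_date")
days = (df["run_date"] - df["run_date"].iloc[0]).dt.days.to_numpy(dtype=float)

for analyte in ["analyte_1", "analyte_2", "analyte_3", "analyte_4"]:  # placeholder column names
    areas = df[analyte].to_numpy(dtype=float)
    slope, intercept = np.polyfit(days, areas, 1)       # linear fit: area counts vs. days elapsed
    pct_per_month = 100.0 * slope * 30.0 / intercept    # approx. % change per month relative to the day-0 fit
    print(f"{analyte}: {pct_per_month:+.1f}% per month (fitted day-0 area ~{intercept:.3g})")
```

Running something like this is how I arrived at the "fairly stable, almost linear" description above: the residuals around the fitted line are small compared to the overall climb.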
Is this drift being caused by the instrument, or something else? What investigative or corrective steps could I take to pin down the cause of this drift?
Thank you!