At my company, we have two different laboratories, each with identical Shimadzu/Sciex LC-MS/MS triple quad instruments. We are testing production samples for four ppb-level analytes of interest in a complex biological matrix. Both labs use the same external standards, identical extraction and sample prep protocols, and identical LC and MS methods. Both labs produce excellent calibration curves, both get non-detect results in blank injections, and both get good results on check standards every 10 injections. But when we inject extracted samples, one lab tests positive (generally 10-50 ppb for each of the four analytes) while the other lab consistently tests negative for all analytes.
How could it be that both labs get nice, linear calibration curves, including very similar raw area counts for the standards, as well as low backgrounds in blanks, yet get radically different results for samples? The calibration curve runs from 10-100 ppb, and one lab will consistently find, say, 50 ppb where the other lab finds <LOD.
I understand that our sample matrix is very complex, and we are aware of matrix suppression and recovery issues, but detecting a very large amount of analyte on one instrument and none whatsoever on the other makes no sense to me when both instruments give excellent, very similar calibration curves.
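For concreteness, here is a minimal sketch of how we back-calculate results and sanity-check matrix effects. The fit is a simple linear external-standard model, and all the numbers below are hypothetical, not our actual data:

    import numpy as np

    # Hypothetical calibration data; numbers are made up for illustration only.
    cal_conc = np.array([10.0, 25.0, 50.0, 75.0, 100.0])      # ppb
    cal_area = np.array([1.0e4, 2.5e4, 5.0e4, 7.5e4, 1.0e5])  # raw area counts

    # Ordinary least-squares linear fit: area = slope * conc + intercept
    slope, intercept = np.polyfit(cal_conc, cal_area, 1)

    def back_calc(area):
        """Back-calculate concentration (ppb) from a raw peak area."""
        return (area - intercept) / slope

    # With identical curves, the reported result depends only on the raw
    # sample area, so a 50 ppb vs. <LOD discrepancy means the two labs
    # must be recording very different areas for the same extract.
    print(back_calc(5.0e4))   # -> ~50 ppb

    # Post-extraction spike check for ionization suppression:
    # ME% = 100 * (spike area in extracted blank matrix) / (spike area in solvent)
    area_in_matrix = 2.0e4    # hypothetical
    area_in_solvent = 5.0e4   # hypothetical
    print(100.0 * area_in_matrix / area_in_solvent)   # -> 40.0 (strong suppression)

The point of the sketch is that, given identical curves, the only free variable left is the raw sample area itself, which is exactly what makes the discrepancy so strange to us.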
I would greatly appreciate any feedback whatsoever on this. It is so puzzling to me and my coworkers.
Thank you!