Two LC's...two sets of results

Discussions about HPLC, CE, TLC, SFC, and other "liquid phase" separation techniques.

11 posts Page 1 of 1
Hi all

I am running a fairly simple isocratic LC/UV method on two "equivalent" Waters LCs. When I prepare standards and samples for analysis, one of my LCs calculates results that are roughly 5% lower (in the final calculated %analyte) than the other. I'm convinced the difference is not due to method variability: we have data showing we see this bias 100% of the time, no matter who prepares the standards and samples.

I have simply never seen this, and I am hoping someone in this group can share a similar experience. It has always been my expectation that while two LCs might produce different response factors for a particular analyte, the final calculated results should be close, just variable.
Did you check that each LC has the same method? The response factor (RF) can vary between LCs, but the result is relative to the reference standard (RS), since HPLC quantitation is based on the RS mass/volume. Was the same RS used on each LC?
What's the shape/fit of your calibration curve like?

One thing that might be happening is that on one system you have poor linearity because something has gone wrong, and as a result your samples are being read off a region of the calibration curve where there's a 5% error.

My thinking is that the situation you describe is, at first sight, impossible. If I inject 5uM standard and get an area of 10,000 on one instrument, and 10,500 on the other, then when I inject it again, pretending it is now a sample, I should get areas of 10,000 and 10,500 again, and using my one-point calibration curve, both give the same answer. No bias.

But if one system is linear and the other has gone a bit curvy, and I fit a straight line through both, and then measure samples at a point where the deviation from the straight line is significant, then I will get a bias on that instrument (but not on the instrument with a nice straight calibration). 5% is not very much. In terms of linearity of fit, and R-squared values, you can get a 5% error while still thinking the fit is nice.
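To make that concrete, here is a minimal sketch of the effect, assuming a quadratic roll-off in detector response; the coefficients and concentrations are arbitrary, chosen only to illustrate the point. Note the bias appears only when the sample is read at a different point on the curve than the standard:

```python
# One-point calibration on a linear detector vs. one with slight
# curvature. All numbers are made up for illustration.

def one_point_result(area_sample, area_std, conc_std):
    """Single-level external-standard calculation."""
    return conc_std * area_sample / area_std

def response_linear(conc):
    # Instrument A: perfectly linear, 2000 area units per uM
    return 2000.0 * conc

def response_curved(conc):
    # Instrument B: response rolls off at higher signal
    return 2000.0 * conc - 80.0 * conc ** 2

conc_std = 5.0     # standard at 5 uM
conc_sample = 4.0  # sample read at a different point on the curve

for resp in (response_linear, response_curved):
    result = one_point_result(resp(conc_sample), resp(conc_std), conc_std)
    print(f"{resp.__name__}: {result:.3f} uM")
```

The linear instrument reports 4.000 uM; the curved one reports 4.200 uM, a 5% bias, even though a line fitted through its standards would still look respectable.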

Do also check the quality of integration on both systems. A combination of events such as: (1) samples running as broader peaks than standards AND (2) one system being prone to even broader peaks than normal because of a dead volume AND (3) integrator settings that won't capture the whole of a broad peak, or that bias the baseline a bit on broad peaks, could lead to an underestimate in samples on one instrument only. This is also possible if one system has dirt that elutes near a peak of interest and combines with sample-related issues to give slightly questionable integration.
ikoiko86 wrote:
Hi all

I am running a fairly simple isocratic LC/UV method on two "equivalent" Waters LCs. When I prepare standards and samples for analysis, one of my LCs calculates results that are roughly 5% lower (in the final calculated %analyte) than the other. I'm convinced the difference is not due to method variability: we have data showing we see this bias 100% of the time, no matter who prepares the standards and samples.

I have simply never seen this, and I am hoping someone in this group can share a similar experience. It has always been my expectation that while two LCs might produce different response factors for a particular analyte, the final calculated results should be close, just variable.


The cause could be almost anything - since you don't tell us anything at all about the method or the calibration and calculations beyond "simple LC/UV isocratic" - all you are going to get from us is speculation.

Peter
Peter Apps
Thanks to all who have replied.

Peter - Yes, I realize there is missing info that could help seed the conversation. I didn't want to write a novel of info and thought I would wait for specific questions. Simple calculation:
(Asmp/Astd) * (wt std/wt smp) * G.F. * Purity * dilution scheme
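That calculation can be sketched in a few lines of Python; the function name and example numbers below are mine, not from the method:

```python
# External-standard assay:
#   (Asmp/Astd) * (wt std / wt smp) * G.F. * Purity * dilution
# Example numbers are hypothetical.

def percent_analyte(a_smp, a_std, wt_std, wt_smp, grav_factor, purity, dilution):
    return (a_smp / a_std) * (wt_std / wt_smp) * grav_factor * purity * dilution

# A detector-to-detector sensitivity difference scales sample and
# standard areas equally, so it cancels out of the result:
lc_a = percent_analyte(95000.0, 100000.0, 25.0, 25.0, 1.0, 99.5, 1.0)
lc_b = percent_analyte(95000.0 * 0.8, 100000.0 * 0.8, 25.0, 25.0, 1.0, 99.5, 1.0)
print(lc_a, lc_b)  # identical results
```

Scaling both areas by the same factor leaves the result unchanged, which is exactly why a persistent 5% bias between instruments running the same solutions is so puzzling.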

HPLC chemist - yes the same RS were used in both systems. Sample and standards were prepared and double poured...so the exact same solutions were used. I think this is the nature of your question.

lmh - for this analysis we are using a 1-point calibration, not a full curve. Samples are prepared so that sample areas are equivalent to standard areas, i.e. the same analytical concentrations. So, everything considered, bias is just not expected... but it is seen. 5% isn't much, but when one instrument calculates an acceptable result while the other determines the sample to be OOS, what's a chemist to do?

Another piece of the puzzle is that LC#1 is using a 2489 UV/Vis detector while LC#2 employs a 2996 PDA. The results calculated with the 2489 are the lower of the two result sets.
How were the results calculated? The PDA will have a lower response than a UV detector. Usually, the RS is used to bracket the sample and then to calculate the sample results (example below).

RS
Sample Prep 1
Sample Prep 2
RS
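The bracketed calculation from that sequence might look like this; the areas are invented for illustration:

```python
# Bracketing: reference the sample to the mean of the RS injections
# run before and after it. Numbers are hypothetical.
rs_before, rs_after = 100500.0, 99500.0   # RS areas drifting slightly
a_sample = 99000.0
conc_std = 5.0   # sample prepared at the same nominal level as the RS
rs_mean = (rs_before + rs_after) / 2
conc_sample = conc_std * a_sample / rs_mean
print(conc_sample)
```

Averaging the bracketing standards compensates for slow response drift between the two RS injections.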
ikoiko86 wrote:
Thanks to all who have replied.

Peter - Yes, I realize there is missing info that could help seed the conversation. I didn't want to write a novel of info and thought I would wait for specific questions. Simple calculation:
(Asmp/Astd) * (wt std/wt smp) * G.F. * Purity * dilution scheme

HPLC chemist - yes the same RS were used in both systems. Sample and standards were prepared and double poured...so the exact same solutions were used. I think this is the nature of your question.

lmh - for this analysis we are using a 1-point calibration, not a full curve. Samples are prepared so that sample areas are equivalent to standard areas, i.e. the same analytical concentrations. So, everything considered, bias is just not expected... but it is seen. 5% isn't much, but when one instrument calculates an acceptable result while the other determines the sample to be OOS, what's a chemist to do?

Another piece of the puzzle is that LC#1 is using a 2489 UV/Vis detector while LC#2 employs a 2996 PDA. The results calculated with the 2489 are the lower of the two result sets.


Depending on the slope of the single-point calibration line, any deviation in peak area could produce a large difference in calculated concentration. Unless you are able to have the sample give the exact same peak area as the RS, you can expect differences, since the detectors have different sensitivities.

To eliminate this, you would need to bracket your concentrations. If you expect 5 ppm as a normal sample concentration, run a 2.5 ppm and a 7.5 ppm standard and generate a calibration line between them. That way the slope will be more accurate within the range of expected sample concentrations. Even better would be 2.5, 5 and 7.5 ppm standards, so you can see whether there is any deviation from linearity within the range. Single-point calibrations are fine if you can accept a +/-10% or greater error in the calculation; if you need a narrower window of error, you need more points.
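A sketch of that bracketing suggestion, using an ordinary least-squares line through three standards; the peak areas are invented:

```python
# Fit a line through 2.5, 5 and 7.5 ppm standards, then read the
# sample concentration off the fitted line. Areas are made up.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

concs = [2.5, 5.0, 7.5]             # ppm
areas = [5100.0, 10000.0, 14900.0]  # nearly linear, small offset

slope, intercept = linear_fit(concs, areas)
area_sample = 9800.0
conc_sample = (area_sample - intercept) / slope
print(f"{conc_sample:.3f} ppm")
```

With a one-point calibration the nonzero intercept here would be silently forced through the origin; the fitted line accounts for it.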
The past is there to guide us into the future, not to dwell in.
hi ikoiko86,
Firstly, are you using an external standard? If so, compare the standard chromatograms from the two systems: is there a significant difference between the responses of the two peaks?
Do the same with the sample chromatograms: do they differ?
If the standards match but the samples differ, I think the error might be in the sample preparation technique.
If the difference exists in both sample and standard, and assuming you use the same column, it may be a difference between the two systems. Re-check the calibration and qualification of both systems; this type of error can originate from either the injection system or the UV lamp. Check the lamp energy and usage hours, and check the injection volume, either by plotting a calibration curve or by using an internal standard.
Actually, I prefer using an internal standard, something stable like methyl or ethyl paraben. Calculate its relative response factor against the principal peak on both systems; it must be the same.
But before all that: do you have the same retention time?
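That internal-standard check could be sketched like this; all areas and concentrations below are hypothetical:

```python
# Relative response factor (RRF): analyte response per unit
# concentration divided by internal-standard response per unit
# concentration. If only absolute detector sensitivity differs
# between the two LCs, the RRF should agree even when raw areas
# do not.

def rrf(area_analyte, conc_analyte, area_is, conc_is):
    return (area_analyte / conc_analyte) / (area_is / conc_is)

# Same solution on two systems whose sensitivities differ by 5%
rrf_lc1 = rrf(9500.0, 5.0, 4750.0, 2.0)    # less sensitive detector
rrf_lc2 = rrf(10000.0, 5.0, 5000.0, 2.0)
print(rrf_lc1, rrf_lc2)
```

If the RRFs disagree between the two systems, the problem is wavelength- or peak-specific (detector settings, co-elution, integration) rather than a simple sensitivity offset.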
Since you have two detectors, I assume you have compared peak areas, pressure, baselines from blanks, and integration consistency - have you tried overlaying chromatograms from the two detectors?
Also consider the injection technique and settings; the Alliance and Acquity have many settings.
A 3-point curve with comparable R² values would help too.
ikoiko86 wrote:
Hi all

I am running a fairly simple isocratic LC/UV method on two "equivalent" Waters LCs. When I prepare standards and samples for analysis, one of my LCs calculates results that are roughly 5% lower (in the final calculated %analyte) than the other. I'm convinced the difference is not due to method variability: we have data showing we see this bias 100% of the time, no matter who prepares the standards and samples.

I have simply never seen this, and I am hoping someone in this group can share a similar experience. It has always been my expectation that while two LCs might produce different response factors for a particular analyte, the final calculated results should be close, just variable.


Difficult question. This should not be possible. Are your individual recoveries varying by 5%? Many responses to this question are valid when comparing data from one instrument to another (area counts, RT, peak shape), but if a sample contains 100 ppb and you have two perfectly valid methods on two instruments, why would it matter if one employed a C8 column and an acetonitrile mobile phase and the other a C18 and a methanol mobile phase? If the sample has 95 ppb in it, the result should be the result. Response factors, area counts, peak shapes or RTs may be slightly different, but if both yield an r value of 0.999, which one is right?

I would use a 5-point calibration curve spanning about an order of magnitude, with your sample in the middle. Run identical vials on both instruments and you should get the same number. This also assumes you are using a method that can accurately tell the difference between a 95% and a 100% recovery. If five injections of the same vial give you 95 +/- 8% recovery, you don't have a problem; if they give 95.3 +/- 0.25%, you do.
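That repeatability check can be put in numbers; the replicate recoveries below are invented for illustration:

```python
# Can the method tell 95% from 100%? Compare the spread of replicate
# injections of one vial with the 5% difference in question.
from statistics import mean, stdev

recoveries = [95.1, 95.5, 94.9, 95.4, 95.6]  # hypothetical, %
m, s = mean(recoveries), stdev(recoveries)
print(f"mean = {m:.1f}%, sd = {s:.2f}%")
# A spread well under 5% means the instrument-to-instrument bias
# is a real effect, not injection-to-injection noise.
```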
What concerns me is the potential of a matrix effect being in play here as a result of using a different detector.

Whatever differences there are between the systems, they should ideally produce similar changes in both standard and sample responses, and hence similar results, meaning the net assay values should agree.
As a result of changing systems (or something else), your samples and standards are responding differently (assuming 5% is outside your typical RSD for standard and sample responses).

How and why this is happening are the concerns.

First thing I'd check is whether your PDA is using an excessively wide spectral bandwidth which might invite a sample matrix effect.
Thanks,
DR
