
Zero calibrator, force-through-zero or....

Discussions about GC-MS, LC-MS, LC-FTIR, and other "coupled" analytical techniques.

Hi!

I have a simple question on standard curves. Searching through these forums I see that calibration curves have been discussed quite a lot, but I can't find an answer to my particular question.

The method involves quantification in biological matrices. I have good linearity in the measuring range (1/x weighted). Dilution experiments demonstrate this down to the quantification limit of 0.1 nmol/L (S/N > 10-15). Expected values range from 90-110%. Linear statistics show that b ± 1.96×SD(b) (as in y = m*x + b) includes zero. The blank is a real unspiked biological matrix, and there is no trace of signal. I use 3 calibrators (plus a zero calibrator).
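In Python, the intercept check could be sketched roughly like this (the calibrator levels and responses below are invented for illustration, not my actual data):

```python
# Minimal sketch of the intercept check: 1/x-weighted straight-line fit,
# then test whether the 95 % confidence interval of b includes zero.
# Calibrator levels and responses here are invented for illustration.
import numpy as np

conc = np.array([0.2, 0.5, 2.0, 10.0, 20.0])      # nmol/L (hypothetical)
resp = np.array([0.021, 0.049, 0.198, 1.02, 2.01])

# np.polyfit minimises sum((w_i * residual_i)^2), so w = 1/sqrt(x) gives 1/x weighting
coef, cov = np.polyfit(conc, resp, 1, w=1/np.sqrt(conc), cov=True)
slope, intercept = coef
sd_b = np.sqrt(cov[1, 1])

lo, hi = intercept - 1.96*sd_b, intercept + 1.96*sd_b
print(f"y = {slope:.4f}*x + {intercept:.4f}")
print(f"95% CI of intercept: [{lo:.4f}, {hi:.4f}]; includes zero: {lo <= 0 <= hi}")
```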

However, I find that the day-to-day precision is inferior if I include the lowest calibrator (0.2 nmol/L), compared to forcing the standard curve through zero or including a zero calibrator. The CV% of the lowest calibrator is 15-20%.

What is the correct procedure?

1. Force the standard curve through zero.
2. Include a zero calibrator? (Practically this does the same thing as 1.)
3. Use the 0.2 nmol/L calibrator (and NOT 1 and/or 2).

Anna

You should keep your lowest calibrator - the reason that its repeatability is poorer than the other points' is that you have losses of analyte that are not consistent from run to run. This is useful information that tells you the truth about the performance of the method at very low levels of analyte. With a blank, or forcing through zero, you cannot, for obvious reasons, see any effect of inconsistent analyte losses, because there is no analyte to start with, and the performance of the method looks better than it really is.

Peter
Peter Apps

I see your point, although I think there are also convincing arguments for choosing option 1 or 2.

First - the day-to-day precision of 3 different samples at the LLOQ is quite convincing when forcing through zero (CV 10-15%).

Second - dilution of 5 different pools to blank (each sample prepared individually 5 times) consistently demonstrates linearity down to the LLOQ (and even further).

This absolutely isn't as simple as it might seem.

You have 3 calibration points; in my limited experience people tend to believe their calibration curves are straighter than they are. Frequently there is a slight tendency to level off at high concentration. This means that you will often see curves where the middle point is slightly above the line, and the top and bottom points just below the line. In a case such as this, values close to the lower limit of quantification (or close to zero if you have samples that fall below the limit of quantification) can be very badly underestimated, because the software is expecting a positive signal for no analyte whatsoever.

You need to have the best possible calibration curve for the region in which your samples fall, and you should be testing it with standards similar to the samples (and ideally choosing your points to span the proper range of concentrations).

Practically, you need to check whether you are getting random fluctuations in your lowest point, or whether it's always out by the same sort of amount. If you begin to suspect a curved calibration curve, then consider fitting a curve rather than a straight line. If any cali point is consistently above or below the line, then there is something wrong with the line (or, if you're really unlucky, you have losses of fixed amounts of analyte, "titrated out", rather than a general percentage loss during sample preparation).
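As a rough illustration of that check (the back-calculated values below are invented, not data from this thread), something along these lines separates a consistent offset from random scatter:

```python
# Rough check: is the lowest calibrator randomly scattered around the line,
# or consistently out by the same sort of amount? (Invented values.)
import numpy as np

nominal = 0.2  # nmol/L, lowest calibrator
back_calc = np.array([0.17, 0.23, 0.18, 0.16, 0.24, 0.19])  # from several runs

bias_pct = 100 * (back_calc.mean() - nominal) / nominal
cv_pct = 100 * back_calc.std(ddof=1) / back_calc.mean()

print(f"mean bias: {bias_pct:+.1f}%  (a consistent offset suggests curvature or fixed losses)")
print(f"CV:        {cv_pct:.1f}%  (a large CV with small bias suggests random fluctuation)")
```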

If you have to handle retrospective data, and realise that most of the samples are right down at the low end, but you suspect that your cali curve is indeed curved, think too about whether it is the top point that is levelling off. It may actually be more sensible to delete the high-end point, even though it is repeatable.

If you choose Peter's advice of keeping the bottom point and ignoring zero, you absolutely must not, under any circumstances, consider quantifying anything that falls between zero and the bottom point. You have an accumulating error between the two, and you really have only the vaguest guess how big the error is. Some software provides a "join lowest point to the origin" option, but I find this a bit of a fudge. It just feels wrong to have a calibration curve with a kink in it.

If you choose a quadratic curve, remember that it will fit 3 points perfectly, whatever the points, so statistical checks on fit become meaningless. You can also force a quadratic curve through the origin; this can frequently be a good compromise between keeping all your cali points and making sure low values don't come out plain silly. After all, we know that no analyte = no signal.
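If you want to try the quadratic-through-origin idea, a minimal sketch (made-up numbers, Python/NumPy) is:

```python
# Quadratic forced through the origin, y = a*x^2 + b*x, by least squares.
# Concentrations and responses are invented; note the slight roll-off at the top.
import numpy as np

conc = np.array([0.2, 2.0, 10.0, 20.0, 50.0])
resp = np.array([0.021, 0.205, 1.01, 1.95, 4.60])

# design matrix has columns x^2 and x but no constant column, so no intercept
X = np.column_stack([conc**2, conc])
(a, b), *_ = np.linalg.lstsq(X, resp, rcond=None)

resid_pct = 100 * (a*conc**2 + b*conc - resp) / resp
print(f"y = {a:.6f}*x^2 + {b:.5f}*x")
print("residuals as % of response:", np.round(resid_pct, 1))
```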

Good luck!

For bioanalytical methods I would follow the FDA's guidance for industry

http://www.fda.gov/CDER/GUIDANCE/4252fnl.htm

They recommend a minimum of six standards, excluding the blanks, for calibration.

Also, I have never forced through zero or used the zero in the calibration for this type of assay, as there are acceptance criteria for response levels in blank solutions.

(I've worked in the regulatory toxicology field for 20+ years)
Good judgment comes from bad experience, and a lot of that comes from bad judgment.

OK...this is actually quite interesting!


I would argue that a dilution experiment should be adequate to prove continued linearity down to the LLOQ. Of course, this should be done by diluting in analyte-free matrix, otherwise identical to the matrix to be measured. It's simple to calculate the actual (theoretical) values in a dilution experiment and compare them to what you actually measure. If the linearity doesn't hold up, you should see deviations from the predicted values.
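For illustration, the arithmetic of such a dilution check is nothing more than this (all numbers invented):

```python
# Dilution-check arithmetic: predicted values from the dilution factors
# versus measured values. All numbers here are invented.
import numpy as np

stock = 8.0                                   # nmol/L, measured undiluted pool
dilution_factors = np.array([2, 5, 10, 20, 40, 80])
predicted = stock / dilution_factors
measured = np.array([4.1, 1.55, 0.83, 0.41, 0.19, 0.095])

recovery_pct = 100 * measured / predicted
print("predicted (nmol/L):", np.round(predicted, 3))
print("recovery (%):      ", np.round(recovery_pct, 1))
# recovery drifting away from ~100 % at the highest dilution factors would
# flag loss of linearity near the LLOQ
```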

I cannot see that any of your arguments shows this logic to fail. Can anyone explain where I go wrong? (If I do... :P )

Btw: during the initial method development we actually used an 11-point calibration curve, not including the origin. Since the runtime is rather long this turned out to be impractical, and we reduced the number of calibrators to just 3. The regression line equation did not change. Evaluated visually, plotting the 11-point linearity curves on log-log charts, the linearity is close to perfect. But if we extend the measurement range at the higher end, it's obvious that very high calibrators are slightly underestimated.

I emphatically go with Peter on this one: Fudging is fooling yourself and others.

Beware the assumption that no analyte equals no signal. There will be no signal generated from the analyte when there is no analyte, BUT I've seen small humps in the baseline integrated and areas generated from those humps. (Thus I use the procedure in the EPA methods - and avoid this trap.) The other thing I have noticed when small humps are selected: the human eye seems to be good at picking out a fairly reproducible chunk of noise, and the RSD of the noise can be pretty good when compared to the RSD of the lowest "real" standard!

I won't tell you how I know... :roll:

(1) About picking up noise in the baseline as a signal:

Don's right, both about baselines and the human eye. For human-eye reasons, I don't trust manual integration (and haven't done it for years). But personally I also set my integrator less sensitive, so it won't find baseline noise. If you have it more sensitive, there will be peaks all along the baseline. This has the disadvantage that if your retention times wander (and I won't tell you why I'd worry about a thing like that... never happens, does it?), there is a myriad of spurious peaks just waiting to be misidentified...

(2) Do we include zero?

JGK correctly and usefully quotes the FDA guidelines. Now, these guidelines suggest we should use the simplest regression that we can for the calibration curve. If you force through zero, you are using the equation y = mx. If you don't force through zero, you are using y=mx+c. Therefore the guidelines actually hint we should force through zero, though I'm sure that wasn't in the minds of those who wrote them, nor is it necessarily routine practice.

But does it matter? The guidelines also state that the calibration points must be within +/-15%, and that the LLOQ point must be within +/- 20%. This means that however you arrived at your cali curve, any measured points should be within +/-15% unless they're near the LLOQ, in which case they may be +/-20%. Frankly, whether you forced through zero or not, you must arrive at a line that fits the points, or you won't fit the criteria! If forcing through zero makes a significant difference to the accuracy of the LLOQ standards, then there is a serious problem with the calibration curve. It's more important to solve the problem than to debate whether to include zero.
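To make this concrete, here is a small sketch (invented responses, Python/NumPy) that back-calculates each calibrator under both models and applies the 15%/20% criteria:

```python
# Back-calculate each calibrator under both models and apply the
# 15 % / 20 % (LLOQ) acceptance limits. Responses are invented.
import numpy as np

conc = np.array([0.2, 2.0, 20.0])
resp = np.array([0.019, 0.201, 2.02])
lloq = 0.2

m0 = np.sum(conc * resp) / np.sum(conc**2)   # y = m*x, forced through zero
m1, c1 = np.polyfit(conc, resp, 1)           # y = m*x + c

for name, back in [("through zero  ", resp / m0),
                   ("with intercept", (resp - c1) / m1)]:
    err = 100 * (back - conc) / conc
    limits = np.where(conc == lloq, 20.0, 15.0)
    print(name, "errors %:", np.round(err, 1),
          "pass:", bool(np.all(np.abs(err) <= limits)))
```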

(3) Why the day-to-day precision improves if you force through zero

If you measure something in the middle of a calibration curve, the precision of this measurement depends on errors in the measurement and errors in the calibration curve. The error in the middle of a 2-point calibration curve is the average of the errors at either end. If you force through zero, then the error at one end becomes zero, so the precision in the middle is improved. But the fact remains that unless you have all the calibration points, you cannot be sure that the accuracy is good over the whole range. The standards at the LLOQ also provide a genuine measurement of the precision expected at the LLOQ, which is an important thing.
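A tiny simulation illustrates the point; the 10% proportional noise is an assumption, not a figure from this method:

```python
# Simulation of the argument above: run-to-run spread of a result near the
# LLOQ, with and without forcing through zero. 10 % proportional noise is an
# assumption, not a measured figure.
import numpy as np

rng = np.random.default_rng(1)
conc = np.array([0.2, 2.0, 20.0])
unknown = 0.3                                  # nmol/L, near the LLOQ
results = {"through zero": [], "with intercept": []}

for _ in range(2000):
    resp = conc * (1 + rng.normal(0, 0.10, conc.size))      # noisy calibrators
    y_unknown = unknown * (1 + rng.normal(0, 0.10))          # noisy sample
    m0 = np.sum(conc * resp) / np.sum(conc**2)               # y = m*x
    m1, c1 = np.polyfit(conc, resp, 1)                       # y = m*x + c
    results["through zero"].append(y_unknown / m0)
    results["with intercept"].append((y_unknown - c1) / m1)

for name, r in results.items():
    r = np.array(r)
    print(f"{name:15s} CV at {unknown} nmol/L: {100*r.std(ddof=1)/r.mean():.1f}%")
```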

(4) FDA guidelines and ignoring points

The deeply scary bit of the FDA guidelines is the fact you are allowed to throw away (more or less at will) a third of calibration and QC points. This has two consequences. Firstly it means the +/-15% limit isn't really 15%. It can actually be substantially higher, because you can trim outliers and still pass. Secondly it has consequences for how many replicates you must do of each sample. If you expect 33% of QC samples to be outside an acceptable window, then 33% of real samples will also be outside the window. Unfortunately, for real samples, in contrast to QC samples, you don't know where the window should be. Scaled up to 100 real samples, how many replicates of each do you need to make sure that none of the 100 samples is outside a window of +/-25%?
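As a back-of-the-envelope sketch of that scaling argument (the failure probabilities below are assumptions, not figures from the guidance):

```python
# Back-of-envelope scaling: if a single determination has probability p of
# falling outside its window, the chance that at least one of 100 samples
# does so grows very quickly. The probabilities below are assumptions.
n_samples = 100
for p_single in (0.33, 0.05, 0.01):
    p_any = 1 - (1 - p_single) ** n_samples
    print(f"p(single) = {p_single:4.2f} -> P(at least one of {n_samples}) = {p_any:.3f}")
```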

In my view, this license to delete points is deeply mischievous, and undermines the guidelines completely. I sincerely hope no one is seriously doing it.

Yes, the exclusion of points is a thorny issue for GLP analysts and QA groups. However, it is not an arbitrary decision: exclusions must be justified and have to be approved (by management). Exclusion of QCs also comes with additional restrictions. It is also an area the regulatory authorities (FDA, OECD) are looking at closely as the regulations are updated in the trail of advancing technology.
Good judgment comes from bad experience, and a lot of that comes from bad judgment.

OK...thank you for your time and effort trying to explain this to me! You have all been very helpful!

As you have probably understood, I WANT my blank to go to zero, because this slightly improves the CV% at the very low levels. lmh brilliantly explains why - see point (2).

So here is my compromise:
1. I include another calibration point at 0.2 nmol/L (LLOQ). This has to be within ±15% (±20% at the LLOQ) to pass qualification tests.
2. I keep the zero standard. This has to be blank to pass qualification tests.
3. Quantification below the lowest calibration point is obviously not allowed.

The signal-to-noise ratio at the defined LLOQ is about 15-20, so I don't think there is much chance of misinterpreting noise as a peak.

Now, I have got the "best of both worlds"! Or is this still wrong? :roll:

Hi Anna

A practical question - what is the lowest level of analyte that you find in your real samples?

Peter
Peter Apps

The lowest level is yet unknown, but probably very close to zero.
I have no intention of going further down than 0.1-0.2 nmol/L.

Hi Anna

Since you are interested in results that are perilously close to your LLOQ, you might need to tinker with the method a bit to improve its repeatability at the lower end of the calibration. This might be something as simple as increasing the injection volume.

Peter
Peter Apps
