Page 2 of 2

Posted: Tue Apr 14, 2009 12:18 pm
by lmh
Hans' point is certainly important. Just to add to it: if you don't permit random negative results, then you can never return an average of zero. Given a set of samples that genuinely contain nothing, you will skew their distribution upwards, and return an estimate that is small but positive. The best you can hope for is that your answer will not be significantly different to zero, but it would still be better if it were closer to zero, and centred on the correct result.
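Just to illustrate the size of that upward skew, here's a quick simulation sketch (the noise level is invented - hypothetical blanks with a standard deviation of 1):

```python
import random

random.seed(42)

# 10,000 blank measurements: the true concentration is zero, so the
# instrument reports pure noise centred on zero (hypothetical sd = 1).
noise = [random.gauss(0.0, 1.0) for _ in range(10_000)]

# Honest average: negative readings are allowed to cancel positive ones.
honest_mean = sum(noise) / len(noise)

# "No negatives allowed": clip each negative reading to zero before
# averaging, which is what refusing negative results amounts to.
clipped = [max(v, 0.0) for v in noise]
clipped_mean = sum(clipped) / len(clipped)

print(f"mean with negatives kept:    {honest_mean:+.3f}")  # close to zero
print(f"mean with negatives clipped: {clipped_mean:+.3f}") # ~ +0.4: biased high
```

The clipped average settles near sigma/sqrt(2*pi), i.e. about 40% of the noise level, even though the samples contain nothing at all.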

If you are getting consistent (non-random) negative results then there is something wrong with your calibration curve. Perhaps the points lie on a curve and you're fitting a straight line? Of course the pragmatic thing to do is to use a range over which the line is a good approximation, hence the need to choose the places to put the standards.

Posted: Tue Apr 14, 2009 12:47 pm
by aceto_81
Do you also force through 100%?
How about a single point calibration at a concentration of 100%?
Force through zero AND through 100% :-D

Besides this: if you get negative values for your samples, your calibration curve doesn't bracket your sample concentrations.
When I perform method validation, I inject a few different concentrations and test statistically whether or not to use the intercept (trying not to use the intercept whenever possible).
If I have to use the intercept, I make a calibration curve from the LOD to the desired concentration. Everything below is reported as < LOD.
No negative values for me!
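For what it's worth, the intercept test Ace describes can be sketched like this (the calibration numbers are invented; in practice you'd use your own validation data and proper statistics software):

```python
import math

# Hypothetical calibration data: concentration (x) vs. response (y).
x = [1.0, 2.0, 4.0, 6.0, 8.0, 10.0]
y = [2.1, 4.0, 8.3, 11.9, 16.2, 19.8]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx

# Residual standard error, then the standard error of the intercept.
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s = math.sqrt(sum(r * r for r in resid) / (n - 2))
se_int = s * math.sqrt(1 / n + mx ** 2 / sxx)

# t statistic for H0: intercept = 0, compared with the two-sided
# critical value for n-2 = 4 degrees of freedom at alpha = 0.05.
t_stat = intercept / se_int
t_crit = 2.776  # t(0.975, df=4)

print(f"intercept = {intercept:.3f}, t = {t_stat:.2f}")
if abs(t_stat) < t_crit:
    print("intercept not significantly different from zero")
else:
    print("keep the intercept in the model")
```

With these made-up data the intercept fails to reach significance, which is the situation where dropping it is defensible.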

Ace

Posted: Tue Apr 14, 2009 1:07 pm
by Don_Hilton
I have even seen instrument software that “did the user a favor”

Posted: Fri Apr 17, 2009 6:05 am
by marina1111
But using different calibration curves for different ranges is legitimate, isn't it? Thank you!

Posted: Sat Apr 18, 2009 1:48 am
by Don_Hilton
I see no reason why two curves for two ranges would not be legitimate. In a way it is like having two methods - one for each range. Every method has limitations, and a validation of a method documents the capabilities of the method, which also documents its limitations. As long as the method is well documented and validated, the only limitation would be a regulatory issue, if one were raised by some accrediting body.

Years ago I had a concern about some results from a particular method. I said something to my boss. He told me that as long as I reported the results as being by that particular method, I was fine. The method is always part of the result. In many cases results are reported by a less-than-perfect method - not because we are sloppy, but because we do not know there is a problem until technology advances. Other times it is the absolute best we can do - and we know it is not perfect, just the limit of what we can do with the tools we have. So we do the best we can and show our work.

Posted: Mon Apr 20, 2009 12:21 am
by mbicking
It is important to remember the purpose of linear regression - to fit a mathematical model to empirical data, not the other way around.

If the regression fit says that the data don't go through zero at the intercept, then they don't go through zero! :D That tells you something about your calibration system.

If you are getting negative values from a calibration curve based on empirical data, then your regression line is not fitting the data properly. Forget r-squared; it's not always a good indicator of calibration quality. Look at zero intercept and residuals.
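To illustrate with invented numbers: data with a slight saturating curvature can give an impressive r-squared under a straight-line fit, yet the residuals show a clear sign pattern and small responses back-calculate to negative concentrations.

```python
# Hypothetical curved calibration data (response saturates slightly
# at high concentration) forced through a straight-line fit.
x = [1, 2, 4, 8, 16, 32]
y = [xi - 0.01 * xi ** 2 for xi in x]  # gently saturating response

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx

resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
ss_res = sum(r * r for r in resid)
ss_tot = sum((yi - my) ** 2 for yi in y)
r_squared = 1 - ss_res / ss_tot

print(f"r-squared = {r_squared:.4f}")  # above 0.97 - "looks fine"
print("residuals:", [round(r, 2) for r in resid])
# The residual signs run - - - + + - : a systematic pattern, not noise.
# And a small genuine response reads back through this line as negative:
print(f"back-calculated conc. at y = 0.5: {(0.5 - intercept) / slope:.2f}")
```

So r-squared alone passes a fit that is quietly generating negative results at the low end; the residual plot catches it immediately.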

If your intercept is not different from zero, then congratulations; you probably have a good calibration system. But why would you then recalibrate with a forced intercept? You already have your system calibrated; why do it twice?

And finally, don't forget that you are not supposed to report values outside the range of the calibration standards. It is certainly true that many curves will generate negative values very close to the intercept, but if this is below your lowest standard, then it is "not detected."

Posted: Mon Apr 20, 2009 8:52 am
by aceto_81
mbicking wrote:
It is important to remember the purpose of linear regression - to fit a mathematical model to empirical data, not the other way around.
This is one of the most important things to keep in mind when you try to do model fitting: The data is true, not your model!
mbicking wrote:
If your intercept is not different from zero, then congratulations; you probably have a good calibration system. But why would you then recalibrate with a forced intercept? You already have your system calibrated; why do it twice?
I wouldn't recalibrate with a forced intercept, but I keep this in mind for the future: I don't have to prepare a lot of dilutions - one reference is sufficient, since the line goes through zero.
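A minimal sketch of what that single-reference calibration amounts to, with made-up numbers:

```python
# Single-point calibration sketch, assuming the intercept has already
# been shown (during validation) to be indistinguishable from zero.
ref_conc = 10.0       # concentration of the single reference standard
ref_response = 253.1  # its measured response (hypothetical numbers)

response_factor = ref_response / ref_conc  # slope of the line through origin

def quantify(sample_response):
    """Back-calculate concentration using the zero-intercept line."""
    return sample_response / response_factor

print(quantify(126.55))  # half the standard's response -> half its conc.
```

One standard fixes the slope, and the validated zero intercept supplies the rest of the line.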

Ace

Posted: Mon Apr 20, 2009 2:18 pm
by sdegrace
When you develop a method you choose a means of calibration, presumably based on the type of sample. For example, to assay a final product that will fall into a known, fairly narrow range of concentrations, you might choose to bracket with a couple of standards around those concentrations and just draw a line between the two, ignoring the intercept - or even use one standard and assume the line goes through zero. Then, presumably during your method development and certainly during your validation, you demonstrate that you can accurately recover your analyte using the method and the system, which includes the particular means of calibration. Then you commit to that means of calibration going forward, having shown that it works within the specifications set for the performance of the method in your protocol.

The calibration must show acceptable statistical predictability as concentration varies (linearity for a linear fit) and accurately recover your analyte over the range of the method, but provided it does this it may, for example, ignore non-linear behaviour over wider ranges than the method is designed to cover - the calibration is a convenience designed for a specific purpose, not necessarily a rigorous study of the behaviour of the method over all possible ranges.

Linear fit itself is often purely a convenience. The example that comes to mind for me is the many UV protein determination kits available on the market. These are often marketed as providing a very linear response, and indeed the end user often uses linear regression to determine results because that is by far the most convenient method. But in fact many of these tests have a second-order polynomial relationship between concentration and response, not a linear one, which is especially evident if a broad range of concentrations is examined. But as long as fitness of the method in terms of recovery and apparent linearity is demonstrated, we use linear regression anyway. E.g., USP <1057> Method 5 calls for least-squares linear regression, when in my experience the Biuret test often actually shows (or perhaps better, is better fitted by) a second-order polynomial relationship. Not all useful relationships are actually linear.
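As a sketch of that linear-vs-quadratic comparison (the data are hypothetical, generated from a quadratic with no noise, so the second-order fit recovers it exactly):

```python
# Compare a straight-line and a second-order polynomial fit to
# calibration data generated from y = 0.5x + 0.01x^2 (hypothetical).
xs = [5.0, 10.0, 20.0, 40.0, 60.0, 80.0]
ys = [0.5 * x + 0.01 * x * x for x in xs]

def lstsq_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    m = degree + 1
    # Normal-equation matrix and right-hand side.
    a = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = a[r][col] / a[col][col]
            for c in range(col, m):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * m
    for r in range(m - 1, -1, -1):
        coef[r] = (b[r] - sum(a[r][c] * coef[c]
                              for c in range(r + 1, m))) / a[r][r]
    return coef  # lowest order first: [c0, c1, c2, ...]

def sse(coef, xs, ys):
    return sum((y - sum(c * x ** i for i, c in enumerate(coef))) ** 2
               for x, y in zip(xs, ys))

lin = lstsq_poly(xs, ys, 1)
quad = lstsq_poly(xs, ys, 2)
print(f"linear    SSE = {sse(lin, xs, ys):.4f}")
print(f"quadratic SSE = {sse(quad, xs, ys):.6f}")  # essentially zero
```

The linear fit carries a real residual error while the quadratic's is essentially zero - exactly the situation where a line is a convenience, not the truth.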

I guess for me the bottom line is that forcing through zero is just another tool that can potentially be used. It is subject to the same demonstrations of fitness for purpose as any other tool we might choose, and provided it passes those demonstrations, and the method is used in its prescribed fashion for its prescribed purpose, there is nothing wrong with using this kind of calibration. I can think of circumstances where it is certainly desirable, but none where it is necessary - so if you don't like it, don't use it.

Stephen

Posted: Mon Apr 20, 2009 4:29 pm
by HW Mueller
OK, if you use the Biuret test as an example of how one should see accuracy, etc., in HPLC: if I remember correctly, the allowed standard deviation in the German Ringversuche is 10%, or for ~95% (2 x sigma) it's plus or minus 20%. That is, the person who gets 60% of the real value still "passes". Also, the Biuret gives you a rough estimate of total protein, whatever that is.
And lastly, if you state, let's say on samples your company has on the market, how you analyzed them . . . OK again. (Example statement: The purity of this product was ascertained by forcing the cal. plot through zero, because our analyst likes to do this.)

Posted: Tue May 05, 2009 4:02 pm
by lmh
This thread started with John Dolan's article, Calibration curves part 1 (in LC/GC). Part 2 is now out: How are signal-to-noise and imprecision related, how do they relate to LOD/LOQ?

It's another good article (as you'd expect from this author), but it opens new cans of worms. Here are my worms:

(1) Does it make sense to look at S:N ratio, generally based on peak height, if our quantification is based on peak area?

(2) How do you handle the situation where the signal is too small to quantify, but the noise is very nearly zero, giving an effectively infinite S:N ratio? I'm thinking of SRM in a good MS system. Imagine an LC-MS that transfers 1% of ions from spray chamber to detector. Given a sample of 200 ions, the signal is likely to be 0, 1, 2 or 3 ions. There is no way this can be measured with high accuracy in a single analysis; it's just a question of statistics: you are rolling a 100-sided die 200 times and seeing how often you throw a "100". But there may be no noise whatsoever within a broad period of time around the "peak". I'd hate to think LC-MS SRM analyses claim a better LOD than they should, based on bad S:N estimates.

(3) John Dolan raises the possibility of choosing detector time-constants carefully to provide "signal averaging, or smoothing". Is this acceptable in an environment that wouldn't allow numerical smoothing of the data after the event? How far can you smooth (how much loss of peak shape is acceptable in the quest for reduced noise)?

(4) And what do we do about Thermo!? Xcalibur Qualbrowser version 1.3 provides 3 integration algorithms, two of which quantify S:N ratio, but in totally different ways (and it's not clear to me from the Help file why they should differ, or what they are doing). Given one typical extracted-ion chromatogram, "Genesis" gave me a value of 169, while "ISIS" gave me a value of 208468. Clearly 3 orders of magnitude is going to mess up an LOQ quite badly...
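On worm (2), the die-rolling picture is easy to simulate (the 1% transfer efficiency and 200 ions per run are the hypothetical figures above):

```python
import random

random.seed(1)

# Each of 200 ions independently has a 1% chance of reaching the
# detector, so the detected count per run is binomial(200, 0.01).
trials = 20_000
counts = []
for _ in range(trials):
    detected = sum(1 for _ in range(200) if random.random() < 0.01)
    counts.append(detected)

mean = sum(counts) / trials
zero_fraction = counts.count(0) / trials
print(f"mean ions detected per run: {mean:.2f}")      # about 2
print(f"runs detecting nothing at all: {zero_fraction:.1%}")  # roughly 13%
```

With an average of only two ions per run, a substantial fraction of injections detect nothing - counting statistics, not baseline noise, set the real detection limit.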

Posted: Tue May 05, 2009 5:26 pm
by biochemist
sdegrace,

I like the responses that mentioned expanded uncertainty and confidence intervals, which, humbly, I think are the most important points brought up so far.

When you report something close to the LOQ, the uncertainty of the measurement is large relative to the measurement itself. The uncertainty should make the difference between the values you would obtain WITH and WITHOUT forcing the curve through zero insignificant.
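A rough sketch of that comparison, with invented data whose true intercept is zero: fit with and without the intercept, back-calculate a low response both ways, and compare the difference with the fit's own scatter.

```python
# Hypothetical calibration data generated around y = 2x (zero intercept).
x = [2.0, 4.0, 6.0, 8.0, 10.0]
y = [4.1, 7.8, 12.3, 15.9, 20.1]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
intercept = my - slope * mx
# Through-origin slope: least squares with the intercept forced to zero.
slope0 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
s = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # residual std. error

y_low = 4.0  # a response near the bottom of the curve
c_free = (y_low - intercept) / slope
c_forced = y_low / slope0
print(f"free fit:   {c_free:.3f}")
print(f"forced:     {c_forced:.3f}")
print(f"difference: {abs(c_free - c_forced):.3f} vs scatter {s / slope:.3f}")
```

The two back-calculated values differ by far less than the scatter of the calibration itself - exactly the "insignificant difference" described above.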

If you *do* find the difference to be significant, your method might not be linear in your desired range. In that case, you should create a multi-point curve, using not less than 6 standards, each time the assay is run - which is what happens during a BCA or Biuret assay.

These factors are normally determined at the method validation stage... with lots of data from multiple calibration curves.

If the calibration data are linear [as shown by 1) the r-squared value, 2) a random distribution of the residuals AND 3) an insignificant y-intercept], then a single-point calibration can be used for routine analysis, i.e. the curve can be forced through the origin - because you have tested the y-intercept and found it to be insignificant compared to the errors due to precision.

If the curve is NOT linear (for any of the reasons mentioned), then a multi-point calibration curve MUST be used in routine analysis. In this case, no forcing! Just use the standard data to create a curve each time. From the validation you'll be able to choose the type of curve fit that best suits your data (e.g. linear, polynomial, 4-PL, etc.), making sure you run a sufficient number of standards each time.

Everything being relative to the uncertainty, I wouldn't place too much faith, or base any decisions, on results with differing values close to the LOQ. Remember, a little less and your analyte is actually not even quantifiable...

The bottom line is - know the uncertainty. Then you'll be able to interpret your data in a meaningful way.

Just my two cents.

Posted: Wed May 06, 2009 12:00 am
by Don_Hilton
For what it's worth:

The S/N ratio on height: even though we use area for quantitation, it is the fact that the trace rises significantly that gives us a peak. Thus we are testing to see whether the deviation from baseline is real. I have seen baseline integrated and numbers reported.

Which brings us to the next question, of a signal too small to quantify and really small noise. A small matter of reality is that any detector that uses electrons has electronic noise in the circuit; there will be noise. The only chromatographic trace I've seen with no noise in the baseline is one that used a detection threshold, which sets any value below that threshold to zero. The noise is there, but ignored. If the peak is detectable as a peak, it is because the signal has risen above the surrounding signal by some observable (thus measurable) amount.

On selecting detector time constants: every detector has a time constant, and it is a result of the design. So adjusting the time constant is adjusting the design of the electronics? That might get an argument - but the time constant is always there.

When you get to that sample of 200 ions, you need to be careful. The statistical justification for signal-to-noise comes from things like applications of Student's t, which is for normally distributed data. When you are down to counting individual ions at that low a rate, you have the same problems as in the radiochem lab - and the statistics are a bit different, because you cannot assume a normal distribution. Now if I can find that old textbook....


And on using a single-point calibration for routine analysis: it is always good to bracket your range when you calibrate or check your calibration. It is also good to pick up at least one point somewhere in the middle of the range. Even if you develop a method that you show is absolutely linear and wonderfully reproducible, you still need to check that your instrument is working across the range and remains linear - because some day a multiplier or op-amp will die, and it would be good to know that your instrument has taken to writing fiction.

Posted: Wed May 06, 2009 12:18 am
by mbicking
biochemist wrote:
If the calibration data is linear [as shown by 1) the r-squared value, 2) a random distribution of the residuals AND 3) an insignificant y-intercept], then a single-point calibration can be used for routine analysis. i.e. The curve can be forced through the origin - coz you have tested the y-intercept and found it to be insignificant compared to the errors due to precision....
I hate to belabor the point, but... why bother? You do a fit with slope and intercept, which is the best "fit" through the data; that will give you the best answer based on your calibration. Now, the checks on the data, as described, are absolutely necessary: they indicate that your system and standards are working properly. Why then go with a different fit that is "not different" but certainly not better? It's just a re-calibration, which is more work. I can't see any advantage, unless you are doing the calculations manually.

Regarding time constants:
Filtering and smoothing remove information from the raw data. Often it is just noise, which is fine, and can improve S/N without any compromise in data quality. But with too much filtering you suddenly start losing important information. The line is fuzzy on exactly when this happens, so beware...!
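A small sketch of that trade-off, using a moving average on a synthetic Gaussian peak (all numbers invented): mild smoothing costs little peak height, heavy smoothing visibly flattens the peak.

```python
import math
import random

random.seed(7)

# A Gaussian peak (sigma = 3 points, unit height) plus a little noise.
def peak(t):
    return math.exp(-((t - 50) ** 2) / (2 * 3.0 ** 2))

signal = [peak(t) + random.gauss(0, 0.03) for t in range(100)]

def smooth(data, width):
    """Simple centred moving average of the given window width."""
    half = width // 2
    out = []
    for i in range(len(data)):
        window = data[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

heights = {}
for width in (1, 5, 25):
    heights[width] = max(smooth(signal, width))
    print(f"window {width:2d}: apparent peak height = {heights[width]:.3f}")
```

The 5-point window trims the noise for a modest loss of height; the 25-point window (much wider than the peak itself) averages the peak away, which is the "too much filtering" regime.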

Posted: Wed May 06, 2009 5:42 am
by aceto_81
biochemist wrote:
If the calibration data is linear [as shown by 1) the r-squared value, 2) a random distribution of the residuals AND 3) an insignificant y-intercept], then a single-point calibration can be used for routine analysis. i.e. The curve can be forced through the origin - coz you have tested the y-intercept and found it to be insignificant compared to the errors due to precision....
mbicking wrote:
I hate to belabor the point, but ... why bother? You do a fit with slope and intercept, which is the best "fit" through the data. That will give you the best answer based on your calibration. Now, the checks on the data, as described, are absolutely necessary. They indicate that your system and standards are working properly. Why then go with a different fit that is "not different" but certainly not better. It's just a re-calibration, which is more work. I can't see any advantage, unless you are doing the calculations manually.
just my 50 cents:

Points 1 and 2 are something you should establish during your method validation.
Assume a non-linear relationship. If you fit a linear model y = ax + b, you get the slope and intercept that are the best "fit" to your data, but not the correct ones, and certainly not the best answer based on the data.
I remember a quote I read somewhere: "the data is right, not your model". So if you use a model to fit your data, you should always verify how your model performs. This step is done in the method validation.
Here you also choose whether or not to use the intercept, based on its significance.
Based on the results of these tests, you can choose the method of calibration to be used in routine analyses.
So no recalibration at QC, just using the right method.


Ace

Posted: Thu May 07, 2009 10:37 am
by Peter Apps
The signal:noise on the chromatogram (however you decide to measure it) represents the minimum quantity of analyte that can ever be reliably detected with that particular instrument and those operating conditions. Real methods, even if they only involve injecting a clean solution, always give higher RSDs than the chromatogram signal:noise would suggest, because they introduce additional sources of variability.

In trace analysis in particular, the limiting factor is not instrument signal:noise but variability in the sample preparation - in other words, what variable fraction of the analyte in the original sample ends up being introduced to the instrument?

In addition, in trace analyses where it is not possible to use a selective detector, the dominant source of baseline noise is not the instrument but interfering components of the sample and reagents.

Peter