
Method Validation - Linearity

Discussions about HPLC, CE, TLC, SFC, and other "liquid phase" separation techniques.


Dear Mr. Muller,
Do you have experience using the Verfahrensstandardabweichung and the Verfahrensvariationskoeffizient* to determine linearity? Could you please explain the method to us?

*) Funk W, Dammann V, Donnevert G (1992) Qualitätssicherung in der Analytischen Chemie. VCH, pp 5-39.

Sorry, never heard of it, but since I was curious I looked at a sample chapter on the Wiley-VCH homepage, where they gave mathematical formulas; the Verfahrensstandardabweichung was an SD divided by some sensitivity function of a curved standard plot (F-test)? At that point I lost interest.

At that point I lost interest.
Dear Mr. Muller,
What's the matter with the formula?
My lecturer told me that it is widely used in Germany as a substitute for the correlation coefficient. Is that true?

syx, the "Reststandardabweichung" (residual standard deviation) is the scattering of all points around the regression line (linear or quadratic...). The "Verfahrensstandardabweichung" (procedural standard deviation?) is made by divison of the residual standard deviation through the slope of the regression line.
They both are measures how good your model (linear, second order...) fits your data points.

So: r, r^2, residual SD and procedural SD give you a measure of how well your model fits the data points. If the fit is better with a linear function than with second- or third-order functions, then I would consider linearity as given.
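
To make that concrete, here is a small numeric sketch (my own made-up calibration data, numpy assumed) of how the residual SD and the procedural SD fall out of a straight-line fit:

Code:
import numpy as np

# hypothetical calibration data: concentration x and peak area y
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([10.2, 19.8, 40.5, 59.7, 80.9, 99.6])

n = len(x)
slope, intercept = np.polyfit(x, y, 1)        # linear calibration function
residuals = y - (slope * x + intercept)

# Reststandardabweichung: scatter of the points around the regression line
# (n - 2 degrees of freedom for a straight line)
residual_sd = np.sqrt(np.sum(residuals**2) / (n - 2))

# Verfahrensstandardabweichung: residual SD divided by the slope,
# i.e. the scatter expressed in concentration units
procedural_sd = residual_sd / slope

print(f"slope = {slope:.4f}, residual SD = {residual_sd:.3f}, procedural SD = {procedural_sd:.3f}")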

Alex

So far I have managed to stay away from non-linear standard curves (by choosing the appropriate conditions); also, I have used statistics only for publication's sake, and Excel's ANOVA was sufficient. The day I let statistics interpret my work, I will quit as a chemist.
syx, I lost interest for two main reasons: a. I didn't see anything that would help me notice a deviation from randomness more sensitively than my eye, and b. re-deriving the formulas to fully understand them was judged to be out of proportion to their usefulness.

Just remembered, the above statement on non-linear curves is not 100% correct: I once tried to have a mathematician fit a curve to experimental FA concentration changes in blood after the infusion of glucose. I had hoped to find a parameter akin to the slope of a straight line in order to compare curves of different patients. The mathematician gave up, and so did I... when you need them...

The day I let statistics interpret my work, I will quit as a chemist.
You may be right, but my point was that statistics is an indispensable tool for supporting your interpretation and conclusions based on analytical data. In industry, many guidelines that we need to follow require sound statistics, also for analytical method validation. And with statistics, you might even notice things your eyes didn't see...

regards Bert

Ok, if
a. one is prevented from heeding Ernest Rutherford's statement, or
b. one needs to standardize a statement like "significant" in a publication,
then one needs statistics.
Did I miss anything?

But, when someone points out the need for statistics, a warning is in order. Just one example: Before I came to a clinic (in Giessen), data collected in studies performed with patients were given to a mathematician without physicians looking at them. Thus, C-14 glucose turnover in people who had undergone brain operations was correlated with the turnover of patients with disc operations. There was a statistically significant difference between the two groups, the brain-operated ones having the higher turnover. Henceforth, all brain-operated patients were given glucose infusions postoperatively. Later it was noticed that only those who were motorically more active (operated near the motor area) burned more glucose. The infusions were stopped immediately in Giessen (motorically active persons were given sedatives), but since the study was published, it took another ten years, worldwide, to get rid of this possibly very dangerous procedure.

Hans,

My conclusion is that the statistician did a perfect job. The idiots were the physicians who did not interpret the data correctly.

Example: a statistician tests the amount of nutrients consumed by different patients, and he is supposed to correlate it with their well-being. He finds that there is a large variation in nutrient consumption. On one extreme, there are patients that do not consume any nutrients at all. He correlates this with the number of complaints about the food in the hospital, and he finds that those that do not consume anything are not complaining either. He communicates this to the hospital management, who, in turn, recommend closing down the kitchen, which the hospital does. Later it was found that the patients that did not eat and did not complain were the ones that were dead, and the statistician was blamed for giving bad advice...

Once again, the statistician gave very good advice, based on the information that was available to him and based on the question asked.

So: r, r^2, residual SD and procedural SD give you a measure of how well your model fits the data points. If the fit is better with a linear function than with second- or third-order functions, then I would consider linearity as given.
Mr. Alex, in your opinion, which is the best method to describe linearity?

Yes, Uwe, the mistake was made by the physicians who transferred part, but not all, of their responsibility to the mathematician; more exactly: they took the data out of the hands of the laboratory, shunned the advice of the lab head, fed the data to the mathematician as if he were merely a machine, and drew the conclusions on their own. When I arrived there I got in direct contact with the mathematician, and together we could prevent some idiocies.
This misapplication is not the worst of the "sins" committed in the name of medical research.

(Incidentally, the mathematician, of course, eventually noticed what was going on, was very bitter about it, and sarcastically used to ask the physicians "how do you want it?")

First a few words on statistics: using statistics you will end up with a result that says there is / is not a statistically significant difference / effect / ...
How is that result related to your validation and method properties?
While statistics itself is completely rational, the results still depend on the input data. Let me give an example: you do a solution stability test, injecting a solution every 6 h. After, let's say, 48 h you collect the results (from all 9 runs) and do a Neumann trend test. No trend is observed. A colleague does the same test on equipment with somewhat higher precision and injects every hour. Based on the trend test of his 49 more precise results, there is clearly a trend.
So the question (for use in validation) should be: Is the solution stable enough to give consistent results?
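
For illustration, a rough sketch of the Neumann (von Neumann) trend ratio as I understand it (the peak areas below are invented, and the critical value still has to be looked up in a table for your n and significance level):

Code:
import numpy as np

def neumann_ratio(values):
    """Ratio of the mean squared successive difference to the variance.
    For a random series the ratio is near 2; values below the tabulated
    lower bound indicate a trend."""
    values = np.asarray(values, dtype=float)
    diff_sq = np.sum(np.diff(values) ** 2)
    dev_sq = np.sum((values - values.mean()) ** 2)
    return diff_sq / dev_sq

# hypothetical peak areas from injections every 6 h over 48 h (9 runs)
areas = [100.2, 99.8, 100.5, 99.9, 100.1, 100.3, 99.7, 100.0, 99.9]
print(f"Neumann ratio: {neumann_ratio(areas):.3f}")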

For linearity the question is not: is the calibration plot completely and perfectly linear?
The question is: can I use a one-point (three- or five-point) calibration and a linear calibration function?
If you ask the question in that way, and you have an r = 0.999 for an assay and your residuals plot looks o.k., then I would say it's linear enough...
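
As an illustration of that check (made-up numbers, nothing more), the fit, r, and residuals could be looked at like this:

Code:
import numpy as np
import matplotlib.pyplot as plt

# hypothetical standards and responses
x = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([5.1, 10.3, 19.8, 40.6, 59.5, 80.8, 99.4])

slope, intercept = np.polyfit(x, y, 1)
r = np.corrcoef(x, y)[0, 1]
residuals = y - (slope * x + intercept)

print(f"r = {r:.5f}")

# residuals plot: random scatter around zero -> linear enough for the purpose
plt.scatter(x, residuals)
plt.axhline(0.0, color="grey")
plt.xlabel("concentration")
plt.ylabel("residual")
plt.show()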

There are other methods: calculate the r (or r^2, or one of the above-mentioned standard deviations) for first-, second-, third- and fourth-order calibration functions and see what fits best.
Then there is the Mandel test, with a comparison of the linear and quadratic residual standard deviations, some mathematics, and finally an F-test.
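
A sketch of the Mandel test as I know it from DIN 38402-type texts (hypothetical data; please check the exact formulation against your own reference): fit linear and quadratic, then compare the residual standard deviations with an F-test.

Code:
import numpy as np
from scipy import stats

# hypothetical calibration data
x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 15.0])
y = np.array([9.8, 20.4, 40.1, 61.2, 79.5, 101.0, 119.8, 151.2])
n = len(x)

def residual_sd(x, y, degree):
    coeffs = np.polyfit(x, y, degree)
    res = y - np.polyval(coeffs, x)
    return np.sqrt(np.sum(res**2) / (n - (degree + 1)))

s1 = residual_sd(x, y, 1)   # linear fit, n - 2 degrees of freedom
s2 = residual_sd(x, y, 2)   # quadratic fit, n - 3 degrees of freedom

ds2 = (n - 2) * s1**2 - (n - 3) * s2**2   # variance removed by the quadratic term
test_value = ds2 / s2**2
f_crit = stats.f.ppf(0.99, 1, n - 3)      # F(1, n-3) at the 99 % level

print(f"test value = {test_value:.3f}, critical F = {f_crit:.3f}")
print("quadratic term significant -> not linear" if test_value > f_crit
      else "linear model is adequate")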

Going through all the posts, I realized that until now nobody has touched the real issue with calibration curves. JI2002 touched on it a little bit, but not enough.

The most common approach to generate a calibration curve is to do serial dilutions and then inject a constant volume. If I ignore for a moment the error of the serial dilution, the error stemming from this approach is directly proportional to the amount injected. Thus you have a larger error at the top of your calibration curve than at the low end. This should be taken into account, and the standard correlation coefficient does not do that.
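
One common way to handle this (my suggestion, not something prescribed in the posts above) is a weighted fit, e.g. with 1/x² weights, which roughly equalises the relative error over the calibration range:

Code:
import numpy as np

# hypothetical calibration levels and responses spanning a wide range
x = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0])
y = np.array([5.2, 9.9, 50.8, 99.1, 505.0, 990.0])

# np.polyfit squares the weights internally, so passing 1/x gives a 1/x^2
# weighting of the squared residuals, i.e. more weight on the low standards
slope, intercept = np.polyfit(x, y, 1, w=1.0 / x)

print(f"weighted fit: y = {slope:.4f} * x + {intercept:.4f}")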

In addition, the serial dilution creates another error, actually not only a random error but also potentially a systematic or accumulating error.

Does anybody ever worry about these things?

To avoid having the r² value inflated, I try not to include concentrations any higher than 1.25 to 1.5× the anticipated sample concentration - although sometimes we want those as high as possible to allow us to see related substances with relatively weak chromophores. In these cases, it can be helpful to look at the r² of the lower 4-5 points covering no more than 2 orders of magnitude (e.g. from 0.1-10 µg/mL even though you may run as high as 1 mg/mL).

To minimize errors from serial dilutions, I try not to prepare more than 2 dilutions from any 1 stock (nothing I inject is the result of a 3rd dilution going back to a stock). I prepare a stock solution, make a few solutions of different concentrations from that (1 dilution each), and then make another set from those (2nd serial dilution).
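
To see why limiting the number of steps matters, a quick back-of-the-envelope sketch (assuming, purely for illustration, 0.5 % relative error per dilution step and independent errors that add in quadrature):

Code:
import math

per_step_rsd = 0.5                           # % relative error per dilution (assumed)
for steps in range(1, 5):
    total = math.sqrt(steps) * per_step_rsd  # combined RSD after n independent steps
    print(f"{steps} dilution step(s): ~{total:.2f} % RSD")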
Thanks,
DR

Was there something there on detecting non-random error (which would not be evidenced even if the experimenter did his job correctly)? I missed it.

Now in my example of the carryover, which produced a non-random error, I could have gone through the literature, found some statistics to correct for that, making good or not so good assumptions, or I could have quantitatively ascertained the carryover and cleaned the Rheodyne for further experiments (which is what I opted for).
For the correction of mistakes like those made in the interpretation of the brain surgery data (there are plenty of opportunities to make such mistakes in chemical analytics as well, for instance if dirt overlaps with analytes, one has lumped together things that should not be), I don't see any possibility to correct them with statistics.
Conclusion, which is somewhat reinforced by Uwe's links: statistics is overemphasized in chromatography.
