y=mx or y=mx+b
Posted: Wed May 25, 2005 5:08 pm
by mtnshawn
Fellow chromatographers-
When evaluating recoveries (off of a standard curve) at or near the LOQ for a method, I get approximately 88-90% recovery using y=mx and approximately 97-99% using y=mx+b.
This is a pharma method and I have never had to use y=mx+b before.
The method analyzes an API degradant (small molecule) at 210 nm. I assume that the better recoveries with y=mx+b are a function of noise due to the low wavelength.
Question(s):
Is the use of y=mx+b an acceptable practice?
Are there literature references (I have looked on the 'net') to support the use of y=mx+b?
If I had to justify the use of y=mx+b in the method to the FDA would they laugh at me?
Thanks in advance for any/all comments.
Shawn
Posted: Thu May 26, 2005 3:06 am
by Uwe Neue
I am not an expert on this, so take this only as my opinion.
It appears to me that the issue is related to the peak start and peak end parameters being set too high above the noise. This will cut off a part of your integration, and the effect gets stronger as peak heights approach the noise level.
I am sure that your colleagues are dealing with this all the time and will give you good advice on how to fix this.
Posted: Thu May 26, 2005 7:07 am
by Alex Buske
What does the method say? If it utilises a one-point calibration or a y=mx type curve, then a y=mx calibration has to be used in validation.
As said, check the integration and look at the linearity. Is there an offset in the linearity curve?
While it doesn't make much sense, people often compare peak areas from injections with and without matrix/placebo.
Alex
Posted: Thu May 26, 2005 4:53 pm
by adam
First of all nobody will laugh at you. Both of these equations are perfectly valid. It is a matter of which one better models your curve.
This is really a validation issue. One would normally validate linearity by running a 4- or 5-point curve. At the same time you would evaluate the intercept. If the intercept is small you can either use a single-point calibration or a multipoint calibration with y = mx. You could also use the y = mx + b form, but the y = mx form (or single point) is preferable if you can justify it. The reason for this is a long story, but basically it is because the 0,0 point is known with perfect confidence and, by fixing it, you drastically reduce the well-known effect of the high-level points throwing off the curve at the low end. This issue is especially problematic with related-substances assays, where a component at a high level is used to quantitate peaks at low levels.
Whatever you choose I agree that it should be specified in the method.
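As a rough illustration of this origin-forcing effect, here is a minimal Python sketch. The slope, intercept, and concentrations are made-up numbers, chosen only to mimic a small negative intercept like the one described in this thread; they are not Shawn's actual data.
[code]
import numpy as np

# Hypothetical 5-point curve: true response y = 10000*x - 4 (invented values)
x = np.array([0.004, 0.013, 0.022, 0.031, 0.040])   # mg/mL
y = 10000.0 * x - 4.0                               # response (arbitrary units)

# Full fit: y = m*x + b
m_full, b_full = np.polyfit(x, y, 1)

# Fit forced through the origin: minimize sum((y - m*x)^2)
m_origin = np.sum(x * y) / np.sum(x * x)

# Back-calculate a sample at the LOQ (0.004 mg/mL)
y_loq = 10000.0 * 0.004 - 4.0
rec_full = (y_loq - b_full) / m_full / 0.004 * 100
rec_origin = y_loq / m_origin / 0.004 * 100
print(f"y=mx+b recovery at LOQ: {rec_full:.1f}%")    # ~100% by construction
print(f"y=mx   recovery at LOQ: {rec_origin:.1f}%")  # biased low (~91% here)
[/code]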
Posted: Thu May 26, 2005 5:45 pm
by mtnshawn
Thanks for the input thus far. Some answers to your questions and some more information.
Uwe- The peak itself is well resolved and has nice symmetry (>0.95). When I overintegrate (try to pull a little more area out of the peak) the change in the result is negligible.
Alex- the method is currently under development. Linearity is >0.999 (expressed as R2) when regressed with or without incorporating the origin (0,0). There isn't an inflection ("+" or "-") in the placebo at the elution time for this component. I don't quite understand what you mean by an offset in the curve.
Adam-thanks for not laughing too hard at me. I am preparing to send this method to a GMP laboratory for validation. So I am performing a method qualification. I am running a 5 point curve. The y-intercept is -4.
More information
1. y=mx provides ~ 90% recoveries throughout the working range and not just at the LOQ.
2. y = mx+b provides >97% recoveries throughout the working range of the method (0.004 mg/mL to 0.04 mg/mL).
Another Q.
How do I justify the use of y=mx+b?
Thanks again
Shawn
Posted: Thu May 26, 2005 7:05 pm
by DR
Practically speaking, to use y=mx you would have to demonstrate linearity down to nearly 0. In real linearity data, it is typical to see that r² for y=mx+b > r² for y=mx (this is your justification for using the +b). This is a function of the trend line skewing away from the origin, which is typical of LC assays (or any other technique that exploits Beer's law, as far as I know). I'm not sure why this is, but I would guess that it has to do with noise interfering with low-level responses in a significant way and adding to the response throughout the linear portion of the response curve (albeit in an insignificant manner).
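One caveat when comparing r² values this way: a spreadsheet's forced-through-origin fit often reports r² computed about zero rather than about the mean, so the two numbers are not directly comparable. A small sketch computing both conventionally (same made-up data as in the earlier example; the noise values are invented):
[code]
import numpy as np

# Same hypothetical curve as above, with a little invented noise added
x = np.array([0.004, 0.013, 0.022, 0.031, 0.040])
y = 10000.0 * x - 4.0 + np.array([0.3, -0.2, 0.1, -0.3, 0.2])

def r_squared(y, y_hat):
    # Conventional r^2: 1 - SS_res/SS_tot, SS_tot taken about the mean of y
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

m_full, b_full = np.polyfit(x, y, 1)
m_origin = np.sum(x * y) / np.sum(x * x)

print(r_squared(y, m_full * x + b_full))  # always >= the forced fit's value
print(r_squared(y, m_origin * x))         # drops when the true intercept != 0
[/code]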
Posted: Thu May 26, 2005 7:11 pm
by adam
Your original method says that you're analyzing a degradation product. It seems clear, given everything you've said, that you are using an external standard.
Is the standard concentration much higher than the concentration of your degradant? I am only guessing this, since there is such a big difference between the two methods of calibration.
If there is a big difference between the standard concentration and the analyte concentration, one way to minimize these problems is to dilute the standard down.
At any rate, you don't need any justification to use the y = mx + b approach. Just pick what works best, then prove that you can pass all the validation criteria using that approach.
Posted: Thu May 26, 2005 8:42 pm
by Daren
I have always found the best way to justify using y=mx+b is to calculate a 95% confidence interval for your 5-point curve. If 0,0 does not fall within the y-intercept confidence interval for your curve, then you have justified the need to use y=mx+b for accurate quantitation at those low levels.
Posted: Thu May 26, 2005 9:51 pm
by Daren
Just to add to/clarify my previous post: what you're really doing is trying to justify the use of y=mx. So you start out using y=mx+b to create your curve, take the 95% confidence interval on that curve, and then, if 0,0 falls within the y-intercept C.I., you have justified being able to use y=mx. So I always approach it the other way around: start out using y=mx+b and then see if I can justify a single-point calibration forced through zero.
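For anyone wanting to reproduce this check outside a stats package, here is a minimal Python sketch. The standard-error formula for the intercept is the textbook one for simple linear regression; the data are the same invented numbers used in the earlier examples.
[code]
import numpy as np
from scipy import stats

# Same hypothetical 5-point data as in the earlier examples
x = np.array([0.004, 0.013, 0.022, 0.031, 0.040])
y = 10000.0 * x - 4.0 + np.array([0.3, -0.2, 0.1, -0.3, 0.2])

n = len(x)
m, b = np.polyfit(x, y, 1)
resid = y - (m * x + b)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))   # residual standard error
# Standard error of the intercept for simple linear regression
se_b = s * np.sqrt(np.sum(x ** 2) / (n * np.sum((x - x.mean()) ** 2)))
t = stats.t.ppf(0.975, df=n - 2)            # two-sided 95%
lo, hi = b - t * se_b, b + t * se_b
print(f"intercept = {b:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
# If 0 lies outside this interval, y=mx is hard to justify; use y=mx+b.
[/code]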
Posted: Thu May 26, 2005 10:45 pm
by mtnshawn
Thanks again to all who have responded.
Daren- You have hit the nail on the head for what I was fishing for: a mathematical/scientific means by which I could justify determining my concentration using y=mx+b instead of y=mx!
I have never calc'd CI for a standard curve. Could you lend some insight on that?
If you wish, my email is
scook@rxkinetix.com
Posted: Fri May 27, 2005 1:02 am
by Daren
Hi Shawn,
I'm glad I could help. There are various software packages that can calculate the confidence interval for you. You can plot your curve in Excel, do the linear regression analysis, and then use the Data Analysis tool (I believe that is its name), which will have confidence interval as an option; it should automatically be set to 0.05, which corresponds to 95%. Your version of Excel, though, will have to have the statistical package; most of my employers' and schools' installations have had it, but none of my personal PCs have. In addition to Excel, any statistics software can do it for you; I have used JMP for this purpose as well.
Posted: Fri May 27, 2005 2:33 am
by JI2002
Shawn,
From a statistical point of view, looking at the data you presented, you don't need to justify using y = mx + b; you need to justify using y = mx, because the recoveries are better across the working range with the y = mx + b model. I agree with the other members that you should use the model that works best. Also, although I haven't done this before, you need replicate data for at least some of the concentrations to calculate a confidence interval for the curve.
From a chemistry standpoint, it's interesting to see a set of data like this. Usually, with the y = mx model, you get a curve biased high or low either at the high concentrations or at the low concentrations, but not across the whole linear range. I'm curious to know what the x-intercept is for the y = mx + b model, and if you inject a standard at that concentration, what is the response? No response at all?
Posted: Fri May 27, 2005 6:47 am
by HW Mueller
What's the fuss here? As students we learned that y = mx+b is the "slope-intercept" form of a straight line: m is the slope, b the intercept (with the y axis, i.e., the point where the line crosses the y axis, or the value of y at x = 0). Now, if the intercept is zero (the line goes through the origin of the plot), b = 0 and you have y = mx. If the curve does not go through the origin, then b is not 0. If you then use y = mx you are fudging (in that case with the connotation of cheating).
One should always inject a blank. If the blank comes out as 0 (we used to say here "within the error...") but your curve does not pass through the origin, you do indeed have a problem.
Posted: Fri May 27, 2005 2:18 pm
by DR
re:
"Your version of Excel, though, will have to have the statistical package; most of my employers' and schools' installations have had it, but none of my personal PCs have. In addition to Excel, any statistics software can do it for you; I have used JMP for this purpose as well."
They pretty well all have it; you just may have to hit Tools > Add-Ins and make sure the Analysis ToolPak is among the selected items (by default, it is not).
Posted: Wed Jun 01, 2005 11:57 pm
by Ron
If your data system allows it, you might want to try weighting the regression points. With a weighted calibration you will in many cases get more accurate concentration values for data points near the origin, especially if the upper calibration points are significantly higher in concentration. Weighting helps to minimize the problem pointed out by Adam in his posting.
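A minimal sketch of what Ron describes, assuming 1/x² weighting (a common choice in chromatography; the weighting scheme and the data are assumptions, not from the thread):
[code]
import numpy as np

# Same hypothetical data as in the earlier examples
x = np.array([0.004, 0.013, 0.022, 0.031, 0.040])
y = 10000.0 * x - 4.0 + np.array([0.3, -0.2, 0.1, -0.3, 0.2])

w = 1.0 / x ** 2   # 1/x^2 weights: emphasize the low-concentration points
sw = np.sqrt(w)    # scaling rows by sqrt(w) minimizes sum(w*(y - m*x - b)^2)
A = np.column_stack([x * sw, sw])
m, b = np.linalg.lstsq(A, y * sw, rcond=None)[0]
print(f"weighted fit: y = {m:.1f}*x + {b:.2f}")
[/code]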