					
				Residual Sum of Squares
				Posted: Thu Mar 22, 2007 12:08 pm
				by Parsival
Within linearity validation, the ICH guidelines ask for reporting of the “residual sum of squares”. What is the meaning of this absolute value?
			 
			
					
				
				Posted: Thu Mar 22, 2007 10:59 pm
				by tom jupille
				It provides a measure of the scatter in the data. I'm not enough of a statistician to give an opinion as to whether it's better/worse/equivalent to r^2 in that regard.
			 
			
					
Residual sum of squares
				Posted: Fri Mar 23, 2007 4:58 am
				by shah_sunil
It gives an idea of how close the experimental values are to the values predicted by the fitted calibration line.
			 
			
					
				
				Posted: Fri Mar 23, 2007 7:01 am
				by mbulentcakar
I'm not an expert on statistics, nor familiar with the ICH protocols, but to elaborate a bit (and to share what I picked up from a recent regression study): the residual sum of squares is also known as the "unexplained sum of squares" or the "sum of squared errors" (SSE). The well-known Pearson r squared is calculated as r^2 = 1 - (SSE / SST), where SST is the total sum of squares.
The total sum of squares is easily calculated by adding up the squared deviations of each y value from the mean of the y values.
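For illustration, here is a minimal Python sketch of these quantities; the concentrations and peak areas are invented:

    # Hypothetical 5-point calibration: concentrations x, peak areas y
    import numpy as np

    x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    y = np.array([10.2, 19.8, 40.5, 79.0, 161.0])

    # Ordinary least-squares line y = a*x + b
    a, b = np.polyfit(x, y, 1)
    y_hat = a * x + b

    sse = np.sum((y - y_hat) ** 2)     # residual ("unexplained") sum of squares
    sst = np.sum((y - y.mean()) ** 2)  # total sum of squares
    print("SSE =", sse, " SST =", sst, " r^2 =", 1 - sse / sst)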
As I understand it, the residual sum of squares is no better than Pearson's r as a measure of scatter.
I can also repeat the earlier warnings: one of the main assumptions of least squares is that the errors are "homoscedastic", i.e. of reasonably similar magnitude across the range, which is usually not the case in chromatography. If we work over a narrow range, however, this assumption may be justified.
I hope I didn't make things more confusing.
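If the assumption does not hold, one common workaround in chromatography is a weighted fit; a minimal sketch with numpy (same invented data as above):

    import numpy as np

    x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    y = np.array([10.2, 19.8, 40.5, 79.0, 161.0])

    # numpy.polyfit's w multiplies the *unsquared* residuals, so w = 1/x
    # corresponds to the common "1/x^2" variance weighting: it keeps the
    # low-concentration points from being swamped by the high end.
    a_w, b_w = np.polyfit(x, y, 1, w=1.0 / x)
    print("weighted fit: slope =", a_w, " intercept =", b_w)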
Best luck,
Bulent
			 
			
					
				
				Posted: Fri Mar 23, 2007 9:11 am
				by Parsival
				Thanks for your answers so far.
What I was referring to with my expression “absolute value” is that the residual sum of squares is an absolute number: unlike r^2 it has no fixed scale, so how does one judge whether it is acceptably small?
			 
			
					
				
				Posted: Fri Mar 23, 2007 9:58 am
				by Markus Laeubli, Metrohm
The sum of the residual squares gives a better measure of the quality of a calibration curve than r^2, especially in cases where the calibration curve is not strictly linear.
If the slope of the calibration curve continuously increases or decreases with increasing concentration, you may get a seemingly perfect r^2 = 0.9999 while the sum of the residual squares is still quite large. In such a calibration the points are not randomly scattered around the fitted line; instead the highest and lowest points lie below the line and the points in between lie above it (or vice versa).
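This is easy to demonstrate; a small Python sketch with deliberately curved, noise-free data (the curvature coefficient is arbitrary):

    import numpy as np

    # Mildly curved "calibration" data: y = x + 0.005*x^2, no noise at all
    x = np.linspace(1.0, 20.0, 8)
    y = x + 0.005 * x ** 2

    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)

    sse = np.sum(resid ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    print("r^2 =", 1 - sse / sst)              # very close to 1 despite the curvature
    print("residual signs:", np.sign(resid))   # + at the ends, - in the middle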
			 
			
					
				
				Posted: Mon Mar 26, 2007 4:57 pm
				by Mark Tracy
The difficulty lies in the interpretation of the sum of squared residuals. What do you compare it against? How do you know if it is "too large?" The SSR will increase with the number of data points, and it depends on the units of Y (try converting from mAU to AU and see what happens). The correlation R^2 is essentially an attempt to normalize the SSR: R^2 = 1 - SSR/SST, and the total sum of squares SST scales with both the number of points and the variance of Y. This removes the two effects and allows comparisons to be made between different calibrations. Unfortunately, neither SSR nor R^2 is a satisfactory measure of linearity.
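A quick way to see the unit dependence, as a Python sketch with invented data:

    import numpy as np

    def ssr_and_r2(x, y):
        a, b = np.polyfit(x, y, 1)
        ssr = np.sum((y - (a * x + b)) ** 2)
        return ssr, 1 - ssr / np.sum((y - y.mean()) ** 2)

    x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    y_mau = np.array([10.2, 19.8, 40.5, 79.0, 161.0])  # response in mAU

    print(ssr_and_r2(x, y_mau))         # SSR in (mAU)^2
    print(ssr_and_r2(x, y_mau / 1000))  # same data in AU: SSR drops by 10^6, r^2 unchanged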
			 
			
					
				
				Posted: Wed Mar 28, 2007 3:24 pm
				by H.Thomas
				> Unfortunately, neither SSR nor R^2 is a satisfactory measure of 
> linearity.
As a measure of linearity, we use the relative standard deviation of the response factor (that is, the peak area divided by the concentration).
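For what it's worth, a minimal sketch of that calculation in Python (invented numbers):

    import numpy as np

    conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    area = np.array([10.2, 19.8, 40.5, 79.0, 161.0])

    rf = area / conc                            # response factor at each level
    rsd_pct = 100.0 * rf.std(ddof=1) / rf.mean()
    print("response-factor RSD = %.2f %%" % rsd_pct)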
Do you think this is a good measure or is there a better parameter?
Hartmut
			 
			
					
				
				Posted: Wed Mar 28, 2007 4:20 pm
				by Mark Tracy
The RSD of the response factor suffers from the same defect that R^2 does: it treats non-linearity and noise as the same thing. In other words, it can tell you that the line is a poor fit to the data, but it can't say what is wrong: the poor fit can be due to the data being too scattered, to the calibration function being the wrong choice, or to both.
At the moment, a sharp eye and a trained mind are the best way to recognize nonlinearity.
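In practice that usually means plotting the residuals: a systematic arch or U shape points to curvature, while random scatter points to noise. A minimal sketch (matplotlib assumed, data invented):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    y = np.array([10.2, 19.8, 40.5, 79.0, 161.0])

    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)

    # A patterned residual plot (e.g. ends below zero, middle above)
    # suggests the wrong calibration model rather than mere noise.
    plt.axhline(0.0, color="gray")
    plt.plot(x, resid, "o")
    plt.xlabel("concentration")
    plt.ylabel("residual")
    plt.show()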