**Linear calibration curve – two common mistakes**

Generally speaking, linear regression is used to establish or confirm a relationship between two variables. In analytical chemistry, it is commonly used in the construction of calibration functions required for techniques such as GC, HPLC, AAS, UV-Visible spectrometry, *etc*., where a linear relationship is expected between the instrument response (dependent variable) and the concentration of the analyte of interest.

The term ‘dependent variable’ is used for the instrument response because the value of the response depends on the value of the concentration. The dependent variable is conventionally plotted on the *y*-axis of the graph (scatter plot) and the known analyte concentration (independent variable) on the *x*-axis, to see whether a relationship exists between these two variables.

In chemical analysis, confirming such a relationship between these two variables is essential, and this can be established in the form of an equation. The other aspects of the calibration can then proceed.

The general equation which describes a fitted straight line can be written as:

*y = a + bx*

where *b* is the gradient of the line and *a*, its intercept with the *y*-axis. The least-squares linear regression method is normally used to establish the values of *a* and *b*. The ‘best fit’ line obtained from least-squares linear regression is the line which minimizes the sum of the squared differences between the observed (or experimental) and line-fitted values for *y*.

The signed difference between an observed value (*y*) and the fitted value (*ŷ*) is known as a residual. The most common form of regression is of *y* on *x*. This comes with an important assumption, i.e. the *x* values are known exactly, without uncertainty, and the only error occurs in the measurement of *y*.
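The fit and its residuals can be computed in a few lines. Below is a minimal sketch using NumPy with made-up calibration data (the concentrations and responses are illustrative, not from a real instrument):

```python
import numpy as np

# Hypothetical calibration data: analyte concentrations (x) and instrument responses (y)
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # e.g. mg/L
resp = np.array([0.12, 0.21, 0.40, 0.79, 1.58])  # e.g. absorbance

# Least-squares fit of y = a + b*x; np.polyfit returns [gradient b, intercept a]
b, a = np.polyfit(conc, resp, 1)

# Fitted values and residuals (observed minus fitted)
fitted = a + b * conc
residuals = resp - fitted

print(f"y = {a:.4f} + {b:.4f} x")
print("residuals:", np.round(residuals, 4))
```

Note that when the intercept is estimated (not forced), the residuals of a least-squares fit always sum to zero, which is a quick sanity check on the calculation.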

Two mistakes are so common in routine application of linear regression that it is worth describing them so that they can be avoided:

*Incorrectly forcing the regression through zero*

Some instrument software allows a regression to be forced through zero (for example, by specifying removal of the intercept or ticking a “Constant is zero” option).

This is valid *only* when there is good evidence to support it, for example, if a prior statistical analysis has shown that the *y*-intercept is not significantly different from zero. Otherwise, interpolated values at the ends of the calibration range will be biased, and the error can be particularly serious near zero.
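The effect is easy to demonstrate. In the sketch below, hypothetical data follow a line with a genuine non-zero intercept (think of a constant blank signal); forcing the fit through zero then biases predictions at low concentration:

```python
import numpy as np

# Hypothetical data lying exactly on y = 0.05 + 0.20*x (non-zero intercept, no noise)
conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
resp = 0.05 + 0.20 * conc

# Ordinary least-squares fit (intercept estimated) recovers a = 0.05, b = 0.20
b, a = np.polyfit(conc, resp, 1)

# Fit forced through zero: minimize sum((y - b0*x)^2), so b0 = sum(x*y)/sum(x^2)
b0 = np.sum(conc * resp) / np.sum(conc ** 2)

# Compare interpolated responses at a low concentration
x_low = 0.5
pred_ordinary = a + b * x_low   # follows the true line
pred_forced = b0 * x_low        # biased, because the intercept was suppressed
print(pred_ordinary, pred_forced)
```

Here the forced-zero line predicts about 0.10 at *x* = 0.5 where the true response is 0.15, a substantial relative error precisely in the region where it matters most.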

*Including the point (0,0) in the regression when it has not been measured*

Sometimes it is argued that the point (*x* = 0, *y* = 0) should be included in the regression, usually on the grounds that a response of *y* = 0 is expected at *x* = 0. This is bad practice and should not be done: it amounts to fabricating a data point that was never measured.

Adding an arbitrary point at (0,0) pulls the fitted line toward (0,0), making the line fit the measured data more poorly near zero and also making it more likely that a real non-zero intercept will go undetected (because the calculated *y*-intercept will be smaller).
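This shrinking of the intercept can be shown numerically. The sketch below fits the same hypothetical data set twice, once as measured and once with a fabricated (0,0) point appended:

```python
import numpy as np

# Hypothetical calibration data with a real non-zero intercept
conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0])
resp = np.array([0.26, 0.45, 0.85, 1.26, 1.64])

# Fit using only the measured points
b, a = np.polyfit(conc, resp, 1)

# Same fit after appending an unmeasured (0, 0) point
conc0 = np.append(conc, 0.0)
resp0 = np.append(resp, 0.0)
b_fab, a_fab = np.polyfit(conc0, resp0, 1)

print(f"measured only:   intercept = {a:.4f}")
print(f"with fake (0,0): intercept = {a_fab:.4f}")  # pulled toward zero
```

With these numbers the estimated intercept drops from roughly 0.059 to roughly 0.034, so a real blank contribution of that size would be much easier to dismiss as insignificant.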

The only circumstance in which a point at (0,0) can validly be added to the regression data set is when a standard at zero concentration has actually been measured and the observed response is either zero or too small to detect, so that it can reasonably be interpreted as zero.
