When we repeat the analysis of a sample several times, we get a spread of results surrounding the average value. This spread characterizes the precision of the data, but it tells us nothing about how close the results are to the true concentration of the analyte in the sample.
However, a test method can produce precise results that agree very closely with one another yet are consistently lower or higher than they should be. How do we detect this? By carrying out replicate analysis of a sample with a certified analyte value: if the mean of the results deviates from the certified value, we know we have encountered a systematic error in the analysis.
The term “trueness” generally refers to the closeness of agreement between the expectation of a test result or measurement result and a true value or an accepted reference value. Trueness is normally expressed in terms of bias. Hence, bias can be evaluated by comparing the mean of the measurement results with an accepted reference value, as shown in the figure below.
Therefore, bias can be evaluated by carrying out repeat analysis of a suitable material containing a known amount of the analyte (i.e. the reference value μ), and it is calculated as the difference between the average of the test results and the reference value:
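The formula itself appears to be missing here; reconstructed from the definition just given, writing x̄ for the mean of the replicate test results and μ for the reference value, it would read:

```latex
\text{bias} = \bar{x} - \mu
```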
We often express bias in a relative form, such as a percentage:
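The percentage form is not shown; from the preceding definition of bias as x̄ − μ, the relative expression would be:

```latex
\text{bias}\,(\%) = \frac{\bar{x} - \mu}{\mu} \times 100
```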
or as a ratio when we assess ‘recovery’ in an experiment:
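The recovery ratio is x̄/μ, usually quoted as a percentage, i.e. R(%) = 100 · x̄/μ. As a minimal sketch of the three quantities discussed above, assuming a set of hypothetical replicate results for a certified reference material with μ = 5.00 mg/L:

```python
# Hypothetical replicate results (mg/L) for a certified reference material.
results = [4.85, 4.90, 4.88, 4.92, 4.86]
mu = 5.00  # certified (reference) value, mg/L

# Mean of the replicate test results.
mean = sum(results) / len(results)

bias = mean - mu                    # absolute bias
rel_bias = 100 * (mean - mu) / mu   # relative bias, %
recovery = 100 * mean / mu          # recovery, %

print(f"mean = {mean:.3f} mg/L")
print(f"bias = {bias:+.3f} mg/L")
print(f"relative bias = {rel_bias:+.2f}%")
print(f"recovery = {recovery:.2f}%")
```

A negative bias (recovery below 100%) indicates the method consistently underestimates the analyte concentration; a positive bias indicates overestimation.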