A few words about Measurement Bias
In metrology, error is defined as “the result of measurement minus a given true value of the measurand”.
What is ‘true value’?
ISO 3534-2:2006 (3.2.5) defines it as a “value which characterizes a quantity or quantitative characteristic perfectly defined in the conditions which exist when that quantity or quantitative characteristic is considered”, and the Note 1 that follows adds that this true value is a theoretical concept and generally cannot be known exactly.
In other words, when you are asked to determine the concentration of an analyte in a given sample, that analyte has a definite value in the sample, and our experiment is only an attempt to estimate that value. No matter how accurate your method is and how many repeats you perform on the sample to get an average value, we can never be 100% sure at the end that this average is exactly the true value in the sample. We are bound to have a measurement error!
Actually, in our routine analytical work, we encounter three types of error, known as gross, random and systematic errors.
Gross errors, which lead to seriously unacceptable measurements, are committed through serious mistakes in the analysis process, such as titrating with a reagent of the wrong concentration. They are so serious that there is no alternative but to abandon the experiment and make a completely fresh start.
Such blunders, however, are easily recognized if a robust QA/QC program is in place, as the laboratory quality-check samples with known or reference values (i.e. true values) will produce erratic results.
Secondly, when the analysis of a test method is repeated a large number of times, we get a set of variable data spreading around the average value of these results. It is interesting to see that data further away from the average value occur less and less frequently. This is the characteristic of random error.
There are many factors that can contribute to random error: the ability of the analyst to exactly reproduce the testing conditions, fluctuations in the environment (temperature, pressure, humidity, etc.), rounding of arithmetic calculations, electronic noise in the instrument detector, and so on. The variation of these repeated results is referred to as the precision of the method.
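The behaviour described above can be illustrated with a short simulation. This is a hypothetical sketch (the true value of 50.0 mg/L and the 0.5 mg/L spread are invented for illustration): repeat measurements affected only by random error scatter around the average, and results far from the average occur less often.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical example: 1000 repeat measurements of a sample whose
# (unknowable) true concentration is 50.0 mg/L, affected only by
# random error with a standard deviation of 0.5 mg/L.
true_value = 50.0
results = [random.gauss(true_value, 0.5) for _ in range(1000)]

mean = statistics.mean(results)
sd = statistics.stdev(results)  # a measure of the method's precision

# Results far from the mean are rarer: count how many fall within
# 1 and 2 standard deviations of the mean.
within_1sd = sum(abs(x - mean) <= sd for x in results) / len(results)
within_2sd = sum(abs(x - mean) <= 2 * sd for x in results) / len(results)

print(f"mean = {mean:.2f} mg/L, sd = {sd:.2f} mg/L")
print(f"within 1 sd: {within_1sd:.0%}, within 2 sd: {within_2sd:.0%}")
```

For approximately normally distributed random error, roughly 68% of results fall within one standard deviation of the mean and roughly 95% within two, which is why the frequency of occurrence drops off away from the average.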
Systematic error, on the other hand, is a consistent deviation from the true result; no number of repeat analyses will improve the situation. It is also known as bias.
A technician with a color-vision deficiency might persistently overestimate the end point in a titration, the extraction of an analyte from a sample may be only 90% efficient, or the on-line derivatization step before analysis by gas chromatography may be incomplete. In each of these cases, if the results were not corrected for the problem, they would always be wrong, and always wrong by about the same amount for a particular experiment.
How do we know that we have a systematic error in our measurement?
It can be estimated by measuring a reference material a large number of times. The difference between the average of the measurements and the certified value of the reference material is the systematic error. It is important to identify the sources of systematic error in an experiment and to minimize and/or correct for them as far as possible.
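As a minimal sketch of this estimate (the certified value of 25.0 mg/kg and the ten replicate results below are invented for illustration), the bias is simply the mean of the replicates minus the certified value:

```python
import statistics

# Hypothetical example: 10 repeat measurements of a certified
# reference material (CRM) with a certified value of 25.0 mg/kg.
certified_value = 25.0
results = [24.1, 24.3, 23.9, 24.2, 24.0, 24.4, 23.8, 24.1, 24.2, 24.0]

mean = statistics.mean(results)
bias = mean - certified_value  # estimate of the systematic error

print(f"mean = {mean:.2f} mg/kg, bias = {bias:+.2f} mg/kg")
# prints "mean = 24.10 mg/kg, bias = -0.90 mg/kg"
```

Here the method reads low by about 0.90 mg/kg on average, and averaging more replicates would not remove that deviation, only the random scatter around it.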
If you have tried your very best and the final average result is still significantly different from the reference or true value, you have to correct the reported result by multiplying it by a correction factor. If R is the recovery factor, calculated by dividing your average test result by the reference or true value, the correction factor is 1/R.
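The recovery correction can be sketched as follows (the numbers are hypothetical, continuing the kind of CRM study described above):

```python
# Hypothetical example: correcting a routine result for known bias.
certified_value = 25.0   # reference (accepted true) value, mg/kg
mean_result = 24.1       # average of repeat measurements on the CRM, mg/kg

R = mean_result / certified_value  # recovery factor
correction_factor = 1.0 / R        # factor applied to reported results

sample_result = 18.3               # a routine sample result, mg/kg
corrected = sample_result * correction_factor

print(f"R = {R:.3f}, correction factor = {correction_factor:.3f}")
print(f"corrected result = {corrected:.1f} mg/kg")
# prints "R = 0.964, correction factor = 1.037"
# then   "corrected result = 19.0 mg/kg"
```

Because R is here less than 1 (the method under-recovers), the correction factor is greater than 1 and scales the reported result upward toward the true value.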
Today, there is another statistical term in use: ‘trueness’. The measure of trueness is usually expressed in terms of bias.
Trueness in ISO 3534-2:2006 is defined as “the closeness of agreement between the expectation of a test result or a measurement result and a true value”, whilst ISO 15195:2018 defines trueness as the “closeness of agreement between the average value obtained from a large series of results of measurements and a true value”. The definition of ISO 15195 is quite similar to those of ISO 15971:2008 and ISO 19003:2006. The ISO 3534-2 definition includes a note that, in practice, an “accepted reference value” can be substituted for the true value.
Is there a difference between ‘accuracy’ and ‘trueness’?
The difference between ‘accuracy’ and ‘trueness’ is evident in their respective ISO definitions.
ISO 3534-2:2006 (3.3.1) defines ‘accuracy’ as the “closeness of agreement between a test result or measurement result and true value”, whilst the same standard defines ‘trueness’ as the “closeness of agreement between the expectation of a test result or measurement result and true value”. What does the word ‘expectation’ mean here? It refers to the average of the test results, as in the definition of ISO 15195:2018.
Hence, accuracy is a qualitative concept, whilst trueness can be quantitatively estimated through repeated analysis of a sample with a certified or reference value.
ISO 3534-2:2006 “Statistics – Vocabulary and symbols – Part 2: Applied statistics”
ISO 15195:2018 “Laboratory medicine – Requirements for the competence of calibration laboratories using reference measurement procedures”
In the next blog, we shall discuss how the uncertainty of bias is evaluated. It is an uncertainty component which, if present, cannot be overlooked in our measurement uncertainty evaluation.