Method validation: the role of CRMs
Introduction
The ISO/IEC 17025:2017 accreditation standard requires laboratories to adopt standard test methods after verifying their performance, or in-house/non-standard methods after full validation. The main difference between verification and validation lies in the approach taken to ensure the test methods adopted are fit for their intended purpose.
Method verification is usually carried out on standardized or internationally recognized methods whose suitability has already been duly studied. The laboratory therefore needs only to demonstrate its technical competence by showing that it can meet the repeatability and reproducibility criteria laid down by the standard method concerned. Method validation, on the other hand, has to be conducted against a range of statistical parameters to confirm the suitability of the test method used.
Whether we verify or validate, we must satisfy ourselves that the test methods adopted are precise and accurate. Since we never know the true or ‘native’ value of an analyte in a given sample submitted for analysis, how can we be sure that the results reported to our customer are accurate and correct? And how can we have confidence that the test method used in our laboratory is reliable enough for its purpose?
You may routinely carry out duplicate or triplicate analyses, but by doing so you are actually studying the precision of the method, based on the spread of test results across the replicates. To assess the accuracy of the method, you need to analyze samples with a known or assigned value of the analyte and check whether the recovery data are statistically acceptable. You may use a certified reference material (CRM) for this purpose.
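As a simple illustration, the following Python sketch (all numbers hypothetical) shows what replicate analysis does and does not tell you:

```python
# A minimal sketch (hypothetical numbers): replicate analysis reveals the
# spread of results, i.e. the precision of the method, not its accuracy.
import statistics

replicates = [10.2, 9.8, 10.1]        # mg/kg, hypothetical triplicate results

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)     # sample standard deviation (precision)
rsd = 100 * sd / mean                 # relative standard deviation, %

print(f"mean = {mean:.2f} mg/kg, SD = {sd:.2f} mg/kg, RSD = {rsd:.1f}%")
# None of this tells us whether ~10 mg/kg is the *right* answer; for that we
# need a sample with a known or assigned value, such as a CRM.
```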
Certified reference materials
ISO defines a certified reference material (CRM) as a reference material characterized by a metrologically valid procedure for one or more specified properties, accompanied by a certificate that provides the value of the specified property, its associated uncertainty, and a statement of metrological traceability. A reference material (RM), by contrast, is a material sufficiently homogeneous and stable with respect to one or more specified properties, which has been established to be fit for its intended use in a measurement process.
Hence, one of a CRM's important uses in method validation is to assess the trueness (bias) of a method, although with careful planning of the experiments, other useful information such as method precision can also be collected at the same time.
To know the accuracy of a test method is to monitor its bias and recovery. Ideal samples in which analyte levels are well characterized, such as matrix CRMs, are necessary, because pure analyte standards do not test the method in the same way that matrix-based samples do. However, matrix CRMs may not always be commercially available. If none is available, a reference material prepared in-house is the next best option.
In the absence of suitable reference materials, it is also possible to carry out recovery studies on spiked samples, where known amounts of analyte are added to so-called ‘blank’ samples. However, the analyte tends to be bound less closely in spiked samples than in real samples, and recoveries consequently tend to be over-estimated. A simple spike-recovery calculation is sketched below.
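```python
# Hypothetical spike-recovery calculation: a known amount of analyte is added
# to a 'blank' sample, and we compare what is found against what was added.
measured_blank = 0.12    # mg/kg found in the unspiked ('blank') sample
measured_spiked = 4.95   # mg/kg found in the spiked sample
spike_added = 5.00       # mg/kg of analyte added (all values hypothetical)

recovery_pct = 100 * (measured_spiked - measured_blank) / spike_added
print(f"spike recovery = {recovery_pct:.1f}%")
# Note: spiked analyte is usually less tightly bound than native analyte,
# so this figure tends to over-estimate real-sample recovery.
```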
Measurement errors
To understand the bias associated with an analytical method, we first need to discuss measurement errors. These comprise random error and systematic error; it is the systematic error that gives rise to bias and hence determines trueness.
We notice that repeated laboratory analyses always generate different results. This is due to the many uncontrollable random effects present during experimentation, and these effects can be assessed through replicate testing. However, experimental work is invariably subject to possible systematic effects too. A method can be considered ‘validated’ only if any systematic effects have been duly studied and brought under control.
It is important to point out that under the current ISO definitions, accuracy is a property of a result and comprises both bias and precision, whilst trueness is the closeness of agreement between the average value obtained from a large set of test results and an accepted reference value. ISO further notes that the measure of trueness is normally expressed in terms of bias.
The figure below illustrates analytical bias and its relationship with the precision of replicate analyses.

[Figure: analytical bias versus precision of replicate analyses]
How to measure bias against CRMs?
From the definitions of bias given, we know that:
- any measure of bias should be based on an average of replicate readings
- a test for bias must be made on a test item with a known or accepted reference value, e.g. a CRM
It therefore follows that tests to measure bias need:
- sufficient precision, through replicate testing, to detect a bias of practical significance, i.e. the maximum bias acceptable for the method to remain fit for purpose
- use of the most appropriate reference materials and certified values available
- tests covering the scope of the method adequately (i.e. range of analyte concentrations and matrices specified in the scope).
Bias can be expressed in one of two ways (a short worked example follows this list):
- As an absolute value, x̄ − x₀, where x̄ is the mean observed value and x₀ the reference value; a positive bias means the observed value is higher than the reference
- As a fraction or percentage, analogous to analytical recovery: x̄/x₀ or 100 x̄/x₀
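For example, using hypothetical figures, the two expressions work out as follows:

```python
# Hypothetical example: expressing bias against a CRM in both ways.
x_bar = 9.6   # mean of replicate results on the CRM, mg/kg (hypothetical)
x0 = 10.0     # certified value of the CRM, mg/kg (hypothetical)

absolute_bias = x_bar - x0       # negative: observed value is low
recovery_pct = 100 * x_bar / x0  # bias expressed as percentage recovery

print(f"bias = {absolute_bias:+.2f} mg/kg, recovery = {recovery_pct:.1f}%")
```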
The difference between a single test result and its certified reference value does not, by itself, tell us much about bias. To know whether the difference is significant, we have to carry out a series of replicate experiments and apply statistical treatment to the test data collected.
When conducting a bias study that compares the certified value of a reference material with the results obtained by the particular test method, we use a Student's t-test statistic to interpret the results. Using the mean x̄ and standard deviation s of n replicate measurements, the test statistic is:

t = |x̄ − x₀| × √n / s

where x₀ is the certified value of the reference material.
If the calculated t-value is greater than the critical t-value at an alpha (α) error of 0.05, the bias is statistically significant with 95% confidence.
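A minimal sketch of this significance test, using hypothetical replicate results on a CRM (SciPy is assumed to be available for the critical value):

```python
# Student's t-test for bias against a CRM certified value (hypothetical data).
import math
import statistics
from scipy import stats   # used only for the critical t value

replicates = [9.4, 9.7, 9.5, 9.8, 9.6, 9.5]   # mg/kg, hypothetical results
x0 = 10.0                                      # certified value, hypothetical

n = len(replicates)
x_bar = statistics.mean(replicates)
s = statistics.stdev(replicates)

t_calc = abs(x_bar - x0) * math.sqrt(n) / s    # t = |x̄ − x₀|·√n / s
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 1)   # two-tailed, α = 0.05

print(f"t = {t_calc:.2f}, t-critical = {t_crit:.2f}")
if t_calc > t_crit:
    print("Bias is statistically significant at 95% confidence.")
else:
    print("No statistically significant bias detected.")
```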
How to use bias information?
Bias information obtained during method development and validation is primarily intended to prompt any further method development and study. If a significant effect is found, action is normally required to reduce it to an insignificant level. Typically, further study and corrective actions are taken to identify the source(s) of the error and bring it under control. Corrective actions might involve, for example, changes to the test procedure or additional training.
However, it is quite unusual to rework an entire analytical method just for the sake of an observed bias. If minor changes to the test protocol cannot improve the accuracy of the results, we may resort to applying a correction for recovery. If R is the average recovery observed in the experiments, a recovery correction factor of 1/R can be applied to the test results in order to bring them back to a 100% recovery level, as sketched below. However, there is currently no consensus on such correction for recovery.
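```python
# Hypothetical recovery correction: if the method's average recovery is R
# (as a fraction), multiplying a result by 1/R rescales it to 100% recovery.
R = 0.92            # average recovery from the validation experiments
raw_result = 46.0   # mg/kg, as measured (hypothetical)

corrected = raw_result / R   # applying the correction factor 1/R
print(f"corrected result = {corrected:.1f} mg/kg")
# Per the IUPAC guidance quoted below, any such correction must be reported.
```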
The Harmonized IUPAC Guidelines for the Use of Recovery Information in Analytical Measurement recognize a rationale for either standardizing on an uncorrected method or adopting a corrected method, depending on the end use. Their recommendation reads: “It is of over-riding importance that all data, when reported, should (a) be clearly identified as to whether or not a recovery correction has been applied and (b) if a recovery correction has been applied, the amount of the correction and the method by which it was derived should be included with the report. This will promote direct comparability of data sets. Correction functions should be established on the basis of appropriate statistical considerations, documented, archived and available to the client.”