
Archive for the ‘Measurement uncertainty’ Category

ISO/IEC 17025:2017 & Decision Rule


New ISO 17025 and Decision Rule

 

 

MU, Specification Compliance & Decision Rule


MU Compliance n Decision Rule

 

ISO FDIS 17025:2017 – Impacts on accredited laboratories


As noted in my previous article on the final draft international standard FDIS 17025:2017 (https://consultglp.com/2017/10/30/iso-fdis-170252017-sampling-sampling-uncertainty/ ), the ISO/IEC 17025:2005 version currently in wide use by accredited laboratories around the world will soon be replaced by this new standard, which is expected to be published shortly.

ILAC (the International Laboratory Accreditation Cooperation), a formal cooperation of accreditation bodies that promotes an international arrangement based on peer evaluation and mutual acceptance, with a view to developing and harmonizing laboratory and inspection body accreditation practices, has recommended a 3-year transition to fully implement the new standard from the date of its publication. At the end of the transition period, laboratories not accredited to ISO/IEC 17025:2017 will not be allowed to issue endorsed test or calibration reports and will not be recognized under the ILAC MRA terms.

Today, over 90 member accreditation bodies from more than 80 economies have signed the ILAC Mutual Recognition Arrangement (ILAC MRA). The new ISO standard therefore has a tremendous impact on all accredited calibration and testing laboratories whose national accreditation bodies are signatory members of the ILAC MRA.

Each national accreditation body is expected to work out its own transition plan with actions to help the laboratories under its charge migrate smoothly to the new practices. These actions might include, but are not limited to, effective communication, scheduled seminars/training courses for laboratory managers and technical assessors, and mapping out a timetable and policies to achieve the ultimate goal.

In a nutshell, the new standard has aligned its structure and contents with other recently revised ISO standards, ISO 9001:2015 in particular. It reinforces a process-based model and focuses on outcomes rather than prescriptive requirements: familiar terms such as quality manual and quality manager no longer appear, and less is prescribed about other documentation. This allows more flexibility in laboratory operation, as long as the laboratory’s technical competence can be assessed and recognized against the standard.

The following notes highlight the major changes in the new revision compared with the 2005 version:

  1. Standard format

Many requirements of the 2005 version remain unchanged but appear in different places in the document, under headings such as general requirements (Clause 4), structural requirements (Clause 5), resource requirements (Clause 6), process requirements (Clause 7) and management system requirements (Clause 8). The language has also been updated to reflect current practices and technologies.

  2. Laboratory activities

Under Clause 3 on terms and definitions, the term “laboratory activities” in Sub-clause 3.6 now includes “sampling, associated with subsequent testing or calibration” in addition to the existing “testing” and “calibration” activities. This is a major expansion of the scope of laboratory activities eligible for accreditation and will be a challenge for most testing laboratories engaged in field sampling. Sub-sampling of a test sample in the laboratory prior to analysis is considered part of the test procedure.

  3. Risk-based thinking

The revision incorporates risk-based thinking, which requires the laboratory to plan and implement actions to address the risks and opportunities associated with its laboratory activities. The laboratory is responsible for deciding which risks and opportunities need to be addressed. The aims are to:

a) give assurance that the management system achieves its intended results;

b) enhance opportunities to achieve the purpose and objectives of the laboratory;

c) prevent, or minimize, undesired effects.

The word ‘risk’ can be found in the following requirements:

Clause 4.1.4:   Identifying risk to impartiality

Clause 7.8.6.1:  “When a statement of conformity to a specification or standard is provided, the laboratory shall document the decision rule employed, taking into account the level of risk (such as false accept and false reject and statistical assumptions) associated with the decision rule employed and apply the decision rule.”

Clause 7.10:  Actions taken for nonconforming work based upon risk levels established by the laboratory

Clause 8.5:   Actions to address risks and opportunities

Clause 8.7:   Updating risks and opportunities when corrective action is taken

Clause 8.9:   Management review agenda to include results of risk identification

  4. Impartiality

The new standard stresses the laboratory’s impartiality. Under Clause 4, General requirements, Sub-clause 4.1 requires the laboratory to identify risks to its impartiality on an ongoing basis, and if a risk to impartiality is identified, the laboratory shall be able to demonstrate how it eliminates or minimizes that risk.

  5. Decision rule

The term “decision rule” is new to this ISO standard. It first appears in Clause 3.7 under terms and definitions, which defines it as a “rule that describes how measurement uncertainty is accounted for when stating conformity with a specified requirement”. This relates to Sub-clause 7.8.6 on reporting statements of conformity.

Before the laboratory provides any statement of conformity to a specification, it is required to first assess the level of risk (such as false acceptance, false rejection and statistical assumptions) associated with the decision rule employed, and the decision rule has to be documented. See Sub-clause 7.8.6.1.
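For illustration, the short Python sketch below shows one simple binary decision rule with a guard band equal to the expanded uncertainty U. The function name and the numbers in the usage line are hypothetical assumptions; the standard itself does not prescribe any particular rule.

```python
def conformity_decision(result, upper_limit, expanded_u):
    """Classify a result against an upper specification limit using a
    guard band w = U (expanded uncertainty, k = 2), so that 'pass' is
    declared only when the whole uncertainty interval lies below the
    limit (i.e. a low risk of false acceptance)."""
    if result <= upper_limit - expanded_u:
        return "pass"           # conforms even after allowing for MU
    if result > upper_limit + expanded_u:
        return "fail"           # non-conforming even after allowing for MU
    return "inconclusive"       # result +/- U straddles the limit

# Hypothetical illustration: result 4.3, upper limit 4.5, U = 0.4 (same units)
print(conformity_decision(4.3, 4.5, 0.4))   # -> inconclusive
```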

  6. Externally provided products and services

The new standard combines the current sub-contracting, supplies and external services requirements that affect laboratory activities under one new heading, with requirements, controls and communication guidance given in Sub-clause 6.6.

  7. Evaluation of measurement uncertainty (MU)

Clause 7.6.1 requires laboratories to identify the contributions to measurement uncertainty. When evaluating MU, all contributions which are of significance, including those arising from sampling, shall be taken into account using appropriate methods of analysis.

In Clause 7.6.3, Note 3, the standard states: “For further information, see ISO/IEC Guide 98-3, ISO 5725 and ISO 21748”. It can be inferred that the laboratory has a choice of MU evaluation methods, i.e. either the conventional GUM (bottom-up) approach or the holistic method-performance (top-down) approach can be applied.
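As an illustration of the top-down route, the Python sketch below follows one common recipe built on routine QC data (an intermediate-precision estimate plus a bias component). The function and the percentage figures in the usage line are assumptions for illustration only, not a calculation mandated by the standard.

```python
import math

def top_down_expanded_mu(u_Rw_rel, bias_rel, u_ref_rel, k=2.0):
    """Relative expanded MU from routine QC data (one common top-down recipe).

    u_Rw_rel  : relative within-laboratory reproducibility (intermediate precision)
    bias_rel  : relative bias estimated from CRM recoveries or PT results
    u_ref_rel : relative uncertainty of the reference value used for the bias check
    k         : coverage factor (k = 2 for approximately 95 % confidence)
    """
    u_bias_rel = math.sqrt(bias_rel**2 + u_ref_rel**2)   # bias component
    u_c_rel = math.sqrt(u_Rw_rel**2 + u_bias_rel**2)     # combined standard MU
    return k * u_c_rel                                   # expanded MU (relative)

# Hypothetical figures: 3.0 % intermediate precision, 1.5 % bias, 1.0 % reference uncertainty
print(f"U_rel = {top_down_expanded_mu(0.030, 0.015, 0.010):.1%}")   # -> U_rel = 7.0%
```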

  8. Options in management system requirements

The new standard allows the laboratory to implement its management system in accordance with either Option A or Option B, alongside meeting the requirements of Clauses 4 to 7.

Option A requires the laboratory to address, as a minimum, the requirements of Sub-clauses 8.2 to 8.9. Option B is for a laboratory that has established and maintains a management system in accordance with the requirements of ISO 9001, that is capable of supporting and demonstrating the consistent fulfilment of the requirements of Clauses 4 to 7, and that fulfils at least the intent of the management system requirements specified in Sub-clauses 8.2 to 8.9.

In conclusion, in addition to aligning its structure and wording with other current international standards, the new version of ISO/IEC 17025 introduces sampling into the scope of laboratory activities, together with new requirements such as risks and opportunities, the decision rule, sampling uncertainty and the two management system options.

Laboratories will need to acquire new skills in carrying out risk assessments, formulating decision rules, evaluating sampling uncertainty and incorporating this uncertainty into the overall measurement uncertainty evaluation.

Accredited laboratories now await their respective national accreditation bodies’ new accreditation guidelines and directives for this significant migration from the existing 2005 version of ISO/IEC 17025 to the latest one during the 3-year transition period.

ISO FDIS 17025:2017, sampling & sampling uncertainty


The international standards for accrediting a laboratory’s technical competence have evolved over the past 30-odd years, from ISO Guide 25:1982 to ISO Guide 25:1990, to ISO/IEC 17025:1999, to ISO/IEC 17025:2005 and now to the final draft international standard FDIS 17025:2017, which is due to be published before the end of this year to replace the 2005 version. We do not anticipate many changes to the contents other than editorial amendments.

The new draft standard aims to align its structure and contents with other recently revised ISO standards, ISO 9001:2015 in particular. It reinforces a process-based model and focuses on outcomes rather than prescriptive requirements: familiar terms such as quality manual and quality manager have been eliminated, and less is prescribed about other documentation. It attempts to introduce more flexibility in laboratory operation.

Although many requirements remain unchanged, they now appear in different places in the document, and some new concepts have been added, such as:

–  focusing on risk-based thinking and acting,

–  a decision rule describing how measurement uncertainty is accounted for when stating conformity with a specification,

–  sampling as another laboratory activity apart from testing and calibration, and

–  sampling uncertainty as a significant contributing factor in the evaluation of measurement uncertainty.

The purpose of introducing sampling as another activity is understandable: we know that the reliability of a test result hinges on how representative the sample drawn from the field is. The saying “the test result is no better than the sample that it is based upon” is very true indeed.

If an accredited laboratory’s routine activities also include field sampling before carrying out laboratory analysis on the sample(s) drawn, the laboratory must show evidence of a robust sampling plan and must evaluate the associated sampling uncertainty.

It is recognized, however, that in the course of analysis the laboratory has to sub-sample the sample received, and this forms part of the test SOP, which must devote a section to how the sub-sampling is done. If the sample received is not homogeneous, the associated sampling uncertainty has to be taken into account.

Although the FDIS states that when evaluating the measurement uncertainty (MU), all components which are of significance in the given situation shall be identified and taken into account using appropriate methods of analysis, its Clause 7.6.3 Note 2 further states that “for a particular method where the measurement uncertainty of the results has been established and verified, there is no need to evaluate measurement uncertainty for each result if it can demonstrate that the identified critical influencing factors are under control”.

To me, this means that all identified critical uncertainty-influencing factors must be continually monitored. This will add to the workload of the laboratory concerned, which will have to keep track of many contributing components over time if the GUM method is used to evaluate its MU.

The main advantage of the top-down MU evaluation approach, based on holistic method performance using daily routine quality control data such as intermediate precision and bias estimates, is also recognized in Clause 7.6.3. Its Note 3 refers, as an informative reference, to ISO 21748, which uses repeatability, reproducibility and trueness estimates in the evaluation of MU.

Secondly, this clause in the FDIS suggests that once the uncertainty of a result obtained by the test method has been established, the MU of all test results in a predefined range can be estimated through a relative uncertainty calculation.
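Read that way, the scaling is a one-liner. The sketch below simply assumes a constant relative expanded uncertainty over the validated range; the 8 % figure and the example results are hypothetical.

```python
def expanded_u(result, u_rel=0.08):
    """Expanded uncertainty of any result within the validated range,
    assuming the relative expanded uncertainty u_rel established during
    validation (8 % here, a hypothetical figure) holds across that range."""
    return result * u_rel

for x in (2.0, 10.0, 45.0):                 # results within the assumed range
    print(f"{x} +/- {expanded_u(x):.2f}")
```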

 

Verifying Eurachem’s example A1 on sampling uncertainty

The Eurachem/CITAC Guide (2007) “Measurement Uncertainty arising from Sampling” provides examples of estimating sampling uncertainty. Its Example A1, on nitrate in glasshouse-grown lettuce, shows a summary of the classical ANOVA results for the duplicate method of MU estimation, without detailed calculations.

The 2007 NORDTEST Technical Report TR604 “Uncertainty from Sampling” gives examples of how to use relative range statistics to evaluate the double-split design, which is similar to the duplicate method suggested in the Eurachem guide.

We have used an Excel spreadsheet to verify the Eurachem results using the NORDTEST approach and found them satisfactory. A two-way ANOVA with the Excel Analysis ToolPak also gives similar results, although we have to combine the sum of squares between duplicate samples with the sum of squares of the interaction.
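For readers who prefer code to a spreadsheet, the sketch below is one Python rendering of the relative-range arithmetic for the balanced duplicate design (two samples per target, each analysed in duplicate). Our verification was done in Excel, so treat this as an assumed reimplementation rather than the exact workbook; the factor 1.128 is the usual range-to-standard-deviation conversion for duplicates.

```python
import math

D2 = 1.128   # range-to-standard-deviation factor for duplicates (n = 2)

def duplicate_design_rel_uncertainties(targets):
    """Relative analytical and sampling standard uncertainties from a
    balanced duplicate design.

    'targets' is a list of ((s1a1, s1a2), (s2a1, s2a2)) tuples: for each
    sampling target, two samples S1 and S2, each analysed twice.
    Returns (s_analysis_rel, s_sampling_rel).
    """
    anal_rel_ranges = []    # |A1 - A2| / mean, one per sample portion
    betw_rel_ranges = []    # |mean(S1) - mean(S2)| / mean, one per target

    for s1, s2 in targets:
        for a1, a2 in (s1, s2):
            anal_rel_ranges.append(abs(a1 - a2) / ((a1 + a2) / 2.0))
        m1, m2 = sum(s1) / 2.0, sum(s2) / 2.0
        betw_rel_ranges.append(abs(m1 - m2) / ((m1 + m2) / 2.0))

    s_anal = (sum(anal_rel_ranges) / len(anal_rel_ranges)) / D2
    s_betw = (sum(betw_rel_ranges) / len(betw_rel_ranges)) / D2
    # the spread of the sample means still contains half the analytical variance
    s_samp = math.sqrt(max(s_betw**2 - s_anal**2 / 2.0, 0.0))
    return s_anal, s_samp

# The measurement uncertainty then combines the two components:
#   s_meas = sqrt(s_samp**2 + s_anal**2),  U ~ 2 * s_meas  (relative)
```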

Verifying Eurachem Example A1 by NORDTEST method

A step-by-step ANOVA example on Sampling and Analysis


 

Recall basic ideas of ANOVA (Analysis of Variance)

 

There is a growing interest in sampling and sampling uncertainty amongst laboratory analysts. This is mainly because the newly revised ISO/IEC 17025 accreditation standard, to be implemented soon, has added new requirements for sampling and for estimating its uncertainty. The standard recognizes that a test result is only as good as the sample it is based on, and hence the importance of representative sampling cannot be overemphasized.

As with measurement uncertainty, appropriate statistical methods involving the analysis of variance (frequently abbreviated to ANOVA) have to be applied to estimate sampling uncertainty. Strictly speaking, the uncertainty of a measurement result has two contributing components, i.e. sampling uncertainty and analysis uncertainty. We have long been ignoring the sampling contribution.

ANOVA is indeed a very powerful statistical technique that can be used to separate and estimate the different sources of variation.

It is simple to compare two mean values obtained from testing two samples to see whether they differ significantly, using a Student’s t-test. But in analytical work we are often confronted with more than two means. For example, we may wish to compare the mean concentrations of protein in a sample solution stored under different temperatures and holding times, or to compare the concentration of an analyte determined by several test methods.

In the above examples there are two possible sources of variation. The first, which is always present, is due to the inherent random error of measurement. This within-sample variation can be estimated through a series of repeated tests.

The second possible source of variation is due to what are known as controlled (fixed-effect) or random-effect factors: in the above examples, the controlled factors are the storage temperature, the holding time and the method of analysis used. ANOVA then statistically analyzes this between-sample variation.

If there is one factor, either controlled or random, the statistical analysis is known as one-way ANOVA. When two or more factors are involved, there is a possibility of interaction between them, and in that case we conduct a two-way or multi-way ANOVA.
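The numerical sketch below illustrates the one-way case with purely hypothetical protein results (three storage temperatures, four replicate analyses each); it decomposes the total variation into the between-group and within-group mean squares described above.

```python
import numpy as np

# Hypothetical protein results for one sample solution stored at three temperatures
groups = [np.array([10.2, 10.4, 10.0, 10.3]),   # e.g. 4 degC
          np.array([10.1,  9.8, 10.0,  9.9]),   # e.g. 20 degC
          np.array([ 9.5,  9.7,  9.4,  9.6])]   # e.g. 37 degC

k = len(groups)                          # number of factor levels (groups)
N = sum(len(g) for g in groups)          # total number of observations
grand_mean = np.mean(np.concatenate(groups))

ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)        # between-group mean square
ms_within = ss_within / (N - k)          # within-group (random error) mean square
F = ms_between / ms_within               # compare with F(k-1, N-k) critical value

print(f"MS_between = {ms_between:.4f}, MS_within = {ms_within:.4f}, F = {F:.1f}")
```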

On this blog site, several short articles on ANOVA have been previously presented.  Valuable comments are always welcome.

https://consultglp.com/2017/04/04/anova-variance-testing-an-important-statistical-tool-to-know/

https://consultglp.com/wp-content/uploads/2017/01/analysis-of-variance-anova-revisited.pdf

https://consultglp.com/wp-content/uploads/2016/10/the-arithmetic-of-anova-calculations.pdf

https://consultglp.com/wp-content/uploads/2017/01/how-to-interpret-an-anova-table.pdf