Training and consultancy for testing laboratories.

Archive for the ‘Measurement uncertainty’ Category

A few words about Measurement Bias


In metrology, error is defined as “the result of measurement minus a given true value of the measurand”. 

What is ‘true value’?

ISO 3534-2:2006 (3.2.5) defines it as the “value which characterizes a quantity or quantitative characteristic perfectly defined in the conditions which exist when that quantity or quantitative characteristic is considered”, and the Note 1 that follows suggests that this true value is a theoretical concept which generally cannot be known exactly.

In other words, when you are asked to determine a certain analyte concentration in a given sample, the analyte present has a true value in that sample, and what we do in the experiment is merely try to estimate that value. No matter how accurate your method is and how many repeats you have performed on the sample to obtain an average value, we can never be 100% sure that this average is exactly the true value in the sample. We are bound to have a measurement error!

Actually, in our routine analytical work, we encounter three types of error, known as gross, random and systematic errors.

Gross errors, which lead to seriously unacceptable measurements, are committed through serious mistakes in the analysis process, such as using a titrant of the wrong concentration in a titration. Such an error is so serious that there is no alternative but to abandon the experiment and make a completely fresh start.

Such blunders, however, are easily recognized if a robust QA/QC program is in place, as the laboratory’s quality-check samples with known or reference values (i.e. ‘true’ values) will produce erratic results.

Secondly, when an analysis by a test method is repeated a large number of times, we get a set of variable data spreading around the average value of the results. It is interesting to see that data further away from the average value occur less and less frequently. This is the characteristic of random error.

There are many factors that can contribute to random error: the ability of the analyst to exactly reproduce the testing conditions, fluctuations in the environment (temperature, pressure, humidity, etc.), rounding in arithmetic calculations, electronic noise in the instrument detector, and so on. The variation of these repeated results is referred to as the precision of the method.

Systematic error, on the other hand, is a permanent deviation from the true result; no number of repeated analyses will improve the situation. It is also known as bias.

A color-deficient technician might persistently overestimate the end point in a titration, the extraction of an analyte from a sample may be only 90% efficient, or the on-line derivatization step before analysis by gas chromatography may not be complete. In each of these cases, if the results were not corrected for the problem, they would always be wrong, and always wrong by about the same amount for a particular experiment.

How do we know that we have a systematic error in our measurement?  

It can be easily estimated by measuring a reference material a large number of times.  The difference between the average of the measurements and the certified value of the reference material is the systematic error. It is important to know the sources of systematic error in an experiment and try to minimize and/or correct for them as much as possible.

If you have tried your very best and the final average result is still significantly different from the reference or true value, you have to correct the reported result by multiplying it by a certain correction factor. If R is the recovery factor, calculated by dividing your average test result by the reference or true value, the correction factor is 1/R.
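As a minimal numerical sketch of this recovery correction (all values are invented for illustration):

```python
# Hypothetical example of correcting a routine result for recovery.
reference_value = 5.00     # certified/reference value, e.g. mg/kg
mean_result = 4.50         # mean of repeated analyses of the reference material

R = mean_result / reference_value   # recovery factor: 4.50 / 5.00 = 0.90
correction_factor = 1 / R           # correction factor applied to reported results

test_result = 4.32                  # a routine sample result, mg/kg
corrected_result = test_result * correction_factor

print(round(corrected_result, 2))   # 4.8
```

The corrected result (4.8 mg/kg here) is what would be reported, ideally together with a note that a recovery correction has been applied.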

Today, there is another statistical term in use.  It is ‘trueness’.  

The measure of trueness is usually expressed in terms of bias.

Trueness in ISO 3534-2:2006 is defined as “the closeness of agreement between the expectation of a test result or a measurement result and a true value”, whilst ISO 15195:2018 defines trueness as “closeness of agreement between the average value obtained from a large series of results of measurements and a true value”. The definition of ISO 15195 is quite similar to those of ISO 15971:2008 and ISO 19003:2006. The ISO 3534-2 definition includes a note that, in practice, an “accepted reference value” can be substituted for the true value.

Is there a difference between ‘accuracy’ and ‘trueness’?

The difference between ‘accuracy’ and ‘trueness’ is shown in their respective ISO definitions.

ISO 3534-2:2006 (3.3.1) defines ‘accuracy’ as “closeness of agreement between a test result or measurement result and true value”, whilst the same standard, in (3.2.5), defines ‘trueness’ as “closeness of agreement between the expectation of a test result or measurement result and true value”. What does the word ‘expectation’ mean here? It actually refers to the average of the test results, as given in the definition of ISO 15195:2018.

Hence, accuracy is a qualitative parameter, whilst trueness can be quantitatively estimated through repeated analysis of a sample with a certified or reference value.

References:

ISO 3534-2:2006   “Statistics – Vocabulary and symbols – Part 2: Applied statistics”

ISO 15195:2018   “Laboratory medicine – Requirements for the competence of calibration laboratories using reference measurement procedures”

In the next blog, we shall discuss how the uncertainty of bias is evaluated. It is an uncertainty component which cannot be overlooked in our measurement uncertainty evaluation, if present.

Why is measurement uncertainty important in analytical chemistry?


The purpose of conducting a laboratory analysis is to make informed decisions about the samples drawn. The result of an analytical measurement can be deemed incomplete without a statement (or at least an implicit knowledge) of its uncertainty, because we cannot make a valid decision based on the result alone, and nearly all analysis is conducted to inform a decision.

We know that the uncertainty of a result is a parameter that describes a range within which the value of the quantity being measured is expected to lie, taking into account all sources of error, with a stated degree of confidence (usually 95%).  It characterizes the extent to which the unknown value of the targeted analyte is known after measurement, taking account of the given information from the measurement.

With a knowledge of uncertainty in hand, we can make the following typical decisions based on analysis:

  • Does this particular laboratory have the capacity to perform analyses of legal and statutory significance?
  • Does this batch of pesticide formulation contain less than the maximum allowed concentration of an impurity?
  • Does this batch of animal feed contain at least the minimum required concentration of profat (protein + fat)?
  • How pure is this batch of precious metal?

The figure below shows a variety of instances affecting decisions about compliance with externally imposed limits or specifications.  The error bars can be taken as expanded uncertainties, effectively intervals containing the true value of the concentration of the analyte with 95% confidence.

We can make the following observations from the above illustration:

  1. Result A clearly indicates that the test result is below the limit, as even the upper end of its uncertainty interval is below the limit.
  2. Result B is below the limit, but the upper end of its uncertainty interval is above the limit, so we are not sure whether the true value is below the limit.
  3. Result C is above the limit, but the lower end of its uncertainty interval is below the limit, so we are not sure that the true value is above the limit.
  4. What conclusions can we draw from the equal results D and E? Both are above the limit but, while D is clearly above it, E is not, because its greater uncertainty interval extends below the limit.
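The comparisons above can be sketched in code. This is a minimal illustration (the function name and the numeric values are invented for the example) of classifying a result against an upper limit using its expanded uncertainty:

```python
def classify(result, U, limit):
    """Compare a result against an upper limit, given its expanded
    uncertainty U (the half-width of the ~95% coverage interval)."""
    lower, upper = result - U, result + U
    if upper < limit:
        return "clearly below the limit"        # case A
    if lower > limit:
        return "clearly above the limit"        # case D
    return "inconclusive - interval straddles the limit"  # cases B, C, E

# invented example values against a limit of 5.0
print(classify(4.0, 0.5, 5.0))   # clearly below the limit
print(classify(5.8, 0.5, 5.0))   # clearly above the limit
print(classify(5.2, 0.5, 5.0))   # inconclusive - interval straddles the limit
```

A laboratory’s decision rule then specifies how the “inconclusive” cases are to be reported, and at what level of risk.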

In short, we have to make decisions on how to act upon results B, C and E. What level of risk can we afford in assuming that the test result conforms with the stated specification or complies with the regulatory limit?

In making such decision rules, we must be serious about the evaluation of measurement uncertainty, making sure that the uncertainty obtained is reasonable. If not, any decision made on conformity or compliance will be meaningless.

A few words on sampling


  1. What is sampling?

Sampling is a process of selecting a portion of material to represent, or provide information about, a larger body of material (statistically termed the ‘population’). It is an essential part of the whole testing and calibration process.

The old ISO/IEC 17025:2005 standard defines sampling as “a defined procedure whereby a part of a substance, material or product is taken to provide for testing or calibration of a representative sample of the whole.  Sampling may also be required by the appropriate specification for which the substance, material or product is to be tested or calibrated. In certain cases (e.g. forensic analysis), the sample may not be representative but is determined by availability.” 

In other words, sampling should generally be carried out in a random manner, but so-called judgement sampling is also allowed in specific cases. The judgement sampling approach involves using knowledge about the material to be sampled, and about the reason for sampling, to select specific samples for testing. For example, an insurance loss adjuster acting on behalf of a cargo insurance company to inspect a shipment of cargo damaged during transit will apply a judgement sampling procedure, selecting the worst-damaged samples from the lot in order to determine the cause of damage.

2. Types of samples to be differentiated

Field sample: Random sample(s) taken from the material in the field. Several random samples may be drawn and composited in the field before sending them to the laboratory for analysis.

Laboratory sample: Sample(s) as prepared for sending to the laboratory, intended for inspection or testing.

Test sample: A sub-sample, i.e. a selected portion of the laboratory sample, taken for laboratory analysis.

3. Principles of sampling

Randomization

Generally speaking, random sampling is a method of selection whereby each possible member of a population has an equal chance of being selected, so that unintended bias is minimized. It provides an unbiased estimate of the population parameters of interest (e.g. the mean), normally in terms of analyte concentration.

Representative samples

“Representative” means something like “sufficiently like the population to allow inferences about the population”. Taking a single sample through a random process does not necessarily give a representative composition of the bulk. It is entirely possible that the composition of a particular randomly selected sample is completely unlike the bulk composition, unless the population is very homogeneous in its composition (such as drinking water).

Remember the saying that a test result is no better than the sample it is based upon. The sample taken for analysis should be as representative of the sampling target as possible. Therefore, we must take the sampling variance into serious consideration: the larger the sampling variance, the more likely it is that individual samples will be very different from the bulk.

Hence, in practice, we must carry out representative sampling, which involves obtaining samples that are not only unbiased but also have a sufficiently small variance for the task in hand. In other words, in addition to choosing randomization procedures that give unbiased results, we need to decide on the number of random samples to be collected in the field so as to keep the sampling variance small. This is normally decided based on information such as the specification limits and the uncertainty expected.

Composite samples

Often it is useful to combine a collection of field samples into a single homogenized laboratory sample for analysis. The measured value for the composite laboratory sample is then taken as an estimate of the mean value for the bulk material.

Note also that the importance of a sound sub-sampling process in the laboratory cannot be overemphasized. Hence, an SOP must be prepared to guide the laboratory analyst in drawing the test sample for measurement from the sample that arrives at the laboratory.

4. Sampling uncertainty

Today, sampling uncertainty is recognized as an important contributor to the measurement uncertainty associated with the reported results.

It is to be noted that sampling uncertainty cannot be estimated as a standalone entity; the analytical uncertainty has to be evaluated at the same time. For a fairly homogeneous population, a one-factor ANOVA (analysis of variance) will suffice to estimate the overall measurement uncertainty based on the between- and within-sample variances. See https://consultglp.com/2018/02/19/a-worked-example-to-estimate-sampling-precision-measuremen-uncertainty/
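As a rough sketch of the one-factor ANOVA idea in the common duplicate design (each field sample analyzed twice; the data below are invented for illustration), the within-sample (analytical) and between-sample (sampling) variances can be separated like this:

```python
from statistics import mean

# invented duplicate analyses: two results per field sample
duplicates = [(4.2, 4.4), (4.8, 4.7), (4.1, 4.5), (5.0, 4.8), (4.4, 4.3)]
m = len(duplicates)

# analytical (within-sample) variance from duplicate differences: sum(d^2) / (2m)
s_anal_sq = sum((a - b) ** 2 for a, b in duplicates) / (2 * m)

# the variance of the pair means contains the sampling variance
# plus half the analytical variance (each mean averages two analyses)
pair_means = [mean(pair) for pair in duplicates]
grand_mean = mean(pair_means)
var_means = sum((x - grand_mean) ** 2 for x in pair_means) / (m - 1)
s_samp_sq = max(var_means - s_anal_sq / 2, 0.0)

# combined standard uncertainty for a single sample analyzed once
s_meas = (s_samp_sq + s_anal_sq) ** 0.5
print(round(s_meas, 3))   # 0.306
```

The combined standard uncertainty here reflects both where the sample was taken and how it was analyzed; for full details follow the worked example linked above.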

However, for a heterogeneous population, such as the soil of a contaminated site, the sampling-location variance has to be taken into account in addition to the sampling variance. The more complicated calculations involve the application of the two-way ANOVA technique. A EURACHEM worked example can be found at: https://consultglp.com/2017/10/10/verifying-eurachems-example-a1-on-sampling-uncertainty/

Decision rule in conformance testing with a given tolerance limit

Today there is a dilemma for an ISO/IEC 17025 accredited laboratory service provider in issuing a statement of conformity with specification to clients after testing, particularly when the analysis result of the test sample is close to the specified value and its upper or lower measurement uncertainty crosses the limit. The laboratory manager has to decide on the level of risk he is willing to take in stating such conformity.

However, certain trades buy goods and commodities with a given tolerance allowance against the buying specification. A good example is the trading of granular or pelletized compound fertilizers, which contain multiple primary nutrients (e.g. N, P, K) in each individual granule. A buyer usually allows a permissible tolerance of 2 to 5% below the buying specification as a lower limit to the declared value, to allow for variation in the manufacturing process. Some government departments of agriculture even allow a lower tolerance limit of up to 10% in their procurement of compound fertilizers, which are re-sold to farmers at a discount.

Given the permissible lower tolerance limit, the fertilizer buyer has taken on his own risk of receiving a consignment that might be below his buying specification. As rightly pointed out in Eurolab Technical Report No. 01/2017, “Decision rule applied to conformity assessment”, by giving a tolerance limit above the upper specification limit, or below the lower specification limit, we can classify this as the customer’s or consumer’s risk. In the hypothesis-testing context, we say this is a type II (beta) error.

What will be the decision rule of the test laboratory in issuing its conformity statement under such a situation?

Let’s discuss this through an example. 

A government procurement department purchased a consignment of 3000 bags of granular compound fertilizer with a guarantee of available plant nutrients expressed as a percentage by weight; e.g. an NPK 15-15-15 marking on a bag indicates the presence of 15% nitrogen (N), 15% phosphorus (P2O5) and 15% potash (K2O) nutrients. Representative samples were drawn and analyzed in its own fertilizer laboratory.

In the case of potash (K2O) content of 15% w/w, a permissible tolerance limit of 13.5% w/w is stated in the tender document, indicating that a fertilizer chemist can declare conformity at this tolerance level. The successful supplier of the tender will be charged a calculated fee for any specification non-conformity.

Our conventional approach to decision rules has been based on the comparison of single or interval conformity limits with single measurement results. Today, we realize that each test result has its own measurement variability, normally expressed as a measurement uncertainty at the 95% confidence level.

Therefore, it is obvious that the conventional approach of stating conformity based on a single measurement result exposes the laboratory to a 50% risk that the true (actual) value of the test parameter falls outside the given tolerance limit, rendering the consignment non-conforming! Is a 50% risk bearable by the test laboratory?

Let’s say the average test result for the K2O content of this fertilizer sample was found to be 13.8 ± 0.55%w/w. What is the critical value for deciding on conformity in this particular case at the usual 95% confidence level? Can we declare the result of 13.8%w/w to be in conformity with the specification, with reference to its given tolerance limit of 13.5%w/w?

Let us first see how the critical value is estimated.  In hypothesis testing, we make the following hypotheses:

Ho :  true K2O content ≥ 13.5%w/w

H1 :  true K2O content < 13.5%w/w

Use the following relationship, with the assumption that the variation of the laboratory analysis results follows the normal (Gaussian) probability distribution:

z = (mu - x(bar))/u, which rearranges to x(bar) = mu + 1.645 x u

where

mu is the tolerance value of the specification, i.e. 13.5%w/w,

x(bar) is the critical value with 95% confidence (alpha = 0.05),

z is the z-score of -1.645 for H1’s one-tailed test, and

u is the standard uncertainty of the test, i.e. U/2 = 0.55/2 = 0.275%w/w.

By calculation, we have the critical value x(bar) = 13.95%w/w; any measured mean below this value is, statistically speaking, not significantly different from 13.5%w/w at 95% confidence.

Assuming the measurement uncertainty remains constant in this measurement region, 13.95%w/w minus its expanded uncertainty U of 0.55%w/w gives 13.40%w/w, leaving (13.5 - 13.4) = 0.1%w/w of the uncertainty interval below the lower tolerance limit, and thus an exposure of about 0.1/(2 x 0.55) = 9.1% risk.

When the reported test result of 13.8%w/w has an expanded uncertainty U of 0.55%w/w, the range of measured values is 13.25 to 14.35%w/w, indicating that (13.50 - 13.25) = 0.25%w/w of the interval lies below the lower tolerance limit, an exposure of about 0.25/(2 x 0.55) = 22.7% risk in claiming conformity to the specification with reference to the given tolerance limit.
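The arithmetic of this worked example can be reproduced as follows (a sketch using the figures from the text; the risk fractions use the same linear approximation over the 2U-wide interval as in the paragraphs above):

```python
mu = 13.5        # lower tolerance limit, %w/w
U = 0.55         # expanded uncertainty (coverage factor k = 2), %w/w
u = U / 2        # standard uncertainty = 0.275 %w/w
z = 1.645        # one-tailed z-score at 95% confidence

# critical mean for a confident conformity claim: mu + z*u
critical = mu + z * u
print(round(critical, 2))               # 13.95

# fraction of the 2U-wide interval falling below the tolerance limit
def risk_fraction(result):
    below = max(mu - (result - U), 0.0)
    return below / (2 * U)

print(round(risk_fraction(13.95), 3))   # 0.091 -> about 9.1 % risk
print(round(risk_fraction(13.80), 3))   # 0.227 -> about 22.7 % risk
```

Note that these are the article’s interval-fraction approximations; a full probabilistic treatment would integrate the assumed normal distribution instead.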

Visually, we can present these situations in the following sketch with U = 0.55%w/w:

The fertilizer laboratory manager thus has to make an informed decision rule on what level of risk is bearable in making a statement of conformity. Even the critical value of 13.95%w/w estimated by hypothesis testing carries an exposure of 9.1% risk instead of the expected 5%. Why?

The reason is that the measurement uncertainty was traditionally evaluated as a two-tailed (alpha = 0.025 in each tail) interval under the normal probability distribution with a coverage factor of 2, whilst the hypothesis test is one-tailed (alpha = 0.05) with a z-score of 1.645.

To reduce the testing laboratory’s risk in issuing a statement of conformity to virtually zero, the laboratory manager may want to take a safe bet by setting the critical reporting value at (13.5 + 0.55) = 14.05%w/w, so that its lower uncertainty bound is exactly 13.5%w/w. Barring any error in the evaluation of its measurement uncertainty, this conservative approach lets the test laboratory run practically zero risk in issuing its conformity statement.

It may be noted that ISO/IEC 17025:2017 requires the laboratory to communicate with its customers and clearly agree the decision rule with them before undertaking the analytical task. This avoids unnecessary misunderstanding after the issuance of a test report containing a statement of conformity or non-conformity.

Dilemmas in making decision rules for conformance testing


In carrying out routine testing on samples of commodities and products, we normally encounter requests by clients to issue a statement on the conformity of the test results against their stated specification limits or regulatory limits, in addition to standard reporting.

Conformance testing, as the term suggests, is testing to determine whether a product or medium complies with the requirements of a product specification, contract, standard or safety regulation limit. It refers to the issuance of a compliance statement to customers by the test/calibration laboratory after testing. Examples of such statements are: Pass/Fail; Positive/Negative; On specification/Off specification.

Generally, such statements of conformance are issued after testing, against a target value with a certain degree of confidence.  This is because there is always an element of measurement uncertainty associated with the test result obtained, normally expressed as X +/- U with 95% confidence.

It has been our usual practice over the years to make a direct comparison of the measurement value with the specification or regulatory limits, without realizing the risk involved in making such a conformance statement.

For example, if the specification minimum for the fat content of a product is 10%m/m, we would without hesitation issue a statement of conformity to the client when the sample test result is reported as exactly 10.0%m/m, little realizing that there is a 50% chance that the true value of the analyte in the sample lies outside the limit! See Figure 1 below.

Here, we might have assumed that the specification limit has taken measurement uncertainty into account (which is not normally true), or that our measurement value has zero uncertainty, which is also untrue. Hence, knowing that there is uncertainty in all measurements, we are actually taking roughly a 50% risk that the true value of the test parameter lies outside the specification when making such a conformity statement.

Various guides published by learned professional organizations such as ILAC, EuroLab and Eurachem have suggested ways to frame decision rules for such situations. Some propose adding a certain estimated error allowance to the measurement uncertainty of a test result, and stating the result as a pass only when the result, after allowing for this enlarged uncertainty, is still above the minimum acceptance limit. Similarly, a ‘fail’ statement is made for a test result when, even after allowing for the enlarged uncertainty, it remains below the minimum acceptance limit.

The aim of adding an estimated error allowance is to ensure “safe” conclusions about whether measurement errors are within acceptable limits. See Figure 2 below.

Others have suggested basing the decision only on the measurement uncertainty associated with the test result, without adding an estimated error. See Figure 3 below:

This is to ensure that if another laboratory is tasked with taking the same measurements and uses the same decision rule, it will come to a similar conclusion about a “pass” or “fail”, so as to avoid any undesirable implications.

However, by doing so, we are faced with a dilemma on how to explain to the client who is a layman on the rationale to make such pass/fail statement.

For discussion’s sake, let’s say we have obtained a mean fat content of 10.30 ± 0.45%m/m, indicating that the true value of the fat content lies in the range 9.85 to 10.75%m/m with 95% confidence. A simple calculation tells us that there is roughly a 15% chance that the true value lies below the 10%m/m minimum mark. Do we want to take this risk by stating that the result conforms with the specification? In the past, we used to do so.

In fact, if we were to carry out a hypothesis (or significance) test, we would find that the mean value of 10.30%m/m, with a standard uncertainty of 0.225% (obtained by dividing 0.45% by a coverage factor of 2), is not significantly different from the target value of 10.0%m/m, given a set type I error (alpha) of 0.05. So, statistically speaking, this is a pass situation. In this sense, are we safe to make this conformity statement? The decision is yours!

Now, the opposite is also very true.

Still on the same example, a hypothesis test would show that an average result of 9.7%m/m with a standard uncertainty of 0.225%m/m is not significantly different from the target specification value of 10.0%m/m at 95% confidence. But do you want to declare that this test result conforms with the specification limit of 10.0%m/m minimum? Traditionally we don’t, and that is a very safe statement on your side. But if you claim it to be off-specification, your client may not be happy with you if he understands hypothesis testing; he may even challenge you for failing his shipment.

In fact, hypothesis testing gives a critical value of 9.63%m/m, below which the sample analyzed is significantly different from 10.0%. That means any figure lower than 9.63%m/m can confidently be claimed to be off-specification!
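These two calculations can be sketched as follows, using the figures from the text:

```python
mu = 10.0       # specification minimum, %m/m
u = 0.225       # standard uncertainty (U = 0.45 with k = 2)
z_crit = 1.645  # one-tailed critical z at 95% confidence

def z_score(result):
    return (result - mu) / u

# 10.30 and 9.70 both give |z| < 1.645: not significantly different from 10.0
print(round(z_score(10.30), 2))   # 1.33
print(round(z_score(9.70), 2))    # -1.33

# critical value below which a result is significantly below 10.0 %m/m
critical_low = mu - z_crit * u
print(round(critical_low, 2))     # 9.63
```

Any result below 9.63%m/m fails the one-tailed test at 95% confidence, matching the figure quoted in the text.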

Indeed, these are the challenges faced by third-party testing providers today with the implementation of the new ISO/IEC 17025:2017 standard.

To ‘inch’ the mean measured result nearer to the specification limit from either direction, you may want to review your measurement uncertainty evaluation associated with the measurement. If you can ‘improve’ the uncertainty by narrowing the uncertainty range, your mean value will come closer to the target value. Of course, there is always a limit for doing so.

Therefore, you have to make decision rules that address the risk you can afford to take in making such statements of conformance or compliance. Also, before starting the sample analysis and implementing these rules, you must communicate with your client and obtain a written agreement, as required by the revised ISO/IEC 17025 accreditation standard.

Basis of decision rule on conformity testing

There are three fundamental types of risk associated with the uncertainty approach to making conformity or compliance decisions for tests against a specification interval or regulatory limits. Conformity decision rules can then be applied accordingly.

In summary, they are:

  1. Risk of false acceptance of a test result
  2. Risk of false rejection of a test result
  3. Shared risk

The basis of the decision rule is to determine an “Acceptance zone” and a “Rejection zone”, such that if the measurement result lies in the acceptance zone, the product is declared compliant, and, if it is in the rejection zone, it is declared non-compliant.  Hence, a decision rule documents the method of determining the location of acceptance and rejection zones, ideally including the minimum acceptable level of the probability that the value of the targeted analyte lies within the specification limits.

A straightforward decision rule that is widely used today applies where a measurement implies non-compliance with an upper or lower specification limit if the measured value exceeds the limit by its expanded uncertainty, U.

It should be emphasized that this approach is based on the assumption that the uncertainty of measurement is represented by a normal (Gaussian) probability distribution function (PDF), which is consistent with typical measurement results (assuming the applicability of the Central Limit Theorem).
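A minimal sketch of this widely used rule for an upper limit (the function name and values are illustrative only, not part of any standard):

```python
def implies_noncompliance(result, U, upper_limit):
    """Decision rule described in the text: a result implies non-compliance
    with an upper limit only when it exceeds the limit by more than the
    expanded uncertainty U, i.e. the whole ~95 % interval lies above it."""
    return result - U > upper_limit

# illustrative results against an upper limit of 10.0 with U = 0.5
print(implies_noncompliance(10.8, 0.5, 10.0))   # True: clearly non-compliant
print(implies_noncompliance(10.2, 0.5, 10.0))   # False: inside the guard band
```

The span from the limit up to limit + U acts as a guard band: results falling inside it cannot be declared non-compliant with the stated confidence.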

Current practices

When performing a measurement and subsequently making a statement of conformity, for example, in or out-of-specification to manufacturer’s specifications or Pass/Fail to a particular requirement, there can be only two possible outcomes:

  • The result is reported as conforming with the specification
  • The result is reported as not conforming with the specification

Currently, the decision rule is often based on a direct comparison of the measurement value with the specification or regulatory limits. So, when the test result falls exactly on the specification value, we would gladly state its conformity with the specification. The reasoning may be that these limits are deemed to have taken measurement uncertainty into account (which is not normally true), or it may be assumed that the laboratory’s measurement value has zero uncertainty! But, recognizing that there is always uncertainty in all measurements, we are actually taking a 50% risk that the actual or true value of the test parameter lies outside the specification. Do we really want to undertake such high-risk reporting? If not, how are we going to minimize our exposure to risk when making such statements?

Decision rule and conformity testing

What is conformity testing?

Conformance testing is testing to determine whether a product, system or just a medium complies with the requirements of a product specification, contract, standard or safety regulation limit.  It refers to the issuance of a compliance statement to customers after testing.  Examples are:  Pass/Fail; Positive/Negative; On specs/Off specs, etc. 

Generally, statements of conformance are issued after testing, against a target value of the specification with a certain degree of confidence. Conformance testing is usually applied in the forensic, food, medical, pharmaceutical and manufacturing fields. Most QC laboratories in manufacturing industries (such as petroleum oils, foods and pharmaceutical products) and the laboratories of government regulatory bodies regularly check the quality of an item against stated specifications and regulatory safety limits.

Decision rule involves measurement uncertainty

Why must measurement uncertainty be involved in the discussion of decision rule? 

To answer this, let us first be clear about the ISO definition of a decision rule. ISO/IEC 17025:2017 clause 3.7 defines it as a “rule that describes how measurement uncertainty is accounted for when stating conformity with a specified requirement”.

Therefore, a decision rule gives a prescription for the acceptance or rejection of a product based on consideration of the measurement result, its associated uncertainty, and the specification limit or limits. Where product testing and calibration provide for reporting measured values, levels of measurement decision risk acceptable to both the customer and the supplier must be agreed. Statistical tools such as hypothesis testing, covering both type I and type II errors, are to be applied in decision-risk assessment.

Decision rule and ISO/IEC 17025:2017

Notes on decision rule as per ISO/IEC 17025:2017 requirements

Introduction

The revised ISO/IEC 17025:2017 laboratory accreditation standard introduces a new concept, i.e. “risk-based thinking”, which requires the operator of an accredited laboratory to plan and implement actions to address possible risks and opportunities associated with the laboratory activities, including the issuance of statements of conformity with product specifications or compliance statements against regulatory limits.

The risk-based approach to management system implementation is one in which the breadth and depth of the implementation of particular clauses is varied to best suit the perceived risk involved for that particular laboratory activity.

Indeed, the laboratory is responsible for deciding which risks and opportunities need to be addressed. The aims as stated in the ISO standard clause 8.5.1 are:

  1. to give assurance that the management system achieves its intended results;
  2. to enhance opportunities to achieve the purpose and objectives of the laboratory;
  3. to prevent, or minimize, undesired impacts or interfering elements that could cause failures in the laboratory activities; and
  4. to achieve improvement of the activities.

The decision rule as required in ISO/IEC 17025:2017

On the subject of decision rules for conformity testing, the word ‘risk’ can be found in the following relevant clauses of this international standard:

Clause 7.1.3

“When the customer requests a statement of conformity to a specification or standard for the test or calibration (e.g. pass/fail, in-tolerance/out-of-tolerance), the specification or standard and the decision rule shall be clearly defined. Unless inherent in the requested specification or standard, the decision rule selected shall be communicated to, and agreed with, the customer.”

Clause 7.8.6.1:

“When a statement of conformity to a specification or standard is provided, the laboratory shall document the decision rule employed, taking into account the level of risk (such as false accept and false reject and statistical assumptions) associated with the decision rule employed, and apply the decision rule.”

Clause 7.8.6.2

The laboratory shall report on the statement of conformity, such that the statement clearly identifies:

  1. to which results the statement of conformity applies;
  2. which specifications, standards or parts thereof are met or not met;
  3. the decision rule applied (unless it is inherent in the requested specification or standard).

From these specified requirements, it is obvious that clearly defined decision rules must be in place when a laboratory’s customer requests the inclusion of a statement of conformity with the specification in the test report. The tasks before the accredited laboratory operator are, therefore, how to set the decision rules for a tested commodity or product, based on the laboratory’s own estimated measurement uncertainty, and how to communicate with and convince the customers of its choice of reporting limits against the given specification or regulatory limits when issuing such conformity statements.

Examples on how to calculate combined standard uncertainty (edited)

Uncertainty calculation

It is very important for anyone interested in the evaluation of measurement uncertainty to fully understand the very basic principles in calculating the combined standard uncertainty.  Let’s look at some worked examples ….

Calculating standard uncertainties for each uncertainty contribution

In evaluating the combined uncertainty of a testing method from various sources of uncertainty, we need to ensure that we work on a platform of standard uncertainties, expressed as standard deviations throughout. This is because, in addition to the standard uncertainty (u) values obtained by our own evaluation (Type A uncertainty), we may also encounter so-called Type B uncertainty contributions: uncertainty (U) values given by a third party, or derived from experience and other information, in various forms. Read on … How to calculate standard uncertainties for each source of uncertainty