Training and consultancy for testing laboratories.

Archive for June, 2019

Why is measurement uncertainty important in analytical chemistry?

The purpose of conducting a laboratory analysis is to make informed decisions about the samples drawn.  The result of an analytical measurement can be deemed incomplete without a statement (or at least an implicit knowledge) of its uncertainty.  This is because we cannot make a valid decision based on the result alone, and nearly all analysis is conducted to inform a decision.

We know that the uncertainty of a result is a parameter that describes a range within which the value of the quantity being measured is expected to lie, taking into account all sources of error, with a stated degree of confidence (usually 95%).  It characterizes the extent to which the unknown value of the targeted analyte is known after measurement, taking account of the given information from the measurement.

With a knowledge of uncertainty in hand, we can make the following typical decisions based on analysis:

  • Does this particular laboratory have the capacity to perform analyses of legal and statutory significance?
  • Does this batch of pesticide formulation contain less than the maximum allowed concentration of an impurity?
  • Does this batch of animal feed contain at least the minimum required concentration of profat (protein + fat)?
  • How pure is this batch of precious metal?

The figure below shows a variety of instances affecting decisions about compliance with externally imposed limits or specifications.  The error bars can be taken as expanded uncertainties, effectively intervals containing the true value of the concentration of the analyte with 95% confidence.

We can make the following observations from the above illustration:

  1. Result A clearly indicates that the test result is below the limit, as even the upper end of its uncertainty interval is below the limit.
  2. Result B is below the limit, but the upper end of its uncertainty interval is above the limit, so we are not sure that the true value is below the limit.
  3. Result C is above the limit, but the lower end of its uncertainty interval is below the limit, so we are not sure that the true value is above the limit.
  4. What conclusions can we draw from the equal results D and E? Both results are above the limit but, while D is clearly above it, E is not, because its larger uncertainty interval extends below the limit.

In short, we have to decide how to act upon results B, C and E.  What level of risk can we afford in assuming that the test result conforms to the stated specification or complies with the regulatory limit?

When setting such a decision rule, we must be rigorous in evaluating the measurement uncertainty, making sure that the uncertainty obtained is reasonable.  If it is not, any decision made on conformity or compliance will be meaningless.
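As a minimal sketch of such a decision rule in Python, the snippet below classifies a reported result against an upper limit using its expanded uncertainty; the limit, results and uncertainties are purely hypothetical values chosen to mimic cases A to E above.

# Classify a reported result against an upper limit using its expanded
# uncertainty U (k = 2, ~95% confidence).  All values are hypothetical.

def classify(result, U, limit):
    low, high = result - U, result + U
    if high < limit:
        return "clearly below the limit"
    if low > limit:
        return "clearly above the limit"
    if result <= limit:
        return "below the limit, but the interval crosses it - a decision rule is needed"
    return "above the limit, but the interval crosses it - a decision rule is needed"

limit = 10.0  # hypothetical regulatory limit, mg/kg
results = {"A": (7.0, 1.0), "B": (9.5, 1.0), "C": (10.5, 1.0),
           "D": (11.5, 1.0), "E": (11.5, 2.5)}

for name, (value, U) in results.items():
    print(f"Result {name}: {value} +/- {U} mg/kg -> {classify(value, U, limit)}")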

Initial data analysis (IDA)

Data analysis is a systematic process of examining datasets in order to draw valid conclusions about the information they contain, increasingly with the aid of specialized systems and software.  It leads to the discovery of useful information that supports informed decisions and helps to verify or disprove scientific or business models, theories or hypotheses.

As researchers or laboratory analysts, we must have the drive to obtain quality data in our work.  A careful plan for database design and statistical analysis, covering variable definitions, plausibility checks, data quality checks and the ability to identify likely errors and resolve data inconsistencies, has to be established before embarking on full data collection.  More importantly, the plan should not be altered without the agreement of the project steering team, in order to reduce the extent of data dredging or hypothesis fishing that leads to false-positive studies.  Shortcomings in initial data analysis may result in adopting inappropriate statistical methods or drawing incorrect conclusions.

Our first step of initial data analysis is to check the consistency and accuracy of the data, for example by looking for any outlying values.  This can be visualized by plotting the data against the time of data collection or other independent parameters, and should be done before embarking on more complex analyses.
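As a minimal sketch of such a first-pass screen, assuming a hypothetical file results.csv with columns collected_at and result, one might plot the results in collection order and flag values that lie far from the bulk of the data (the three-robust-standard-deviation rule used here is just one possible choice):

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data set: one result per row, with the time it was collected
df = pd.read_csv("results.csv", parse_dates=["collected_at"]).sort_values("collected_at")

# Flag points lying more than 3 robust standard deviations from the median
median = df["result"].median()
mad = (df["result"] - median).abs().median()
robust_sd = 1.4826 * mad                      # MAD scaled to approximate a standard deviation
df["suspect"] = (df["result"] - median).abs() > 3 * robust_sd

# Plot the data in collection order and highlight the suspect points
plt.plot(df["collected_at"], df["result"], "o-", label="results")
plt.plot(df.loc[df["suspect"], "collected_at"], df.loc[df["suspect"], "result"],
         "rx", markersize=10, label="suspect")
plt.xlabel("Time of collection")
plt.ylabel("Result")
plt.legend()
plt.show()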

After we are satisfied that the data are reasonably error-free, we should become familiar with the collected data and examine them for consistency of data formats, the number and pattern of missing values, the probability distributions of the continuous variables, and so on.  For more advanced initial analysis, decisions have to be made about the way variables are used in further analyses, with the aid of data analytics technologies or statistical techniques.  These variables can be studied in their raw form, transformed to some standardized format, or categorized or stratified into groups for modeling.
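Continuing the sketch above with the same hypothetical file, a quick familiarization pass over formats, missing values and distributions might look like this:

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("results.csv")

print(df.dtypes)        # are the data formats consistent with what we expect?
print(df.isna().sum())  # how many values are missing in each column?
print(df.describe())    # location, spread and range of each numeric variable

# Quick look at the probability distribution of each continuous variable
df.select_dtypes("number").hist(bins=20)
plt.show()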

Replication and successive dilution in constructing a calibration curve

An analytical instrument generally needs to be calibrated before measurements are made on prepared sample solutions, usually through construction of a linear regression between the analytical responses and the concentrations of the standard analyte solutions.  A linear regression is favored over a quadratic or exponential curve as it incurs minimum error.
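As a minimal sketch, such a straight-line calibration can be fitted and used to interpolate an unknown as follows; the standard concentrations and instrument responses are hypothetical:

import numpy as np
from scipy import stats

conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])       # standard concentrations, mg/L (hypothetical)
resp = np.array([0.12, 0.25, 0.49, 0.98, 1.95])   # instrument responses (hypothetical)

fit = stats.linregress(conc, resp)
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.4f}, r = {fit.rvalue:.4f}")

# Interpolate the concentration of an unknown sample from its measured response
y_unknown = 0.60
x_unknown = (y_unknown - fit.intercept) / fit.slope
print(f"estimated concentration = {x_unknown:.2f} mg/L")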

Replication

Replication in standard calibration is useful if the replicates are genuinely independent.  The calibration precision is improved by increasing the number of replicates, n, and replication provides additional checks on the preparation of the calibration solutions and on the precision at the different concentrations.

The trend in precision can be read from the variances at these calibration points.  One calibration curve might show roughly constant standard deviations at all the plotted points, whilst another may show a standard deviation that increases in proportion to the analyte concentration.  The former behavior is known as “homoscedasticity” and the latter as “heteroscedasticity”.
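A minimal sketch of such a check, assuming three independent replicate responses at each hypothetical standard level:

import numpy as np

conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # standard concentrations, mg/L (hypothetical)
replicates = np.array([                        # three independent responses per level (hypothetical)
    [0.118, 0.121, 0.124],
    [0.246, 0.252, 0.249],
    [0.487, 0.495, 0.492],
    [0.975, 0.990, 0.982],
    [1.940, 1.965, 1.952],
])

means = replicates.mean(axis=1)
sds = replicates.std(axis=1, ddof=1)
for c, m, s in zip(conc, means, sds):
    print(f"{c:5.1f} mg/L: mean = {m:.3f}, s = {s:.4f}, RSD = {100 * s / m:.2f}%")

# Roughly constant s across the levels suggests homoscedasticity; s rising in proportion
# to concentration suggests heteroscedasticity (weighted regression may then be preferable).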

It may be noted that increasing the number of independent concentration points actually has little benefit beyond a certain extent.  In fact, after about six calibration points, it can be shown that any further increase in the number of observations has a relatively modest effect on the standard error of prediction for a predicted x value, unless the number of points increases very substantially, say to 30, which of course is not practical.
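One common form of the standard error of prediction for a concentration read off a straight-line calibration, as given in standard texts on statistics for analytical chemistry, makes this clear, since the number of calibration points n enters only through a 1/n term under the square root:

s_{x_0} = \frac{s_{y/x}}{b}\sqrt{\frac{1}{m} + \frac{1}{n} + \frac{(\bar{y}_0 - \bar{y})^2}{b^2 \sum_i (x_i - \bar{x})^2}}

where b is the slope, s_{y/x} the residual standard deviation of the regression, m the number of replicate measurements of the unknown, \bar{y}_0 the mean response of the unknown and \bar{y} the mean of the calibration responses.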

Instead, independent replication at each calibration point can be recommended as a method of improving uncertainties; it is a viable way of increasing n when the best performance is desired.

However, replication suffers from an important drawback in practice.  Many analysts are inclined simply to inject a calibration standard solution twice, instead of preparing duplicate standard solutions separately for injection.  By injecting the same standard solution twice into the analytical instrument, the plotted residuals will appear in close pairs but are clearly not independent.  This is essentially useless for improving precision.  Worse, it artificially increases the number of degrees of freedom for the simple linear regression, giving a misleadingly small prediction interval.

Therefore, ideally, replicated observations should be entirely independent, using different stock calibration solutions if at all possible.  Otherwise it is best first to examine the replicated injections to check for outlying differences, and then to calculate the calibration based on the mean value of y for each distinct concentration.
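A minimal sketch of that workflow, with hypothetical duplicate injections at each level:

import numpy as np
from scipy import stats

conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # standard concentrations, mg/L (hypothetical)
inj = np.array([                               # duplicate injections per level (hypothetical)
    [0.119, 0.122],
    [0.247, 0.251],
    [0.486, 0.494],
    [0.978, 0.988],
    [1.945, 1.960],
])

# First check the duplicate injections for any outlying differences
print("injection differences:", np.abs(inj[:, 0] - inj[:, 1]))

# Then regress on the mean response at each distinct concentration
fit = stats.linregress(conc, inj.mean(axis=1))
print(f"slope = {fit.slope:.4f}, intercept = {fit.intercept:.4f}")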

There is one side effect of replication that may be useful.  If means of replicates are taken, the distribution of errors in the mean tends towards the normal distribution as the number of replicates increases, regardless of the parent distribution.  The distribution of the mean of as few as three replicates is already much closer to the normal distribution, even with a fairly extreme departure from normality in the parent.  Averaging three or more replicates can therefore provide more reliable statistical inference in critical cases where non-normality is suspected.
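A quick simulation sketch of this effect, using a deliberately skewed (exponential) parent distribution, shows the skewness falling as replicates are averaged:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
single = rng.exponential(scale=1.0, size=100_000)                     # strongly skewed parent
means_of_3 = rng.exponential(scale=1.0, size=(100_000, 3)).mean(axis=1)

print(f"skewness of single results : {stats.skew(single):.2f}")      # about 2 for an exponential
print(f"skewness of means of three : {stats.skew(means_of_3):.2f}")  # noticeably closer to 0 (normal)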

Successive dilutions

A common pattern of calibration that we usually practice is serial dilution, resulting in logarithmically decreasing concentrations (for example, 16, 8, 4, 2 and 1 mg/L).  This is simple and has the advantage of providing a high upper calibrated level, which may be useful when analyzing routine samples that occasionally show high values.

However, this layout has several disadvantages. First, errors in dilution are multiplied at each step, increasing the volume uncertainties, and perhaps worse, increasing the risk of any undetected gross dilution error (especially if the analyst commits the cardinal sin of using one of the calibration solutions as a QC sample as well!).

Second, the highest concentration point has high leverage, affecting both the gradient and y-intercept of the line plotted; errors at the high concentration will cause potentially large variation in results.

Third, departures from linearity are easier to detect with fairly evenly spaced points.  In general, therefore, equally spaced calibration points across the range of interest are much to be preferred.
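The leverage of each calibration point, h_i = 1/n + (x_i - x̄)² / Σ_j (x_j - x̄)², makes the contrast concrete; the short sketch below compares a two-fold serial dilution with equally spaced points over the same (hypothetical) range:

import numpy as np

def leverages(x):
    x = np.asarray(x, dtype=float)
    return 1 / len(x) + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)

serial = [1, 2, 4, 8, 16]           # successive two-fold dilutions, mg/L
even = [1, 4.75, 8.5, 12.25, 16]    # equally spaced over the same range

print("serial dilution leverages:", leverages(serial).round(2))   # the top point dominates
print("evenly spaced leverages  :", leverages(even).round(2))     # influence is spread more evenly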

A few words on sampling

1. What is sampling?

Sampling is the process of selecting a portion of material to represent, or provide information about, a larger body of material (the latter being statistically termed the ‘population’).  It is essential to the whole testing and calibration process.

The old ISO/IEC 17025:2005 standard defines sampling as “a defined procedure whereby a part of a substance, material or product is taken to provide for testing or calibration of a representative sample of the whole.  Sampling may also be required by the appropriate specification for which the substance, material or product is to be tested or calibrated. In certain cases (e.g. forensic analysis), the sample may not be representative but is determined by availability.” 

In other words, sampling should in general be carried out in a random manner, but so-called judgement sampling is also allowed in specific cases.  This judgement sampling approach involves using knowledge about the material to be sampled, and about the reason for sampling, to select specific samples for testing.  For example, an insurance loss adjuster acting on behalf of a cargo insurance company to inspect a shipment of cargo damaged during transit will apply a judgement sampling procedure, selecting the worst damaged items from the lot in order to determine the cause of the damage.

2. Types of samples to be differentiated

Field sample: Random sample(s) taken from the material in the field.  Several random samples may be drawn and composited in the field before being sent to the laboratory for analysis.

Laboratory sample: Sample(s) as prepared for sending to the laboratory, intended for inspection or testing.

Test sample: A sub-sample, i.e. a selected portion of the laboratory sample, taken for laboratory analysis.

3. Principles of sampling

Randomization

Generally speaking, random sampling is a method of selection whereby each possible member of a population has an equal chance of being selected, so that unintended bias is minimized.  It provides an unbiased estimate of the population parameter of interest (e.g. the mean), normally expressed in terms of analyte concentration.
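A minimal sketch of simple random sampling from a hypothetical lot of 200 numbered containers:

import random

random.seed(42)                                     # for a reproducible selection
population = [f"container_{i:03d}" for i in range(1, 201)]
selected = random.sample(population, k=10)          # every container has an equal chance of selection
print(selected)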

Representative samples

“Representative” here means something like “sufficiently like the population to allow inferences about the population”.  Taking a single sample through any random process will not necessarily give a composition representative of the bulk.  It is entirely possible that the composition of a particular randomly selected sample is completely unlike the bulk composition, unless the population is very homogeneous in its composition (such as drinking water).

Remember the saying that a test result is no better than the sample it is based upon.  The sample taken for analysis should be as representative of the sampling target as possible.  Therefore, we must take the sampling variance into serious consideration: the larger the sampling variance, the more likely it is that individual samples will be very different from the bulk.

Hence, in practice, we must carry out representative sampling, which involves obtaining samples that are not only unbiased but also have a sufficiently small variance for the task in hand.  In other words, in addition to choosing randomization procedures that give unbiased results, we need to decide on the number of random samples to be collected in the field so that the sampling variance is small enough.  This is normally decided upon information such as the specification limits and the uncertainty expected.

Composite samples

Often it is useful to combine a collection of field samples into a single homogenized laboratory sample for analysis. The measured value for the composite laboratory sample is then taken as an estimate of the mean value for the bulk material.

The importance of a sound sub-sampling process in the laboratory also cannot be over-emphasized.  Hence, there must be an SOP to guide the laboratory analyst in drawing the test sample for measurement from the sample that arrives at the laboratory.

4. Sampling uncertainty

Today, sampling uncertainty is recognized as an important contributor to the measurement uncertainty associated with the reported results.

It is to be noted that sampling uncertainty cannot be estimated as a standalone entity; the analytical uncertainty has to be evaluated at the same time.  For a fairly homogeneous population, a one-factor ANOVA (analysis of variance) method will suffice to estimate the overall measurement uncertainty from the between- and within-sample variances.  See https://consultglp.com/2018/02/19/a-worked-example-to-estimate-sampling-precision-measuremen-uncertainty/
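A minimal sketch of that one-factor ANOVA calculation, assuming several field samples each analysed in duplicate (all values hypothetical):

import numpy as np

# rows = field samples, columns = duplicate analyses (hypothetical results, mg/kg)
data = np.array([
    [10.2, 10.5],
    [ 9.8, 10.1],
    [10.9, 10.6],
    [10.0,  9.7],
    [10.4, 10.8],
])
k, n = data.shape                                   # k field samples, n replicate analyses each

row_means = data.mean(axis=1)
ms_within = ((data - row_means[:, None]) ** 2).sum() / (k * (n - 1))
ms_between = n * ((row_means - data.mean()) ** 2).sum() / (k - 1)

s_analytical = np.sqrt(ms_within)                               # within-sample (analytical) component
s_sampling = np.sqrt(max((ms_between - ms_within) / n, 0.0))    # between-sample (sampling) component
u_combined = np.sqrt(s_analytical ** 2 + s_sampling ** 2)       # combined measurement uncertainty

print(f"analytical s = {s_analytical:.3f}, sampling s = {s_sampling:.3f}")
print(f"combined u = {u_combined:.3f}, expanded U (k = 2) = {2 * u_combined:.3f}")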

However, for a heterogeneous population such as the soil of a contaminated site, the variance between sampling locations has to be taken into account in addition to the sampling variance.  The more complicated calculations involve the application of a two-way ANOVA technique.  A EURACHEM worked example can be found at:  https://consultglp.com/2017/10/10/verifying-eurachems-example-a1-on-sampling-uncertainty/

What is the P-value in ANOVA for?

In the analysis of variance (ANOVA), we study the between-group and within-group variations in terms of their respective mean squares (MS), which are calculated by dividing each sum of squares by its associated degrees of freedom.  The result, although termed a mean square, is actually a measure of variance, i.e. the squared standard deviation.

The F-ratio is then obtained by dividing MS(between) by MS(within).  Even if the population means are all equal to one another, you may get an F-ratio substantially larger than 1.0 simply because sampling error causes a large variation between the groups.  Such an F-value may even exceed the critical value from the F-distribution at the degrees of freedom associated with the two mean squares and the chosen Type I (alpha) level of error.

Indeed, by referring to the distribution of F-ratios with different degrees of freedom, you can determine the probability of observing an F-ratio as large as the one you calculate even if the populations have the same mean values.

So, the P-value is the probability of obtaining an F-ratio as large or larger than the one observed, assuming that the null hypothesis of no difference amongst group means is true.

However, under the ground rules that have been followed for many years in inferential statistics, this probability must be equal to, or smaller than, the significance (alpha, or Type I error) level established at the start of the experiment before we may reject the null hypothesis; this alpha level is normally set at 0.05 (or 5%) for test laboratories.  Using this level of significance, there is, on average, a 1 in 20 chance that we shall reject the null hypothesis when it is in fact true.

Hence, if we were to analyze a set of data by ANOVA and the calculated P-value was 0.008, which is much smaller than the alpha value of 0.05, we could say that in rejecting the null hypothesis we run a risk of only 0.8% of rejecting a null hypothesis that is in fact true.  In other words, we are 99.2% confident in rejecting the hypothesis that there is no difference among the group means.
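A minimal sketch with hypothetical data for three groups; scipy.stats.f_oneway carries out the one-way ANOVA and returns both the F-ratio and its P-value:

from scipy import stats

group1 = [10.1, 10.4, 9.9, 10.2]
group2 = [10.8, 11.0, 10.7, 11.1]
group3 = [10.3, 10.2, 10.5, 10.4]

f_ratio, p_value = stats.f_oneway(group1, group2, group3)
print(f"F = {f_ratio:.2f}, P = {p_value:.4f}")

alpha = 0.05
if p_value <= alpha:
    print("Reject the null hypothesis: at least one group mean differs.")
else:
    print("Insufficient evidence to reject the null hypothesis.")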