Well said. The PT provider must ensure the samples sent out for the interlaboratory comparison program are homogeneous and stable.

Formulating a decision rule starts from the level of risk you are willing to take in claiming a non-conformance when the actual situation is within specification (a so-called Type I error). Generally we accept a maximum 5% error, i.e. a minimum 95% confidence, against rejecting a correct situation. For example, suppose your measurement result with expanded uncertainty is X +/- U and the upper specification limit is S. Since U = 2 x u, where u is the standard uncertainty and 2 is the coverage factor for 95% confidence (two-sided), (X + U) is the uppermost limit with a 2.5% error in the right-hand, one-sided tail of a normal probability distribution. The normal distribution table shows that a one-sided 5% (alpha) error corresponds to a coverage factor of 1.645, instead of 1.96 (or 2 as a round number). Hence the realistic rule is to claim conformance when (X + 1.645 x u) does not exceed S: any result meeting this criterion can be claimed a PASS, any result between (S - 1.645 x u) and S is to be claimed a "conditional pass", and when the test value crosses over the upper specification limit you should call it a FAIL. This is considered a conservative approach to the decision rule, where the laboratory's risk is kept to a minimum.
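The rule above can be sketched in a few lines of Python. This is a minimal illustration only; the function name and the numbers in the example are my own, not from any standard:

```python
def decide(x, u, spec_upper, k_guard=1.645):
    """Guard-banded conformance decision against an upper specification limit.

    x: measured value
    u: standard uncertainty of the result
    k_guard: one-sided coverage factor for 95% confidence (alpha = 5%)
    """
    if x + k_guard * u <= spec_upper:
        return "PASS"              # conformance claimed with at most 5% risk
    elif x <= spec_upper:
        return "CONDITIONAL PASS"  # inside spec, but the guard band is crossed
    else:
        return "FAIL"              # measured value itself exceeds the limit

# e.g. with u = 0.5 and an upper limit of 10.0:
print(decide(9.0, 0.5, 10.0))   # well below the guard band
print(decide(9.5, 0.5, 10.0))   # inside spec but within the guard band
print(decide(10.5, 0.5, 10.0))  # above the specification limit
```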

Please provide the %t of these rules if you can.

(Please assist me on this as well, thank you)

Both the control chart estimation of standard deviation based on the moving range and the critical range factor f in ISO 5725-6 assume the same underlying normal distribution.

The critical range is usually fixed at 95% confidence, where the coverage factor is 1.96. For the difference between two test results, the combined standard deviation is sigma x SQRT(2). Therefore the critical range R = 1.96 x SQRT(2) x sigma, or 2.77 x sigma, which is the maximum bound of variation at 95% confidence.

In a control chart, when we have a series of data, the first moving range is the absolute difference between the second and first values, the second is the absolute difference between the third and second values, and so on. We can then calculate the mean of these moving ranges, say MR(bar). The standard deviation of the data set is then MR(bar)/1.128, using the factor d2 = 1.128 for n = 2 as stated in ISO 8258.

If the sigma is derived from this whole set of data, we then have R/2.77 = MR(bar)/1.128, and therefore R = 2.46 x MR(bar). It is obvious that the critical range and the mean moving range are related.
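A quick numerical check of this relationship, with made-up data (the six values below are illustrative only):

```python
import statistics

def sigma_from_moving_range(data):
    """Estimate sigma as MR(bar)/d2, with d2 = 1.128 for ranges of n = 2 (ISO 8258)."""
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = statistics.fmean(moving_ranges)
    return mr_bar, mr_bar / 1.128

data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
mr_bar, sigma = sigma_from_moving_range(data)

# ISO 5725-6 critical range: CR = f(2) * sigma, where f(2) = 1.96 * SQRT(2) ~ 2.77
critical_range = 2.77 * sigma
# Equivalently CR ~ 2.46 * MR(bar), since 2.77 / 1.128 ~ 2.46
```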

Reply to your Paragraphs 2 and 3

I think you may want to conduct a study comparing the standard uncertainties of results obtained by one-point calibration against those from linear regression, on the same sample of course. An F-test on the ratio of their variances will show whether these two variances are significantly different or not.
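As a sketch of that F-test (the two uncertainty series below are invented for illustration; with real data you would substitute your own sets and look up the critical value for your degrees of freedom):

```python
import statistics

# Hypothetical standard-uncertainty series from the two calibration approaches
u_one_point  = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16, 0.12, 0.15]
u_regression = [0.10, 0.11, 0.09, 0.12, 0.10, 0.11, 0.10, 0.12]

v1 = statistics.variance(u_one_point)    # sample variance, n - 1 denominator
v2 = statistics.variance(u_regression)
F = max(v1, v2) / min(v1, v2)            # put the larger variance on top

# Two-sided 5% critical value for F(7, 7), taken from standard F tables
F_CRIT = 4.99
significant = F > F_CRIT                 # True -> variances differ significantly
```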

Reply to your Paragraph 4

1. For one-point calibration, one cannot be sure that the calibration function has a zero intercept.

2. Least squares regression makes an important assumption: that the uncertainties of the standard concentrations used to plot the graph are negligible compared with the variation of the instrument responses (i.e. the y-values).

Another question not related to this topic:

Is there any relationship between the factor d2 (typically 1.128 for n = 2) in the control chart for ranges, used with the moving range to estimate the standard deviation (sigma = MR(bar)/d2), and the critical range factor f(n) in ISO 5725-6, used to calculate the critical range (CR = f(n) x sigma)?

I found they are linearly correlated, but I want to know why.

Thanks again and again!

Thanks for your detailed explanation.

What if I want to compare the uncertainties that come from one-point calibration and from linear regression? As I mentioned before, I think one-point calibration may have a larger uncertainty than linear regression, but some papers reached the opposite conclusion, using the same method you described above to evaluate the one-point calibration uncertainty.

In one-point calibration, the uncertainty of the zero-intercept assumption was not considered, but the uncertainty of the calibration standard's concentration was.

In linear regression, the uncertainty of the calibration standard's concentration was omitted, but the uncertainty of the intercept was considered.

So it's hard for me to tell which real uncertainty is larger.

Two more questions:

1. For the case of one-point calibration, is there any way to account for the uncertainty of the zero-intercept assumption?

2. For the case of linear regression, can I simply combine the uncertainty of the calibration standard's concentration with the uncertainty of the regression, as EURACHEM QUAM suggests? Negligible standard-concentration uncertainty is the basic assumption of linear least squares regression, so if that uncertainty is not negligible, I would doubt whether linear least squares regression is still applicable. I don't have such deep knowledge; maybe you could help me make it clear.
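For what it's worth, the QUAM-style combination in question 2 is just a root-sum-of-squares of independent components. A minimal sketch, with made-up component values:

```python
import math

# Hypothetical standard-uncertainty components (illustrative values only)
u_regression = 0.08   # uncertainty of the predicted concentration from the calibration line
u_std_conc   = 0.03   # uncertainty of the calibration standard's concentration

# Combine independent components in quadrature (GUM / EURACHEM QUAM approach)
u_combined = math.sqrt(u_regression**2 + u_std_conc**2)
```

Note that when one component dominates, as here, the smaller one contributes very little to the combined value.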

Sorry to bother you so many times. Thanks!
