Training and consultancy for testing laboratories.

Determining Outliers

In routine laboratory analysis, you tend to run a repeat when duplicate results are far apart. If the third figure turns out to be much closer to one of the duplicates, you might be tempted to discard the other result and report the average of the two closer values.  Is this statistically justifiable?

Did you know there are many statistical outlier tests for deciding whether a value in a series of replicate data is an outlier?  Which one would you use?
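For a flavour of what we will cover, here is a minimal Python sketch of one of the better-known tests, the Grubbs test, which flags the replicate value furthest from the mean. The replicate values in the example are invented purely for illustration.

```python
# A minimal sketch of the Grubbs outlier test on a set of replicate results.
# The replicate values below are invented purely for illustration.
import numpy as np
from scipy import stats

def grubbs_test(data, alpha=0.05):
    """Return the Grubbs statistic G, its critical value, and the suspect value."""
    x = np.asarray(data, dtype=float)
    n = x.size
    mean, s = x.mean(), x.std(ddof=1)        # sample mean and standard deviation
    idx = np.argmax(np.abs(x - mean))        # the most extreme replicate
    g = abs(x[idx] - mean) / s               # Grubbs statistic
    # Two-sided critical value based on the t-distribution with n - 2 degrees of freedom
    t_crit = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t_crit**2 / (n - 2 + t_crit**2))
    return g, g_crit, x[idx]

# Five replicates where the last value looks suspiciously high
g, g_crit, suspect = grubbs_test([10.12, 10.15, 10.14, 10.11, 10.45])
print(f"G = {g:.3f}, critical G = {g_crit:.3f}, suspect value = {suspect}")
print("Flag as outlier?", g > g_crit)
```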

I shall be conducting our 7th one-hour free webinar on Determination of Outliers on Monday 11 January 2021 at 8.00pm via Zoom to discuss the pros and cons of these methods. 

You are most welcome to join the Webinar.  We shall send you a Zoom link upon receipt of your expression of interest to my email: guanhuah.yeoh@consultglp.com

Let’s study applied statistics together!

I’m a Course Creator NOW!!!

Last weekend, I was approached by a business associate, Shaun, about becoming an online course creator. While we were chatting, he learned that I was already in the process of setting up my online training business.

I also told him about my 8-hour online course, “e-Learning the Basic Statistical Tools for Test Laboratories”, delivered in two sessions, which helps laboratory personnel achieve basic technical competence in evaluating measurement uncertainty, method validation and data analysis by showing them how to apply statistical methods appropriately.

🌟These skills and knowledge have allowed me to run several successful public and in-house training workshops in this region over the last 15 years.

Shaun asked me if I would like to launch my own e-course, “e-Learning the Basic Statistical Tools for Test Laboratories”.  I thought for a while and said to myself, why not?  I had been sharing such knowledge through physical workshops.  Why not share it with more people in this region through an e-course, so they can benefit as well?

So here I am, announcing my upcoming e-course, “e-Learning the Basic Statistical Tools for Test Laboratories”.

I shared this good news with a few of my ex-colleagues and friends, and they asked if they could take the course at a discount.  I am very pleased to announce that the FIRST 20 people who reply now will enjoy a full 60% discount!

The virtual Zoom e-course is scheduled for 19th and 20th December 2020, 0900 hrs – 1300 hrs (Singapore time), with electronic course notes provided. e-Certificates of Participation will be issued to registered participants after the course.

Course contents cover, but are not limited to:

*  Measurement errors, random and bias

*  The concept of measurement uncertainty

*  Sample and population

*  Outlier tests

*  Confidence interval for population mean – Central Limit Theorem, Student’s t-distribution

*  Hypothesis or significance testing

*  Basic ANOVA

*  Linear regression

*  Concept of detection limit

💥Please reply  “𝐈 𝐚𝐦 𝐢𝐧𝐭𝐞𝐫𝐞𝐬𝐭𝐞𝐝!”💬💥  to express your interest without obligation for the e-course: “e-Learning the Basic Statistical Tools for Test Laboratories”.   

Retail Price: SGD 79.90 (USD 57.00)   

After 60% discount: SGD 31.90 (USD 22.80) only

My email address: guanhuah.yeoh@consultglp.com

I shall follow up with the Zoom link and payment methods.

This is part of my presentation at an annual scientific meeting of a medical laboratory association in Singapore.

In hypothesis testing, many find it difficult to understand what a Type II error is, although they readily appreciate why the Type I error is fixed at a probability (alpha) of 0.05, the norm for 95% confidence in not wrongly rejecting the null hypothesis Ho. Here we try to explain the Type II error in plainer language.
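To make the idea concrete (this numerical illustration is mine, not taken from the presentation), the short Python sketch below computes beta, the Type II error probability, for a two-sided one-sample z-test with a known standard deviation; all the numbers are assumed for the example.

```python
# A minimal sketch: Type II error (beta) of a two-sided one-sample z-test
# with known sigma. All numbers below are assumed purely for illustration.
import numpy as np
from scipy import stats

mu0, mu1 = 100.0, 102.0    # mean under Ho and the assumed true mean
sigma, n = 3.0, 10         # known standard deviation and number of measurements
alpha = 0.05               # Type I error probability

z_crit = stats.norm.ppf(1 - alpha / 2)       # two-sided critical value (about 1.96)
shift = (mu1 - mu0) / (sigma / np.sqrt(n))   # how far the test statistic shifts when Ho is false

# Beta is the chance the test statistic still lands inside the acceptance region
beta = stats.norm.cdf(z_crit - shift) - stats.norm.cdf(-z_crit - shift)
print(f"Type II error (beta) = {beta:.3f}, power = {1 - beta:.3f}")
```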

Sharing my PPT presented at one of the Webinars.

A linear regression line showing the linear relationship between independent variables (x’s), such as the concentrations of working standards, and dependent variables (y’s), such as instrumental signals, is represented by the equation y = a + bx, where a is the y-intercept (the value of y when x = 0) and b is the slope or gradient of the line. When the straight line is forced through the origin (0,0) of the graph, the intercept is zero and the slope is simply y/x.  The questions are: when should you allow the linear regression line to pass through the origin? Why not let the intercept float naturally to give the best fit to the data? How can you justify this decision?

In theory, you would use a zero-intercept model only if you knew that the line had to go through zero. The calculation software of most spectrophotometers produces an equation of the form y = bx, assuming the line passes through the origin. In my opinion, this may be true only when the reference cell holds a reagent blank, rather than a pure solvent or distilled-water blank, for background correction during calibration.  However, we must also bear in mind that all instrument measurements carry inherent analytical errors.
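To see the two model choices side by side, here is a minimal Python sketch that fits both y = a + bx and the forced zero-intercept model y = bx by least squares; the calibration data are hypothetical.

```python
# A minimal sketch: fit a calibration line with a floating intercept (y = a + bx)
# and with the line forced through the origin (y = bx). Data are hypothetical.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])               # e.g. standard concentrations
y = np.array([0.003, 0.126, 0.251, 0.498, 0.745, 0.998])   # e.g. instrument signals

# Model 1: y = a + bx (intercept allowed to float)
X1 = np.column_stack([np.ones_like(x), x])
(a, b1), *_ = np.linalg.lstsq(X1, y, rcond=None)

# Model 2: y = bx (forced through the origin)
(b2,), *_ = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)

print(f"Floating intercept : y = {a:.4f} + {b1:.4f} x")
print(f"Through the origin : y = {b2:.4f} x")
```

Whether the small fitted intercept can be treated as zero is exactly the question the significance tests below are meant to answer.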

One approach to evaluating whether the y-intercept, a, is statistically significant is to conduct a hypothesis test using a Student’s t-test.  This is illustrated in the example below.

Another approach is to use an F-test to evaluate whether there is a significant difference between the standard deviation of the slope for y = a + bx and that of the slope for y = bx (i.e. when a = 0).

Example:

In a study on the determination of calcium oxide in a magnesite material, Hazel and Eglog, in an Analytical Chemistry article, reported the following results obtained with the alcohol method they developed:

The graph below shows the linear relationship between the mg CaO taken and the mg CaO found experimentally, with the equation y = -0.2281 + 0.99476x for the 10 data points.

The following equations were applied to calculate the various statistical parameters:
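(The equations themselves did not carry over to this page. For reference, the standard least-squares formulas for these parameters, which I presume were the ones applied, are:)

$$
b = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2}, \qquad
a = \bar{y} - b\,\bar{x}
$$

$$
s_{y/x} = \sqrt{\frac{\sum_i (y_i-\hat{y}_i)^2}{n-2}}, \qquad
s_a = s_{y/x}\sqrt{\frac{\sum_i x_i^2}{n\sum_i (x_i-\bar{x})^2}}, \qquad
t = \frac{|a|}{s_a}
$$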

Thus, by calculation, we have a = -0.2281, b = 0.9948, the standard error of y on x, s_y/x = 0.2067, and the standard deviation of the y-intercept, s_a = 0.1378.

Let’s conduct a hypothesis test with the null hypothesis Ho and the alternative hypothesis H1:

                Ho :  The intercept a equals zero

                H1 :  The intercept a does not equal zero

The Student’s t-test statistic for the y-intercept is t = |a| / s_a = 0.2281 / 0.1378 = 1.655.

The critical t-value for 10 − 2 = 8 degrees of freedom at an alpha error of 0.05 (two-tailed) is 2.306.

Conclusion:  As 1.655 < 2.306, Ho is not rejected at 95% confidence, indicating that the calculated a-value is not significantly different from zero.  In other words, there is insufficient evidence to claim that the intercept differs from zero by more than can be accounted for by the analytical errors.  Hence, this linear regression line can be allowed to pass through the origin.
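If you would like to check the arithmetic yourself, the short Python sketch below reproduces the test using only the summary statistics reported above (the raw data are in the original article and are not repeated here).

```python
# A quick check of the intercept t-test using the summary statistics quoted above:
# a = -0.2281, s_a = 0.1378, n = 10 data points.
from scipy import stats

a, s_a, n = -0.2281, 0.1378, 10

t_stat = abs(a) / s_a                           # t = |a| / s_a  -> about 1.655
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)    # two-tailed, alpha = 0.05, 8 degrees of freedom

print(f"t = {t_stat:.3f}, critical t = {t_crit:.3f}")
print("Reject Ho (intercept significantly different from zero)?", t_stat > t_crit)
```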