Data analysis allows us to answer questions about the data or about the population that the sample data describes.
When we ask questions like “is the alcohol level in the suspect’s blood sample significantly greater than 50 mg/100 ml?” or “does my newly developed test method give the same results as the standard method?”, we need to determine the probability of obtaining the observed data given that a stated hypothesis (e.g. no significant difference) is true – hence “hypothesis testing”, also known as “significance testing”.
A hypothesis, therefore, is a provisional statement which may, or may not, be true. We test the truth of a hypothesis, known as the null hypothesis, H_{o}, using parameter estimates (such as the mean, µ, or standard deviation, s) and a calculated probability, which is used to decide whether the hypothesis is accepted (high p-value) or rejected (low p-value) against a preset significance level, such as p = 0.05 for 95% confidence.
Whilst stating a null hypothesis, we must also prepare an alternative hypothesis, H_{1}, to fall back on in case H_{o} is rejected after a statistical test, such as the F-test or Student’s t-test. The H_{1} hypothesis can be one of the following statements:
H_{1}: s_{a} ≠ s_{b} (2-sided or 2-tailed)
H_{1}: s_{a} > s_{b} (1-sided, right-tailed)
H_{1}: s_{a} < s_{b} (1-sided, left-tailed)
Generally, a simple hypothesis test is one that determines whether or not the difference between two values is significant. These values can be means, standard deviations, or variances. In such a case, we put forward the null hypothesis H_{o} that there is no real difference between the two values, and that the observed difference arises from random effects only. If the probability that the data are consistent with the null hypothesis falls below a predetermined low value (e.g. p = 0.05 or 0.01), then the hypothesis is rejected at that probability level.
As an illustration, suppose we have obtained an observed t-value from a Student’s t-test. If the calculated p-value is small, the observed t-value is higher than the critical t-value at the predetermined significance level, so we do not believe the null hypothesis and reject it. If, on the other hand, the p-value is large, the observed t-value is below the critical t-value for the given degrees of freedom at the set confidence level, so we cannot reject the null hypothesis.
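The decision rule above can be sketched in a few lines using SciPy’s `scipy.stats` module. This is a minimal illustration, not part of the original article; the blood-alcohol readings below are hypothetical numbers invented for the example of the question posed earlier (“is the level significantly greater than 50 mg/100 ml?”).

```python
# Minimal sketch of the one-sample t-test decision rule.
# The sample values are hypothetical, for illustration only.
from scipy import stats

sample = [52.1, 53.4, 51.8, 52.9, 53.0, 52.5]  # hypothetical mg/100 ml results
limit = 50.0                                   # H0: mean = 50; H1: mean > 50

t_obs, p_two_tail = stats.ttest_1samp(sample, limit)
p_one_tail = p_two_tail / 2   # right-tailed test; t_obs is positive here

# One-tail critical value at 95% confidence, df = n - 1
t_crit = stats.t.ppf(0.95, df=len(sample) - 1)

if p_one_tail < 0.05:         # equivalently: t_obs > t_crit
    print("Reject H0: mean is significantly greater than 50 mg/100 ml")
else:
    print("Cannot reject H0")
```

Note that comparing the p-value against 0.05 and comparing the observed t-value against the critical t-value are two views of the same decision.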
We can use MS Excel’s built-in functions to find the critical values of the F- and t-tests at a prescribed probability level, instead of looking them up in their respective tables.
For the F-test with p = 0.05 and degrees of freedom v = 7 and 6, the following one-tail inverse formulas all return the same critical value (4.207) under the old and new versions of the MS Excel spreadsheet since 2010:
“=FINV(0.05,7,6)”
“=F.INV(0.95,7,6)”
“=F.INV.RT(0.05,7,6)”
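For readers who prefer to cross-check outside Excel, the same critical value can be reproduced with SciPy’s F distribution. This snippet is an added aside, not from the original article; it simply mirrors the three Excel formulas above.

```python
# Cross-check of the equivalent Excel F-test formulas with scipy.stats.f
from scipy import stats

f_crit = stats.f.ppf(0.95, dfn=7, dfd=6)   # cf. =F.INV(0.95,7,6)
print(round(f_crit, 3))                    # 4.207, matching =FINV(0.05,7,6)

# The right-tail inverse gives the same value, cf. =F.INV.RT(0.05,7,6)
assert abs(stats.f.isf(0.05, 7, 6) - f_crit) < 1e-12
```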
For the t-test, however, the old Excel function “=TINV” is a bit awkward for one-tail significance testing, because its algorithm assumes the probability supplied is a two-tail probability.
To get a one-tail inverse value, we therefore need to double the probability, in the form “=TINV(0.05*2, v)”. This makes the explanation difficult to follow for someone with less knowledge of statistics.
For example, if we want to find a t-value at p = 0.05 with v = 5 degrees of freedom, we have the following options:
Formula                    Result
=TINV(0.05,5)              2.5706
=TINV(0.05*2,5)            2.0150
=T.INV(0.05,5)            -2.0150
=T.INV(0.95,5)             2.0150
=T.INV.2T(0.05*2,5)        2.0150
So, it seems better to use the new function “=T.INV(0.95,5)”, or the absolute value of “=T.INV(0.05,5)”, for the one-tail test at 95% confidence.
The following thus summarizes the use of T.INV for one- or two-tail hypothesis testing:
- To find the t-value for a right-sided (greater-than) H_{1} test, use =T.INV(0.95, v)
- To find the t-value for a left-sided (less-than) H_{1} test, use =T.INV(0.05, v)
- To find the t-value for a two-sided H_{1} test, use =T.INV.2T(0.05, v)
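The three rules above can be expressed with SciPy’s t distribution as a quick sanity check. This is an added illustration, assuming SciPy is available; it is not part of the Excel workflow described in the article.

```python
# The three T.INV rules, mirrored with scipy.stats.t for v = 5
from scipy import stats

v = 5
right = stats.t.ppf(0.95, v)        # right-sided H1, cf. =T.INV(0.95, v)
left = stats.t.ppf(0.05, v)         # left-sided H1,  cf. =T.INV(0.05, v)
two = stats.t.ppf(1 - 0.05 / 2, v)  # two-sided H1,   cf. =T.INV.2T(0.05, v)

print(round(right, 4), round(left, 4), round(two, 4))
# 2.015 -2.015 2.5706
```

Note that the left-sided value is simply the negative of the right-sided one, since the t distribution is symmetric about zero.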