When is a one-sided hypothesis required?
Author: Georgi Z. Georgiev, Published: Aug 6, 2018
In many cases in both applied and theoretical science, the decisions, claims and conclusions made depend on the direction of the observed effect in an experiment. When one wants to estimate the error statistics that best describe the data warranting the decision, claim or conclusion, the error probability, usually a p-value or confidence interval, should be calculated under a one-sided null hypothesis and the complementary one-sided alternative.
Put simply, if the data-based claim is directional, the appropriate statistic to report must be based on a directional hypothesis. If one is not satisfied with reporting "there is a discrepancy |δ|" without regard to whether it was −δ or +δ, then one needs a one-sided statistical hypothesis. Using a two-sided calculation would mean there is a disconnect between the research hypothesis (claim) and the reported probability, resulting in a nominal probability which overestimates the actual one (the reported p-value is larger than the actual p-value), increasing the risk of type II errors.
For confidence intervals, if one is comparing a value to the upper or lower boundary of an interval and claiming that it is above or below it, then it should be a one-sided interval. Only if one is interested solely in whether the value is between the bounds or outside them (regardless of direction) is a two-sided interval needed.
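To make the relationship concrete: for a test statistic with a symmetric null distribution, the two-sided p-value is simply double the one-sided p-value for the observed direction, and the one-sided bound of a 95% interval sits closer to the estimate than the corresponding two-sided bound. A minimal sketch in Python (hypothetical z-value, using scipy) illustrates this:

```python
# A minimal sketch of the relationship between one- and two-sided
# p-values and confidence bounds for a z-statistic. The observed
# z-value is hypothetical, for illustration only.
from scipy.stats import norm

z = 2.17  # observed standardized effect, in the expected direction

p_one_sided = norm.sf(z)            # P(Z >= z) under H0: delta <= 0
p_two_sided = 2 * norm.sf(abs(z))   # P(|Z| >= |z|) under H0: delta = 0

print(f"one-sided p = {p_one_sided:.4f}")  # ~0.0150
print(f"two-sided p = {p_two_sided:.4f}")  # ~0.0300, exactly double

# For a 95% interval, the two-sided bounds sit at +/-1.96 standard
# errors, while a one-sided 95% bound sits at 1.645 standard errors.
print(norm.ppf(0.975))  # ~1.96
print(norm.ppf(0.95))   # ~1.645
```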
Example 1: Straightforward case for one-tailed significance test and CI
If the experiment is testing the efficacy of a new drug or treatment, the research hypothesis is often that it is more effective than a placebo or an existing drug or treatment. The results which would reject the corresponding null hypothesis, namely that the drug has no effect or in fact harms the patient's recovery, would come from one side of the distribution of the outcome variable.
Constructing a statistical hypothesis is therefore predicated on the one-sidedness of the claim we would like to make, e.g.:
"drug X is decreasing the risk of cardiovascular events, observed relative risk difference δ = -0.3 (p=0.01; H0 : δ ≥ 0), 95%CIhigh: -0.06"
or
"treatment Y increases survival rate of patients with Z, observed increase is δ = 0.25 (p=0.004; H0 : δ ≤ 0), 95%CIlow: 0.15"
I would encourage reporting p-values for more precise claims than H0: δ ≤ 0 or H0: δ ≥ 0, possibly in addition to these conventional ones. The claims should be informed by practical and scientific considerations and by the wider context of the experiment. Alternatively, confidence intervals at different significance levels, or SEV(claim) for different claims, can be reported. The goal in doing so is to use the data in the most informative way.
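For instance, under a simple one-sample z-test with known standard deviation, one can report one-sided p-values against a range of null hypotheses H0: δ ≤ δ1, tracing how well the data supports claims of the form "δ > δ1". A sketch with made-up numbers:

```python
# A sketch of evaluating more precise claims than H0: delta <= 0.
# For a one-sample z-test with known sigma, the one-sided p-value
# against H0: delta <= delta_1 is computed for several delta_1.
# All numbers are hypothetical.
from math import sqrt
from scipy.stats import norm

n, sigma = 100, 1.0
x_bar = 0.25                # observed mean effect (hypothetical)
se = sigma / sqrt(n)

for delta_1 in (0.0, 0.1, 0.2, 0.3):
    z = (x_bar - delta_1) / se
    p = norm.sf(z)          # one-sided p against H0: delta <= delta_1
    print(f"H0: delta <= {delta_1:.1f} -> p = {p:.4f}")
```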
Example 2: A less obvious case for a one-sided hypothesis
Let us say we are studying whether height is correlated with lifetime earnings. It is an observational study so causal claims would not be allowed, but we would be happy with claims that there is some connection between the two (there are 5 different possible explanations for even the strongest statistically significant correlation).
We would be happy to detect either a positive or a negative correlation, even though one of these results is likely predicted and explained to some extent by the existing literature while the other would be unexpected. In this case the overwhelming majority of the literature seems to support a small to moderate positive correlation.
Once the data is gathered and t-values are calculated using the observed direction of the effect, if one is to state, for example, that "each one-inch increase in height is associated with a $789 increase in yearly income", then the corresponding p-value (p1) should be one-tailed, with a null hypothesis that increases in height either have no relation or an inverse relation to income. If the outcome was in the unexpected direction and we want to speak about a decrease in height being related to an increase in yearly income (or an increase in height being related to a decrease in income), then the one-sided test in the other direction should be reported.
If the null of "no effect" is of interest, then the claim "we observed some correlation between height and income (p = p1 ⋅ 2)" can be made (p1 being the p-value of the simple one-sided test described above). If the size of the discrepancy is to be reported, it has to be stated that it is an absolute value (without a sign) to avoid confusion. As with Example 1, specifying the null hypothesis next to the p-value is recommended.
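A sketch of this example on synthetic data, using scipy's pearsonr (the alternative argument assumes scipy 1.9 or later), shows the one-sided p-value for the observed positive direction alongside its doubled two-sided counterpart:

```python
# A sketch of Example 2 on simulated data: a one-sided test of the
# correlation between height and income. The data is synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
height = rng.normal(175, 7, size=500)   # cm, synthetic
income = 40_000 + 600 * (height - 175) + rng.normal(0, 15_000, size=500)

r, p_one = pearsonr(height, income, alternative="greater")  # H0: rho <= 0
_, p_two = pearsonr(height, income)                         # H0: rho = 0

print(f"r = {r:.3f}")
print(f"one-sided p = {p_one:.2e}")
print(f"two-sided p = {p_two:.2e}")  # equals 2 * p_one for positive r
```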
And now, here is an example in which a two-sided p-value or CI might seem like the only choice, but where under more critical examination a one-sided p-value and CI should at least be considered.
Example 3: Quality control in a production environment
This seemed to me to be one of the intuitive situations in which a two-sided test is warranted and there is no need to report a one-sided value or CI. I now think differently.
If we are producing a certain good, it usually has parameters it must satisfy before it is released to the market.
In acceptance sampling we would randomly inspect a subset of the products in a batch for conformity to standard, and if defects are discovered in more than the allowed proportion, the whole batch is discarded. This is a one-sided scenario, since our null hypothesis is "no more than X% of the batch is defective" and in rejecting it we would claim that more than X% of the batch is defective.
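A minimal sketch of such a decision, as an exact one-sided binomial test with a hypothetical sample size, defect count, and a hypothetical 2% allowed defect rate:

```python
# A sketch of the acceptance-sampling decision: an exact one-sided
# binomial test of H0: "at most 2% of the batch is defective".
# All numbers are hypothetical.
from scipy.stats import binomtest

n_inspected = 400
n_defective = 14
allowed_rate = 0.02

result = binomtest(n_defective, n_inspected, allowed_rate,
                   alternative="greater")  # H1: defect rate > 2%
print(f"observed rate = {n_defective / n_inspected:.3f}")
print(f"one-sided p   = {result.pvalue:.4f}")
# A small p-value supports the directional claim that more than 2%
# of the batch is defective, and hence rejecting the batch.
```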
In statistical process control we would monitor the output as it is produced by inspecting random samples from it to see whether they deviate from the specification by more than an allowable amount. If they do, production might need to be halted until the cause is fixed (changing source material, machinery settings, broken or malfunctioning parts, etc.). This is usually done by setting boundaries for deviations from the standard in both directions, e.g. a product is too thick or too thin, too short or too long, weighs too much or too little, contains too much or too little of a given ingredient.
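As a rough sketch (with a hypothetical target, process standard deviation, and data), the conventional approach flags samples whose mean falls outside symmetric, two-sided control limits:

```python
# A sketch of two-sided control limits in statistical process control:
# flag samples whose mean deviates from target by more than 3 standard
# errors in either direction. Target and data are hypothetical.
import numpy as np

target, sigma, n = 50.0, 0.8, 5   # spec weight (g), process sd, sample size
se = sigma / np.sqrt(n)
lcl, ucl = target - 3 * se, target + 3 * se   # conventional 3-sigma limits

sample_means = np.array([50.2, 49.1, 50.9, 48.7, 51.3])
for m in sample_means:
    flag = "out of control" if (m < lcl or m > ucl) else "ok"
    print(f"mean = {m:.1f} -> {flag}")
```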
Despite the use of a two-sided test when deciding whether there is an abnormal deviation in the production process, when we make statements about the direction of those deviations it would still be incorrect to use two-sided CIs or p-values in some cases.
For example, if we claim "such a deviation is below the lower bound of the two-sided 95% CI, hence we would expect to see such a negative deviation in only 5% of inspections", we would be wrong. In fact, we would see it in only 2.5% of inspections if the distribution of errors is symmetrical around zero. I can see such statements being made when an argument is put forth for making a certain adjustment to a production line to compensate for an unacceptable deviation in a certain direction.
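This is easy to verify numerically under a normal error distribution:

```python
# A check of the claim above: under a symmetric (here, normal) error
# distribution, a deviation below the lower bound of a two-sided 95%
# interval occurs in 2.5% of inspections, not 5%.
from scipy.stats import norm

lower_bound = norm.ppf(0.025)     # two-sided 95% lower bound, ~-1.96
print(norm.cdf(lower_bound))      # P(below it) = 0.025, i.e. 2.5%

one_sided_bound = norm.ppf(0.05)  # one-sided 95% lower bound, ~-1.645
print(norm.cdf(one_sided_bound))  # P(below it) = 0.05, i.e. 5%
```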
I have to admit to having very limited knowledge of quality control, so I will not speculate about how often such claims are made or how important it is for business decision-makers to have exact probabilities for them, but I think the scenario above, in which the wrong type of statistical hypothesis is used, is certainly possible.
Starting from the clear-cut case of claims of efficacy, improvement, and increases or decreases of key performance indicators, and going through less obvious cases, I hope these examples provide a good idea of which claims require the supporting statistical analysis (p-value, CI, etc.) to be based on a one-sided hypothesis.