The Right Hypothesis
In business, as in any other discipline, once the question has been asked there must be a statement of what will or will not occur under testing, measurement, and investigation. This process is known as formulating the right hypothesis. Broadly defined, a hypothesis is a statement that the conditions under which something is being measured or evaluated hold true or do not hold true. Further, a business hypothesis is an assumption that is to be tested through market research, data mining, experimental designs, and quantitative and qualitative research. A hypothesis gives the businessperson a path to follow and specific things to look for along the road.
If the research and statistical data analysis supports the hypothesis, the project is well done. If, however, the research data supports only a modified version of the hypothesis, the project must be re-evaluated before continuing. And if the research data disproves the hypothesis, the project is usually abandoned.
Hypotheses come in two forms: the null hypothesis and the alternative hypothesis. As a student of applied business statistics, you can pick up any number of business statistics textbooks and find a range of opinions on which type of hypothesis should be used in the business world. For the most part, however, the safer and better hypothesis to formulate from the research question is the null hypothesis. A null hypothesis states that the measurement data gathered will not support a difference, relationship, or effect between or among the variables being investigated. The difficulty for the seasoned research investigator in accepting a statement that no differences, relationships, or effects will occur is that when nothing takes place, no reason can be given as to why. This is where most business managers get into trouble when attempting to explain why something has not happened. Attempting to explain why something has not taken place is akin to debating how many angels can be placed on the head of a pin: everyone's answer is plausible and possible. Business managers therefore need to account for what has happened, not for what has not happened.
Many businesspeople will skirt the null hypothesis issue by setting an alternative hypothesis that states differences, effects, and relationships will occur between and among the things being investigated if certain conditions apply. Unfortunately, this reverse position is just as problematic. The research investigator may well be safe if the data analysis detects differences, effects, or relationships, but what if it does not? In that case the business manager is back to square one, attempting to explain what has not happened. Although the hypothesis situation may seem confusing, there is light at the end of the tunnel.
The best-fit hypothesis strategy in business situations is to set a null hypothesis stating that no differences, effects, or relationships occur; collect the measurement data; subject the data to statistical analysis; and, if differences, effects, or relationships are detected, explain the possible reasons why. As stated earlier, if no differences, effects, or relationships are detected, a decision must be made whether to revamp the research situation or abandon the program altogether.
In the preceding paragraph the phrase ". . . if differences, effects, and/or relationships between and amongst variables were detected . . ." is, to the unsuspecting business manager conducting research, an accident waiting to happen! In any given research situation, some differences, effects, or relationships will be found, however minimal or obscure, if only by chance. The lingering question, therefore, is what the business research investigator can do to avoid the trap of having to explain and justify every possible difference, effect, or relationship found. The answer is to set statistical limits of acceptance through the use of significance levels and confidence intervals.
In general, hypothesis testing involves the following steps:
· Identify the null hypothesis (H0). For example, H0: Mean = 0.
· Stipulate the alternative hypothesis (HA). For example, HA: Mean ≠ 0.
· Calculate the appropriate test statistic from the sample data. The sampling distribution (if the null hypothesis is true) is assumed to be known.
· Select the acceptance region and rejection region from the sampling distribution, using a stated significance level (commonly .05 or .01).
· Accept or reject the null hypothesis H0 and draw the appropriate conclusions.
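The steps above can be sketched in code. This is a minimal illustration, not a prescription: the sample values, the hypothesized mean of 100, and the assumed known σ are all hypothetical, and the test shown is the two-sided z test that applies when σ is known.

```python
import math
from statistics import NormalDist

# Hypothetical sample data (e.g., weekly unit sales); values are illustrative.
sample = [102, 98, 105, 101, 97, 103, 99, 104, 100, 96]
mu_0 = 100     # Step 1: H0: Mean = 100
# Step 2: HA: Mean != 100 (two-sided)
sigma = 3.0    # assumed known population standard deviation
alpha = 0.05   # significance level for the acceptance/rejection regions

# Step 3: calculate the test statistic from the sample data.
n = len(sample)
x_bar = sum(sample) / n
z = (x_bar - mu_0) / (sigma / math.sqrt(n))

# Step 4: rejection region is |z| > z_{alpha/2} under the standard normal.
z_crit = NormalDist().inv_cdf(1 - alpha / 2)

# Step 5: accept or reject H0.
reject = abs(z) > z_crit
print(f"z = {z:.3f}, critical value = {z_crit:.3f}, reject H0: {reject}")
```

Here the sample mean is 100.5, giving z ≈ 0.53, which falls inside the acceptance region, so H0 is not rejected.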
We revisit Chapter 9 by learning about the two hypotheses that make up the structure of a hypothesis test. The null hypothesis is the statement being tested. Usually it represents the status quo, and it is not rejected unless there is convincing sample evidence that it is false. The alternative, or research, hypothesis is a statement that is accepted only if there is convincing sample evidence that it is true and that the null hypothesis is false. In some situations, the alternative hypothesis is a condition for which we need to attempt to find supportive evidence. We will also learn that two types of errors can be made in a hypothesis test. A Type I error occurs when we reject a true null hypothesis, and a Type II error occurs when we do not reject a false null hypothesis.
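The meaning of a Type I error can be made concrete by simulation: if we repeatedly sample from a population where the null hypothesis is actually true and test at α = .05, we should reject (wrongly) about 5% of the time. The population parameters and sample size below are hypothetical choices for illustration.

```python
import random
from statistics import NormalDist, mean

random.seed(42)
alpha = 0.05
z_crit = NormalDist().inv_cdf(1 - alpha / 2)
mu_0, sigma, n = 0.0, 1.0, 30   # H0 is true: the population mean really is 0

trials = 10_000
rejections = 0
for _ in range(trials):
    # Draw a sample from a population where H0 holds.
    sample = [random.gauss(mu_0, sigma) for _ in range(n)]
    z = (mean(sample) - mu_0) / (sigma / n ** 0.5)
    if abs(z) > z_crit:
        rejections += 1   # rejecting a true H0 is a Type I error

type_1_rate = rejections / trials
print(f"Observed Type I error rate: {type_1_rate:.3f} (alpha = {alpha})")
```

The observed rejection rate comes out close to α, which is exactly what "probability of a Type I error equal to α" means.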
We study two commonly used ways to conduct a hypothesis test. The first involves comparing the value of a test statistic with what is called a critical value, and the second employs what is called a p-value. The p-value measures the weight of evidence against the null hypothesis. The smaller the p-value, the more we doubt the null hypothesis. We will learn that, if we can reject the null hypothesis with the probability of a Type I error equal to α, then we say that the test result has statistical significance at the α level. However, even if the result of a hypothesis test tells us that statistical significance exists, we must carefully assess whether the result is practically important. One good way to do this is to use a point estimate and confidence interval for the parameter of interest.
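The two approaches described here, critical value and p-value, always agree, and a short sketch shows why. The numbers below (sample mean 51.2, hypothesized mean 50, known σ = 4, n = 64) are hypothetical.

```python
import math
from statistics import NormalDist

# Hypothetical setting: H0: Mean = 50 vs HA: Mean != 50, sigma known.
x_bar, mu_0, sigma, n, alpha = 51.2, 50.0, 4.0, 64, 0.05

z = (x_bar - mu_0) / (sigma / math.sqrt(n))   # test statistic: z = 2.4

# Approach 1: compare the test statistic with the critical value.
z_crit = NormalDist().inv_cdf(1 - alpha / 2)
reject_by_critical_value = abs(z) > z_crit

# Approach 2: compute the p-value, the probability under H0 of a statistic
# at least as extreme as the one observed (two-sided here).
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
reject_by_p_value = p_value < alpha

print(f"z = {z:.2f}, p-value = {p_value:.4f}")
```

Here z = 2.4 exceeds the 1.96 critical value, and equivalently the p-value of about .016 is below α = .05; the smaller the p-value, the stronger the evidence against H0.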
The specific hypothesis tests we will cover in this chapter all deal with a hypothesis about one population parameter. First, we will study a test about a population mean that is based on the assumption that the population standard deviation σ is known. This test employs the normal distribution. Second, we will study a test about a population mean that assumes σ is unknown. We will learn that this test is based on the t distribution. Figure 9.18 presents a flowchart summarizing how to select an appropriate test statistic to test a hypothesis about a population mean. Then we will present a test about a population proportion that is based on the normal distribution. Next we will study Type II error probabilities and how we can find the sample size needed to make both the probability of a Type I error and the probability of a serious Type II error as small as we wish. We conclude by discussing the chi-square distribution and its use in making statistical inferences about a population variance.
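When σ is unknown, the test statistic uses the sample standard deviation s in place of σ and is compared against the t distribution with n − 1 degrees of freedom. A brief sketch, with hypothetical data and a tabled critical value in place of a statistical library:

```python
import math
from statistics import mean, stdev

# Hypothetical small sample (e.g., fill weights in ounces); sigma is unknown.
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]
mu_0 = 12.0   # H0: Mean = 12 vs HA: Mean != 12

n = len(sample)
x_bar = mean(sample)
s = stdev(sample)                        # sample standard deviation replaces sigma
t = (x_bar - mu_0) / (s / math.sqrt(n))  # t statistic with n - 1 = 7 df

# Two-sided critical value t_{.025, 7} from a t table (alpha = .05).
t_crit = 2.365
reject = abs(t) > t_crit
print(f"t = {t:.3f} on {n - 1} df, reject H0: {reject}")
```

With this sample, t ≈ 0.58 is well inside the acceptance region, so H0 is not rejected; with a larger n the t critical value approaches the normal value of 1.96, which is the logic behind the flowchart in Figure 9.18.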
In Chapter 19, we begin by discussing Bayes’ Theorem. We will learn that this theorem is used to revise prior probabilities to posterior probabilities, which are revised probabilities based on new information. We also see how to use a probability revision table (and Bayes’ Theorem) to update probabilities in a decision problem. In Section 19.2 we present an introduction to decision theory. We see that a decision problem involves states of nature, alternatives, payoffs, and decision criteria, and we will consider three degrees of uncertainty: certainty, uncertainty, and risk. In the case of certainty, we learn which state of nature will actually occur. Here we simply choose the alternative that gives the best payoff. In the case of uncertainty, we have no information about the likelihood of the different states of nature.
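The probability revision table mentioned here follows a fixed pattern: multiply each prior by the likelihood of the new information under that state to get a joint probability, sum the joints to get the marginal, and divide to get the posteriors. A minimal sketch with hypothetical states and probabilities:

```python
# Hypothetical decision problem: two states of nature with prior probabilities,
# and the likelihood of a favorable survey result under each state.
priors = {"strong_market": 0.6, "weak_market": 0.4}
likelihood_favorable = {"strong_market": 0.8, "weak_market": 0.3}

# Probability revision table: prior x likelihood -> joint probability.
joints = {s: priors[s] * likelihood_favorable[s] for s in priors}

# Marginal probability of observing the favorable result at all.
p_favorable = sum(joints.values())

# Bayes' Theorem: posterior = joint / marginal.
posteriors = {s: joints[s] / p_favorable for s in joints}
print(posteriors)
```

A favorable survey raises the probability of a strong market from the prior of .60 to a posterior of .80 in this example, which is exactly the "revision based on new information" the chapter describes.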
Here we will discuss two commonly used decision criteria—the maximin criterion and the maximax criterion. In the case of risk, we are able to estimate the probability of occurrence for each state of nature. In this case we will learn how to use the expected monetary value criterion. We also learn how to construct a decision tree in Section 19.2, and we will see how to use such a tree to analyze a decision problem. In Section 19.3 we will learn how to make decisions by using posterior probabilities. We will also see how to perform a posterior analysis to determine the best alternative for each of several sampling results. Then we will see how to carry out a preposterior analysis, which allows us to assess the worth of sample information. In particular, we see how to obtain the expected value of sample information. This quantity is the expected gain from sampling, which tells us the maximum amount we should be willing to pay for sample information. We conclude by introducing utility theory to help make decisions.
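The three criteria can be compared side by side on one payoff table. The alternatives, payoffs, and state probabilities below are hypothetical, chosen only to show how the criteria can disagree.

```python
# Hypothetical payoff table: rows are alternatives, columns are states of nature.
payoffs = {
    "build_large": {"high_demand": 200, "low_demand": -120},
    "build_small": {"high_demand":  90, "low_demand":   40},
    "do_nothing":  {"high_demand":   0, "low_demand":    0},
}

# Maximin (pessimistic): pick the alternative with the best worst-case payoff.
maximin = max(payoffs, key=lambda a: min(payoffs[a].values()))

# Maximax (optimistic): pick the alternative with the best best-case payoff.
maximax = max(payoffs, key=lambda a: max(payoffs[a].values()))

# Risk case: estimated state-of-nature probabilities allow an expected
# monetary value (EMV) for each alternative.
probs = {"high_demand": 0.55, "low_demand": 0.45}
emv = {a: sum(probs[s] * payoffs[a][s] for s in probs) for a in payoffs}
best_emv = max(emv, key=emv.get)

print(f"maximin: {maximin}, maximax: {maximax}, best EMV: {best_emv}")
```

With these numbers the pessimist builds small, the optimist builds large, and the EMV criterion (67.5 for the small plant versus 56 for the large one) sides with building small; the same payoffs and probabilities are what a decision tree would organize graphically.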