Monday, April 30, 2007

Chapter 14 Inferences for regression

Here are the answers from today's activities:
Please forgive the formatting. Note that the values of SE Coef, T, and P for the Constant row are not used in our inference calculations.


Regression Analysis: C6 versus C5
The regression equation is
C6 = 26.7 + 57.0 C5

Predictor     Coef  SE Coef     T      P
Constant     26.75    19.44  1.38  0.263
C5           57.00    42.17  1.35  0.269

S = 9.42956 R-Sq = 37.8% R-Sq(adj) = 17.1%

Regression Analysis: C9 versus C8
The regression equation is
C9 = 32.4 + 37.5 C8

Predictor     Coef  SE Coef     T      P
Constant     32.38    25.87  1.25  0.299
C8           37.50    60.19  0.62  0.577

S = 14.1877 R-Sq = 11.5% R-Sq(adj) = 0.0%


Regression Analysis: C12 versus C11
The regression equation is
C12 = 17.5 + 90.3 C11

Predictor     Coef  SE Coef     T      P
Constant     17.50    10.45  1.68  0.192
C11          90.31    14.96  6.04  0.009

S = 8.52974 R-Sq = 92.4% R-Sq(adj) = 89.9%
--------------------------------------------
Regression Analysis: C15 versus C14
The regression equation is
C15 = 52.7 - 14.0 C14

Predictor     Coef  SE Coef      T      P
Constant     52.67    18.74   2.81  0.067
C14         -14.00    42.50  -0.33  0.764

S = 7.76030 R-Sq = 3.5% R-Sq(adj) = 0.0%
-------------------------------------------

Descriptive Statistics: C19, C20
Variable  N  N*    Mean  SE Mean   StDev
C19       5   0  0.5000   0.0791  0.1768
C20       5   0   61.00     8.34   18.64


Regression Analysis: C20 versus C19
The regression equation is
C20 = 9.00 + 104 C19

Predictor     Coef  SE Coef      T      P
Constant     9.000    5.279   1.70  0.187
C19         104.00    10.07  10.33  0.002

S = 3.55903 R-Sq = 97.3% R-Sq(adj) = 96.4%

--------------------------------------------

Now, can you generate a confidence interval for the slope of the REAL regression line from one of your estimates? What does your (large) interval tell you about the strength of the relationship between x and y?
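If you want to check your interval, here is a quick sketch in Python with scipy (not something we use in class, just for verification). It assumes each activity used n = 5 data points, like the C19/C20 summary above shows, so the slope estimate has n - 2 = 3 degrees of freedom; the slope and SE Coef come from the C6-versus-C5 output.

from scipy import stats

# 95% confidence interval for the true slope of the C6-versus-C5 line
b1, se_b1, n = 57.00, 42.17, 5          # slope and SE Coef from the Minitab output above
t_star = stats.t.ppf(0.975, df=n - 2)   # critical t* with 3 df, about 3.182
margin = t_star * se_b1
print(f"({b1 - margin:.1f}, {b1 + margin:.1f})")  # roughly (-77.2, 191.2)

That interval is enormous and contains 0, which is exactly what a weak relationship (P = 0.269) should give you.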

What were the three types of evidence you used to answer the questions about the model? Match the evidence to the question online at the quizplace. http://www.proprofs.com/quiz-school/quizview.php?id=968





Friday, April 13, 2007

Chapter 13 Inferences using Chi-square procedures

Hang in there; we're getting close to the end of the race.

This chapter introduces us to chi-square procedures. These methods are generally used to analyze tables of counts from samples that are separated into CELLS based on one or more categorical variables. The advantage of these methods is that you can make many comparisons at once, instead of just two as in our previous procedures using z and t. Most students like chi-square procedures better than z and t procedures because we use counts rather than continuous data, and there is no one-tailed versus two-tailed decision to make: the alternative is automatically non-directional.

For instance, we might analyze the COUNTS of M&Ms of each of the six usual colors in a bag, or the distribution of the COUNTS of teachers at each combination of YEARS OF EXPERIENCE and HIGHEST DEGREE ATTAINED. Each element counted must be placed in exactly one CELL. We will compare the OBSERVED counts from a sample or samples to the EXPECTED counts in a way that quantifies how likely a discrepancy of that size would be by chance, so we can make inferences.

There are two common versions of this test: one for situations where there is a set of guidelines or percentages that your sample data should match (the goodness of fit test), and one where the observations themselves determine the expected counts using the independence principle (the test of association). In order to make an inference about the population or populations involved, the samples used must be simple random samples (SRSs).

Another condition that must be met is that each expected count must be at least one. Furthermore, at least 80% of the expected counts must be at least 5. Although the observed counts must be integer values, the expected counts (just like expected values) do not need to be integers. [Error alert: Many students INCORRECTLY use the observed counts instead of the expected counts to determine whether the test is appropriate.]
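If you want to automate that check, here is a tiny Python helper (my own sketch, not a built-in from any package) that applies both conditions to a list of expected counts:

def conditions_met(expected):
    # Both chi-square conditions, checked on the EXPECTED counts:
    # every expected count is at least 1, and at least 80% of them are at least 5
    all_at_least_one = all(e >= 1 for e in expected)
    prop_at_least_five = sum(e >= 5 for e in expected) / len(expected)
    return all_at_least_one and prop_at_least_five >= 0.80

print(conditions_met([8.3, 12.5, 4.2, 10.0, 9.6]))  # True: all are >= 1, and 4 of 5 are >= 5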

Depending on the type of test we are performing, there will be one of two different methods for calculating the expected counts. For both types of tests, once you have found the expected counts, you calculate a chi-square component for each pair of observed and expected counts:

chi-square component = (observed - expected)^2/expected. (Of course, these are all non-negative.)


You add up all of the chi-square components to get the chi-square statistic, X^2.

You compare this X^2 value to the chi-square distribution with the appropriate number of degrees of freedom to find the p-value: the probability that you would get an X^2 value at least this large by chance alone when the null hypothesis is true. If this p-value is small, we reject the null hypothesis. If the p-value is large, we do not have sufficient evidence to favor the alternative.
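Here is the whole pipeline in Python with scipy, using made-up counts and a uniform model just to show the mechanics:

from scipy.stats import chi2

observed = [12, 18, 25, 20, 15, 10]     # made-up counts for six categories
total = sum(observed)
expected = [total / 6] * 6              # uniform model: 1/6 of the total per category

components = [(o - e) ** 2 / e for o, e in zip(observed, expected)]
x2 = sum(components)                    # the chi-square statistic
df = len(observed) - 1                  # categories minus 1 for a goodness of fit test
p_value = chi2.sf(x2, df)               # P(X^2 at least this large when Ho is true)
print(round(x2, 2), round(p_value, 3))  # about 9.08 and 0.106: not small enough to reject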

So, I haven't addressed the hypotheses...

Chi-square Goodness of Fit Test (GOF)

This is the test you use to compare a sample set of observed counts to a model defined elsewhere by a higher authority. Some examples:

Comparing your bag of M&Ms to the distribution of colors posted on the M&M/Mars website.

Comparing your bag of M&Ms to a uniform distribution by color (1/6 of the bag for each color).

Comparing the age distribution of your town to the U.S. Census proportions.

Comparing the number of students at your school making a 1, 2, 3, 4, or 5 on the AP exam to the global distribution.


Your null hypothesis states that the population distribution matches the claimed distribution. The alternative is that the distributions do not match. It is important that you write the first statement in context.


If the null hypothesis says something like p1 = p2 = p3, the alternative hypothesis SHOULD NOT be "p1 is not equal to . . ." because some of the pairs of proportions could still be equal even though the counts do not match the distribution you wanted. Instead, use a verbal description like "the distribution does not match the model."


To find each expected count, you take the proportions from the higher authority and multiply them by the total of all observations. You will generally get non-integer values.

Check the expected counts to make sure that all of them are at least one and check a second time to make sure that at least 80% are 5 or more.

Perform the calculations described: compute the chi-square components, add them up to get the chi-square statistic, use that statistic to find the p-value, and make a decision in the context of the problem. If you choose to reject the null hypothesis, go back through the components to find the greatest contributor to the high chi-square statistic and cite it in your decision.
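Putting the whole recipe together, here is a sketch for the M&M example; both the bag counts and the color percentages are made up for illustration, so don't treat them as the real M&M/Mars numbers:

from scipy.stats import chisquare

observed = [17, 11, 8, 7, 6, 6]                  # made-up bag: blue, orange, green, yellow, red, brown
claimed = [0.24, 0.20, 0.16, 0.14, 0.13, 0.13]   # illustrative higher-authority percentages
expected = [p * sum(observed) for p in claimed]  # proportion times total; non-integers are fine
# (all expected counts here are at least 5, so the conditions are met)

x2, p_value = chisquare(f_obs=observed, f_exp=expected)
print(round(x2, 2), round(p_value, 3))           # about 1.6 and 0.9: fail to reject Ho

With a p-value that large, the bag is entirely consistent with the claimed distribution, so there is no component worth citing as a big contributor.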


Chi-square Tests of Association: Independence and Homogeneity

When you have two or more samples from one population, or two or more samples from two or more populations, that you are comparing against each other with respect to categorical variables, you will generally perform a chi-square test of association on the two-way table that you create to summarize the samples.


Use the words of the problem to generate the Ho and Ha for this test. The null hypothesis will customarily follow the pattern "there is no association between [characteristic one] and [characteristic two]."


The method for finding the expected values is different from the method described for the goodness of fit test. Otherwise, the tests are virtually the same.

To find each expected value for the cells of the two-way table, multiply the row total by the column total for that cell and divide by the grand total. Again, you will likely get non-integer numbers. Check the expecteds against the conditions described above (all at least 1, and at least 80% of them at least 5). Calculate the components and the chi-square statistic using the same formulas as in the goodness of fit test, and evaluate the statistic in the same way.
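Here is the same idea in Python, with a made-up 100-person table of gender versus gaming (the same flavor of example discussed further down); scipy's chi2_contingency computes expected counts exactly this way, so you can check your hand calculations against it:

import numpy as np
from scipy.stats import chi2_contingency

# Made-up two-way table: rows = boys, girls; columns = gamer, non-gamer
observed = np.array([[30, 20],
                     [18, 32]])

# Expected count for each cell: row total times column total, over the grand total
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / observed.sum()   # [[24., 26.], [24., 26.]]

x2, p_value, df, exp = chi2_contingency(observed, correction=False)
print(np.allclose(expected, exp))           # True: same expected counts either way
print(round(x2, 2), df, round(p_value, 3))  # about 5.77, 1 df, p about 0.016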

Setting up the hypotheses

One of the hardest problems for students seems to be figuring out what the null and alternative hypotheses should be. Consider the test itself. Whenever an observed count matches its expected count, you get a chi-square component equal to zero, which contributes nothing to our chi-square statistic. If ALL of the numbers matched, then our statistic would be zero and it would be graphed at the far left end of our distribution, leaving 100% of the probability to the right: a p-value of 1. (Fail to reject the null!!!!)


On the other hand, if our observed values are far from the expecteds, then the chi-square components will contribute to a larger statistic and, ultimately, a smaller p-value. (If p is small enough, reject the null!!!!!)
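You can see both extremes directly from the chi-square distribution; df = 5 here is arbitrary, just to have a concrete curve:

from scipy.stats import chi2

print(chi2.sf(0.0, df=5))    # 1.0: perfect agreement leaves all of the probability to the right
print(chi2.sf(25.0, df=5))   # about 0.0001: big discrepancies push the p-value toward zero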

How does this help us to generate our hypotheses? For the null hypothesis to survive, our observeds must be close to our expecteds. When does that happen? When our idea of what should have happened actually DID happen; for instance, when we expected the distribution to be practically uniform and it was.


This is just a little trickier when we are talking about association. The null hypothesis is that the characteristics listed along the top of the two-way table have nothing to do with the characteristics listed down the side of the table. If we proposed that video-gaming and gender were independent, then we would expect the same proportion of boys to be gamers as girls. Even though the wording of the problem may vary ("Are gaming and gender independent?" vs. "Is there a relationship between gaming and gender?"), the test is still the same. The comparison you make is between the observed counts and what the counts should be if the two characteristics are independent.