I have but three requirements between now and graduation: a 450h practicum (halfway done), a thesis (kind of started), and… a final stats assignment. This post is a tutorial for this last one. First though, two quick notes that will facilitate all that follows: you can easily toggle between viewing variable names and labels via “Options”, which makes finding variables much easier, and computing a new variable is easily managed via the “Transform” menu.
The American educational psychologist Lee Cronbach first named this reliability coefficient in 1951, calling it just “α” with the intention of describing further coefficients after it. He never got around to this, probably because of his busy schedule teaching at Stanford and presiding over the APA. Still, the coefficient eventually took on his name.
Cronbach’s α measures whether several items on a questionnaire are all measuring the same characteristic, based on the assumption that, if indeed they are, their answers will be related to each other. Running this test in SPSS shows not only the Cronbach’s α for the whole group of items together, but also how this value would change should any individual item be removed from the calculation. General consensus says an adequate value is anything over .7, while an optimal value is anything over .8. Here is how you would do it in SPSS:
As an example, you may have five questions meant to measure anxiety. You run a Cronbach’s α reliability test and find the result to be .65. You then scroll through the results and see that deleting question three would boost your α score to .85. This means you should consider not using question three in your analysis, because it doesn’t seem to be measuring anxiety. If your α score is negative, this signals a very weak correlation between the scale items, meaning they may actually be measuring different things.
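SPSS handles all of this through its Reliability Analysis dialog, but the underlying formula is simple enough to sketch by hand. Here is a minimal pure-Python version of the α calculation (the `cronbach_alpha` helper and all the response data are hypothetical, purely to illustrate the arithmetic): a three-item “scale” where one item is out of step scores low, and dropping that item raises α, just like in the example above.

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item)."""
    k = len(items)                                      # number of items
    totals = [sum(scores) for scores in zip(*items)]    # per-respondent total scores
    item_var = sum(variance(col) for col in items)      # sum of the item variances
    total_var = variance(totals)                        # variance of the total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical responses: 4 respondents answering 3 items on a 1-5 scale
items = [
    [4, 3, 3, 3],   # item 1
    [4, 4, 3, 3],   # item 2
    [3, 5, 1, 4],   # item 3 -- answers look unrelated to the others
]

print(cronbach_alpha(items))        # all three items: a low alpha
print(cronbach_alpha(items[:2]))    # item 3 dropped: alpha improves
```

This is the same “α if item deleted” logic SPSS prints for every item in its output table.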
The analysis of variance was first used in a 1918 paper on genetics by the English biologist Ronald Fisher. Prior to this, statistics only compared raw values for correlations, not how these values’ variations from their means might also match up.
A one-way ANOVA compares the means of one dependent variable across three or more groups defined by a single independent variable (“way” refers to the number of independent variables involved). E.g.: a class of students is divided into three groups, and each group is asked to mentally recite one of three self-talk scripts for a minute: “I can do it”, “I should do it”, and “I must do it”. After this, each student runs 100m and results are gathered to see which self-talk script resulted in what average finishing time.
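The F statistic SPSS reports for this test is just a ratio: how much the group means vary around the grand mean, versus how much individual times vary within their own group. A rough pure-Python sketch (the `one_way_anova_f` helper and all the finishing times are made up for illustration):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: between-group vs within-group variance."""
    n = sum(len(g) for g in groups)     # total number of observations
    k = len(groups)                     # number of groups
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of each score around its own group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical 100m times (seconds) for the three self-talk groups
can    = [14.1, 14.3, 13.9, 14.2]
should = [14.8, 15.0, 14.6, 14.9]
must   = [15.5, 15.2, 15.6, 15.4]
f = one_way_anova_f([can, should, must])
```

A large F means the group means differ by more than within-group noise would predict; SPSS then converts F (with its degrees of freedom) into the p-value you actually read off.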
Factorial ANOVAs, on the other hand, measure the combined effects of two or more independent variables on a dependent variable (in practice only two-way ANOVAs are really used, because anything beyond that becomes too complex to interpret). Factorial ANOVAs also detect interaction effects: cases where one independent variable’s effect changes depending on the level of the second independent variable. In our example above, a two-way ANOVA could be used to measure the effects of self-talk script and gender on completing the sprint. It is possible that on average everyone performs best with the first script, but that (upon closer analysis) men perform best with the last script while women perform best with the first (voilà, an interaction effect). Again, SPSS instructions follow:
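Before clicking through SPSS, it can help to see what an interaction effect looks like in the raw cell means. The few lines below (all numbers invented to match the scenario above) show the pattern a significant interaction term would flag: one script wins on average, but the winner flips when you split by gender.

```python
# Hypothetical mean 100m times (seconds) per cell of a 2 (gender) x 3 (script) design
cell_means = {
    ("women", "can"): 13.5, ("women", "should"): 14.5, ("women", "must"): 15.0,
    ("men",   "can"): 14.5, ("men",   "should"): 14.3, ("men",   "must"): 13.8,
}

def best_script(gender):
    """Script with the fastest (lowest) mean time for one gender."""
    return min((s for g, s in cell_means if g == gender),
               key=lambda s: cell_means[(gender, s)])

def overall_best():
    """Script with the fastest mean time averaged over both genders (balanced design)."""
    scripts = {s for _, s in cell_means}
    return min(scripts,
               key=lambda s: sum(cell_means[(g, s)] for g in ("women", "men")) / 2)

# Overall the "I can do it" script wins, but men actually do best with "I must do it"
print(overall_best(), best_script("women"), best_script("men"))  # -> can can must
```

If there were no interaction, the script rankings would be the same within each gender as they are overall; it is exactly this crossover that the two-way ANOVA’s interaction term tests for.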
And this is all for now. By some miracle I finished the assignment, with the help of a great friend and a great book: Julie Pallant’s SPSS Survival Manual. Andy Field’s Discovering Statistics Using IBM SPSS Statistics was also recommended, but I just didn’t have the time or energy to properly dive into that one.