Quant #26: Choosing a Parametric Statistical Test

One of the tasks in statistical analysis is choosing the right parametric statistical test to conduct. Typically, this decision comes down to identifying the types of variables involved and checking whether the data meet certain assumptions. Common statistical assumptions include independence of observations (absence of autocorrelation), homogeneity of variance across groups, and normal distribution of the data (applicable only to quantitative data).
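As a rough illustration of one of these checks (not a formal test), the sketch below compares the sample variances of two hypothetical groups; a common rule of thumb treats a ratio of the larger to the smaller variance under about 4 as compatible with the homogeneity assumption. In practice, formal tests such as scipy.stats.levene (homogeneity of variance) or scipy.stats.shapiro (normality) would be used instead.

```python
from statistics import variance

def variance_ratio(sample_a, sample_b):
    """Rough homogeneity-of-variance check: ratio of the larger
    sample variance to the smaller one. Values near 1 suggest the
    groups have similar spread."""
    va, vb = variance(sample_a), variance(sample_b)
    return max(va, vb) / min(va, vb)

# Hypothetical measurements for two groups
group_x = [5.1, 4.8, 5.3, 5.0, 4.9]
group_y = [6.0, 5.7, 6.2, 5.9, 6.1]
print(variance_ratio(group_x, group_y))  # close to 1: similar spread
```

The data and the rule-of-thumb threshold here are illustrative; the appropriate check always depends on the specific test being planned.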

If your data meet the assumptions of normality and homogeneity of variance, you may be able to perform a parametric statistical test. The most common types of parametric tests are regression tests, comparison tests, and correlation tests. In regression tests, a researcher is interested in assessing cause-and-effect relationships; regressions are often used to estimate the effect of one or more continuous variables on another variable.

Unlike regressions, comparison tests focus on differences among group means. They can be used to test the effect of a categorical variable on the mean value of some other characteristic. Examples include t-tests, which compare the means of exactly two groups (e.g. the average income of men and women), and ANOVA (analysis of variance) tests, which compare the means of more than two groups (e.g. the average weight of group x, group y, and group z).
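The test statistics behind both examples can be sketched directly. The functions below compute Welch's two-sample t statistic and the one-way ANOVA F statistic from scratch; they return only the statistic, not a p-value, and in practice scipy.stats.ttest_ind and scipy.stats.f_oneway would be used instead.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic: difference in means scaled by
    the standard error (does not assume equal variances)."""
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a)
                                      + variance(b) / len(b))

def one_way_f(*groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square, for two or more groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Identical groups give t = 0 and F = 0; the further the group means drift apart relative to the within-group spread, the larger the statistics become, which is what the hypothesis tests then convert into a p-value.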

In addition to regression and comparison tests, correlation tests constitute a third type of parametric test. Correlations are employed to investigate the relationship between variables without regard to whether there is a cause-and-effect relationship. An example is Pearson's r, which measures the linear correlation between two variables (usually continuous data).
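Pearson's r has a compact definition, the covariance of the two variables divided by the product of their standard deviations, which can be sketched directly; scipy.stats.pearsonr provides the same statistic together with a p-value.

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the product of
    their standard deviations. Ranges from -1 (perfect negative
    linear relationship) to +1 (perfect positive)."""
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3], [2, 4, 6]))  # perfectly linear: r = 1
```

Note that r is symmetric in x and y, which reflects the point above: correlation quantifies association without designating either variable as cause or effect.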