

Here you'll find definitions and explanations of many mathematical terms, and statistical concepts and tests. You'll also find links to further information about them as well as the 'how-to' guides to accompany them.

This will forever be a work in progress, so please let us know if you would like something added to it.

**Accuracy**: How close measured scores in a dataset are to their true values. The true value may be decided based on background literature or previous studies.

**Algebra**: The area of mathematics where letters and other general symbols are used to represent numbers and quantities in formulae and equations.

**Alternate-form reliability**: A method of measuring reliability which employs a functionally similar version of a survey or instrument alongside the original version. In this way, it works to counteract the practice effect which hinders test-retest reliability. Examples of such variations include changing the ordering or the wording of some questions in the survey/instrument.

**Alternative hypothesis**: A hypothesis which predicts that there will be a significant result on a statistical test. It contradicts the null hypothesis. It is sometimes known as H_{1}.

**Analysis of covariance (ANCOVA)**: An ANOVA (see ANOVA) which accounts for the effect of an added confounding variable, referred to as the covariate, when considering the relationship between the factor(s) and the dependent variable.

**Analysis of variance (ANOVA)**: An analysis of the difference among means where one or more independent variables (referred to as factors) are used to group the data. The data captured for each group is then compared with the others to observe any statistically significant differences between them. This method is best used when a factor has three or more levels, i.e. when it divides the data into three or more distinct groups.

**Bartlett’s test for homogeneity of variance:** A test used to determine whether the variances of all samples in an analysis are roughly equal. It tests the null hypothesis that they are equal, so a significant result indicates unequal variances. It is best applied when the data is suspected to be roughly normally distributed.

**Between-Subjects ANOVA**: A One-Way Between-Subjects ANOVA is a parametric hypothesis test that compares the difference between more than two independent groups, such as comparing the difference between three distinct groups. [**SPSS Guide**]

**Bias:** A general term referring to systematic (i.e. not random) deviation of an estimate from the true value.

**Bimodal distribution:** A distribution curve where two ‘peaks’ are observed.

**Bonferroni correction:** A post hoc adjustment used in procedures such as ANOVAs, which divides the significance threshold (or, equivalently, multiplies each p-value) by the number of comparisons made between groups. This is performed to reduce the probability of a Type I error.
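
As a minimal sketch of the idea (illustrative Python, with made-up p-values rather than output from any real analysis):

```python
# Hypothetical p-values from 3 pairwise comparisons (made up for illustration)
p_values = [0.010, 0.020, 0.040]
alpha = 0.05

# Bonferroni: divide the significance threshold by the number of comparisons...
corrected_alpha = alpha / len(p_values)                 # 0.05 / 3
# ...or, equivalently, multiply each raw p-value (capped at 1)
adjusted_p = [min(p * len(p_values), 1.0) for p in p_values]

significant = [p < corrected_alpha for p in p_values]   # only the first survives
```

Only the smallest p-value remains significant after correction, illustrating how the adjustment guards against Type I errors across multiple tests.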

**Calculus:** The area of mathematics involving derivatives and integrals; it is the study of motion and of continuously changing values.

**Central limit theorem:** A theorem which states that the sampling distribution of the mean draws closer to being normally distributed as the gathered sample size increases.
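
A quick simulation can make this concrete (a Python sketch with invented dice data, not part of the guide itself): single die rolls are uniformly distributed, but the means of repeated samples cluster around the true mean of 3.5.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is reproducible

# 1000 samples, each the mean of 30 fair die rolls
sample_means = [
    statistics.mean(random.randint(1, 6) for _ in range(30))
    for _ in range(1000)
]

grand_mean = statistics.mean(sample_means)  # sits very close to 3.5
spread = statistics.stdev(sample_means)     # far smaller than the spread of single rolls
```

Increasing the sample size beyond 30 shrinks the spread of the sample means further, which is exactly what the theorem predicts.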

**Chi-square test (χ²):** A statistical test of goodness-of-fit which tests the null hypothesis that a discrete variable follows a hypothesised distribution, by comparing the frequencies observed in the data with the frequencies that distribution predicts. [**SPSS Guide**]

**Cohen’s d:** A measurement of effect size used to determine the size of a difference between means, such as in the case of a t test. Cohen’s d values equal to or greater than 0.2 represent a small effect, values equal to or greater than 0.5 represent a medium effect, and values equal to or larger than 0.8 represent a large effect.
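
A sketch of the calculation in Python (the two groups below are invented for illustration): Cohen's d divides the difference in means by the pooled standard deviation.

```python
import statistics

group_a = [5, 6, 7, 8, 9]
group_b = [1, 2, 3, 4, 5]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled standard deviation across the two groups
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5

d = (mean_a - mean_b) / pooled_sd   # here (7 - 3) / 1.58... ≈ 2.53, a large effect
```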

**Collinearity:** Where, in a regression analysis, a strong correlation exists between two variables such that it is difficult to estimate their individual regression coefficients reliably.

**Confidence intervals:** The range within which a researcher would expect their observed values to fall if the experiment were repeated. Confidence intervals often bracket a sample estimate on either side of its distribution, with the most common level being 95%.
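
As an illustration (made-up scores, and the normal-approximation critical value 1.96; small samples would normally use a t critical value instead):

```python
import statistics

scores = [12, 15, 14, 10, 13, 11, 14, 12, 13, 16]   # invented data

mean = statistics.mean(scores)
se = statistics.stdev(scores) / len(scores) ** 0.5  # standard error of the mean

# An approximate 95% confidence interval bracketing the sample mean
lower, upper = mean - 1.96 * se, mean + 1.96 * se
```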

**Confounding variable:** A variable, other than the independent variable(s) in a statistical test, which has a relationship with the dependent variable(s) that distorts the original IV-DV relationship.

**Correlation:** A trend in the relationship between two variables where a change in one variable is associated with a change in another. A correlation does not necessarily mean that one variable directly causes a change in the other (causation).

**Correlation coefficient:** A measurement of the degree of correlation between two variables. It is a value between –1 and +1, representing a negative correlation if it is below 0, and a positive correlation if it is above 0. Examples of correlation coefficients include r (from Pearson’s r) and ρ (from Spearman’s Rank).
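
A from-scratch sketch of Pearson's r on two invented variables: the covariance divided by the product of the two standard deviations.

```python
import statistics

x = [1, 2, 3, 4, 5]   # made-up paired observations
y = [2, 4, 5, 4, 5]

mean_x, mean_y = statistics.mean(x), statistics.mean(y)

# Sample covariance of x and y
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (len(x) - 1)

r = cov / (statistics.stdev(x) * statistics.stdev(y))   # ≈ 0.77, a positive correlation
```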

**Corollary:** A proposition that follows from (and is often appended to) a mathematical proof.

**Cronbach’s alpha (α):** A measure of the reliability or internal consistency of a multi-item scale, in a single value. The reliability coefficient derived from this test can range from 0 to 1, with the ideal score often being .70 or higher for new scales and .80 or higher for established scales. [SPSS Guide to come]
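
For illustration, the standard formula α = (k / (k − 1)) × (1 − Σ item variances / variance of totals) can be computed directly; the 3-item, 5-respondent scores below are made up.

```python
import statistics

# Each list is one item on the scale; positions are the 5 respondents
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 5, 3, 4, 1]
items = [item1, item2, item3]

k = len(items)                                    # number of items
totals = [sum(scores) for scores in zip(*items)]  # each respondent's total score

item_var_sum = sum(statistics.variance(item) for item in items)
alpha = (k / (k - 1)) * (1 - item_var_sum / statistics.variance(totals))  # ≈ .93
```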

**Dependent variable (DV):** A variable that depends on other factors, normally the manipulation of an independent variable. These variables are often measured in experiments.

**Descriptive statistics:** Statistics which quantitatively describe the properties of a data set, such as the mean, median, mode, standard deviation, or frequency distributions.

**Differentiation:** The process of finding the derivative, or rate of change, of a function. It is often considered the inverse of integration.
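
For instance, the derivative of f(x) = x³ is 3x², which a small finite-difference sketch can check numerically at x = 2:

```python
def f(x):
    return x ** 3

h = 1e-6   # a small step
x = 2.0

# Central-difference approximation to the derivative at x
approx_derivative = (f(x + h) - f(x - h)) / (2 * h)   # ≈ 3 * 2**2 = 12
```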

**Distribution curve:** A graphical representation of the spread of scores for a continuous variable.

**Effect size:** A measure of how meaningful the relationship between two variables, or the difference between groups, is. Different tests are used to measure effect size for different statistical tests, such as Cohen’s d and partial eta squared (η_{p}^{2}). This is often interpreted in terms such as ‘small’, ‘medium’ and ‘large’.

**Factor analysis:** A method used to observe patterns in a data set by reducing it down to a set of variables for similar items in a measure (known as dimensions). Factor analysis can be performed without any preconceived ideas of the data’s structure (exploratory analysis), or to verify a specific idea that a researcher has about the data’s structure (confirmatory analysis). This method is particularly useful in studies which try to better understand psychological variables or socioeconomic status.

**Fleiss' kappa:** A measure of inter-rater agreement used to determine how well two or more raters agree on their scoring of nominal data.

**Frequency distribution:** A display of how often each data value occurs in the data set or specific variable. This could be shown in a table or graph.

**Friedman’s ANOVA:** A non-parametric statistical test of difference, suitable for comparing more than two related groups. The test can be seen as a non-parametric alternative to a Repeated-Measures ANOVA. [**SPSS Guide**]

**Geometry:** The study of lines, angles, shapes, and their properties, including physical shapes and the dimensions of objects.

**Graph: **A diagram depicting the relationship between two or more variables. Examples of graphs include bar charts, box and whisker plots, histograms, line graphs, pie charts, and scatter plots.

**Heteroscedasticity:** Where the variances of two variables being compared are unequal. Parametric tests make the assumption of homogeneity of variance, so heteroscedastic data violates this assumption.

**Histogram:** A graph depicting the distribution of a numerical variable in the data set. A histogram divides a data set into equal intervals of values (often referred to as bins) and then records the frequency with which the values in the data set fall into each interval. This is depicted through an array of bars, with a taller bar indicating a higher frequency of values falling within that interval. Histograms are well suited to judging normality, as they show where the data forms peaks or becomes skewed.

**Homoscedasticity:** Where the variances of two variables being compared are roughly equal. This is ideal for parametric testing, where the assumption of homogeneity of variance is made.

**Hypothesis:** A prediction of the outcome of a statistical test. There are two types of hypotheses: the null (H_{0}) and the alternative (H_{1}).

**Independent measures/between-subjects ANOVA:** An ANOVA where the data points which are separated into groups are gathered from different participants. For example, researchers may want to examine the effect of a pharmaceutical drug on people with a certain condition compared to people who don’t have that condition. [**SPSS Guide**]

**Independent Samples t-Test:** An independent-samples t-test is a parametric hypothesis test which compares the means between two unrelated groups, such as comparing the difference between class 1 and class 2. [**SPSS Guide**]
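
The test statistic itself can be sketched by hand (invented data; a complete test would also derive a p-value from the t distribution with n₁ + n₂ − 2 degrees of freedom, which SPSS reports for you):

```python
import statistics

class_1 = [5, 6, 7, 8, 9]   # made-up scores for two unrelated groups
class_2 = [1, 2, 3, 4, 5]

mean_1, mean_2 = statistics.mean(class_1), statistics.mean(class_2)
n_1, n_2 = len(class_1), len(class_2)

# Pooled variance (Student's t, equal variances assumed)
sp2 = ((n_1 - 1) * statistics.variance(class_1)
       + (n_2 - 1) * statistics.variance(class_2)) / (n_1 + n_2 - 2)

t = (mean_1 - mean_2) / (sp2 * (1 / n_1 + 1 / n_2)) ** 0.5
```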

**Independent Samples:** A grouping condition, opposite to paired-samples, commonly used for t-tests. Subjects in one group are entirely distinct and independent from the subjects in the other groups.

**Independent variable (IV):** A variable whose variation doesn’t depend on that of another. These are often manipulated in experiments to measure any resulting changes in the dependent variable(s).

**Inferential statistics:** The practice of inferring properties of a population based on comparisons between the distributions of their data. One example of inferential statistics is hypothesis testing, which includes t-tests, correlations, ANOVAs etc.

**Integration:** The process of finding a function g(x) whose derivative is another function f(x). It is often considered the inverse of differentiation.

**Internal consistency:** A method of measuring reliability which involves measuring the reliability of the individual items of a test. This is usually performed using Cronbach’s alpha.

**Inter-rater reliability:** A method of measuring reliability which involves conducting the same measure multiple times, but with different people conducting it each time.

**Interquartile range:** A measure of dispersion. It is the difference between the 1st and 3rd quartiles in a data’s distribution. A larger interquartile range means a more dispersed distribution of data.
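
A short illustration using Python's standard library (its quantiles helper defaults to the "exclusive" method; other conventions place the quartiles slightly differently):

```python
import statistics

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]   # made-up values

q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points (q2 is the median)
iqr = q3 - q1                                 # spread of the middle 50% of the data
```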

**Interval Data:** A classification or 'level' of data that comprises **continuous** measurements **without** an absolute zero. Examples include Temperature (°C or °F).

**Intra-rater reliability:** A method of measuring reliability which involves conducting the same measure multiple times, with the same people conducting it each time.

**Kendall’s tau rank test (τ):** A non-parametric test of correlation between two variables, where each variable is measured as ordinal data. This test uses the test statistic τ.

**Kruskal-Wallis test:** A non-parametric statistical test of difference, suitable for comparing between more than two independent groups. The test is a non-parametric version of a one-way Between-Subjects ANOVA. [**SPSS Guide**]
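
The H statistic can be sketched from first principles (invented, tie-free data; tied values would need a correction factor, which SPSS applies automatically):

```python
# Three independent groups of made-up measurements, with no tied values
groups = [[1.2, 2.1, 3.3], [4.4, 5.0, 6.1], [7.2, 8.5, 9.9]]

pooled = sorted(v for g in groups for v in g)
rank = {v: i + 1 for i, v in enumerate(pooled)}   # rank every value across all groups

n_total = len(pooled)
rank_sums = [sum(rank[v] for v in g) for g in groups]

# Kruskal-Wallis H; compared against chi-squared with (number of groups - 1) df
h = (12 / (n_total * (n_total + 1))
     * sum(r ** 2 / len(g) for r, g in zip(rank_sums, groups))
     ) - 3 * (n_total + 1)
```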

**Kurtosis:** The steepness or flatness of a distribution curve; more formally, a measure of the heaviness of its tails.

**Mean:** An average that expresses the middle or typical value of a set of numbers. The mean is calculated as the sum of all values divided by the number of values included.
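
For example (a trivial Python sketch):

```python
import statistics

values = [2, 4, 6, 8, 10]
mean = sum(values) / len(values)   # (2 + 4 + 6 + 8 + 10) / 5 = 6.0

# The standard library helper computes the same value
assert mean == statistics.mean(values)
```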

**Nominal Data:** A classification or 'level' of data that comprises named but **unordered** categories. Examples include Gender (Male, Female, Non-Binary), Eye Colour (Blue, Green, Brown), and subject choice (Maths, Chemistry, Sociology).

**Ordinal Data:** A classification or 'level' of data that comprises **ordered** categories. Examples include Height (Short, Medium, Tall), Age (Young, Middle-Aged, Old), and University Year (First Year, Second Year, Third Year).

**Proof:** A logical argument, or a series of arguments, that demonstrates that a theorem is true. Proofs can range from a diagram or a few sentences up to 10,000 pages.

**Ratio Data:** A classification or 'level' of data that comprises **continuous** measurements **with** an absolute zero. Examples include Distance (km), Weight (kg), and Temperature (K).

**Repeated Measures ANOVA**: A One-Way Repeated-Measures ANOVA is a parametric hypothesis test that compares the difference between more than two related groups, such as comparing the difference between three conditions that all participants experience. [**SPSS Guide**]

**Scale Data:** A classification of data that encompasses both Ratio and Interval level data, or **continuous** measurements. Examples include Height (in centimetres), Weight (in kilograms), Temperature (°C) and Test Scores (as a percentage).

**Theorem:** A proposition which is not self-evident but is proved by a chain of reasoning, often in the form of a mathematical proof.

**Trigonometry:** The area of mathematics involving the relations of the sides and angles of triangles and the relevant functions of any angles.

- Last Updated: Sep 19, 2023 9:03 AM
- URL: https://guides.library.lincoln.ac.uk/mash

© University of Lincoln