Have you ever wondered what all the different *t*-tests are about?

There seems to be an endless number of variants—but how do you tell them apart?

What are they good for, and when do you use which?

The confusion surrounding this topic will soon be over, because here you'll learn what *t*-tests are all about, what area of statistics they belong to, what types there are, how they differ from each other, and a few other interesting facts.

So it's going to be very exciting 😉

A small caveat: This article is about the basic understanding, not the calculation.

I'll show you how to do that in future blog posts and in my soon-to-be-launched online membership **Statistics Gym**.

## What are *t*-tests about & what area of statistics do they belong to?

*t*-tests are one of the most commonly used statistical methods when dealing with **differences between the means of two groups or time points.**

This could be hypotheses such as:

- Self-acceptance is lower in women than in men.

- Concentration is higher in the morning than in the evening.

*t*-tests belong to the world of inferential statistics, and within it either to the world of differences or to the world of changes over time when two measurements are taken.

They are used to test difference hypotheses, both directional, i.e. right- or left-tailed, and non-directional ones.

If you'd like to have more mindmaps that give you an overview where you're at in the world of stats, enriched with short descriptions of what the statistical methods are all about, snag your free Statistics Compass **HERE!**

## The 3 types of *t*-tests

There are **3 types** of *t*-tests:

- One-sample *t*-test

- Independent samples *t*-test, also called unpaired *t*-test

- Dependent samples *t*-test, also called paired *t*-test or paired samples *t*-test

All variants deal with differences in mean values between 2 groups or 2 measurement points, which means: **the number 2 is central here!**

**Remember well:**

**If your hypothesis involves mean differences between TWO groups or TWO time points, you usually choose one of the *t*-tests, provided, of course, that the assumptions are met.**

**If you have 3 or more groups or time points, you can no longer calculate *t*-tests, but have to switch to the analysis of variance (= ANOVA)!**

## Examples of the 3 variants

**One-sample *t*-test**

As the name suggests, **this test is about the mean of ONE sample, which is THEN compared with a known (or assumed) mean.**

So despite the name, it is still a comparison of two means.

"Comparison" means that you calculate the difference between the two means, which is then further modified.

Ultimately, the question is:

Do the mean values differ significantly in the expected direction, i.e. are they far enough apart?

This applies to right- or left-tailed hypotheses.

For non-directional hypotheses, the question is:

Do they differ significantly, i.e. are they far enough apart, regardless of the direction?

**What could a hypothesis for a one-sample t-test look like?**

Let's assume that you suspect that the average intelligence of Bavarians is higher than the typical Germany-wide average IQ of 100 IQ points.

This would be a right-tailed difference hypothesis, in which you would, for example, find 50 Bavarians willing to take the test and subject them to an IQ test.

From those 50 IQ values, you then calculate the arithmetic mean and see whether it is significantly higher than the average IQ of 100.

Of course, the calculation and procedure is much more complex, but that is the basic principle (as mentioned, this article acquaints you with the basic principles, not the calculations).
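Just to make the principle a bit more tangible, here is a minimal sketch in Python that computes the one-sample *t*-statistic by hand. The IQ scores are made up purely for illustration:

```python
import math

# Hypothetical IQ scores for 10 Bavarians (invented numbers, for illustration only)
scores = [104, 98, 110, 102, 95, 108, 101, 99, 106, 103]
mu0 = 100  # the known Germany-wide average we compare against

n = len(scores)
mean = sum(scores) / n

# Sample standard deviation (n - 1 in the denominator)
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))

# The one-sample t statistic: how far the sample mean is from mu0,
# measured in standard errors
t = (mean - mu0) / (sd / math.sqrt(n))
print(f"mean = {mean:.1f}, t = {t:.3f}")
```

For the right-tailed hypothesis, you would then check whether this *t*-value falls far enough into the upper tail of the *t*-distribution with n − 1 degrees of freedom.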

**Independent samples *t*-test**

This test **compares the means of two independent groups.**

This could be a hypothesis such as:

The frustration tolerance of younger people is lower than that of older people. (left-tailed hypothesis)

In this case, you'd have two independent groups:

Younger and older people, in each of which frustration tolerance is measured.

Then you take the mean of the scores of each group and compare them—as mentioned, you have to do more, but that's just for a rough understanding.

"Independent samples" means that the people in the two groups have nothing to do with each other, so they are not "related" in any way.
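Here's a small sketch of how this could look in Python, using SciPy's `ttest_ind`; the frustration-tolerance scores are invented for illustration:

```python
from scipy import stats

# Invented frustration-tolerance scores (higher = more tolerant)
younger = [42, 50, 38, 47, 45, 41, 49, 44]
older = [55, 48, 60, 52, 57, 51, 58, 54]

# Left-tailed test: is the mean of the younger group significantly LOWER?
t_stat, p_value = stats.ttest_ind(younger, older, alternative="less")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The `alternative="less"` argument (available in SciPy 1.6+) encodes the left-tailed direction of the hypothesis.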

**Dependent samples *t*-test**

As you may have guessed, this can also be used to compare two means, but now with dependent samples.

Here, things get a little more complicated in terms of possible applications, because—and this is true of statistical methods in general—**there are three different types of dependent samples**, namely:

**1. Multiple measurements on the same people at different times**

**2. Measurements on people who are related in some way**, such as married couples, partners, siblings, co-workers, etc., and

**3. Multiple measurements on the same people under different conditions or "treatments".**

Let's look at the **first case: Multiple measurements on the same people at different times**

The hypothesis is:

Adolescents' social competence is higher after a competence training than before.

This would be a right-tailed hypothesis.

Here, the social competence of the adolescents is measured before and after a social competence training and the two means are compared.
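Sketched in Python with SciPy's `ttest_rel`, using invented before/after scores:

```python
from scipy import stats

# Invented social-competence scores for 8 adolescents
before = [12, 15, 11, 14, 13, 16, 12, 15]
after = [15, 17, 14, 16, 15, 18, 13, 17]

# Paired, right-tailed test: are the AFTER scores significantly higher?
t_stat, p_value = stats.ttest_rel(after, before, alternative="greater")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The pairing matters: each "after" score is compared with the "before" score of the same person, so the test effectively works on the individual differences.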

**Case 2: Measurements on related individuals**

The hypothesis is:

Twins differ in their extraversion. (non-directional hypothesis)

For example, the sample could consist of 30 pairs of twins, and extraversion is measured on each twin.

**Case 3: Multiple measurements on the same people under different conditions or treatments**

Our hypothesis is:

Life satisfaction is lower after eating cheesecake than after eating Black Forest cake. (left-tailed hypothesis)

Each person is given two cakes and their life satisfaction is measured after eating each cake.

This is not a comparison of time points, but a comparison of responses to different treatments.

Of course, strictly speaking, the cakes are not eaten at the same time, but the interesting thing is not the scores at different times, but the results or reactions after eating different things.

## Other *t*-Tests

**Good to know:**

To make things even more confusing, there are other *t*-tests that are used to test the significance of correlations, such as the Pearson correlation coefficient *r*, or the regression coefficient *b*.

However, **these do not deal with differences in means!**

Back to the 3 classical *t*-tests:

## What types of variables you need

**In all *t*-tests, the group variable (= who differs from whom?) is categorical, i.e., nominally or ordinally scaled.**

You can also use higher scaled variables, but then you have to "downgrade" them, i.e., put them into categories:

For example, income (ratio scale) could be converted to high/low.

**The group variable or time points are the IV.**

**The DV (= what is measured) is metric and normally distributed, with an unknown standard deviation or variance in the population.**

Since you are always comparing two groups, you can still use group variables with more than two characteristics, as long as you include only the two characteristics of interest in your hypothesis.

**Here's an example:**

Let's assume that you use "happiness after the vacation" as your DV and "type of vacation" as the IV.

The IV consists of the following 4 characteristics:

- Bus trip to Carinthia with the bowling club

- Cage diving with white sharks in Australia

- Laughter yoga retreat in Bali

- Sloth spotting in Costa Rica

However, if you are only interested in the difference in happiness after the vacation between the bus travelers and the laughter yogis, you only use the data from the bus travelers and the yogis for the calculation.

Your hypothesis would then be:

Bus travelers and laughter yogis differ in their happiness after the vacation. (non-directional hypothesis)

In other words, you only use these two characteristics of your four-level IV "type of vacation" for the *t*-test and ignore the data from the other vacation groups.

The same applies to multiple time points, such as morning, noon, evening and night, of which you are only interested in two:

For example, you use only the morning and evening measurements for your hypothesis and compare them.
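In practice, this subsetting is just a filter on the data before you run the test. Here's a minimal sketch with invented happiness scores:

```python
# Hypothetical long-format data: (vacation_type, happiness) pairs
data = [
    ("bus_trip", 6.1), ("cage_diving", 8.7), ("laughter_yoga", 7.9),
    ("bus_trip", 5.8), ("sloth_spotting", 8.2), ("laughter_yoga", 8.4),
    ("bus_trip", 6.5), ("cage_diving", 9.1), ("laughter_yoga", 8.0),
]

# Keep only the two levels named in the hypothesis;
# the other vacation groups are simply ignored
bus = [score for group, score in data if group == "bus_trip"]
yoga = [score for group, score in data if group == "laughter_yoga"]

# These two lists would then go into an independent samples t-test
print(bus, yoga)
```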

## The corresponding distribution

A (testing) distribution is a theoretical distribution of values into which the result of your calculations falls.

**The test distribution associated with t-tests is the t-distribution.**

As soon as you have carried out the calculations for the *t*-test, you'll end up with a fancy *t*-value (= your test statistic) that falls into the associated *t*-distribution.

And then, as always in hypothesis testing, the question is:

Does it fall in the tail of the distribution, defined by the alpha significance level?

If it does, then there is a significant difference, and if it doesn't, then there isn't ;).
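A small sketch of that decision rule in Python, using SciPy's *t*-distribution (the observed *t*-value here is made up):

```python
from scipy import stats

alpha = 0.05  # significance level
df = 49       # degrees of freedom, e.g. n = 50 in a one-sample test

# Critical value for a right-tailed test: the point that cuts off
# the upper 5% tail of the t-distribution
t_crit = stats.t.ppf(1 - alpha, df)

t_observed = 2.10  # hypothetical test statistic
print(f"critical value = {t_crit:.3f}")
print("significant" if t_observed > t_crit else "not significant")
```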

**Remember well:**

**When interpreting the results of right- or left-tailed hypotheses, it's always important to check whether the results are really in the expected direction!**

It may be that the resulting *t*-value seems to be significant because it falls in the tail of the *t*-distribution.

But if it falls in the opposite direction of your hypothesis, i.e., on the other side, you actually have no significant result at all!

## Last, but not least:

## The corresponding effect size

You probably know that when you have a significant result, you should always calculate an appropriate effect size to find out if the difference you have found actually has some practical relevance.

Unfortunately for students, there are different effect sizes for *t*-tests, with Cohen's *d* being the most common.

It can be used for all three *t*-tests.

Cohen's *d* tells you how big the difference is between the two means.
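As a sketch of the most common variant, here is Cohen's *d* with a pooled standard deviation, computed on invented scores for two independent groups:

```python
import math

def cohens_d(a, b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Invented scores for two groups
group_a = [42, 50, 38, 47, 45, 41, 49, 44]
group_b = [55, 48, 60, 52, 57, 51, 58, 54]
d = cohens_d(group_a, group_b)
print(f"d = {d:.2f}")
```

By the common rule of thumb, a |*d*| of around 0.2 is a small, 0.5 a medium, and 0.8 a large effect, so this invented difference would count as very large.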

Check out my blog post on Cohen's *d* for more details **HERE**.

Depending on the type of *t*-test, you can also calculate Hedges' *g*, Glass's delta, or the Pearson correlation coefficient *r*.

**Let's finish with a summary:**

## Summary *t*-Tests

**AND THAT'S A WRAP, YOU'VE DONE IT!**

That was your introduction to the colorful and exciting world of *t*-tests ;).

Now, have a great day, and of course:

**HAPPY LEARNING!**
