How Big Should My Sample Be? A Practical Guide for Psychology Students
Most psychology students can tell you roughly what Bandura did to a Bobo doll before they can tell you how many participants their own study needs. This is not really their fault. Courses are very good at giving people famous names, dramatic findings, and a vague sense that statistics matter. They are often less good at explaining one of the most practical design questions in research: how big should your sample actually be?
The irritating answer is that there is no single magic number, which is exactly why students so often end up guessing when they should be calculating. There is no sacred psychology number hidden in the walls of academia. Your ideal sample size depends on what kind of study you are running, how big an effect you expect, how much uncertainty you are willing to tolerate, and whether your project is a dissertation, a lab experiment, or an underfunded act of stubbornness.
TL;DR
A decent sample size is not “whatever sounds respectable.” It is the number that gives your study a fair chance of detecting an effect worth caring about. In practice, that usually means thinking about power, effect size, study design, and real-world limits rather than copying a number from someone else’s methods section and hoping nobody notices.
The bad question, and the better one
Students often ask, “How many participants do I need?” It sounds sensible, but it is slightly too vague to be useful. Need for what? A tiny correlation? A large group difference? A messy interaction? A dissertation marker who enjoys cruelty?
The better question is this: how many participants do I need to have a reasonable chance of detecting the effect I care about, given the kind of design I am using?
That “reasonable chance” is what statistical power is about. Power is the probability that your test will detect an effect of the size you care about, assuming that effect really exists. In plain English, it is your study’s chance of not missing something real. A common target is 80% power, which means accepting a 20% chance of missing a genuine effect of the size you planned for. It is not a law of nature. It is just a widely used convention.
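To make that concrete, here is a minimal sketch using Python’s statsmodels library (G*Power would give you the same answer through a friendlier interface). The effect size of d = 0.5 and the 20 participants per group are illustrative assumptions, not recommendations.

```python
# A minimal power check with statsmodels.
# Assumed scenario: independent-samples t-test, true effect d = 0.5,
# 20 participants per group, alpha = .05 (all illustrative values).
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Power with 20 per group: {power:.2f}")  # roughly 0.34
```

In other words, a twenty-per-group study chasing a medium-ish effect would miss it roughly two times in three, even when the effect is genuinely there.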
The four things that quietly control your sample size
1. Effect size
Smaller effects need bigger samples. This is where many student projects get into trouble. If the effect you care about is subtle, which plenty of psychological effects are, a small study may simply not have the muscle to detect it. You can run the study perfectly, analyse it correctly, and still end up with a non-significant result because your sample was too small to give the effect much of a chance.
This is also why blindly using Cohen’s “small,” “medium,” and “large” labels can get people into a mess. Those benchmarks are useful as rough guides, but they are not fixed truths handed down from the statistical heavens. They are starting points, not substitutes for thinking.
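If you want to see how sharply this bites, here is a rough sketch, again using statsmodels and assuming an independent-samples t-test at the conventional alpha of .05 and 80% power. The d values are Cohen’s rough benchmarks, used here purely for illustration.

```python
# How required sample size per group balloons as the expected effect shrinks.
# Assumes an independent-samples t-test, alpha = .05, 80% power.
from math import ceil
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.8, 0.5, 0.2):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: about {ceil(n)} participants per group")
# Roughly 26 per group for d = 0.8, 64 for d = 0.5, and 394 for d = 0.2.
```

Halving the effect size does not double the sample. It far more than doubles it, which is exactly why “subtle but interesting” effects eat student projects alive.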
2. Desired power
If you want more power, you usually need more participants. This is the statistical equivalent of refusing to play darts in the dark. A study with 50% power is basically admitting that you are comfortable missing real effects half the time. That is not a design strategy. That is just gambling with extra steps.
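To put numbers on that, here is a short sketch of how the required sample shifts with the power you demand, assuming the same illustrative two-group design with d = 0.5 and alpha = .05.

```python
# More demanded power, more participants, everything else held fixed.
# Assumes an independent-samples t-test, d = 0.5 (illustrative), alpha = .05.
from math import ceil
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for target in (0.50, 0.80, 0.95):
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=target)
    print(f"{target:.0%} power: about {ceil(n)} per group")
# Roughly 32 per group at 50% power, 64 at 80%, and 105 at 95%.
```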
3. Alpha level
Most psychology studies use an alpha of .05, which is the conventional threshold for statistical significance. If you make that threshold stricter, your study becomes harder to “pass,” which means you often need a larger sample to compensate.
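A quick sketch of that trade-off, assuming the same illustrative two-group design with d = 0.5 and 80% power:

```python
# A stricter alpha raises the bar, so the same effect needs more people.
# Assumes an independent-samples t-test, d = 0.5 (illustrative), 80% power.
from math import ceil
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for a in (0.05, 0.01):
    n = analysis.solve_power(effect_size=0.5, alpha=a, power=0.80)
    print(f"alpha = {a}: about {ceil(n)} per group")
# Roughly 64 per group at alpha = .05 versus 96 at alpha = .01.
```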
4. Study design
This part matters more than students often realise. A repeated-measures design, where the same people are tested more than once, is usually more efficient than a between-groups design, where you compare separate groups of people, because each participant acts as their own control and a large chunk of between-person noise drops out of the comparison. Once you start adding more groups, more variables, or especially interactions, sample size requirements can climb quickly.
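Here is a sketch of that efficiency gap, assuming the same illustrative effect of d = 0.5 at alpha = .05 and 80% power. In the repeated-measures case, d is assumed to describe the difference scores.

```python
# Same expected effect, different designs: within-subjects is far cheaper.
# Assumes d = 0.5 (illustrative), alpha = .05, 80% power.
from math import ceil
from statsmodels.stats.power import TTestIndPower, TTestPower

n_between = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
n_within = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Between groups: about {2 * ceil(n_between)} participants in total")  # ~128
print(f"Repeated measures: about {ceil(n_within)} participants in total")    # ~34
```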
Why “30 participants” is not a rule
At some point, someone probably told you that 30 participants is enough. Perhaps 30 in total. Perhaps 30 per group. These numbers float around psychology departments like slightly haunted folklore.
The trouble is that “30” is a heuristic, not a universal solution. A sample size can be justified by power analysis, previous research, practical constraints, or the realities of the population you can actually access. What matters is whether the number makes sense for the study you are doing. “I have seen 30 before” is not much of a justification.
This is the awkward truth a lot of first-year methods teaching dances around. A sample can be big enough to look busy and still too small to do the job properly.
A few concrete examples
To make this less abstract, here are some illustrative sample sizes using a conventional two-tailed alpha of .05 and 80% power.
If you expect a correlation of about r = .30, you need roughly 85 participants.
If you are comparing two independent groups and expecting an effect of about d = .50, you need roughly 64 participants per group, so about 128 in total.
If you are using a paired-samples design and expecting d = .50, you need roughly 34 participants.
If you are running a one-way ANOVA with three groups and expecting an effect around f = .25, you need roughly 158 participants in total.
These are not universal recommendations. They are examples. Their real value is that they show how quickly sample size demands can grow once you stop treating “I’ve got twenty people” as a serious research plan.
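If you want to check those numbers yourself, here is a rough sketch that reproduces them with statsmodels, plus the standard Fisher z approximation for the correlation case. Treat it as a sanity check, not a replacement for thinking about your own design.

```python
# Reproducing the examples above: alpha = .05 (two-tailed), 80% power.
# The correlation case uses the standard Fisher z approximation.
from math import ceil, atanh
from scipy.stats import norm
from statsmodels.stats.power import TTestIndPower, TTestPower, FTestAnovaPower

alpha, power = 0.05, 0.80
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

# Correlation of r = .30
n_corr = ((z_a + z_b) / atanh(0.30)) ** 2 + 3
print(f"r = .30: about {ceil(n_corr)} participants")             # ~85

# Two independent groups, d = .50
n_group = TTestIndPower().solve_power(effect_size=0.5, alpha=alpha, power=power)
print(f"d = .50, two groups: about {ceil(n_group)} per group")   # ~64 per group

# Paired samples, d = .50 (d on the difference scores)
n_paired = TTestPower().solve_power(effect_size=0.5, alpha=alpha, power=power)
print(f"d = .50, paired: about {ceil(n_paired)} participants")   # ~34

# One-way ANOVA, three groups, f = .25 (solve_power returns the total n)
n_anova = FTestAnovaPower().solve_power(effect_size=0.25, alpha=alpha,
                                        power=power, k_groups=3)
print(f"f = .25, three groups: about {ceil(n_anova)} in total")  # ~158
```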
So how should a first-year student actually justify sample size?
For most undergraduate work, especially early on, you do not need to perform methodological theatre. You need to show that you understand the logic.
A sensible justification usually has five parts.
First, say what design you are using. Are you comparing two groups, testing the same participants twice, or looking for a correlation?
Second, state your significance threshold and desired power. In many student projects, that will be alpha = .05 and power = .80.
Third, explain the expected effect size. Ideally this comes from previous research, a meta-analysis, or at least a reasoned judgment about the smallest effect you would care about detecting. That is much better than plucking “medium effect” from the air because it sounds respectable.
Fourth, run an a priori power analysis using software such as G*Power, which was designed for the kinds of tests commonly used in social and behavioural research. If you want a more straightforward route, our Sample Size Calculator inside the Original Matter Stats Pack is built to help students think through the same problem without getting lost in statistical ritual. If you would rather see the logic spelled out in code, a short sketch follows this list.
Fifth, be honest about practical limits. If your dissertation is recruiting from one class, one clinic, or one local population, say so. Resource constraints are not automatically bad science. Pretending they do not exist is worse. A modest study can still be defended if the limits are clear and the claims are kept proportional.
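Putting the five parts together, here is the kind of a priori calculation the justification rests on. It is a sketch, not a template to copy blindly: the design, alpha, power, and expected effect size below are all illustrative assumptions you would replace with your own.

```python
# A sketch of an a priori sample size justification: state the design,
# alpha, power, and expected effect, then report the implied sample size.
from math import ceil
from statsmodels.stats.power import TTestIndPower

design = "two independent groups"
alpha, power, expected_d = 0.05, 0.80, 0.50  # ideally, d comes from prior studies

n_per_group = ceil(TTestIndPower().solve_power(effect_size=expected_d,
                                               alpha=alpha, power=power))
print(f"Design: {design}")
print(f"With alpha = {alpha}, power = {power:.0%}, and an expected d = {expected_d},")
print(f"we need about {n_per_group} per group ({2 * n_per_group} in total).")
```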
What markers are usually looking for
They are not usually expecting first-year students to behave like grant-funded research labs. They are looking for signs that you understand the design logic. If you can explain that sample size depends on power, effect size, and study structure, and if you can justify your number with something better than folklore, you are already ahead of a surprising number of people.
In other words, they do not need you to be omniscient. They need you to stop guessing.
The real point
Sample size is one of those topics that gets treated as a technical side issue, when it is actually part of the integrity of the whole study. Too few participants and your study becomes fragile, underpowered, and oddly vulnerable to wishful thinking. Far too many and you can end up wasting time, effort, and participant goodwill chasing microscopic effects nobody actually cares about. Sample size is not a ceremonial number you paste into a methods section. It is part of the argument your study is making.
That is why it helps to think of sample size less as a hurdle and more as a match between the question and the tool. Good research design has always involved matching the method to the question. Sample size is part of that match. The maths just makes it feel less friendly.
Stop Guessing Your Sample Size
Working out sample size is one of those moments where psychology students realise statistics is not really about formulas alone. It is about making choices you can justify. The trouble is that even when you understand the logic of power, effect size, and study design, turning that into a confident decision can still feel like a slightly hostile puzzle.
That is exactly why we built the Original Matter Stats Pack.
It is designed for psychology and social science students who want more clarity when making statistical decisions, including:
choosing the right statistical test
making sense of descriptive statistics
checking regression assumptions
thinking more clearly about sample size and effect size
Instead of guessing, second-guessing, or treating statistics like a punishment ritual, you get practical tools that help you make better decisions before you commit to an analysis plan you barely trust.