How to Report a Correlation in APA 7: r, p, Direction, and Strength
Reporting a correlation in APA 7 is mostly about not making the result do more work than it can. This guide shows you how to write Pearson correlations clearly, including r, degrees of freedom, p values, direction, strength, and the difference between “significant” and “actually meaningful.”
Correlation is one of those statistics that looks friendly until you have to write it up properly. The output gives you a number, possibly asterisks, a p value, and the quiet sense that something should now be said about “relationships between variables.” That is where many results sections begin to wobble.
The basic APA-style correlation sentence is not complicated. You usually report the direction of the relationship, the variables involved, the correlation coefficient, the degrees of freedom, and the p value. For Pearson’s correlation, that usually means reporting r(df) = value, p = value. APA’s numbers and statistics guidance also recommends exact p values where possible, using p < .001 when the value is smaller than .001.
A simple example looks like this:
“There was a positive correlation between sleep duration and exam performance, r(42) = .36, p = .018.”
That sentence does enough. It tells the reader which variables were related, the direction of the relationship, the size of the correlation, and whether the result was statistically significant. It does not announce that sleep is the secret to academic greatness, because that would be a bit much for one correlation coefficient.
What a correlation tells you
A correlation tells you about the association between two variables. It shows whether higher scores on one variable tend to go with higher or lower scores on another variable.
A positive correlation means the variables tend to move in the same direction. For example, higher sleep duration might be associated with higher wellbeing scores.
A negative correlation means the variables tend to move in opposite directions. For example, higher stress might be associated with lower sleep quality.
A correlation close to zero suggests little or no linear relationship between the variables.
The key word is “associated.” A correlation does not, by itself, show that one variable caused the other. It does not matter how tempting the story is. The correlation is not a tiny causal certificate.
The basic APA format for a correlation
For a Pearson correlation, the basic APA-style structure is:
“There was a [positive/negative] correlation between [Variable 1] and [Variable 2], r(df) = X.XX, p = .XXX.”
For example:
“There was a positive correlation between revision time and exam performance, r(48) = .41, p = .004.”
Or:
“There was a negative correlation between perceived stress and sleep quality, r(52) = -.38, p = .006.”
The r is italicised because statistical symbols are usually italicised in APA style. The value of r is written without a leading zero because APA style drops the leading zero from statistics that cannot exceed 1 in absolute value, and a correlation coefficient is bounded between -1 and 1. The same no-leading-zero rule applies to p values.
So write:
“r(48) = .41, p = .004”
Not:
“r(48) = 0.41, p = 0.004”
The second version is understandable, but it is not clean APA-style reporting. It has the same energy as wearing trainers with a suit. Technically possible. Not the intended look.
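If you assemble results strings in software rather than by hand, the no-leading-zero rule is easy to automate. Here is a minimal Python sketch (the helper name `strip_leading_zero` is my own, not a standard API):

```python
def strip_leading_zero(value, decimals=2):
    """Format a statistic that cannot exceed 1 in absolute value,
    dropping the leading zero as APA style requires (0.41 -> .41)."""
    text = f"{value:.{decimals}f}"
    # Replace only the first "0." so the sign on negative values survives.
    return text.replace("0.", ".", 1)

print(strip_leading_zero(0.41))               # .41
print(strip_leading_zero(-0.38))              # -.38
print(strip_leading_zero(0.004, decimals=3))  # .004
```

The same helper works for r and for p values, since both are bounded by 1 in absolute value.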
What are the degrees of freedom for a correlation?
For Pearson’s correlation, the degrees of freedom are N - 2, where N is the number of pairs of scores.
So, if you have 50 participants, the degrees of freedom are 48.
That is why the result would be written as:
“r(48) = .41, p = .004.”
Students often forget the degrees of freedom and write:
“r = .41, p = .004.”
That is not the end of civilisation, but it is less complete. If your course expects APA-style reporting, include the degrees of freedom.
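If you are computing the correlation yourself rather than reading it from software output, the pieces of the APA sentence fall out directly. A minimal pure-Python sketch (the data and the helper name `pearson_r` are hypothetical, and no statistics library is assumed):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

sleep = [6.0, 7.5, 5.0, 8.0, 6.5, 7.0]  # hypothetical raw scores
exam = [55, 68, 48, 75, 60, 66]

r = pearson_r(sleep, exam)
df = len(sleep) - 2  # N - 2 for a Pearson correlation
print(f"r({df}) = {r:.2f}".replace("0.", ".", 1))
```

With six participants, df = 4, which is exactly what goes inside the brackets.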
How to report a positive correlation
A positive correlation means higher values on one variable are associated with higher values on the other variable.
For example:
“There was a significant positive correlation between self-esteem and life satisfaction, r(64) = .46, p < .001.”
You can also make the direction clearer in words:
“Higher self-esteem was associated with higher life satisfaction, r(64) = .46, p < .001.”
That second version is often better because it tells the reader what the correlation actually means. “Positive correlation” is accurate, but “higher self-esteem was associated with higher life satisfaction” is more readable.
Do not write:
“Self-esteem increased life satisfaction.”
That is causal language. Unless your design supports causation, avoid it. Correlation can show association. It cannot single-handedly prove that one variable marched into the study and caused the other.
How to report a negative correlation
A negative correlation means higher values on one variable are associated with lower values on the other variable.
For example:
“There was a significant negative correlation between stress and sleep quality, r(58) = -.42, p = .001.”
Or, more clearly:
“Higher stress was associated with lower sleep quality, r(58) = -.42, p = .001.”
Notice the minus sign before the correlation coefficient. That sign tells you the direction of the relationship. A negative correlation is not a “bad” correlation. It just means the variables move in opposite directions. Statistics, regrettably, has enough problems without giving minus signs a moral personality.
How to report a non-significant correlation
A non-significant correlation should still be reported clearly. It is not a failed sentence. It is just a result that did not provide enough evidence of a statistically significant association.
For example:
“The correlation between social media use and exam performance was not statistically significant, r(46) = -.12, p = .421.”
Or:
“There was no significant correlation between caffeine intake and reaction time, r(38) = .09, p = .584.”
Be careful with wording. “No significant correlation” is safer than “no relationship.” A non-significant result does not prove that no relationship exists; it means the analysis did not find sufficient evidence of one at the chosen threshold. This is especially important if the sample was small, the measure was noisy, or the study had all the statistical power of a damp tealight.
Should you say weak, moderate, or strong?
You can describe the strength of a correlation, but do it carefully. A common rough convention, following Cohen’s benchmarks, is to treat correlations around .10 as weak, around .30 as moderate, and around .50 as strong. These are guidelines, not sacred laws carved into the side of a statistics building.
For example:
“There was a moderate positive correlation between revision time and exam performance, r(48) = .41, p = .004.”
Or:
“There was a weak negative correlation between social media use and exam performance, although this was not statistically significant, r(46) = -.12, p = .421.”
The problem is that “weak,” “moderate,” and “strong” depend on context. In some areas of psychology, a correlation of .20 may be useful. In others, it may be less impressive. Do not use strength labels as decorative adjectives. Use them only when they help the reader.
Is r an effect size?
Yes, the correlation coefficient r is itself an effect size. It tells you the strength and direction of the linear relationship between two variables.
This is one reason correlation reporting can feel simpler than some other tests. You do not usually need to add a separate Cohen’s d or eta squared. The correlation coefficient is already doing the effect-size work.
That does not mean r tells you everything. A correlation can be statistically significant but small. It can also look moderate in a small sample and still fail to reach significance. The p value and the correlation coefficient answer different questions, so you need both if you want the reader to understand the result properly.
How to report p values for correlations
The same APA-style p value rules apply here as elsewhere. Report exact p values where possible, such as p = .018 or p = .247. If the value is smaller than .001, report it as p < .001 rather than p = .000. APA’s statistics guidance uses this exact pattern for reporting p values.
So:
“r(42) = .36, p = .018”
And:
“r(64) = .53, p < .001”
Not:
“r(64) = .53, p = .000”
Software output may display .000 because it has rounded a very small value. It does not mean the probability is literally zero. Do not copy that into your results section unless you want the formatting equivalent of leaving the price sticker on a gift.
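Those p-value rules fit in a few lines of code. A hedged Python sketch (the helper name `format_p` is mine):

```python
def format_p(p, decimals=3):
    """APA-style p value: exact where possible, 'p < .001' below .001,
    and no leading zero (p = .018, not p = 0.018)."""
    if p < 0.001:
        return "p < .001"
    return "p = " + f"{p:.{decimals}f}".lstrip("0")

print(format_p(0.018))      # p = .018
print(format_p(0.0000004))  # p < .001  (never p = .000)
print(format_p(0.421))      # p = .421
```

The early return is what protects you from ever writing p = .000: anything that would round to zero is reported as p < .001 instead.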
Pearson, Spearman, or Kendall?
Most student examples use Pearson’s correlation, but it is not the only correlation coefficient. Pearson’s r is used for linear relationships between continuous variables when the assumptions are appropriate. Spearman’s rho is often used for ranked, ordinal, or non-normally distributed data. Kendall’s tau is another rank-based measure.
If you used Pearson’s correlation, report r.
If you used Spearman’s correlation, report Spearman’s rho, often written as rs (with a subscript s) or ρ, depending on your course guidance and software output.
For example:
“There was a significant positive Spearman correlation between ranking on confidence and ranking on performance, ρ = .32, p = .014.”
Your module may have a preferred notation. Use it. The important thing is not to call every correlation “Pearson’s r” if you did not actually run Pearson’s r. That is not a style issue; that is a small statistical identity crisis.
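If you need Spearman’s rho and your software did not label it clearly, it can be computed from ranks. A minimal sketch using the classic rank-difference formula, which assumes no tied ranks (ties require the general formula or a correction; the helper names are mine):

```python
def ranks(values):
    """Rank scores from 1 (smallest) upward; assumes no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        result[idx] = rank
    return result

def spearman_rho(x, y):
    """Spearman's rho via 1 - 6 * sum(d^2) / (n(n^2 - 1)); no ties assumed."""
    n = len(x)
    d_sq = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

print(spearman_rho([1, 2, 3, 4, 5], [1, 3, 2, 5, 4]))  # 0.8
```

This also makes the distinction concrete: Spearman asks whether the rankings agree, not whether the raw scores fall on a straight line.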
Do you need to report N?
Sometimes, yes. If the degrees of freedom are reported, the sample size can usually be inferred for a Pearson correlation because df = N - 2. However, it can still be useful to report N in the text or in a table, especially if there were missing data or different correlations used different sample sizes.
For example:
“A Pearson correlation showed a significant positive association between sleep duration and exam performance, r(42) = .36, p = .018.”
Here, the degrees of freedom imply that N = 44.
If missing data makes the sample less obvious, you might write:
“Among participants with complete data (N = 44), sleep duration was positively correlated with exam performance, r(42) = .36, p = .018.”
That is more transparent, and transparency is one of the few academic virtues that does not require a committee.
Correlation does not mean causation
You knew this section was coming. It always does. Like a fire drill, but for methods.
A correlation can show that two variables are related. It does not show that one variable caused the other. There could be a reverse causal direction, a third variable, measurement issues, sampling problems, or sheer statistical mischief.
For example, suppose you find that social media use is negatively correlated with wellbeing. You should not write:
“Social media use reduced wellbeing.”
A safer version is:
“Higher social media use was associated with lower wellbeing.”
That sentence stays within what the analysis can support. It is less dramatic, yes, but also less wrong. A fair trade, in academic writing.
Common mistakes when reporting correlations
One common mistake is reporting only the p value:
“The correlation was significant, p = .018.”
This is too thin. The reader needs the correlation coefficient too:
“There was a positive correlation between sleep duration and exam performance, r(42) = .36, p = .018.”
Another mistake is forgetting the direction:
“There was a correlation between stress and sleep quality, r(58) = -.42, p = .001.”
This is not terrible, but it is less helpful than:
“Higher stress was associated with lower sleep quality, r(58) = -.42, p = .001.”
Students also sometimes report the sign but then describe the result incorrectly:
“Higher stress was associated with higher sleep quality, r(58) = -.42, p = .001.”
No. The minus sign says the relationship is negative. Higher stress goes with lower sleep quality in that example. The sentence and the statistic need to be on speaking terms.
Another common mistake is using causal language:
“Stress caused lower sleep quality, r(58) = -.42, p = .001.”
A correlation alone does not support that. Use “was associated with” instead.
Copy-ready templates
For a significant positive correlation:
“There was a significant positive correlation between [Variable 1] and [Variable 2], r(df) = .XX, p = .XXX.”
A more readable version:
“Higher [Variable 1] was associated with higher [Variable 2], r(df) = .XX, p = .XXX.”
For a significant negative correlation:
“There was a significant negative correlation between [Variable 1] and [Variable 2], r(df) = -.XX, p = .XXX.”
A more readable version:
“Higher [Variable 1] was associated with lower [Variable 2], r(df) = -.XX, p = .XXX.”
For a non-significant correlation:
“The correlation between [Variable 1] and [Variable 2] was not statistically significant, r(df) = .XX, p = .XXX.”
For a very small p value:
“There was a significant positive correlation between [Variable 1] and [Variable 2], r(df) = .XX, p < .001.”
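If you generate many results sentences, the templates above can be wrapped into a single helper. A sketch (the function name and the .05 significance threshold are my assumptions, and italics for r and p still need to be applied in your document):

```python
def report_correlation(var1, var2, r, df, p, alpha=0.05):
    """Fill an APA-style correlation template from the raw numbers."""
    direction = "positive" if r > 0 else "negative"
    r_text = f"{r:.2f}".replace("0.", ".", 1)  # no leading zero, keep the sign
    p_text = "p < .001" if p < 0.001 else "p = " + f"{p:.3f}".lstrip("0")
    if p < alpha:
        return (f"There was a significant {direction} correlation between "
                f"{var1} and {var2}, r({df}) = {r_text}, {p_text}.")
    return (f"The correlation between {var1} and {var2} was not "
            f"statistically significant, r({df}) = {r_text}, {p_text}.")

print(report_correlation("stress", "sleep quality", -0.42, 58, 0.001))
```

A helper like this keeps the sign, the leading-zero rule, and the p < .001 convention consistent across every sentence, which is exactly where hand-typed results sections tend to drift.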
Quick APA correlation checklist
Before you submit your results section, check that you have included the variables, the direction of the relationship, the value of r, the degrees of freedom, and the p value.
Make sure r and p are italicised. Remove the leading zero from the correlation coefficient and the p value, so you write r = .36 and p = .018 rather than r = 0.36 and p = 0.018. Report p < .001 instead of p = .000. Use “associated with” rather than causal language unless your design genuinely supports causation.
Also check that your words match your numbers. If the correlation is negative, the sentence should not describe a positive relationship. It sounds obvious until the third hour of editing, when every decimal point begins to look faintly accusatory.
Final thought
A good correlation write-up should be clear enough that the reader knows what was related, which direction the relationship went, how strong it was, and whether it was statistically significant. That is all it needs to do.
Do not make the correlation prove causation. Do not hide the r value behind the p value. Do not copy p = .000 from software output like it is a trusted friend. Report the result cleanly, say what it means, and then let the Discussion section do the interpretive work later.
Got the output but not the wording?
The Original Matter Formatting Pack includes the full Results Reporter, built to help turn common psychology statistics into cleaner APA-style results sentences. It covers the usual suspects: t-tests, correlations, chi-square, ANOVA, regression, and the reporting details that somehow become a problem at the exact moment you wanted to be finished.