p < .001 or p = .000? How to Report p Values Properly in APA 7
If your statistics output says p = .000, do not write p = .000 in your results section. This guide explains how to report p values properly in APA 7, when to use p < .001, how many decimal places to include, and how to avoid making statistical significance sound more dramatic than it is.
Statistics software has many talents. Explaining itself politely is not always one of them.
At some point, you will run a test in SPSS, JASP, jamovi, R, Excel, or whatever statistical contraption your module has chosen for you, and the output will give you a p value. Sometimes it will look beautifully ordinary, like p = .034. Sometimes it will look suspiciously dramatic, like p = .000. And then you are left wondering whether to copy it exactly, round it, translate it, or quietly close the laptop and pretend qualitative research was your plan all along.
The basic rule is simple: report exact p values when you can, but if the value is smaller than .001, report it as p < .001. APA’s own numbers and statistics guidance gives this exact pattern: use exact values such as p = .015, except when p is below .001, where the conventional report is p < .001.
That means p = .000 is not your final answer. It is output shorthand. Useful to the software, less useful to your results section.
Why p = .000 is wrong
A p value is a probability. More precisely, it tells you how compatible your observed result is with the null hypothesis under the statistical model being used. It is not the probability that your hypothesis is true, and it is not the probability that your result happened “by chance” in the vague pub-conversation sense. The American Statistical Association has warned against those common misreadings, because p values are useful but very easy to overinflate into something they are not.
So why does your output sometimes say .000?
Because the software is rounding. It is not telling you the probability is literally zero. It is telling you the value is very small and has been rounded to three decimal places. A result might be .0004, .00003, or some other tiny value. Rounded to three decimal places, it may display as .000, but that does not mean the probability is actually nothing.
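You can see the rounding behaviour for yourself. A minimal Python illustration of the display effect (this is just three-decimal formatting, the same thing most statistics packages do on screen):

```python
# A genuinely tiny, non-zero p value.
p = 0.0004

# Rounded to three decimal places for display, as most software does:
print(f"{p:.3f}")  # prints 0.000, even though p is not zero
```

The probability has not vanished; the display has simply run out of decimal places.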
In your APA-style write-up, you should convert that to:
p < .001
Not:
p = .000
The first version is the clean one. The second version makes it look as though probability has packed up and left the building.
The basic APA rule for p values
For most student results sections, use this rule:
Report exact p values to three decimal places when p is .001 or larger.
Report p < .001 when the value is smaller than .001.
So:
p = .047 is fine.
p = .003 is fine.
p < .001 is fine.
p = .000 is not fine.
APA guidance also uses the no-leading-zero convention for values that cannot be greater than 1, such as p values. So you normally write p = .047, not p = 0.047. Purdue OWL gives the same general rule: use a leading zero when a statistic can be greater than 1, but omit it for values such as p and r that cannot exceed 1.
This is one of those tiny formatting details that makes a results section look more controlled. Not exciting, obviously. Very little about decimal formatting is exciting unless something else has gone badly wrong.
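If you ever script your results output, the whole rule fits in a few lines. A sketch in Python, assuming the conventions above (format_p is a made-up helper name, not part of any APA or statistics-package tooling):

```python
def format_p(p: float) -> str:
    """Format a p value in APA style: three decimals, no leading zero,
    and 'p < .001' for very small values. Hypothetical helper."""
    if p < 0.001:
        return "p < .001"
    # Three decimal places, then drop the leading zero, since p cannot exceed 1.
    return f"p = {p:.3f}".replace("0.", ".", 1)

print(format_p(0.047))   # p = .047
print(format_p(0.0004))  # p < .001
```

The same logic applies whether you type the values by hand or generate them: three decimals, no leading zero, and never p = .000.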
Should p be italicised?
Yes. In APA style, statistical symbols are usually italicised. That includes p, along with symbols such as t, F, r, M, SD, and d.
So write:
Correct: p = .032 (with the p italicised)
Not:
Incorrect: p = .032 (with the p in regular type)
The number itself is not italicised. The symbol is.
If you are writing in a normal web editor or Word document, this simply means italicising the letter p, not the entire statistic.
Examples of correctly reported p values
Here are some common outputs and how they should appear in your results section.
If your output says:
p = .047
Write:
p = .047
If your output says:
p = .003
Write:
p = .003
If your output says:
p = .000
Write:
p < .001
If your output says:
Sig. = .018
Write:
p = .018
If your output says:
Sig. = .000
Write:
p < .001
SPSS often labels p values as “Sig.”, which is a tiny act of interface cruelty. In your write-up, translate that into p. Do not write “Sig. = .018” in an APA results sentence unless you enjoy making your marker wonder what else has happened here.
Do you write p < .05?
Sometimes, but exact p values are usually better.
Older reporting habits often use threshold statements such as p < .05 or p < .01. You will still see this, especially in older papers, tables, or significance-star systems. But for a standard APA-style student results section, exact p values are usually preferred when available, unless the value is smaller than .001. APA’s numbers and statistics guidance gives exact reporting as the standard pattern, with p < .001 used for very small values.
So if your output says .032, write:
Better: p = .032
Rather than:
Less precise: p < .05
The second version is not always forbidden, but it gives away information for no very good reason. If you have the exact value, use it.
How many decimal places should p values have?
For APA-style student writing, three decimal places is the usual approach for exact p values.
So:
p = .041
p = .208
p = .006
Avoid writing too many decimal places:
p = .041382
That is not more impressive. It is just untidy. Your reader does not need the full numerical ancestry of the p value.
Also avoid rounding a value of .0006 to p = .001 if your software gives you enough precision to know it is below .001. In that case, write:
p < .001
The point is not to make the result look smaller. The point is to report it cleanly.
Is p = .050 significant?
This is where life becomes slightly irritating.
In many psychology courses, the conventional alpha level is .05. If p is less than .05, the result is usually treated as statistically significant. If p is greater than .05, it is not statistically significant. The ASA statement is clear that scientific conclusions should not rest only on whether a p value passes a threshold, but the .05 convention is still widely used in student reporting and many research contexts.
The awkward bit is exactly .050.
Strictly, if your alpha level is .05, then p = .050 is equal to .05, not less than .05. Some courses and supervisors treat this as significant if the rule is “less than or equal to .05”; others prefer the stricter “less than .05” wording. Your best move is to follow your module guidance. If there is no guidance, avoid melodrama and write the result plainly:
“The result was marginal and should be interpreted cautiously, p = .050.”
Or, more conservatively:
“The result did not reach the conventional .05 threshold, p = .050.”
Do not write:
“The result was very significant.”
That phrase should be placed carefully in a sealed container and removed from the building.
Can a result be “highly significant”?
You may see phrases like “highly significant” in older writing, and some lecturers may still use them casually. For most student APA-style results sections, it is better to avoid that language.
A very small p value does suggest the observed result would be unlikely under the null model, but it does not automatically mean the effect is large, important, meaningful, or interesting. The ASA statement specifically warns against treating p values as measures of effect size or practical importance.
So instead of writing:
“The result was highly significant, p < .001.”
Write:
“The result was statistically significant, p < .001.”
Then report the effect size where appropriate. If the effect is large, show that with the effect size. Do not make the p value do every job in the room.
What if p is not significant?
Report it calmly.
A non-significant p value is not a failed result. It means the test did not provide sufficient evidence to reject the null hypothesis at the chosen threshold. It does not prove there is no effect, no difference, no relationship, or no reason to continue living as a researcher.
For example:
“There was no significant difference in anxiety scores between the caffeine group and the control group, t(58) = 1.42, p = .161.”
Or:
“The correlation between sleep duration and exam performance was not statistically significant, r(42) = .18, p = .247.”
Notice the wording: “not statistically significant.” That is safer than saying “there was no relationship” or “there was no difference,” unless your design, sample size, confidence intervals, and broader evidence genuinely support that stronger claim. Usually, they do not. They are just standing there holding one p value and hoping nobody asks anything too searching.
Where does the p value go in a results sentence?
The p value usually appears after the test statistic.
For example:
“Participants in the caffeine condition reported higher anxiety scores than participants in the control condition, t(58) = 3.42, p = .001, d = 0.89.”
For a correlation:
“There was a positive correlation between sleep duration and exam performance, r(42) = .36, p = .018.”
For an ANOVA:
“There was a significant effect of condition on recall scores, F(2, 87) = 5.64, p = .005, η² = .12.”
The usual order is test statistic, degrees of freedom where relevant, p value, then effect size if included. Your course may have its own preferred order, because academia enjoys taking a simple convention and giving it local weather.
What about p values in tables?
The same general rules apply: use p values consistently, avoid p = .000, and define significance markers if you use them.
In a table, you might use exact values in a column:
p = .032
p = .208
p < .001
Or you might use significance stars, such as:
* p < .05
** p < .01
*** p < .001
If you use stars, explain them in the table note. Do not assume the reader shares your private star mythology. APA’s 2026 blog guidance on tables and figures also stresses consistency when using exact p values, threshold notation, or other significance indicators in visual displays.
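If you generate table stars programmatically, the usual thresholds can be encoded in a few lines. A Python sketch, assuming the conventional .05 / .01 / .001 cut-offs shown above (star_label is a hypothetical name, and your course may define the thresholds differently):

```python
def star_label(p: float) -> str:
    """Map a p value to conventional significance stars.
    Assumed thresholds: *** < .001, ** < .01, * < .05."""
    if p < 0.001:
        return "***"
    if p < 0.01:
        return "**"
    if p < 0.05:
        return "*"
    return ""  # not significant at the .05 level

print(star_label(0.0004))  # ***
print(star_label(0.032))   # *
print(star_label(0.208))   # (empty string)
```

However you produce the stars, the rule stands: define them in the table note so the reader is not left decoding your notation.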
For article pages on Original Matter, we can leave table formatting alone for now. The principle is enough: be consistent, define your notation, and never let p = .000 sneak into print pretending to be respectable.
Common p value mistakes
The most common mistake is copying software output directly:
“p = .000”
Change it to:
“p < .001”
Another mistake is using a leading zero:
“p = 0.034”
APA-style writing normally removes the leading zero:
“p = .034”
Students also sometimes write:
“p > .05”
This is not always wrong, but it is usually less useful than the exact value. If your output gives you p = .287, report p = .287. Your reader gets more information, and everyone avoids the vague fog of “greater than .05.”
Another mistake is treating p as the whole result:
“The result was significant, p = .021.”
That is too thin. Report the test statistic too:
“The result was significant, t(38) = 2.41, p = .021.”
Better still, add the direction and descriptive statistics when appropriate:
“Participants in the intervention group reported lower anxiety scores than participants in the control group, t(38) = -2.41, p = .021, d = 0.76.”
That sentence is doing actual work. The lonely p value was just loitering.
Quick APA p value checklist
Before you submit your results section, check these points:
Your p is italicised.
You have used three decimal places for exact p values.
You have written p < .001 instead of p = .000.
You have removed the leading zero, so it reads p = .043 rather than p = 0.043.
You have reported the test statistic as well as the p value.
You have avoided “very significant” and “highly significant.”
You have not claimed that a non-significant result proves there is no effect.
You have included an effect size if your assignment expects one.
That is most of the battle. A boring battle, yes, but still one worth winning.
Copy-ready examples
For a significant t-test:
“Participants in the caffeine condition reported higher anxiety scores than participants in the control condition, t(58) = 3.42, p = .001, d = 0.89.”
For a very small p value:
“Participants in the caffeine condition reported higher anxiety scores than participants in the control condition, t(58) = 4.86, p < .001, d = 1.25.”
For a non-significant t-test:
“There was no significant difference in anxiety scores between the caffeine condition and the control condition, t(58) = 1.12, p = .267, d = 0.29.”
For a significant correlation:
“There was a positive correlation between sleep duration and exam performance, r(42) = .36, p = .018.”
For a non-significant correlation:
“The correlation between sleep duration and exam performance was not statistically significant, r(42) = .18, p = .247.”
For an ANOVA:
“There was a significant effect of condition on recall scores, F(2, 87) = 5.64, p = .005, η² = .12.”
Final thought
The p value is not the whole result. It is one part of the result. It tells the reader something about statistical evidence under a model, but it does not tell them whether the effect is large, important, useful, or worth building a personality around.
For APA-style reporting, keep it clean. Use exact p values when possible. Write p < .001 when the value is very small. Do not write p = .000. Italicise the p. Remove the leading zero. Then move on with your life, or at least to the next slightly annoying formatting decision.
Got the output but not the wording?
The Original Matter Formatting Pack includes the full Results Reporter, built to help turn common psychology statistics into cleaner APA-style results sentences. It is there for the usual suspects: t-tests, correlations, chi-square, ANOVA, regression, and the little reporting details that somehow become everyone’s problem at 11:48 p.m.