Math · Live
Z-score calculator, with probability areas.
Compute the standard score for any data point given the population mean and standard deviation. Instantly see all four probability areas under the normal curve: left tail, right tail, between ±z, and both tails, with a live interactive bell-curve visualization and critical z-value reference table.
Inputs
Distribution & value
Z-score formula
Area to display
- Z-score: 0.7
- Percentile: 75.80th
- P(Z < z): 75.8036%
- P(Z > z): 24.1964%
Z-score (standard score)
75.8th percentile
This value is 0.70σ above the mean, still well within the normal range (±1σ covers ~68% of data).
Normal curve
Standard normal distribution — shaded area = 75.8036%
Working
How to compute the Z-score
- 1
Write the z-score formula
Z = (x − μ) / σ
- 2
Substitute the values
Z = (75 − 68) / 10
- 3
Compute the numerator
Z = 7 / 10
- 4
Divide by σ to get Z
Z = 0.7
- 5
Find the cumulative probability
P(Z < 0.7) = Φ(0.7) = 75.8036%
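The five steps above can be sketched in Python, using the example inputs shown (x = 75, μ = 68, σ = 10) and the standard library's error function:

```python
from math import erf, sqrt

def z_score(x, mu, sigma):
    """Standard score: how many σ the point x lies from the mean μ."""
    return (x - mu) / sigma

def phi(z):
    """Standard normal CDF Φ(z), computed via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

z = z_score(75, 68, 10)   # (75 − 68) / 10 = 0.7
p = phi(z)                # Φ(0.7) ≈ 0.758036
print(f"Z = {z:.2f}, P(Z < z) = {p:.4%}")
```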
Reference
Critical z-values
| z | P(Z < z) | P(Z > z) | P(−z < Z < z) | Use |
|---|---|---|---|---|
| z = 1.00 | 84.13% | 15.87% | 68.27% | ±1σ rule: 68.27% within |
| z = 1.28 | 90.00% | 10.00% | 80.00% | 80% confidence interval |
| z = 1.44 | 92.51% | 7.49% | 85.02% | Top 7.5% / bottom 7.5% |
| z = 1.645 | 95.00% | 5.00% | 90.00% | 90% confidence interval |
| z = 1.96 | 97.50% | 2.50% | 95.00% | 95% confidence interval |
| z = 2.00 | 97.72% | 2.28% | 95.45% | ±2σ rule: 95.45% within |
| z = 2.33 | 99.00% | 1.00% | 98.00% | 98% confidence interval |
| z = 2.576 | 99.50% | 0.50% | 99.00% | 99% confidence interval |
| z = 3.00 | 99.87% | 0.13% | 99.73% | ±3σ rule: 99.73% within |
| z = 3.29 | 99.95% | 0.05% | 99.90% | 99.9% confidence interval |
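The critical values in the table can be reproduced with `statistics.NormalDist` from the standard library (Python 3.8+); a minimal sketch:

```python
from statistics import NormalDist

std = NormalDist()  # standard normal: mean 0, σ 1

def critical_z(level):
    """Critical z for a central confidence level:
    the z such that P(−z < Z < z) = level, i.e. Φ(z) = (1 + level) / 2."""
    return std.inv_cdf((1.0 + level) / 2.0)

for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%} CI -> z = {critical_z(level):.3f}")
```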
Field guide
The Z-score, the normal curve, and what they tell you about any data point.
A Z-score (also called a standard score) measures how many standard deviations a particular data point lies above or below the mean of its distribution. It standardises values from any normal distribution onto a single common scale: the standard normal distribution, where the mean is 0 and the standard deviation is 1. This makes it possible to compare measurements taken in completely different units, on completely different scales.
The Z-score formula
Z = (x − μ) / σ, where:
- x: the individual data point
- μ (mu): the population mean
- σ (sigma): the population standard deviation
- Z: the resulting standard score
A positive Z means the data point is above the mean; a negative Z means it is below. Z = 0 means the data point equals the mean exactly. Z = 2.0 means the point is two standard deviations above the mean.
What a Z-score actually tells you
A raw score has limited meaning without context: a test score of 85 is excellent or poor depending on whether the mean was 70 or 92. The Z-score removes this ambiguity by expressing the score as a position relative to the distribution. A few intuitive landmarks:
| Z-score | Percentile | Interpretation |
|---|---|---|
| −3.0 | 0.13th | Extremely below average: rare (≈ 0.13% lie below) |
| −2.0 | 2.28th | Well below average: uncommon |
| −1.0 | 15.87th | Below average: lower 16% |
| 0.0 | 50.00th | Exactly at the mean |
| +1.0 | 84.13th | Above average: upper 16% |
| +2.0 | 97.72nd | Well above average: uncommon |
| +3.0 | 99.87th | Extremely above average: rare |
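The percentile column above can be reproduced directly from the standard normal CDF; for example:

```python
from statistics import NormalDist

cdf = NormalDist().cdf  # Φ(z) for the standard normal distribution
for z in (-3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0):
    print(f"Z = {z:+.1f} -> {cdf(z):.2%} percentile")
```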
The standard normal distribution
When a variable is normally distributed, its Z-scores follow the standard normal distribution: a bell curve centred at 0 with standard deviation 1. The area under this curve between any two Z-scores corresponds exactly to the probability that a randomly chosen observation falls in that range.
The CDF (cumulative distribution function) Φ(z) gives P(Z ≤ z): the probability that a standard normal variable is less than or equal to z. This calculator evaluates Φ using the error function (erf), with accuracy better than 1.5 × 10⁻⁷.
The four probability regions
For any Z-score, there are four standard probability regions used in statistics:
- Left tail P(Z < z): the percentile rank; the probability that a randomly drawn observation is less than x.
- Right tail P(Z > z): the exceedance probability; how unusual it is to see a value at least this extreme in the upper direction. Used in one-tailed hypothesis tests.
- Between ±|z| P(−|z| < Z < |z|): the central region; how much of the distribution lies within |z| standard deviations of the mean. At z = 1.96 this equals 95%, the basis of the 95% confidence interval.
- Both tails P(|Z| > |z|): the two-tailed p-value used in hypothesis testing when you're testing whether a value is extreme in either direction (not just one).
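A small helper that returns all four regions for a given Z-score (a sketch using `statistics.NormalDist`):

```python
from statistics import NormalDist

def probability_regions(z):
    """The four standard probability regions for a Z-score."""
    cdf = NormalDist().cdf
    left = cdf(z)                          # P(Z < z): percentile rank
    right = 1.0 - left                     # P(Z > z): exceedance probability
    between = cdf(abs(z)) - cdf(-abs(z))   # P(−|z| < Z < |z|): central region
    tails = 1.0 - between                  # P(|Z| > |z|): two-tailed p-value
    return left, right, between, tails

left, right, between, tails = probability_regions(1.96)
```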
The empirical rule (68-95-99.7)
For any normal distribution, the following approximate percentages of data fall within a given number of standard deviations of the mean, a fact so fundamental it is often called the empirical rule:
- ±1σ contains approximately 68.27% of the data (Z-scores between −1 and +1).
- ±2σ contains approximately 95.45% of the data (Z-scores between −2 and +2).
- ±3σ contains approximately 99.73% of the data, meaning only 0.27% of observations fall beyond ±3σ. In quality control, "six-sigma" manufacturing targets a ±6σ defect rate of 3.4 per million.
Z-scores and confidence intervals
Confidence intervals use critical z-values: the Z-scores corresponding to specific probability thresholds. The most commonly used in practice:
- z = ±1.645: 90% CI: 90% of the distribution falls between −1.645 and +1.645.
- z = ±1.960: 95% CI: 95% falls between −1.96 and +1.96. This is by far the most widely cited z-value in statistics, used by default in almost every confidence interval and margin-of-error calculation.
- z = ±2.576: 99% CI: 99% falls between −2.576 and +2.576.
A 95% confidence interval for a sample mean is constructed as x̄ ± 1.96 × (σ / √n), where the ±1.96 comes directly from the Z-score for the 97.5th percentile.
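As a sketch, constructing that interval for a hypothetical sample (n = 50, sample mean 102.0, known σ = 8.0; all values invented for illustration):

```python
from math import sqrt

# Hypothetical sample: n measurements with known population σ
n, x_bar, sigma = 50, 102.0, 8.0

margin = 1.96 * sigma / sqrt(n)   # half-width of the 95% CI
lo, hi = x_bar - margin, x_bar + margin
print(f"95% CI: [{lo:.2f}, {hi:.2f}]")
```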
Z-scores in hypothesis testing
In a one-sample Z-test, you compute the Z-score of a sample statistic under the null hypothesis, then compare it to a critical value:
- State the null hypothesis: μ = μ₀.
- Compute the test statistic: Z = (x̄ − μ₀) / (σ / √n).
- Find the p-value: P(Z > |z_obs|) for one-tailed, or P(|Z| > |z_obs|) for two-tailed.
- Reject H₀ if p-value < α (the significance level, typically 0.05).
The Z-test is valid when the population standard deviation σ is known and either the population is normal or the sample size is large enough (n ≥ 30) for the Central Limit Theorem to apply. When σ is unknown and estimated from the sample, the t-distribution should be used instead.
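The steps above can be sketched as a small function (the example inputs are hypothetical):

```python
from math import sqrt
from statistics import NormalDist

def one_sample_z_test(x_bar, mu0, sigma, n, two_tailed=True):
    """One-sample Z-test: returns (z statistic, p-value).
    Assumes the population σ is known; otherwise use a t-test."""
    z = (x_bar - mu0) / (sigma / sqrt(n))
    sf = 1.0 - NormalDist().cdf(abs(z))   # P(Z > |z_obs|)
    return z, (2.0 * sf if two_tailed else sf)

# Hypothetical example: H0: μ = 100, sample of n = 36 with mean 103, σ = 9
z, p = one_sample_z_test(103, 100, 9, 36)
print(f"z = {z:.2f}, two-tailed p = {p:.4f}")
```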
Comparing scores across different scales
One of the most practically useful applications of Z-scores is comparing measurements that use different scales. Suppose a student scored 82 on a physics test (mean 74, σ = 9) and 91 on a literature essay (mean 85, σ = 12):
- Physics Z: (82 − 74) / 9 = +0.89: top 19% of physics students
- Literature Z: (91 − 85) / 12 = +0.50: top 31% of literature students
Even though the raw literature score is higher, the physics performance is relatively stronger within its distribution. Z-scores reveal this directly.
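The comparison can be checked in a couple of lines:

```python
def z_score(x, mu, sigma):
    """Standard score: position of x relative to its own distribution."""
    return (x - mu) / sigma

physics = z_score(82, 74, 9)      # ≈ +0.89
literature = z_score(91, 85, 12)  # = +0.50
print(f"physics Z = {physics:+.2f}, literature Z = {literature:+.2f}")
```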
Limitations
Z-scores assume the underlying distribution is approximately normal. Applied to heavily skewed, bimodal, or fat-tailed distributions, they lose their precise probability interpretation. For non-normal data, robust alternatives like the modified Z-score (using median and median absolute deviation) or percentile-based methods may be more appropriate.
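For reference, a minimal sketch of the modified Z-score mentioned above (0.6745 is the standard scaling constant that makes the MAD comparable to σ for normal data):

```python
from statistics import median

def modified_z_scores(data):
    """Robust Z-scores using the median and the median absolute
    deviation (MAD) instead of the mean and standard deviation."""
    med = median(data)
    mad = median(abs(x - med) for x in data)
    return [0.6745 * (x - med) / mad for x in data]

# The outlier 100 gets an extreme score; the median itself scores 0
print(modified_z_scores([1, 2, 3, 4, 100]))
```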