
Z-score calculator, with probability areas.

Compute the standard score for any data point given the population mean and standard deviation. Instantly see all four probability areas under the normal curve: left tail, right tail, between ±z, and both tails, with a live interactive bell-curve visualization and critical z-value reference table.

How it works

Worked example: x = 75, μ = 68, σ = 10

Z = (75 − 68) / 10 = 0.7

P(Z < 0.7) = 75.8036% (the 75.80th percentile); P(Z > 0.7) = 24.1964%. This value is 0.70σ above the mean, still well within the normal range (±1σ covers ~68% of data).

[Normal curve: standard normal distribution with the area to the left of z = 0.7 shaded, equal to 75.8036%]

Working

How to compute the Z-score

  1. Write the z-score formula: Z = (x − μ) / σ
  2. Substitute the values: Z = (75 − 68) / 10
  3. Compute the numerator: Z = 7 / 10
  4. Divide by σ to get Z: Z = 0.7
  5. Find the cumulative probability: P(Z < 0.7) = Φ(0.7) = 75.8036%
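The steps above can be reproduced in a few lines of Python, using only the standard library (a minimal sketch; the values x = 75, μ = 68, σ = 10 are the worked example from this page):

```python
from statistics import NormalDist

x, mu, sigma = 75, 68, 10      # example values from the calculator above

z = (x - mu) / sigma           # steps 1-4: Z = (75 - 68) / 10
left = NormalDist().cdf(z)     # step 5: cumulative probability Phi(z)

print(z)               # 0.7
print(f"{left:.4%}")   # 75.8036%
```

`statistics.NormalDist` (Python 3.8+) provides the standard normal CDF directly, so no table lookup is needed.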

Reference

Critical z-values

z           P(Z < z)   P(Z > z)
z = 1.00    84.13%     15.87%
z = 1.28    90.00%     10.00%
z = 1.44    92.51%      7.49%
z = 1.645   95.00%      5.00%
z = 1.96    97.50%      2.50%
z = 2.00    97.72%      2.28%
z = 2.33    99.00%      1.00%
z = 2.576   99.50%      0.50%
z = 3.00    99.87%      0.13%
z = 3.29    99.95%      0.05%
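This reference table can be regenerated for any set of z-values with the standard library (a sketch using `statistics.NormalDist`):

```python
from statistics import NormalDist

std_normal = NormalDist(mu=0.0, sigma=1.0)

# Reproduce the critical z-value table: left tail and right tail for each z.
for z in (1.00, 1.28, 1.44, 1.645, 1.96, 2.00, 2.33, 2.576, 3.00, 3.29):
    left = std_normal.cdf(z)      # P(Z < z)
    right = 1.0 - left            # P(Z > z)
    print(f"z = {z:<5}  {left:7.2%}  {right:6.2%}")
```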

Field guide

The Z-score, the normal curve, and what they tell you about any data point.

A Z-score (also called a standard score) measures how many standard deviations a particular data point lies above or below the mean of its distribution. It standardises values from any normal distribution onto a single common scale: the standard normal distribution, where the mean is 0 and the standard deviation is 1. This makes it possible to compare measurements taken in completely different units, on completely different scales.

The Z-score formula

Z = (x − μ) / σ
  • x: the individual data point
  • μ (mu): the population mean
  • σ (sigma): the population standard deviation
  • Z: the resulting standard score

A positive Z means the data point is above the mean; a negative Z means it is below. Z = 0 means the data point equals the mean exactly. Z = 2.0 means the point is two standard deviations above the mean.

What a Z-score actually tells you

On its own, a raw score has limited meaning without context. A test score of 85 is great or poor depending on whether the mean was 70 or 92. The Z-score removes this ambiguity: it expresses the score as a position relative to the distribution. A few intuitive landmarks:

Z-score   Percentile   Interpretation
−3.0      0.13th       Extremely below average: rare (< 0.3% on this side)
−2.0      2.28th       Well below average: uncommon
−1.0      15.87th      Below average: lower 16%
 0.0      50.00th      Exactly at the mean
+1.0      84.13th      Above average: upper 16%
+2.0      97.72nd      Well above average: uncommon
+3.0      99.87th      Extremely above average: rare

The standard normal distribution

When a variable is normally distributed, its Z-scores follow the standard normal distribution: a bell curve centred at 0 with standard deviation 1. The area under this curve between any two Z-scores corresponds exactly to the probability that a randomly chosen observation falls in that range.

The CDF (cumulative distribution function) Φ(z) gives P(Z ≤ z): the probability that a standard normal variable is less than or equal to z. This calculator evaluates Φ using the error function (erf), with accuracy better than 1.5 × 10⁻⁷.
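The error-function formulation described above, Φ(z) = ½(1 + erf(z/√2)), is straightforward to write out (a sketch; Python's `math.erf` is itself evaluated to near machine precision):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF: P(Z <= z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(f"{phi(0.7):.4%}")   # 75.8036%
```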

The four probability regions

For any Z-score, there are four standard probability regions used in statistics:

  • Left tail P(Z < z): the percentile rank; the probability that a randomly drawn observation is less than x.
  • Right tail P(Z > z): the exceedance probability; how unusual it is to see a value at least this extreme in the upper direction. Used in one-tailed hypothesis tests.
  • Between ±|z| P(−|z| < Z < |z|): the central region; how much of the distribution is within z standard deviations of the mean. At z = 1.96 this equals 95%, the basis of the 95% confidence interval.
  • Both tails P(|Z| > |z|): the two-tailed p-value used in hypothesis testing when you're testing whether a value is extreme in either direction (not just one).
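All four regions follow from the CDF alone. A sketch (`probability_regions` is a hypothetical helper name, not part of any library):

```python
from statistics import NormalDist

def probability_regions(z: float) -> dict:
    """The four standard probability areas for a given z-score."""
    cdf = NormalDist().cdf
    a = abs(z)
    return {
        "left_tail":  cdf(z),                # P(Z < z)
        "right_tail": 1.0 - cdf(z),          # P(Z > z)
        "between":    cdf(a) - cdf(-a),      # P(-|z| < Z < |z|)
        "both_tails": 2.0 * (1.0 - cdf(a)),  # P(|Z| > |z|), two-tailed p-value
    }

for name, p in probability_regions(1.96).items():
    print(f"{name:>10}: {p:.4%}")
```

At z = 1.96 the central region comes out at ~95% and the two-tailed area at ~5%, matching the confidence-interval landmarks below.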

The empirical rule (68-95-99.7)

For any normal distribution, the following approximate percentages of data fall within a given number of standard deviations of the mean, a fact so fundamental it is often called the empirical rule:

  • ±1σ contains approximately 68.27% of the data (Z-scores between −1 and +1).
  • ±2σ contains approximately 95.45% of the data (Z-scores between −2 and +2).
  • ±3σ contains approximately 99.73% of the data, meaning only 0.27% of observations fall beyond ±3σ. In quality control, "six-sigma" manufacturing pushes this much further, targeting about 3.4 defects per million (a figure that allows for a 1.5σ drift in the process mean).
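The empirical rule is just the central area P(−k < Z < k) evaluated at k = 1, 2, 3, which is easy to verify numerically:

```python
from statistics import NormalDist

cdf = NormalDist().cdf

# Central area within ±k standard deviations of the mean.
for k in (1, 2, 3):
    central = cdf(k) - cdf(-k)
    print(f"±{k}σ contains {central:.2%}")   # 68.27%, 95.45%, 99.73%
```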

Z-scores and confidence intervals

Confidence intervals use critical z-values: the Z-scores corresponding to specific probability thresholds. The most commonly used in practice:

  • z = ±1.645: 90% CI: 90% of the distribution falls between −1.645 and +1.645.
  • z = ±1.960: 95% CI: 95% falls between −1.96 and +1.96. This is by far the most widely cited z-value in statistics, used by default in almost every confidence interval and margin-of-error calculation.
  • z = ±2.576: 99% CI: 99% falls between −2.576 and +2.576.

A 95% confidence interval for a sample mean is constructed as x̄ ± 1.96 × (σ / √n), where the ±1.96 comes directly from the Z-score for the 97.5th percentile.
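The construction above can be sketched directly; the sample values (x̄ = 68, σ = 10, n = 25) are hypothetical, chosen only to illustrate:

```python
from math import sqrt
from statistics import NormalDist

def mean_ci(xbar: float, sigma: float, n: int, level: float = 0.95):
    """CI for a sample mean with known population sigma: xbar ± z·σ/√n."""
    z = NormalDist().inv_cdf(0.5 + level / 2)   # 1.96 for a 95% CI
    margin = z * sigma / sqrt(n)
    return xbar - margin, xbar + margin

# Hypothetical sample: x̄ = 68, σ = 10, n = 25
lo, hi = mean_ci(68, 10, 25)
print(f"({lo:.2f}, {hi:.2f})")   # (64.08, 71.92)
```

Note that `inv_cdf(0.975)` recovers the 1.96 from the 97.5th percentile, exactly as the text describes.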

Z-scores in hypothesis testing

In a one-sample Z-test, you compute the Z-score of a sample statistic under the null hypothesis, then compare it to a critical value:

  1. State the null hypothesis: μ = μ₀.
  2. Compute the test statistic: Z = (x̄ − μ₀) / (σ / √n).
  3. Find the p-value: P(Z > |z_obs|) for one-tailed, or P(|Z| > |z_obs|) for two-tailed.
  4. Reject H₀ if p-value < α (the significance level, typically 0.05).

The Z-test is valid when the population standard deviation σ is known and either the population is normal or the sample size is large enough (n ≥ 30) for the Central Limit Theorem to apply. When σ is unknown and estimated from the sample, the t-distribution should be used instead.
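The four-step procedure can be sketched as follows; the sample figures (x̄ = 71 against μ₀ = 68, σ = 10, n = 50) are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def one_sample_z_test(xbar, mu0, sigma, n, two_tailed=True):
    """One-sample Z-test: returns (z statistic, p-value)."""
    z = (xbar - mu0) / (sigma / sqrt(n))        # step 2: test statistic
    tail = 1.0 - NormalDist().cdf(abs(z))       # step 3: P(Z > |z_obs|)
    return z, 2.0 * tail if two_tailed else tail

# Hypothetical data: sample mean 71 vs. H0 mean 68, sigma = 10, n = 50
z, p = one_sample_z_test(71, 68, 10, 50)
print(round(z, 3), round(p, 4))
reject = p < 0.05                               # step 4: compare to alpha
```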

Comparing scores across different scales

One of the most practically useful applications of Z-scores is comparing measurements that use different scales. Suppose a student scored 82 on a physics test (mean 74, σ = 9) and 91 on a literature essay (mean 85, σ = 12):

  • Physics Z: (82 − 74) / 9 = +0.89: top 19% of physics students
  • Literature Z: (91 − 85) / 12 = +0.50: top 31% of literature students

Even though the raw literature score is higher, the physics performance is relatively stronger within its distribution. Z-scores reveal this directly.
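The comparison takes two lines once the formula is in hand, using the scores from the example above:

```python
def z_score(x, mu, sigma):
    """Standard score: (x - mean) / standard deviation."""
    return (x - mu) / sigma

physics = z_score(82, 74, 9)       # scores from the example above
literature = z_score(91, 85, 12)

print(round(physics, 2), round(literature, 2))   # 0.89 0.5
print(physics > literature)                      # True
```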

Limitations

Z-scores assume the underlying distribution is approximately normal. Applied to heavily skewed, bimodal, or fat-tailed distributions, they lose their precise probability interpretation. For non-normal data, robust alternatives like the modified Z-score (using median and median absolute deviation) or percentile-based methods may be more appropriate.
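A minimal sketch of the modified Z-score mentioned above (`modified_z_scores` is a hypothetical helper name; the data is invented, with one deliberate outlier):

```python
from statistics import median

def modified_z_scores(data):
    """Modified Z-score: 0.6745 * (x - median) / MAD.
    The 0.6745 factor rescales MAD to match sigma for normal data."""
    med = median(data)
    mad = median(abs(x - med) for x in data)   # median absolute deviation
    return [0.6745 * (x - med) / mad for x in data]

data = [10, 11, 12, 13, 14, 15, 100]   # hypothetical data with one outlier
print([round(z, 2) for z in modified_z_scores(data)])
```

Because the median and MAD are barely affected by the outlier, the value 100 receives an enormous modified Z-score while the ordinary values stay near zero; a classical Z-score would be dragged toward the outlier through the inflated mean and standard deviation.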