Standard deviation,
in one paste.
Paste any list of numbers — comma, space, or line-separated — and instantly see mean, variance, and standard deviation (sample and population), plus median, mode, quartiles, range, and a step-by-step working.
Inputs
Pick a dataset type
Data is a subset of a larger population. Uses N − 1 (Bessel's correction).
Separate values with commas, spaces, or new lines. Decimals and negatives allowed. 1,000 values maximum.
- Sample SD (s): 5.6421
- Mean: 20.5
- Count: 10
Sample standard deviation
n = 10
Average distance of every value from the mean — using N − 1 (Bessel's correction).
Mean (x̄)
20.5
Working (step-by-step)
- Mean: 205 ÷ 10 = 20.5
- Sum of squared deviations: 286.5
- Sample variance: 286.5 ÷ 9 = 31.8333
Field guide
How to calculate standard deviation, step by step.
Standard deviation is the most widely used measure of how spread out a dataset is. A small standard deviation means the values huddle close to the average; a large one means they scatter widely. The arithmetic itself is just three steps, but two details trip up almost everyone the first time: squaring the deviations (so positives and negatives don't cancel) and the population vs. sample distinction (which denominator to divide by). This calculator computes both variants in parallel so you can see the difference at a glance.
Step 1: The mean (average)
Add every value, then divide by how many values you have. The mean (also called the arithmetic average) is the centre of mass of the dataset:

x̄ = Σx ÷ N
For the dataset 12, 15, 18, 22, 30, 27, 19, 21, 16, 25, the sum is 205 and the count is 10, so the mean is 20.5. Notation: when the data is a sample, the mean is written x̄ (“x-bar”); when it's the entire population, the mean is written μ (“mu”).
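Step 1 is a one-liner in Python; this sketch just reproduces the worked example from the text:

```python
# Step 1: the mean of the example dataset from the text.
data = [12, 15, 18, 22, 30, 27, 19, 21, 16, 25]
mean = sum(data) / len(data)  # 205 ÷ 10 = 20.5
```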
Step 2: The squared deviations
For each value, subtract the mean and square the result. The squaring matters: without it, the positive and negative deviations would cancel out and you'd always get zero. Sum the squares to get what statisticians call SS (sum of squares):

SS = Σ(x − x̄)²
For the same dataset, the squared deviations are 72.25, 30.25, 6.25, 2.25, 90.25, 42.25, 2.25, 0.25, 20.25, 20.25. Their sum is 286.5.
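Step 2 is equally short; a sketch for the same dataset:

```python
# Step 2: squared deviations and their sum (SS).
data = [12, 15, 18, 22, 30, 27, 19, 21, 16, 25]
mean = sum(data) / len(data)
squared_deviations = [(x - mean) ** 2 for x in data]
ss = sum(squared_deviations)  # 286.5
```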
Step 3: Variance and standard deviation
Variance is just SS divided by either N or N − 1. Standard deviation is the square root of variance; taking the root puts the answer back into the same units as the original data, which is why standard deviation, not variance, is what people usually quote.
Population variance σ² = SS ÷ N
Sample variance s² = SS ÷ (N − 1)
σ = √σ², s = √s²
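Putting the three steps together in Python (a sketch for the example dataset; the standard library's `statistics.pstdev` and `statistics.stdev` implement the population and sample variants, so we cross-check against them):

```python
import math
import statistics

data = [12, 15, 18, 22, 30, 27, 19, 21, 16, 25]
mean = sum(data) / len(data)
ss = sum((x - mean) ** 2 for x in data)

pop_var = ss / len(data)          # σ² = SS ÷ N
samp_var = ss / (len(data) - 1)   # s² = SS ÷ (N − 1)
pop_sd = math.sqrt(pop_var)       # σ = √σ²
samp_sd = math.sqrt(samp_var)     # s = √s²

# The standard library agrees: pstdev is population, stdev is sample.
assert math.isclose(pop_sd, statistics.pstdev(data))
assert math.isclose(samp_sd, statistics.stdev(data))
```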
Population vs. sample and why N − 1
Use the population formula when your dataset truly contains every member of the group you care about — a class of 30 students, a roster of 12 employees, every game played in a season. Divide by N.
Use the sample formula when your dataset is a subset drawn from a larger population, such as a poll of 1,000 voters, a clinical trial of 200 patients, daily sales for one month chosen to estimate yearly variance. Divide by N − 1.
The N − 1 denominator is called Bessel's correction. It exists because the sample mean is itself estimated from the data, using the data twice (first for the mean, then for the deviations) slightly under-estimates the true variance. Dividing by N − 1 instead of N is the smallest adjustment that makes the sample variance an unbiased estimator of the population variance.
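The bias is easy to see empirically. This sketch (the seed, sample size, and trial count are arbitrary choices, not anything the calculator uses) draws many small samples from a population with known variance and averages the two estimators:

```python
import random

random.seed(42)

# Population: uniform integers 0..9, true variance (10² − 1) / 12 = 8.25.
TRUE_VAR = 8.25
n, trials = 5, 50_000
avg_div_n = 0.0          # estimator that divides by N (biased)
avg_div_n_minus_1 = 0.0  # Bessel-corrected estimator, divides by N − 1
for _ in range(trials):
    sample = [random.randint(0, 9) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    avg_div_n += ss / n / trials
    avg_div_n_minus_1 += ss / (n - 1) / trials

# Dividing by N lands near TRUE_VAR · (n − 1)/n = 6.6, an under-estimate;
# dividing by N − 1 lands near the true 8.25.
```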
For our 10-value dataset:
Population: σ² = 286.5 ÷ 10 = 28.65 → σ ≈ 5.353
Sample: s² = 286.5 ÷ 9 ≈ 31.833 → s ≈ 5.642
With small samples the gap is noticeable. As N grows large, the two values converge; at N = 1,000 the difference is in the third decimal place.
What standard deviation actually tells you
For a roughly bell-shaped (normal) distribution, the 68–95–99.7 rule applies:
- ~68% of values fall within 1 standard deviation of the mean.
- ~95% fall within 2 standard deviations.
- ~99.7% fall within 3 standard deviations.
So if a stock's daily returns have a mean of 0.05% and a standard deviation of 1.2%, you should expect about 95% of days to land between −2.35% and +2.45%. A move more than three standard deviations from the mean (below −3.55% or above +3.65%) is roughly a once-in-300-day event under the normal-distribution assumption.
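The bands themselves are just mean ± k·SD; a sketch for the stock-return figures above:

```python
# 68–95–99.7 bands for the stock-return example (values in percent).
mean, sd = 0.05, 1.2
bands = {k: (mean - k * sd, mean + k * sd) for k in (1, 2, 3)}
# bands[2] is the ~95% range, bands[3] the ~99.7% range.
```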
Coefficient of variation (CV)
Standard deviation is in the same units as the data, which makes it hard to compare across very different scales. The coefficient of variation normalizes by the mean and is reported as a percent:

CV = (s ÷ x̄) × 100%
A CV of 5% is “low” variability; 30%+ is “high.” CV is meaningless when the mean is zero or near-zero, so the calculator hides it in those cases.
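A sketch of the same logic, including the near-zero-mean guard. The helper name and the exact threshold are illustrative choices, not the calculator's actual implementation:

```python
def coefficient_of_variation(mean, sd):
    """CV as a percent; None when the mean is near zero (CV is undefined).

    Hypothetical helper: the 1e-12 guard threshold is an arbitrary choice.
    """
    if abs(mean) < 1e-12:
        return None
    return sd / mean * 100
```

For the example dataset (mean 20.5, sample SD ≈ 5.642) this gives a CV of about 27.5%, toward the "high" end of the scale.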
Median, mode, and quartiles
Standard deviation is one of several spread measures. The calculator also reports:
- Median: the middle value when sorted. Less sensitive to outliers than the mean.
- Mode: the most frequent value (or values, if multiple tie). Empty when every value is unique.
- Q1 / Q3: 25th and 75th percentiles. Their gap is the interquartile range (IQR), the spread of the middle half of the data, which is robust to extreme values.
- Range: max minus min. Easy to compute, but a single outlier can make it misleading.
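All four measures are available in Python's `statistics` module; a sketch for the example dataset (note one behavioural difference called out in the comments):

```python
import statistics

data = [12, 15, 18, 22, 30, 27, 19, 21, 16, 25]

median = statistics.median(data)             # 20.0 for this dataset
# NB: multimode returns *every* value when all are unique,
# unlike the calculator, which reports no mode in that case.
modes = statistics.multimode(data)
q1, _, q3 = statistics.quantiles(data, n=4)  # 'exclusive' method by default
iqr = q3 - q1                                # interquartile range
value_range = max(data) - min(data)          # 30 − 12 = 18
```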
Worked example: quality control
A factory measures the diameter of 8 ball bearings (in mm): 9.97, 10.02, 10.00, 9.99, 10.01, 10.03, 9.98, 10.00.
- Mean = 80.00 ÷ 8 = 10.000 mm
- Squared deviations sum to 0.00280
- Sample variance s² = 0.00280 ÷ 7 ≈ 0.000400
- Sample SD s = √0.000400 ≈ 0.0200 mm
By the 68–95–99.7 rule, ~95% of bearings should fall between 9.96 mm and 10.04 mm, well within typical machining tolerance.
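The whole worked example fits in a few lines of Python (a sketch using the measurements from the text):

```python
import math
import statistics

# Ball-bearing diameters in mm, from the quality-control example.
diameters = [9.97, 10.02, 10.00, 9.99, 10.01, 10.03, 9.98, 10.00]
mean = statistics.fmean(diameters)
s = statistics.stdev(diameters)         # sample SD, N − 1 denominator
low, high = mean - 2 * s, mean + 2 * s  # ~95% band under normality
```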
Common pitfalls
- Forgetting to square: summing the raw deviations always gives zero.
- Using the wrong denominator: dividing a sample by N instead of N − 1 systematically under-estimates spread.
- Mixing units: make sure every value is in the same unit before computing.
- Outliers: one or two extreme values can dominate the standard deviation. Look at the median and IQR alongside it.
- Non-normal data: the 68–95–99.7 rule only applies when the data is roughly bell-shaped. For skewed data, prefer quartiles.
Disclaimer
This calculator is a fast computational tool. It does the arithmetic exactly, but interpreting the result is up to you. For inferential statistics (hypothesis tests, confidence intervals, regression), pair this with the appropriate test for your study design.