Percent Error Calculator — observed vs true value
Calculate the percent error between an experimental measurement and the accepted true value. Get the step-by-step formula, absolute and relative error, signed direction, and an accuracy interpretation — used in chemistry, physics, engineering, and data science.
Inputs
Observed value: 5.8 · True value: 6
Formula
Percent Error = |observed − true| ÷ |true| × 100 = |5.8 − 6| ÷ |6| × 100
- Percent error: 3.33%
- Absolute error: 0.2
- Direction: Underestimate
1–5%: acceptable for most lab or applied work.
Working
Step-by-step calculation
1. Absolute error: |observed − true| = |5.8 − 6| = 0.2
2. Divide by |true value|: 0.2 ÷ |6| = 0.2 ÷ 6 = 0.033333
3. Multiply by 100 to get percent: 0.033333 × 100 = 3.33%
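The three steps above can be sketched as a small Python function (the name `percent_error` is illustrative, not part of the calculator):

```python
def percent_error(observed: float, true: float) -> float:
    """Unsigned percent error: |observed − true| ÷ |true| × 100."""
    if true == 0:
        raise ZeroDivisionError("percent error is undefined when the true value is 0")
    absolute_error = abs(observed - true)        # step 1: absolute error
    relative_error = absolute_error / abs(true)  # step 2: divide by |true value|
    return relative_error * 100                  # step 3: convert to percent

print(round(percent_error(5.8, 6), 2))  # 3.33
```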
Reference
Typical acceptable percent error by field
| Field | Acceptable range | Notes |
|---|---|---|
| Analytical chemistry | < 0.1% | Titrations, gravimetric analysis |
| Physics lab | < 1–5% | Depends on instrument precision |
| Chemistry lab | < 1–3% | Stoichiometry, reaction yields |
| Biology / ecology | < 5% | Field measurements |
| Engineering | < 0.1–1% | Safety-critical applications |
| Financial forecasting | < 2–5% | Revenue, cost projections |
| Medical diagnostics | < 1–2% | Lab assays, imaging |
| Weather forecasting | Varies | Ensemble models, time-horizon dependent |
Field guide
What percent error measures and why it matters.
Percent error quantifies how far an experimental or observed measurement deviates from the theoretically accepted (true) value, expressed as a percentage of that true value. It answers the question: “How wrong is my measurement, proportionally?” A 3% percent error on a laboratory density measurement means the observed value deviates from the reference by 3% of the reference value.
The formula
Percent Error = |observed − true| ÷ |true| × 100
Absolute error = |observed − true|
Relative error = |observed − true| ÷ |true|
Signed % error = (observed − true) ÷ |true| × 100
The absolute value signs around the numerator ensure the result is always non-negative — percent error is a magnitude, not a direction. The absolute value around the denominator handles cases where the true value is negative (e.g., a temperature below zero).
Unsigned vs signed percent error
The unsigned percent error (always ≥ 0) is the standard definition used in most scientific contexts. It tells you the magnitude of the error without indicating whether the measurement was too high or too low.
The signed percent error omits the outer absolute value on the numerator, so the result is negative when the observed value is lower than the true value (underestimate) and positive when it is higher (overestimate):
Signed % error = (observed − true) ÷ |true| × 100
Negative → observed < true (underestimate / systematic low bias)
Positive → observed > true (overestimate / systematic high bias)
The sign matters when diagnosing systematic bias in an instrument or method — consistently negative errors suggest the instrument reads low; consistently positive errors suggest it reads high.
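The bias diagnostic can be made concrete in a short sketch (the function name and the trial data are illustrative):

```python
def signed_percent_error(observed: float, true: float) -> float:
    """Signed percent error: negative → underestimate, positive → overestimate."""
    return (observed - true) / abs(true) * 100

# Five hypothetical trials that all read low suggest a systematic low bias:
trials = [9.71, 9.74, 9.70, 9.73, 9.72]
errors = [signed_percent_error(t, 9.807) for t in trials]
print(all(e < 0 for e in errors))  # True → instrument/method reads low
```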
Percent error vs percent difference
These two formulas are frequently confused:
- Percent error compares a measured value to a known true value. Use it when one of the two values is a reference standard or theoretical prediction and the other is your experimental result.
- Percent difference compares two experimental values of equal status when neither is the “true” value. Its denominator is the average of the two values:
|A − B| ÷ ((A + B) / 2) × 100.
If you measured the boiling point of water under two experimental conditions and want to compare the results, use percent difference. If you measured the boiling point once and want to compare it against 100 °C (the accepted value), use percent error.
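The distinction between the two formulas can be shown side by side (both function names and the boiling-point readings are illustrative):

```python
def percent_error(observed: float, true: float) -> float:
    """Denominator is the known reference value."""
    return abs(observed - true) / abs(true) * 100

def percent_difference(a: float, b: float) -> float:
    """Denominator is the mean of the two values — neither is the reference."""
    return abs(a - b) / ((a + b) / 2) * 100

# Two experimental boiling points compared to each other, then one vs 100 °C:
print(round(percent_difference(99.2, 99.8), 3))  # 0.603
print(round(percent_error(99.2, 100.0), 3))      # 0.8
```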
When the true value is zero
Percent error is mathematically undefined when the true value is exactly zero, because the formula requires dividing by the true value. In this case, the absolute error (|observed − 0| = |observed|) is the appropriate metric. In practice, true values of exactly zero are rare in physical measurements but do appear in contexts like residuals from regression models or null-hypothesis tests — where different metrics (RMSE, MAE, Cohen's d) are used instead.
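One way to handle this in code is to guard the zero case and fall back to absolute error (a sketch; the function name and return shape are assumptions):

```python
def error_metric(observed: float, true: float) -> tuple[str, float]:
    """Percent error when defined; otherwise fall back to absolute error."""
    if true == 0:
        return ("absolute error", abs(observed))
    return ("percent error", abs(observed - true) / abs(true) * 100)

print(error_metric(0.03, 0))  # ('absolute error', 0.03)
print(error_metric(5.8, 6))   # ('percent error', 3.33…)
```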
Worked example: measuring gravitational acceleration
A student drops a ball and measures gravitational acceleration using a timer, recording 9.72 m/s². The accepted value is 9.807 m/s².
| Step | Calculation | Result |
|---|---|---|
| Absolute error | \|9.72 − 9.807\| | 0.087 m/s² |
| Relative error | 0.087 ÷ \|9.807\| | 0.00887 |
| Percent error | 0.00887 × 100 | 0.887% |
| Signed % error | (9.72 − 9.807) ÷ 9.807 × 100 | −0.887% (underestimate) |
A percent error of 0.887% is excellent for a simple timing experiment. The negative sign confirms the student's measurement was slightly below the true value, which is typical for manual timing, since human reaction time adds a small systematic delay.
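The table's numbers can be reproduced directly:

```python
observed, true = 9.72, 9.807  # measured vs accepted gravitational acceleration

absolute = abs(observed - true)
relative = absolute / abs(true)
unsigned = relative * 100
signed = (observed - true) / abs(true) * 100

print(f"absolute error: {absolute:.3f} m/s^2")  # 0.087
print(f"relative error: {relative:.5f}")        # 0.00887
print(f"percent error:  {unsigned:.3f}%")       # 0.887
print(f"signed:         {signed:+.3f}%")        # -0.887
```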
How to reduce experimental percent error
High percent error traces back to two root causes:
- Random error: unpredictable variation from measurement to measurement (instrument resolution, environmental noise, observer inconsistency). Reduce it by repeating the measurement and averaging. A single trial is always less reliable than the mean of ten.
- Systematic error (bias): a consistent offset in one direction (faulty calibration, reaction time, a scale that reads 0.5 g too high). Averaging more trials does not fix systematic error; you must identify and correct the root cause.
The signed percent error is the key diagnostic: if every trial underestimates the true value, the error is systematic. If trials scatter around the true value, the error is primarily random.
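A quick simulation illustrates why averaging helps only with random error (a sketch with made-up bias and noise parameters, not measured data):

```python
import random

random.seed(1)
TRUE = 9.807
BIAS = -0.05   # assumed systematic offset (e.g. reaction-time delay)
NOISE = 0.08   # assumed random scatter per trial (std. dev.)

def trial() -> float:
    """One simulated measurement: true value + fixed bias + random noise."""
    return TRUE + BIAS + random.gauss(0, NOISE)

single = trial()
mean_of_50 = sum(trial() for _ in range(50)) / 50

# Averaging 50 trials shrinks the random scatter, but the mean still
# settles near TRUE + BIAS — the systematic offset survives averaging.
print(abs(single - TRUE), abs(mean_of_50 - (TRUE + BIAS)))
```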