# Accuracy and precision

**Accuracy and precision** are terms applied to individual measurements or to methods of measurement.

*Accuracy* refers to how closely a measurement agrees with the true value it is trying to measure, whereas *precision* refers to how close repeated measurements are (or would be) to each other. Precision may refer to the degree to which a measurement is rounded before being reported, but it can also be an inherent characteristic of the method of measurement being used.

A system of measurement is called *valid* if it is both accurate and precise.


## An example

In the context of weighing an apple, for example, a reported weight of 503.276 grams would be more *precise* than a reported weight of 500 grams; however, neither measurement would be *accurate* if the apple actually weighs 452 grams. In this case, a reported weight of 460 grams would be *more accurate* than the 500-gram reported weight.

The scale itself in the preceding example (the method of measurement) would be considered accurate if repeated measurements of the weight of the apple were correct *on the average* (i.e., if their arithmetic mean matches the actual weight of the apple); the scale would be considered precise if repeated measurements agree with each other (i.e., if the standard deviation of the weights is small).
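The distinction above can be sketched numerically. In this illustration the readings are made-up numbers for two hypothetical scales weighing the 452-gram apple: accuracy is checked with the arithmetic mean, precision with the standard deviation.

```python
import statistics

true_weight = 452.0  # grams (the apple's actual weight, from the example)

# Hypothetical repeated weighings on two scales (illustrative numbers only).
scale_a = [451.8, 452.3, 451.9, 452.1, 451.9]  # accurate and precise
scale_b = [448.0, 456.5, 449.2, 455.1, 451.2]  # accurate but imprecise

for name, readings in [("A", scale_a), ("B", scale_b)]:
    mean = statistics.mean(readings)   # close to 452 => accurate
    sd = statistics.stdev(readings)    # small => precise
    print(f"Scale {name}: mean = {mean:.2f} g, standard deviation = {sd:.2f} g")
```

Both scales are accurate here (both means equal 452 grams), but scale A is far more precise than scale B.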

## Systematic error and bias

If a method of measurement results in values that are consistently too large or too small, the method may be called *biased*. In this case, the method has some source of so-called *systematic error*. The *bias* of the method is the difference between its average (or mean) measurement and the true value of what was being measured; in the 452-gram apple example above, if the scale reports an average value of 500 grams when the apple is repeatedly weighed, the bias of the scale would be 48 grams. A measurement method that gives a bias of zero is called *unbiased*.
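The bias calculation from the paragraph above can be written out directly. The readings below are invented for illustration; they average 500 grams, matching the biased scale in the example.

```python
import statistics

true_weight = 452.0  # grams (the apple's actual weight)

# Hypothetical readings from a scale that consistently reads high
# (illustrative numbers chosen to average 500 g).
readings = [499.5, 500.8, 500.1, 499.2, 500.4]

# Bias = mean measurement minus the true value.
bias = statistics.mean(readings) - true_weight
print(f"Estimated bias: {bias:.1f} g")  # → Estimated bias: 48.0 g
```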

## Random error

A method of measurement would have the highest possible degree of precision if repeated measurements always result in the same value. Most methods of measurement in the real world, however, have some source of *random error*, which makes repeated measurements disagree with one another even if the method is unbiased. To overcome random error, taking repeated measurements and averaging them can greatly improve the precision of the final measurement, but this will not affect its accuracy.
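The effect of averaging can be demonstrated with a small simulation. Here an unbiased measurement with purely random (Gaussian) error is assumed; the spread of the average of n measurements shrinks roughly in proportion to the square root of n, while the mean stays centred on the true value, so accuracy is unchanged.

```python
import random
import statistics

random.seed(0)
true_weight = 452.0  # grams (the apple's actual weight)
noise_sd = 5.0       # assumed standard deviation of the random error

def weigh():
    # One unbiased measurement with random (Gaussian) error.
    return random.gauss(true_weight, noise_sd)

# For each n, average n repeated measurements many times and look at
# how spread out those averages are.
spreads = {}
for n in [1, 4, 16, 64]:
    averages = [statistics.mean(weigh() for _ in range(n)) for _ in range(1000)]
    spreads[n] = statistics.stdev(averages)
    print(f"n = {n:2d}: spread of the average = {spreads[n]:.2f} g")
```

The printed spread falls from about 5 grams at n = 1 toward a fraction of a gram as n grows, while every average remains centred near 452 grams.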

## Precision fallacy

Reporting a measurement to more digits of precision than the method of measurement can support is a fallacy, because the extra digits suggest a level of certainty the measurement does not actually have.
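As a small illustration, suppose the scale from the earlier example has a random error of several grams. Reporting its raw reading of 503.276 grams would commit the fallacy; rounding to the nearest 10 grams better reflects what the scale can actually resolve.

```python
# A scale whose random error is several grams cannot justify six
# significant figures; round to the nearest 10 g instead.
raw_reading = 503.276            # grams (from the example above)
reported = round(raw_reading, -1)  # negative ndigits rounds to tens
print(f"{reported:.0f} g")         # prints "500 g"
```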