Accuracy is how close a measurement is to the correct value for that measurement. The precision of a measurement system refers to how closely repeated measurements (taken under the same conditions) agree with one another. Measurements can be both accurate and precise, accurate but not precise, precise but not accurate, or neither.
Precision and Imprecision
Precision refers to how well measurements agree with each other in multiple tests.
Random error, or imprecision, is usually quantified by calculating the coefficient of variation from the results of a set of duplicate measurements.
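As a minimal sketch (using hypothetical duplicate measurements, since none are given here), the coefficient of variation is simply the standard deviation expressed as a percentage of the mean:

```python
import statistics

# Hypothetical duplicate measurements of the same sample (arbitrary units)
duplicates = [10.1, 9.8, 10.0, 10.3, 9.9]

mean = statistics.mean(duplicates)
sd = statistics.stdev(duplicates)   # sample standard deviation (n - 1 form)
cv_percent = 100 * sd / mean        # coefficient of variation as a percentage

print(f"mean = {mean:.2f}, CV = {cv_percent:.1f}%")  # prints mean = 10.02, CV = 1.9%
```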
The accuracy of a measurement is how close a result comes to the true value.
When randomness is attributed to errors, they are "errors" in the sense in which that term is used in statistics.
Systematic error (bias) takes the same value each time the instrument is used in the same way (e.g. a calibration error), and in this case is sometimes called statistical bias. It can often be reduced with standardised procedures; part of the learning process in the various sciences is learning how to use standard instruments and protocols so as to minimise systematic error.
Random error (or random variation) may vary from one observation to another and is due to factors which cannot or will not be controlled. Random error often occurs when instruments are pushed to the extremes of their operating limits. For example, it is common for digital balances to exhibit random error in their least significant digit: three measurements of a single object might read 0.9111 g, 0.9110 g, and 0.9112 g.
Systematic error, or inaccuracy, is quantified by the average difference (bias) between a set of measurements obtained with the test method and a reference value, or values obtained with a reference method.
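A minimal sketch of quantifying bias, assuming hypothetical paired results from a test method and a reference method:

```python
import statistics

# Hypothetical paired results on the same samples: test method vs. reference method
test_method      = [5.2, 5.0, 5.4, 5.1, 5.3]
reference_method = [5.0, 4.9, 5.1, 5.0, 5.1]

# Bias = average difference between the test method and the reference
differences = [t - r for t, r in zip(test_method, reference_method)]
bias = statistics.mean(differences)

print(f"bias = {bias:+.2f}")  # prints bias = +0.18
```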
There is uncertainty in all scientific data. Uncertainty is reported in terms of confidence.
Uncertainty is the quantitative estimation of error present in data; all measurements contain some uncertainty generated through systematic error and/or random error.
Acknowledging the uncertainty of data is an important component of reporting the results of scientific investigation.
Careful methodology can reduce uncertainty by correcting for systematic error and minimizing random error. However, uncertainty can never be reduced to zero.
Estimating the experimental uncertainty for a single measurement
Any measurement made will have some uncertainty associated with it, no matter the precision of the measuring tool. So how is this uncertainty determined and reported?
The uncertainty of a single measurement is limited by the precision and accuracy of the measuring instrument, along with any other factors that might affect the ability of the experimenter to make the measurement.
For example, if you are trying to use a ruler to measure the diameter of a tennis ball, the uncertainty might be ± 5 mm, but if you used a Vernier caliper, the uncertainty could be reduced to maybe ± 2 mm. The limiting factor with the ruler is parallax, while the second case is limited by ambiguity in the definition of the tennis ball's diameter (it's fuzzy!). In both of these cases, the uncertainty is greater than the smallest divisions marked on the measuring tool (likely 1 mm and 0.05 mm respectively).
Unfortunately, there is no general rule for determining the uncertainty in all measurements. The experimenter is the one who can best evaluate and quantify the uncertainty of a measurement based on all the possible factors that affect the result. Therefore, the person making the measurement has the obligation to make the best judgment possible and report the uncertainty in a way that clearly explains what the uncertainty represents:
Measurement = (measured value ± standard uncertainty) unit of measurement
where the ± standard uncertainty indicates approximately a 68% confidence interval (see sections on Standard Deviation and Reporting Uncertainties).
Example: Diameter of tennis ball = 6.7 ± 0.2 cm.
Estimating the experimental uncertainty for a repeated measure (Standard deviation)
Suppose you time the period of oscillation of a pendulum using a digital instrument (that you assume is measuring accurately) and find: T = 0.44 seconds. This single measurement of the period suggests a precision of ±0.005 s, but this instrument precision may not give a complete sense of the uncertainty. If you repeat the measurement several times and examine the variation among the measured values, you can get a better idea of the uncertainty in the period. For example, here are the results of 5 measurements, in seconds: 0.46, 0.44, 0.45, 0.44, 0.41.
For this situation, the best estimate of the period is the average, or mean.
Whenever possible, repeat a measurement several times and average the results. This average is generally the best estimate of the "true" value (unless the data set is skewed by one or more outliers which should be examined to determine if they are bad data points that should be omitted from the average or valid measurements that require further investigation).
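As a minimal sketch, the averaging described above can be done with Python's statistics module, using the five pendulum readings from the example:

```python
import statistics

# The five period measurements from the pendulum example, in seconds
periods = [0.46, 0.44, 0.45, 0.44, 0.41]

# The mean is generally the best estimate of the "true" period
mean_period = statistics.mean(periods)
print(f"T = {mean_period:.2f} s")  # prints T = 0.44 s
```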
Generally, the more repetitions you make of a measurement, the better this estimate will be, but be careful to avoid wasting time taking more measurements than is necessary for the precision required.
Consider, as another example, the measurement of the thickness of a piece of paper using a micrometer. The thickness of the paper is measured at a number of points on the sheet, and the values obtained are entered in a data table.
This average is the best available estimate of the thickness of the piece of paper, but it is certainly not exact. We would have to average an infinite number of measurements to approach the true mean value, and even then, we are not guaranteed that the mean value is accurate because there is still some systematic error from the measuring tool, which can never be calibrated perfectly. So how do we express the uncertainty in our average value?
The most common way to describe the spread or uncertainty of the data is the standard deviation.
The significance of the standard deviation is this: if you now make one more measurement using the same micrometer, you can reasonably expect (with about 68% confidence) that the new measurement will be within 0.002 mm of the estimated average of 0.065 mm. In fact, it is reasonable to use the standard deviation as the uncertainty associated with this single new measurement.
This is written either as
The thickness of 80 gsm paper (n = 5) averaged 0.065 mm (s = 0.002 mm)
where s is the standard deviation, or as
The thickness of 80 gsm paper (n = 5) averaged 0.065 ± 0.002 mm
The latter style requires stating specifically in the Methods which measure of variability is reported with the mean.
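The individual paper-thickness readings are not listed here, so as a sketch the same mean-and-standard-deviation calculation is shown for the pendulum readings from earlier; note that statistics.stdev uses the sample (n − 1) form:

```python
import statistics

# The five pendulum readings (seconds) from the earlier example
periods = [0.46, 0.44, 0.45, 0.44, 0.41]

mean_period = statistics.mean(periods)
s = statistics.stdev(periods)   # sample standard deviation, divides by n - 1

print(f"T = {mean_period:.2f} ± {s:.2f} s")  # prints T = 0.44 ± 0.02 s
```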
Standard deviation of the mean (standard error of the mean, SEM)
The standard error is a measure of how accurately the sample mean estimates the true or reference value. The main use of the standard error of the mean is to give confidence intervals around the estimated means for normally distributed data: the interval applies not to the data itself but to the mean.
If measured values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements.
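As a sketch using the pendulum readings from earlier, the standard error of the mean is just the standard deviation divided by the square root of the number of measurements:

```python
import math
import statistics

# The five pendulum readings (seconds) from the earlier example
periods = [0.46, 0.44, 0.45, 0.44, 0.41]

s = statistics.stdev(periods)        # sample standard deviation
sem = s / math.sqrt(len(periods))    # SEM = s / sqrt(n)

print(f"SEM = {sem:.3f} s")
```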
Standard error is often used to test (in terms of null hypothesis testing) differences between means.
For example, two populations of salmon fed on two different diets may be considered significantly different if the 95% confidence interval (approximately 2 standard errors) around the estimated mean fish size under Diet A does not include the estimated mean fish size under Diet B.
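A rough sketch of this comparison, using hypothetical fish sizes and mean ± 2 SEM as an approximate 95% interval:

```python
import math
import statistics

def ci95(values):
    """Approximate 95% confidence interval for the mean: mean +/- 2 * SEM."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(len(values))
    return m - 2 * sem, m + 2 * sem

# Hypothetical fish sizes (cm) under two diets
diet_a = [52, 54, 53, 55, 51, 53, 54, 52]
diet_b = [58, 60, 59, 57, 61, 59, 58, 60]

low_a, high_a = ci95(diet_a)
low_b, high_b = ci95(diet_b)

# Diet B's mean lies outside Diet A's interval, so by this rough
# criterion the two diets would be considered significantly different
print(high_a < statistics.mean(diet_b))  # prints True
```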
Note that the standard error of the mean depends on the sample size: it shrinks to 0 as the sample size increases to infinity.
Standard error of mean (SEM) versus standard deviation
In scientific and technical literature, experimental data are often summarized either using the mean and standard deviation of the sample data or the mean with the standard error. This often leads to confusion about their interchangeability. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean is descriptive of the random sampling process.
The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem.
Put simply, the standard error of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the standard deviation of the sample is the degree to which individuals within the sample differ from the sample mean. If the population standard deviation is finite, the standard error of the mean of the sample will tend to zero with increasing sample size, because the estimate of the population mean will improve, while the standard deviation of the sample will tend to approximate the population standard deviation as the sample size increases.
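A small simulation (with an assumed population mean of 100 and standard deviation of 10) illustrates both tendencies: the sample SD stabilises near the population SD, while the SEM keeps shrinking:

```python
import random
import statistics

random.seed(42)  # fixed seed so the run is repeatable

def sd_and_sem(n):
    """Draw n values from a normal population (mean 100, SD 10);
    return the sample SD and the SEM = SD / sqrt(n)."""
    sample = [random.gauss(100, 10) for _ in range(n)]
    sd = statistics.stdev(sample)
    return sd, sd / n ** 0.5

for n in (10, 100, 10_000):
    sd, sem = sd_and_sem(n)
    print(f"n = {n:>6}: SD = {sd:5.2f}, SEM = {sem:.3f}")
```

As n grows, SD hovers around 10 while SEM falls toward zero, matching the argument above.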
The confidence level represents the frequency (i.e. the proportion) of possible confidence intervals that contain the true value of the unknown population parameter. Most commonly, the 95.4% (“two sigma”) confidence level is used. However, other confidence levels can be used, for example, 68.3% (“one sigma”) and 99.7% (“three sigma”).
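These “sigma” percentages can be reproduced from the error function, since the fraction of a normal distribution lying within ±k standard deviations of the mean is erf(k/√2):

```python
import math

def coverage(k):
    """Fraction of a normal distribution within +/- k standard deviations."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(f"±{k} sigma: {100 * coverage(k):.1f}%")
# prints 68.3%, 95.4%, 99.7%
```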
Glossary
Absolute error The absolute error of a measurement is half of the smallest unit on the measuring device. The smallest unit is called the precision of the device.
Array An array is an ordered collection of objects or numbers arranged in rows and columns.
Bias Bias generally refers to a systematic favouring of certain outcomes more than others, due to unfair influence (knowingly or otherwise).
Confidence level The probability that the value of a parameter falls within a specified range of values; for example, 2s corresponds to approximately a 95% confidence level.
Data cleansing Detecting and removing errors and inconsistencies from data in order to improve the quality of data (also known as data scrubbing).
Data set An organised collection of data.
Descriptive statistics Statistics that quantitatively describe or summarise features of a collection of information.
Large data sets Data sets large enough to be statistically reliable, requiring computational analysis to reveal patterns, trends and associations.
Limits of accuracy The limits of accuracy for a recorded measurement are the possible upper and lower bounds for the actual measurement.
Measures of central tendency Measures of central tendency are the values about which the set of data values for a particular variable are scattered. They are a measure of the centre or location of the data. The two most common measures of central tendency are the mean and the median.
Measures of spread Measures of spread describe how similar or varied the set of data values are for a particular variable. Common measures of spread include the range, combinations of quantiles (deciles, quartiles, percentiles), the interquartile range, variance and standard deviation.
Normal distribution The normal distribution is a type of continuous distribution.
The mean, median and mode are equal and the scores are symmetrically arranged either side of the mean.
The graph of a normal distribution is often called a ‘bell curve’ due to its shape.
Reliability The extent to which repeated observations and/or measurements taken under identical circumstances will yield similar results.
Sampling Sampling is the selection of a subset of data from a statistical population. Methods of sampling include:
systematic sampling – sample data is selected from a random starting point and using a fixed periodic interval
self-selecting sampling – non-probability sampling where individuals volunteer themselves to be part of a sample
simple random sampling – sample data is chosen at random where each member has an equal probability of being chosen
stratified sampling – after dividing the population into separate groups or strata, a random sample is then taken from each group in proportion to the size of that group in the population.
A sample can be used to estimate the characteristics of the statistical population.
Standard deviation Standard deviation is a measure of the spread of a dataset. It gives an indication of how far, on average, individual data values are spread from the mean.
Standard error The standard error of the mean (SEM) is the standard deviation of the sampling distribution of the mean.
Uncertainty Any single value has an uncertainty equal to the standard deviation. However, if the values are averaged, then the mean measurement value has a much smaller uncertainty, equal to the standard error of the mean, which is the standard deviation divided by the square root of the number of measurements.