In probability and statistics, the standard deviation of a probability distribution, random variable, or population or multiset of values is a measure of the spread of its values. It is usually denoted by the letter σ (lowercase sigma) and is defined as the square root of the variance.
To understand standard deviation, keep in mind that variance is the average of the squared differences between data points and the mean. Variance is therefore expressed in squared units of the data. Standard deviation, being the square root of the variance, measures the spread of the data about the mean in the same units as the data themselves.
Said more formally, the standard deviation is the root mean square (RMS) deviation of values from their arithmetic mean.
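One way to write this definition (a sketch using the conventional symbols μ for the population mean, N for the number of values, and x_i for the individual values, none of which are introduced elsewhere in this section):

```latex
\mu = \frac{1}{N}\sum_{i=1}^{N} x_i,
\qquad
\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2,
\qquad
\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}.
```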
For example, in the population {4, 8}, the mean is 6 and the deviations from the mean are {−2, 2}. Those deviations squared are {4, 4}, the average of which (the variance) is 4. Therefore, the standard deviation is 2. In this case, 100% of the values in the population lie within one standard deviation of the mean.
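The same arithmetic can be checked with a short calculation. The sketch below uses Python's built-in statistics module, which is an illustrative choice rather than anything prescribed by the text.

```python
import statistics

population = [4, 8]

mean = statistics.mean(population)           # (4 + 8) / 2 = 6
deviations = [x - mean for x in population]  # [-2, 2]
variance = statistics.pvariance(population)  # average of squared deviations = 4
std_dev = statistics.pstdev(population)      # square root of the variance = 2.0

print(mean, deviations, variance, std_dev)   # 6 [-2, 2] 4 2.0
```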
The standard deviation is the most common measure of statistical dispersion, measuring how widely spread the values in a data set are. If the data points are all close to the mean, the standard deviation is small. Conversely, if many data points are far from the mean, the standard deviation is large. If all the data values are equal, the standard deviation is zero.
When only a sample drawn from a population is available, the population standard deviation can be estimated by a modified standard deviation (s) computed from the sample. The formulas are given below.
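As an illustration of the distinction (the data set and the use of Python's statistics module here are assumptions for demonstration, not part of the article), the sample estimator s divides the sum of squared deviations by n − 1 rather than n, giving a slightly larger value:

```python
import statistics

sample = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample with mean 5

# Population standard deviation: divides the sum of squared deviations by n.
sigma = statistics.pstdev(sample)  # sqrt(32 / 8) = 2.0

# Sample standard deviation (s): divides by n - 1, intended as an estimate of
# the standard deviation of the underlying population.
s = statistics.stdev(sample)       # sqrt(32 / 7) ≈ 2.138

print(sigma, s)
```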