
Standard Deviation Calculator

Calculate standard deviation, variance, mean, and more for any data set. Supports both population and sample calculations.

How to Use This Calculator

  1. Enter your numbers (comma-separated)
  2. Click the Calculate button
  3. Read the results displayed below the calculator
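The steps above can be sketched in a few lines of Python using the standard library's `statistics` module; the input string here is an assumed example, not output from the calculator itself:

```python
import statistics

# Parse comma-separated input, as the calculator would.
raw = "72, 85, 91, 68, 79, 88"
data = [float(x) for x in raw.split(",")]

mean = statistics.mean(data)        # arithmetic average
sample_sd = statistics.stdev(data)  # sample SD, divides by n - 1
pop_sd = statistics.pstdev(data)    # population SD, divides by N

print(mean, round(sample_sd, 2), round(pop_sd, 2))
```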

What Is Standard Deviation and Why Does It Matter?

Standard deviation measures how spread out your data is around the mean (average). A small standard deviation means values cluster tightly around the mean; a large standard deviation means values are widely scattered.

Two datasets can have the same average but completely different distributions, and standard deviation captures that difference. For example (illustrative values), Dataset A = {9, 10, 11} and Dataset B = {1, 10, 19} both have a mean of 10, but Dataset B is nine times more variable (population SD of about 7.35 versus about 0.82). Standard deviation makes this difference visible.
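A quick sketch of this comparison, using two illustrative datasets (assumed values, not taken from the page) that share a mean of 10:

```python
import statistics

dataset_a = [9, 10, 11]   # tightly clustered around 10
dataset_b = [1, 10, 19]   # widely scattered around 10

sd_a = statistics.pstdev(dataset_a)  # population SD of A
sd_b = statistics.pstdev(dataset_b)  # population SD of B

print(statistics.mean(dataset_a), statistics.mean(dataset_b))  # same mean
print(round(sd_a, 2), round(sd_b, 2))                          # very different spread
```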

Standard deviation is denoted σ (sigma) for a population and s for a sample. It is the square root of variance, expressed in the same units as the original data — making it more interpretable than variance alone.

Applications span almost every field: quality control (are manufactured parts consistently within tolerance?), finance (investment risk = return volatility), medicine (is a patient's reading within 2 SD of normal?), education (how are test scores distributed?), and sports analytics (how consistent is an athlete's performance?).

Population vs Sample Standard Deviation

The most important choice when calculating standard deviation is whether you're working with a population (all possible data points) or a sample (a subset). This determines which formula to use and affects the result.

Population standard deviation (σ): Use when you have data for the entire group you're studying. Formula: σ = √[Σ(xᵢ − μ)² / N]

Where: μ = population mean, N = number of values, xᵢ = each individual value, and Σ denotes summation over all values.

Sample standard deviation (s): Use when your data is a sample drawn from a larger population. Formula: s = √[Σ(xᵢ − x̄)² / (n−1)]

Where: x̄ = sample mean, n = number of values in sample, (n−1) = Bessel's correction.

Bessel's correction divides by (n−1) instead of n because a sample's deviations are measured from the sample mean rather than the true population mean, which systematically underestimates the population variance, particularly for small samples. Using (n−1) provides an unbiased estimator of the population variance.

Which to use? If your data covers every member of the group you are describing, use the population formula (divide by N). If your data is a sample used to draw conclusions about a larger group, use the sample formula (divide by n−1). When in doubt, the sample formula is the safer default.

Step-by-Step Standard Deviation Calculation

Let's work through a complete example with real numbers:

Dataset: Test scores of 6 students: {72, 85, 91, 68, 79, 88}

Step 1 — Find the mean: (72 + 85 + 91 + 68 + 79 + 88) / 6 = 483 / 6 = 80.5

Step 2 — Find each deviation from mean and square it:

Score (xᵢ)    Deviation (xᵢ − x̄)    Squared (xᵢ − x̄)²
72            72 − 80.5 = −8.5       72.25
85            85 − 80.5 = +4.5       20.25
91            91 − 80.5 = +10.5      110.25
68            68 − 80.5 = −12.5      156.25
79            79 − 80.5 = −1.5       2.25
88            88 − 80.5 = +7.5       56.25
Sum           0 (always)             417.50

Step 3 — Calculate variance: Sample variance (n−1) = 417.50 / 5 = 83.50

Step 4 — Take square root for standard deviation: s = √83.50 ≈ 9.14

Interpretation: Most scores fall within about 9.14 points of the 80.5 mean. Approximately 68% of scores would be expected between 71.4 and 89.6 (mean ± 1 SD) if this were a normally distributed population.
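The four steps above can be reproduced directly in Python:

```python
import math

scores = [72, 85, 91, 68, 79, 88]

mean = sum(scores) / len(scores)            # Step 1: 80.5
sq_dev = [(x - mean) ** 2 for x in scores]  # Step 2: squared deviations
variance = sum(sq_dev) / (len(scores) - 1)  # Step 3: sample variance, 83.5
s = math.sqrt(variance)                     # Step 4: standard deviation

print(mean, sum(sq_dev), variance, round(s, 2))
```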

The Empirical Rule and Normal Distribution

For data that follows a normal distribution (bell curve), the Empirical Rule (68-95-99.7 rule) tells you exactly how many values fall within each standard deviation range:

Range           Percentage of Data    Example (mean = 100, SD = 15)
Mean ± 1 SD     ~68.27%               85 to 115
Mean ± 2 SD     ~95.45%               70 to 130
Mean ± 3 SD     ~99.73%               55 to 145
Beyond ± 3 SD   ~0.27%                Below 55 or above 145
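These percentages can be verified with Python's `statistics.NormalDist`, using the mean = 100, SD = 15 example:

```python
from statistics import NormalDist

dist = NormalDist(mu=100, sigma=15)

# Probability mass inside mean ± k standard deviations
coverage = {}
for k in (1, 2, 3):
    coverage[k] = dist.cdf(100 + k * 15) - dist.cdf(100 - k * 15)
    print(f"Mean ± {k} SD: {coverage[k]:.2%}")
```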

The classic application is IQ scores: mean = 100, SD = 15. An IQ of 130 is 2 SDs above the mean — only about 2.3% of people score that high. An IQ of 145 is 3 SDs above the mean — about 0.13% of people (roughly 1 in 750).

In quality control, the Six Sigma standard targets fewer than 3.4 defects per million opportunities (a 0.00034% defect rate). The name comes from keeping specification limits ±6 standard deviations from the process target; the 3.4-per-million figure allows for a conventional 1.5 SD long-term drift in the process mean. This is the statistical foundation of Six Sigma manufacturing quality programs.

Not all data is normally distributed. Income distributions are right-skewed (a few very high earners stretch the right tail). In such cases, the median and interquartile range may be more informative than mean and standard deviation.

Other Statistical Measures: Mean, Median, Variance, and More

Standard deviation is most meaningful alongside other descriptive statistics: the mean (arithmetic average), the median (middle value, robust to outliers), the variance (squared spread), the range (maximum minus minimum), and the interquartile range (spread of the middle 50% of the data). Together they describe both the center and the shape of a dataset.
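As a sketch, the core descriptive statistics for the test-score example from earlier can all be computed with the standard library:

```python
import statistics

data = [72, 85, 91, 68, 79, 88]

print(statistics.mean(data))      # mean: 80.5
print(statistics.median(data))    # median: average of 79 and 85
print(statistics.variance(data))  # sample variance: 83.5
print(statistics.stdev(data))     # sample SD: ~9.14
print(max(data) - min(data))      # range: 23
```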

Standard Deviation in Finance, Science, and Sports

Standard deviation has specific, practical interpretations across different fields:

Finance — Measuring investment risk: In finance, standard deviation of returns = volatility = risk. A stock returning 10% annually with SD of 15% has a 68% probability of returning between −5% and +25% in any given year. The S&P 500 historically has an annual SD of about 15–20%. Bond portfolios typically have SD of 3–7%. Risk-adjusted performance (Sharpe Ratio) = (return − risk-free rate) / SD — the higher, the better.
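The Sharpe Ratio calculation mentioned above is a one-liner; the figures below are assumed for illustration (3% risk-free rate is not stated on this page):

```python
# Risk-adjusted performance: (return - risk-free rate) / SD of returns
annual_return = 0.10   # 10% expected annual return (example)
risk_free = 0.03       # 3% risk-free rate (assumption)
volatility = 0.15      # 15% SD of annual returns (example)

sharpe = (annual_return - risk_free) / volatility
print(round(sharpe, 2))
```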

Science — Quality control and measurement: Laboratory instruments report measurements as mean ± SD. A thermometer reading of 37.2 ± 0.3°C means repeated readings scatter within about 0.3°C of the reported value roughly 68% of the time, assuming normally distributed measurement error. In clinical trials, statistical significance at p < 0.05 roughly corresponds to a treatment effect about 2 standard errors (not 2 SDs of the raw data) away from the no-effect value.

Sports analytics: Player consistency is quantified with SD. A basketball player averaging 25 points per game with SD of 3 is more reliable than one averaging 25 with SD of 10. Weather forecasting uses ensemble models where the SD of temperature predictions indicates confidence — a narrow SD means forecasters agree; a wide SD means high uncertainty.

Education: Z-scores express how many standard deviations a student's score is from the class mean: Z = (score − mean) / SD. A Z-score of +2 means scoring 2 SDs above the mean — better than approximately 97.7% of students. Standardized tests like the SAT are designed so scores follow a roughly normal distribution, enabling these percentile comparisons.
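The Z-score formula and its percentile interpretation can be sketched together, using the IQ-style figures from earlier (mean 100, SD 15):

```python
from statistics import NormalDist

def z_score(value, mean, sd):
    # Number of standard deviations `value` lies from `mean`.
    return (value - mean) / sd

z = z_score(130, 100, 15)         # 2 SDs above the mean
percentile = NormalDist().cdf(z)  # fraction of scores below this value

print(z, f"{percentile:.1%}")
```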

Last updated: March 2026

Frequently Asked Questions

What is the difference between standard deviation and variance?

Variance is the average of squared deviations from the mean. Standard deviation is the square root of variance. Both measure spread, but standard deviation is in the same units as the data (easier to interpret), while variance is in squared units. A height dataset in cm has variance in cm² — not meaningful. The SD in cm is directly comparable to the original measurements.

When should I use population vs sample standard deviation?

Use population SD (σ, divides by N) when you have data for the entire population you're describing — all students in one specific class, all employees in one company. Use sample SD (s, divides by n−1) when your data is a subset of a larger population and you're estimating the population's variability — a survey sample, clinical trial participants, quality control samples from a production run.

What does a high or low standard deviation mean?

A low standard deviation means data points are clustered closely around the mean — consistency, low variability. A high standard deviation means data is spread widely — high variability. Neither is inherently better; it depends on context. In manufacturing, low SD is desired (consistency). In investment returns, some investors accept higher SD for higher potential returns.

What is a Z-score and how does it relate to standard deviation?

A Z-score measures how many standard deviations a data point is from the mean: Z = (value − mean) / SD. A Z-score of 0 = exactly average. Z = +1 = 1 SD above mean (84th percentile). Z = −2 = 2 SDs below mean (2.3rd percentile). Z-scores allow comparing values from different datasets with different scales.

What is the standard error and how is it different from standard deviation?

Standard deviation describes the spread of individual data points. Standard error of the mean (SEM = SD/√n) describes the precision of the sample mean as an estimate of the true population mean. As sample size increases, SEM decreases (more data = more precise estimate), but SD doesn't necessarily change. SEM is used in confidence intervals; SD describes the distribution of the data itself.
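The SEM = SD/√n relationship is easy to sketch with the test-score example from earlier on this page:

```python
import math
import statistics

data = [72, 85, 91, 68, 79, 88]

sd = statistics.stdev(data)      # spread of individual values
sem = sd / math.sqrt(len(data))  # precision of the sample mean

print(round(sd, 2), round(sem, 2))  # SEM is much smaller than SD
```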

Can standard deviation be negative?

No. Standard deviation is always zero or positive. It equals zero only when all data values are identical (no variability at all). Since it's calculated as a square root of a sum of squares, it cannot be negative. Negative variance or standard deviation would indicate a calculation error.

How do outliers affect standard deviation?

Outliers can dramatically inflate standard deviation because deviations are squared, so large deviations from the mean contribute disproportionately. For example, in {10, 11, 10, 12, 100}: removing the outlier (100) drops the sample SD from about 39.9 to about 0.96. When outliers are present, the median and interquartile range (IQR) are more robust measures of central tendency and spread.
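The outlier example can be checked directly:

```python
import statistics

with_outlier = [10, 11, 10, 12, 100]
without_outlier = [10, 11, 10, 12]

# The single extreme value dominates the sum of squared deviations.
print(round(statistics.stdev(with_outlier), 1))    # inflated by the outlier
print(round(statistics.stdev(without_outlier), 2)) # near 1 without it
```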

What does it mean if standard deviation equals zero?

A standard deviation of zero means all values in the dataset are identical — there is no variability whatsoever. For example, {5, 5, 5, 5, 5} has mean = 5 and SD = 0. This occurs in artificial or highly constrained datasets. In practical datasets, SD = 0 often indicates a data collection error or identical measurements.