Sample Size Calculator
Calculate the required sample size for your survey or study. Enter population size, confidence level, and margin of error to get the minimum sample needed. Free statistics tool.
What Is Sample Size and Why Does It Matter?
Sample size is the number of individuals or observations you need to include in your study to draw statistically valid conclusions about a larger population. Too few participants and your results may be unreliable; too many and you waste time and resources. The sample size calculator tells you the minimum number of respondents needed for your survey, poll, or experiment to be statistically meaningful.
Every survey involves a trade-off: you cannot ask every single person in your population (unless it's very small), so you select a representative subset. The quality of that subset depends on three key factors:
- Population size: How many people are in the group you're studying? For very large populations (100,000+), this barely affects the required sample size.
- Confidence level: How sure do you want to be that your sample reflects the true population? The standard is 95%, meaning that if you repeated the survey with 100 different random samples, roughly 95 of them would produce a result within the margin of error of the true population value.
- Margin of error: How much variation can you accept? A ±5% margin is typical for general surveys; ±2% is used for high-stakes research like election polling or clinical trials.
The calculator uses the standard formula for proportions, assuming maximum variability (p = 0.5, which gives the largest possible required sample — a conservative, safe choice). For finite populations, it applies the finite population correction factor to reduce the required sample when your population is smaller.
The Sample Size Formula Explained
The core formula for calculating sample size uses the normal approximation and assumes you're estimating a proportion with maximum uncertainty (p = 0.5):
For an infinite (very large) population:
n = Z² × p × (1 − p) / e²
Where:
- n = required sample size
- Z = Z-score for your confidence level (90% → 1.645, 95% → 1.960, 99% → 2.576)
- p = estimated proportion of population with the characteristic (use 0.5 for maximum conservatism)
- e = desired margin of error as a decimal (5% → 0.05)
For a finite population: After calculating n above, apply the finite population correction (FPC):
n_adjusted = n / (1 + (n − 1) / N)
Where N is the total population size. The FPC reduces the required sample when your population is small relative to the initial sample size estimate. For populations over 100,000, the correction is negligible (less than 1%).
Example: You want to survey employees at a company of 500 people. Confidence = 95%, Margin = 5%.
- Z = 1.96, p = 0.5, e = 0.05
- n = (1.96² × 0.5 × 0.5) / 0.05² = 3.8416 × 0.25 / 0.0025 = 384.16 → 385
- n_adjusted = 385 / (1 + 384/500) = 385 / 1.768 = 217.8 → 218
So you need 218 respondents — not 385 — because your population is small enough for the correction to matter significantly.
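The two formulas above can be combined into a short, self-contained Python helper. This is a sketch of the calculation, not the calculator's actual implementation; the Z-scores and formulas are exactly those given in the text:

```python
import math

# Z-scores for the confidence levels used in this article
Z_SCORES = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def sample_size(confidence=0.95, margin=0.05, p=0.5, population=None):
    """Minimum sample size for estimating a proportion.

    Computes n = Z^2 * p * (1 - p) / e^2, then applies the finite
    population correction n / (1 + (n - 1) / N) when a population
    size N is supplied. Always rounds up: you cannot survey 0.16
    of a respondent.
    """
    z = Z_SCORES[confidence]
    n = z ** 2 * p * (1 - p) / margin ** 2    # infinite-population estimate
    if population is not None:
        n = n / (1 + (n - 1) / population)    # finite population correction
    return math.ceil(n)

# Worked example from the text: company of 500, 95% confidence, ±5% margin
print(sample_size(population=500))   # → 218
print(sample_size())                 # infinite population → 385
```

Rounding the infinite-population n before applying the correction (385 → 218) and rounding only once at the end (384.16 → 218) happen to agree here; rounding once at the end is the safer habit in general.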
Sample Size Reference Table
Pre-calculated sample sizes for the most common scenarios (95% confidence level, p = 0.5; every value is rounded up to the next whole respondent):
| Population | ±3% Margin | ±5% Margin | ±10% Margin |
|---|---|---|---|
| 100 | 92 | 80 | 50 |
| 500 | 341 | 218 | 81 |
| 1,000 | 517 | 278 | 88 |
| 5,000 | 880 | 357 | 95 |
| 10,000 | 965 | 370 | 96 |
| 50,000 | 1,045 | 382 | 96 |
| 100,000 | 1,056 | 383 | 96 |
| ∞ (infinite) | 1,068 | 385 | 97 |
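The whole table can be regenerated with a short loop using the same formula and round-up convention. Note that published tables sometimes round to the nearest integer instead of up, so individual cells elsewhere may differ by one:

```python
import math

def sample_size(z, e, N=None, p=0.5):
    """n = z^2 * p * (1 - p) / e^2, with finite population correction if N is given."""
    n = z ** 2 * p * (1 - p) / e ** 2
    if N is not None:
        n = n / (1 + (n - 1) / N)
    return math.ceil(n)

# One row per population, columns for ±3%, ±5%, ±10% margins at 95% confidence
for N in (100, 500, 1000, 5000, 10000, 50000, 100000, None):
    row = [sample_size(1.96, e, N) for e in (0.03, 0.05, 0.10)]
    print(N if N is not None else "infinite", row)
```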
Key insight: once your population exceeds ~50,000, the required sample size barely changes. This is why national polls of 1,000–1,500 people can represent hundreds of millions — the math works regardless of whether the population is 1 million or 100 million.
Confidence Level: 90% vs 95% vs 99%
The confidence level is one of the most misunderstood concepts in statistics. A 95% confidence level does not mean "there's a 95% chance the true value is in my interval." It means: if you repeated your survey 100 times with different random samples, approximately 95 of those 100 surveys would produce intervals containing the true population value.
| Confidence Level | Z-Score | Sample Size Impact | Typical Use Case |
|---|---|---|---|
| 90% | 1.645 | Smallest (fastest/cheapest) | Exploratory research, pilot studies |
| 95% | 1.960 | Standard (+42% vs 90%) | Academic research, business surveys |
| 99% | 2.576 | Largest (+73% vs 95%) | Clinical trials, high-stakes decisions |
Moving from 95% to 99% confidence increases your required sample by about 73%. Moving from ±5% to ±3% margin (at 95% confidence) almost triples the sample size, since n scales with (1/e)² and (5/3)² ≈ 2.78. These trade-offs are why defining your acceptable margin of error and confidence level before data collection is essential; changing them after the fact is a form of p-hacking.
For most market research and business surveys, 95% confidence with ±5% margin is the industry standard. For academic publications, 95% is almost universal. For medical or safety-critical research where a wrong conclusion could harm people, 99% confidence is appropriate. For quick internal company surveys where exactness matters less, 90% confidence with ±7% or ±10% margin may be entirely acceptable and dramatically reduces the required sample.
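The confidence-level trade-offs can be checked directly from the Z-scores. A standalone sketch, using only the infinite-population formula from earlier:

```python
import math

Z_SCORES = {90: 1.645, 95: 1.960, 99: 2.576}

def n0(z, e, p=0.5):
    """Infinite-population sample size: z^2 * p * (1 - p) / e^2."""
    return z ** 2 * p * (1 - p) / e ** 2

base = n0(Z_SCORES[95], 0.05)
for level, z in Z_SCORES.items():
    n = n0(z, 0.05)
    print(f"{level}% confidence, ±5% margin: n = {math.ceil(n)} ({(n - base) / base:+.0%} vs 95%)")

# Tightening the margin at 95% confidence: ±5% -> ±3% multiplies n by (5/3)^2
print(round(n0(Z_SCORES[95], 0.03) / base, 2))   # → 2.78
```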
Margin of Error: What ±5% Actually Means
The margin of error defines the range around your sample result within which the true population value likely falls. If 60% of survey respondents prefer option A, with a ±5% margin, the true population preference is between 55% and 65% (at your stated confidence level).
Margins of error matter enormously in practice:
- Election polls: A ±3% margin with two candidates at 49% vs 51% means the race is essentially a toss-up statistically — neither candidate is outside the other's error range.
- Customer satisfaction: If satisfaction scores 72% this quarter and 69% last quarter, with a ±5% margin, the change is not statistically meaningful — it's within the error range.
- A/B testing: Conversion rate 3.2% vs 3.5% may not be meaningful if your sample is too small; the confidence interval would be wide enough to contain zero difference.
- Medical studies: A new drug showing 48% efficacy vs placebo's 45% needs very tight margins (±1–2%) to claim a meaningful effect; ±5% would make the result meaningless.
As a practical rule: use ±5% for general surveys, ±3% when you need to detect moderate differences (e.g., before/after comparisons), and ±1–2% only when resources permit and the stakes demand it.
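The margin of error is just the sample-size formula solved for e, which makes figures like these easy to check. A minimal sketch using the worst-case p = 0.5:

```python
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Margin of error for a proportion estimate: z * sqrt(p * (1 - p) / n)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (200, 400, 1000, 2000):
    print(f"n = {n}: ±{margin_of_error(n):.1%}")
# prints ±6.9%, ±4.9%, ±3.1%, ±2.2%
```

Note the diminishing returns: quadrupling the sample from 200 to 800 only halves the margin, because precision improves with the square root of n.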
When to Use the Finite Population Correction
The finite population correction (FPC) reduces the required sample size when your population is small. The correction becomes significant when your initial (uncorrected) sample size exceeds 5% of the population. The rule of thumb:
| n/N ratio | Correction Impact | Action |
|---|---|---|
| < 5% | Negligible (<5% reduction) | Use uncorrected formula |
| 5% – 20% | Moderate (5–20% reduction) | Apply FPC, saves real effort |
| 20% – 50% | Large (20–50% reduction) | Always apply FPC |
| > 50% | Very large (>50% reduction) | Consider census (survey everyone) |
Examples where FPC matters: a university surveying its 800 students, a company surveying its 200 employees, a school district studying outcomes for its 1,500 pupils. In all these cases, the uncorrected formula significantly overestimates the required sample, wasting resources. Our calculator automatically applies FPC when you provide a population size.
Examples where FPC doesn't matter: a national consumer survey (population = 330 million Americans), a global market research study, or a survey of a large city's residents. For these, treat the population as effectively infinite: the required sample size is the same whether you're polling 100,000 or 100 million people.
Common Mistakes in Sample Size Planning
Even experienced researchers make these errors when planning sample sizes:
- Confusing population with universe: If you're studying "American adults who use smartphones," your population is only the adults who actually use smartphones, not all 330 million Americans. Define your population precisely before calculating.
- Ignoring non-response rate: If you need 385 completed surveys but expect a 40% response rate, you must send invitations to 385 / 0.6 = 642 people. Build non-response into your recruitment plan.
- Underpowering subgroup analysis: If you want to analyze results separately by age group (18–34, 35–54, 55+), each subgroup needs its own minimum sample. A total sample of 400 gives you only ~133 per group, enough for roughly a ±8.5% margin but nowhere near ±5%.
- Defaulting to p = 0.5 when the proportion is known: If prior research shows the proportion is about 20% (not 50%), using p = 0.20 gives a smaller required sample: n = Z² × 0.20 × 0.80 / e². For 95% confidence and ±5% margin, n = 246 instead of 385. This is valid only if your prior estimate is reliable.
- Setting margin of error after seeing data: Changing your margin of error thresholds based on what your data shows is a major methodological error. Set them before data collection.
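Two of the adjustments above are one-line calculations, shown here as a quick sketch with the numbers from the bullets:

```python
import math

# Non-response: invitations needed to collect `required` completed surveys
required, response_rate = 385, 0.60
invitations = math.ceil(required / response_rate)
print(invitations)   # → 642

# Known proportion: prior research says p ≈ 0.20, which shrinks the sample
# (95% confidence, ±5% margin: n = Z^2 * p * (1 - p) / e^2)
n_known = math.ceil(1.96 ** 2 * 0.20 * 0.80 / 0.05 ** 2)
print(n_known)       # → 246
```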
Sample Size in Different Research Contexts
Different fields have different standards for acceptable sample sizes and statistical rigor:
Market research and consumer surveys: Sample sizes of 300–1,000 are typical for national consumer research. The key is not absolute size but representativeness — a well-structured sample of 400 US adults (properly stratified by age, gender, region, income) is more reliable than a convenient 2,000-person sample from a biased channel.
Clinical trials: Sample size calculations for clinical trials use power analysis (typically 80% power to detect a specific effect size), not just the margin-of-error approach. A study needs enough participants to detect a clinically meaningful difference between treatment and control groups. For small effect sizes, this can require thousands of participants per arm.
UX research and usability testing: The famous "5 users" heuristic from Jakob Nielsen suggests that 5 participants reveal 85% of usability issues in qualitative research — but this is for qualitative observation, not quantitative measurement. For measuring task completion rates, satisfaction scores, or time-on-task with statistical rigor, you still need 25–50+ participants per condition.
Academic surveys: Social science research typically targets 95% confidence, ±5% margin, with detailed subgroup analysis. Online survey tools and university participant pools make collecting 500–1,000 responses feasible for most researchers.
Employee engagement surveys: For small organizations (under 200 employees), collecting responses from everyone (census approach) is often better than sampling — it eliminates sampling error entirely, though non-response bias remains an issue.
Quality control in manufacturing: Statistical process control uses different sampling schemes (like acceptance sampling per ISO 2859) that factor in lot size and acceptable quality levels. These are specialized applications beyond the general survey formula.
Frequently Asked Questions
What sample size do I need for a 95% confidence level with 5% margin of error?
For a large or unknown population with 95% confidence and ±5% margin of error, you need 385 respondents (n = 384.16, rounded up; using p = 0.5 for maximum conservatism). If your population is smaller (e.g., 1,000 people), the finite population correction reduces this to 278. This is the most common setup in general survey research.
What is the margin of error for a sample of 1,000?
A sample of 1,000 from a large population gives a margin of error of approximately ±3.1% at 95% confidence (formula: e = 1.96 × √(0.5 × 0.5 / 1000) = 0.031). This is why major polls typically use 1,000–1,500 respondents — it gives a respectable ±3% margin without excessive cost.
Does sample size depend on population size?
For large populations (100,000+), no: sample size barely depends on population size. The required sample to achieve ±5% at 95% confidence is 385 whether the population is 100,000 or 100 million. Population size only meaningfully affects sample size when N is small (under ~10,000) and you apply the finite population correction.
What happens if I use a smaller sample than required?
Using a smaller sample than calculated increases your margin of error. If you need 385 but only collect 200, your actual margin of error widens from ±5% to approximately ±6.9%. Your results are still valid but less precise: the true population value could be further from your sample estimate than you intended to allow.
How do I calculate sample size for a small population of 200?
For N = 200 at 95% confidence and ±5% margin: first calculate the infinite-population n = 384, then apply FPC: n_adj = 384 / (1 + 383/200) = 384 / 2.915 = 132. So you need about 132 out of 200 people — a 66% participation rate. At this ratio, you might consider surveying everyone (census) to eliminate sampling error.