Sample Size Calculator

What Is Sample Size and Why Does It Matter?

Sample size is the number of individuals or observations you need to include in your study to draw statistically valid conclusions about a larger population. Too few participants and your results may be unreliable; too many and you waste time and resources. The sample size calculator tells you the minimum number of respondents needed for your survey, poll, or experiment to be statistically meaningful.

Every survey involves a trade-off: you cannot ask every single person in your population (unless it's very small), so you select a representative subset. The quality of that subset depends on three key factors:

- Confidence level: how sure you want to be that your interval contains the true population value (typically 90%, 95%, or 99%)
- Margin of error: how much deviation from the true value you can tolerate (typically ±3% to ±10%)
- Population size: the total number of people in the group you are studying; this only matters when it is small

The calculator uses the standard formula for proportions, assuming maximum variability (p = 0.5, which gives the largest possible required sample — a conservative, safe choice). For finite populations, it applies the finite population correction factor to reduce the required sample when your population is smaller.

The Sample Size Formula Explained

The core formula for calculating sample size uses the normal approximation and assumes you're estimating a proportion with maximum uncertainty (p = 0.5):

For an infinite (very large) population:

n = Z² × p × (1 − p) / e²

Where:

- n = required sample size
- Z = z-score for your chosen confidence level (1.645 for 90%, 1.96 for 95%, 2.576 for 99%)
- p = estimated population proportion (0.5 assumed, for maximum variability)
- e = margin of error as a decimal (e.g., 0.05 for ±5%)

For a finite population: After calculating n above, apply the finite population correction (FPC):

n_adjusted = n / (1 + (n − 1) / N)

Where N is the total population size. The FPC reduces the required sample when your population is small relative to the initial sample size estimate. For populations over 100,000, the correction is negligible (less than 1%).

Example: You want to survey employees at a company of 500 people. Confidence = 95%, Margin = 5%.

1. Infinite-population estimate: n = 1.96² × 0.5 × 0.5 / 0.05² = 0.9604 / 0.0025 ≈ 385
2. Apply the FPC: n_adjusted = 385 / (1 + 384/500) = 385 / 1.768 ≈ 218

So you need 218 respondents, not 385, because your population is small enough for the correction to matter significantly.
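Working the example through in code as a quick sanity check (plain arithmetic from the formulas above, no outside assumptions):

```python
z, e, N = 1.96, 0.05, 500                 # 95% confidence, ±5%, 500 employees
n_inf = z**2 * 0.5 * (1 - 0.5) / e**2     # infinite-population estimate
n_adj = n_inf / (1 + (n_inf - 1) / N)     # finite population correction
print(round(n_inf, 2), round(n_adj, 1))   # 384.16 217.5 -> 218 respondents
```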

Sample Size Reference Table

Pre-calculated sample sizes for the most common scenarios (95% confidence level, p = 0.5):

| Population | ±3% Margin | ±5% Margin | ±10% Margin |
|---|---|---|---|
| 100 | 92 | 80 | 49 |
| 500 | 341 | 218 | 82 |
| 1,000 | 516 | 278 | 88 |
| 5,000 | 880 | 357 | 95 |
| 10,000 | 964 | 370 | 96 |
| 50,000 | 1,045 | 382 | 97 |
| 100,000 | 1,056 | 383 | 97 |
| ∞ (infinite) | 1,068 | 384 | 97 |

Key insight: once your population exceeds ~50,000, the required sample size barely changes. This is why national polls of 1,000–1,500 people can represent hundreds of millions — the math works regardless of whether the population is 1 million or 100 million.
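The reference table above can be regenerated from the formulas already given (a sketch; rounding conventions may differ from the printed values by one respondent):

```python
import math

def n_required(e: float, N: float = float("inf"), z: float = 1.96) -> int:
    """Required sample size at margin e for population N (default: infinite)."""
    n = z**2 * 0.25 / e**2                   # p = 0.5, maximum variability
    return math.ceil(n / (1 + (n - 1) / N))  # FPC is a no-op when N = inf

for N in (100, 500, 1_000, 5_000, 10_000, 50_000, 100_000, float("inf")):
    print(N, [n_required(e, N) for e in (0.03, 0.05, 0.10)])
```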

Confidence Level: 90% vs 95% vs 99%

The confidence level is one of the most misunderstood concepts in statistics. A 95% confidence level does not mean "there's a 95% chance the true value is in my interval." It means: if you repeated your survey 100 times with different random samples, approximately 95 of those 100 surveys would produce intervals containing the true population value.

| Confidence Level | Z-Score | Sample Size Impact | Typical Use Case |
|---|---|---|---|
| 90% | 1.645 | Smallest (fastest/cheapest) | Exploratory research, pilot studies |
| 95% | 1.960 | Standard (+42% vs 90%) | Academic research, business surveys |
| 99% | 2.576 | Largest (+73% vs 95%) | Clinical trials, high-stakes decisions |

Moving from 95% to 99% confidence increases your required sample by about 73% (the ratio of squared z-scores, 2.576² / 1.96² ≈ 1.73). Moving from ±5% to ±3% margin (at 95% confidence) almost triples the sample size (a factor of (5/3)² ≈ 2.8). These trade-offs are why defining your acceptable margin of error and confidence level before data collection is essential — changing them after the fact is a form of p-hacking.

For most market research and business surveys, 95% confidence with ±5% margin is the industry standard. For academic publications, 95% is almost universal. For medical or safety-critical research where a wrong conclusion could harm people, 99% confidence is appropriate. For quick internal company surveys where exactness matters less, 90% confidence with ±7% or ±10% margin may be entirely acceptable and dramatically reduces the required sample.
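The z-scores in the table come straight from the normal distribution, and Python's standard library can derive them along with the relative sample-size cost of each confidence level (a sketch using `statistics.NormalDist`):

```python
from statistics import NormalDist

def z_for(confidence: float) -> float:
    # Two-sided interval: half of the leftover probability goes in each tail.
    return NormalDist().inv_cdf(1 - (1 - confidence) / 2)

base = z_for(0.95) ** 2                      # sample size scales with z^2
for c in (0.90, 0.95, 0.99):
    z = z_for(c)
    print(f"{c:.0%}: z = {z:.3f}, relative n = {z**2 / base:.2f}")
```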

Margin of Error: What ±5% Actually Means

The margin of error defines the range around your sample result within which the true population value likely falls. If 60% of survey respondents prefer option A, with a ±5% margin, the true population preference is between 55% and 65% (at your stated confidence level).

Margins of error matter enormously in practice: if candidate A polls at 52% and candidate B at 48% with a ±5% margin, the two confidence intervals overlap and the race is statistically too close to call, whereas at ±1.5% the same numbers would indicate a clear lead.

As a practical rule: use ±5% for general surveys, ±3% when you need to detect moderate differences (e.g., before/after comparisons), and ±1–2% only when resources permit and the stakes demand it.
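Going the other way, from a sample size you already have to the margin of error it buys, is a one-liner (again assuming the conservative p = 0.5):

```python
import math

def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    """Half-width of the confidence interval for a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 384, 1_000, 2_000):
    print(f"n = {n}: ±{margin_of_error(n):.1%}")
```

Note the diminishing returns: doubling the sample from 1,000 to 2,000 only narrows the margin from about ±3.1% to about ±2.2%.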

When to Use the Finite Population Correction

The finite population correction (FPC) reduces the required sample size when your population is small. The correction becomes significant when your initial (uncorrected) sample size exceeds 5% of the population. The rule of thumb:

| n/N ratio | Correction Impact | Action |
|---|---|---|
| < 5% | Negligible (<5% reduction) | Use uncorrected formula |
| 5% – 20% | Moderate (5–20% reduction) | Apply FPC, saves real effort |
| 20% – 50% | Large (20–50% reduction) | Always apply FPC |
| > 50% | Very large (>50% reduction) | Consider census (survey everyone) |

Examples where FPC matters: a university surveying its 800 students, a company surveying its 200 employees, a school district studying outcomes for its 1,500 pupils. In all these cases, the uncorrected formula significantly overestimates the required sample, wasting resources. Our calculator automatically applies FPC when you provide a population size.

Examples where FPC doesn't matter: a national consumer survey (population = 330 million Americans), a global market research study, any survey of a city's residents. For these, treat the population as effectively infinite — the required sample size is the same whether you're polling 100,000 or 100 million people.

Common Mistakes in Sample Size Planning

Even experienced researchers make these errors when planning sample sizes:

- Confusing a larger sample with a representative one: no sample size fixes selection bias, and a big sample drawn from a biased channel is still biased
- Changing the margin of error or confidence level after seeing the data, which is a form of p-hacking
- Ignoring expected non-response: if you need 384 completed responses and anticipate a 25% response rate, you must invite roughly 1,536 people
- Skipping the finite population correction for a small population, which overestimates the required sample and wastes resources

Sample Size in Different Research Contexts

Different fields have different standards for acceptable sample sizes and statistical rigor:

Market research and consumer surveys: Sample sizes of 300–1,000 are typical for national consumer research. The key is not absolute size but representativeness — a well-structured sample of 400 US adults (properly stratified by age, gender, region, income) is more reliable than a convenience sample of 2,000 collected through a biased channel.

Clinical trials: Sample size calculations for clinical trials use power analysis (typically 80% power to detect a specific effect size), not just the margin-of-error approach. A study needs enough participants to detect a clinically meaningful difference between treatment and control groups. For small effect sizes, this can require thousands of participants per arm.

UX research and usability testing: The famous "5 users" heuristic from Jakob Nielsen suggests that 5 participants reveal 85% of usability issues in qualitative research — but this is for qualitative observation, not quantitative measurement. For measuring task completion rates, satisfaction scores, or time-on-task with statistical rigor, you still need 25–50+ participants per condition.

Academic surveys: Social science research typically targets 95% confidence, ±5% margin, with detailed subgroup analysis. Online survey tools and university participant pools make collecting 500–1,000 responses feasible for most researchers.

Employee engagement surveys: For small organizations (under 200 employees), collecting responses from everyone (census approach) is often better than sampling — it eliminates sampling error entirely, though non-response bias remains an issue.

Quality control in manufacturing: Statistical process control uses different sampling schemes (like acceptance sampling per ISO 2859) that factor in lot size and acceptable quality levels. These are specialized applications beyond the general survey formula.

Frequently Asked Questions

What sample size do I need for a 95% confidence level with 5% margin of error?

For a large or unknown population with 95% confidence and ±5% margin of error, you need 384 respondents (using p = 0.5 for maximum conservatism). If your population is smaller (e.g., 1,000 people), the finite population correction reduces this to 278. This is the most common setup in general survey research.

What is the margin of error for a sample of 1,000?

A sample of 1,000 from a large population gives a margin of error of approximately ±3.1% at 95% confidence (formula: e = 1.96 × √(0.5 × 0.5 / 1000) = 0.031). This is why major polls typically use 1,000–1,500 respondents — it gives a respectable ±3% margin without excessive cost.
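Checking the figure with the same formula in the standard library:

```python
import math

# Margin of error for n = 1,000 at 95% confidence, p = 0.5
e = 1.96 * math.sqrt(0.5 * 0.5 / 1000)
print(f"±{e:.1%}")   # ±3.1%
```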

Does sample size depend on population size?

For large populations (100,000+), no — sample size barely depends on population size. The required sample to achieve ±5% at 95% confidence is 384 whether the population is 100,000 or 100 million. Population size only meaningfully affects sample size when N is small (under ~10,000) and you apply the finite population correction.

What happens if I use a smaller sample than required?

Using a smaller sample than calculated increases your margin of error. If you need 384 but only collect 200, your actual margin of error widens from ±5% to approximately ±6.9%. Your results are still valid but less precise — the true population value could be further from your sample estimate than you intended to allow.

How do I calculate sample size for a small population of 200?

For N = 200 at 95% confidence and ±5% margin: first calculate the infinite-population n = 384, then apply FPC: n_adj = 384 / (1 + 383/200) = 384 / 2.915 = 132. So you need about 132 out of 200 people — a 66% participation rate. At this ratio, you might consider surveying everyone (census) to eliminate sampling error.
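The arithmetic in this answer, written out:

```python
n_inf = 384                               # infinite-population n at 95%/±5%
N = 200                                   # total population
n_adj = n_inf / (1 + (n_inf - 1) / N)     # finite population correction
print(round(n_adj))                       # about 132, i.e. 66% of the population
```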
