Calculate statistically valid sample sizes for A/B tests, surveys, and experiments with power analysis and effect size calculations.
An example calculation:

| Metric | Value | Detail |
| --- | --- | --- |
| Per Variation | 24,194 | Visitors needed per variant |
| Total Sample | 48,388 | 2 variations total |
| Days to Run | 49 | ~7 weeks |
| Effect Size | 0.025 | Cohen's h: Small |
| Expected Variant Rate | 3.45% | +0.45pp from baseline |
| Statistical Power | 80% | Probability of detecting a true effect |
| Significance Level | 5% | Alpha (Type I error rate) |
Sample size calculations ensure your experiments and surveys have enough data to draw reliable conclusions. The required sample size depends on several key factors that balance statistical rigor with practical constraints.
Power (1 − β) is the probability of detecting a true effect. The standard choice is 80%, which accepts a 20% chance of missing a real difference (a Type II error).
Confidence level determines your false-positive rate: 95% confidence corresponds to a 5% significance level (α), meaning a 5% chance of detecting an effect that doesn't exist (a Type I error).
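Both settings enter the formulas below only through standard-normal critical values. A minimal sketch (scipy is an assumption; any normal quantile function works):

```python
from scipy.stats import norm

confidence = 0.95  # alpha = 0.05
power = 0.80       # beta = 0.20

z_alpha = norm.ppf(1 - (1 - confidence) / 2)  # two-sided critical value, ~1.96
z_beta = norm.ppf(power)                      # ~0.84

print(f"z_alpha/2 = {z_alpha:.2f}, z_beta = {z_beta:.2f}")
```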
The minimum detectable effect (MDE) is the smallest improvement worth detecting. Required sample size grows with 1/MDE², so halving the MDE roughly quadruples the sample. Choose it based on business impact.
Effect size measures practical significance independently of sample size. It helps determine whether a statistically significant result is also meaningful in practice; a Cohen's h sketch follows below.
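A minimal sketch of Cohen's h for the table's rates; the 3.00% baseline is inferred from the table (3.45% minus the 0.45pp lift):

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h: difference of arcsine-transformed proportions."""
    return 2 * math.asin(math.sqrt(p2)) - 2 * math.asin(math.sqrt(p1))

h = cohens_h(0.030, 0.0345)
print(f"h = {h:.3f}")  # ~0.025 (Cohen's benchmarks: 0.2 small, 0.5 medium, 0.8 large)
```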
For comparing two proportions (A/B tests), the per-variant sample size is

$$n = \frac{(z_{\alpha/2} + z_{\beta})^2 \cdot 2\bar{p}(1 - \bar{p})}{(p_1 - p_2)^2}$$

where $\bar{p}$ is the pooled proportion, $z_{\alpha/2}$ is the two-sided critical value for the confidence level, and $z_{\beta}$ is the critical value for the power.
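A sketch of this formula in Python (scipy assumed for the normal quantile). Plugging in the table's inputs, a 3.00% baseline and 3.45% variant rate at 95% confidence and 80% power, reproduces the figures above; the 1,000-visitors-per-day traffic rate behind the 49-day estimate is an assumption consistent with those numbers:

```python
import math
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, confidence=0.95, power=0.80):
    """Per-variant sample size for a two-sided two-proportion test."""
    z_alpha = norm.ppf(1 - (1 - confidence) / 2)
    z_beta = norm.ppf(power)
    p_bar = (p1 + p2) / 2  # pooled proportion
    n = (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar) / (p1 - p2) ** 2
    return math.ceil(n)

per_variant = sample_size_two_proportions(0.030, 0.0345)  # 24194
total = 2 * per_variant                                   # 48388
days = math.ceil(total / 1000)                            # 49 at ~1,000 visitors/day
print(per_variant, total, days)
```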
For surveys estimating a proportion, the base sample size and finite population correction are

$$n_0 = \frac{z^2\, p(1 - p)}{e^2}, \qquad n = \frac{n_0}{1 + \frac{n_0 - 1}{N}}$$

where $z$ is the z-score for the confidence level, $p$ is the expected proportion, $e$ is the margin of error, and $N$ is the population size.
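A sketch with illustrative defaults: p = 0.5 is the conservative worst case when the expected proportion is unknown, and the 10,000 population size is hypothetical:

```python
import math
from scipy.stats import norm

def survey_sample_size(confidence=0.95, p=0.5, e=0.05, N=None):
    """Survey sample size, with optional finite population correction."""
    z = norm.ppf(1 - (1 - confidence) / 2)
    n0 = z ** 2 * p * (1 - p) / e ** 2
    if N is not None:
        n0 = n0 / (1 + (n0 - 1) / N)  # finite population correction
    return math.ceil(n0)

print(survey_sample_size())          # 385 for a large population
print(survey_sample_size(N=10_000))  # 370 with the correction applied
```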
For comparing two means (continuous metrics), the per-group sample size is

$$n = \frac{2\sigma^2 (z_{\alpha/2} + z_{\beta})^2}{\delta^2}$$

where $\sigma$ is the standard deviation and $\delta$ is the difference between means to detect.
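And a sketch for the means case; the standard deviation of 10 and detectable difference of 2 are hypothetical inputs:

```python
import math
from scipy.stats import norm

def sample_size_two_means(sigma, delta, confidence=0.95, power=0.80):
    """Per-group sample size for detecting a difference delta between two means."""
    z_alpha = norm.ppf(1 - (1 - confidence) / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * sigma ** 2 * (z_alpha + z_beta) ** 2 / delta ** 2)

print(sample_size_two_means(sigma=10, delta=2))  # 393 per group
```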