Analytics

A/B Test Analysis

Compare performance metrics between different variants of your landing page with statistical confidence.

The A/B Tests tab provides comprehensive statistical analysis of your landing page variants, helping you make data-driven decisions with confidence. View real-time test performance, statistical significance, and clear winner recommendations.

Understanding A/B Test Cards

Each A/B test appears as a card displaying key information about your experiment:

Test Information:

  • Running Status Badge: Shows whether the test is Draft, Running, Paused, or Completed
  • Hypothesis: Your prediction about which variant will perform better and why
  • Test Dates: Start date and (if completed) end date of the test
  • Total Exposures: Number of visitors who have seen any variant in this test

Variants Table: Each variant (A, B, C, etc.) shows:

  • Visits, Conversions, Conversion Rate
  • Statistical significance indicators
  • Performance comparison to other variants

Statistical Significance Explained

Firebuzz automatically calculates statistical measures to help you understand whether your test results are reliable or just random chance.

Key Statistical Metrics

P-Value: The probability of seeing a difference this large between variants if there were actually no real difference. A lower p-value means more confidence that the observed difference is real.

  • p < 0.05: Statistically significant (95% confident)
  • p < 0.01: Highly significant (99% confident)
  • p ≥ 0.05: Not yet significant (need more data)

Win Probability: The likelihood that a variant is truly better than the others, expressed as a percentage.

  • > 95%: Strong confidence this variant is the winner
  • 80-95%: Good confidence, but consider collecting more data
  • < 80%: Insufficient evidence to declare a winner
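If you're curious how a number like this can be computed, one common approach is Bayesian simulation: model each variant's conversion rate as a Beta distribution and count how often one variant beats the other across many samples. The sketch below illustrates that idea with made-up numbers; it is not necessarily the exact method Firebuzz uses internally.

```python
import numpy as np

def win_probability(visits_a, conversions_a, visits_b, conversions_b, samples=100_000):
    """Estimate P(variant B beats variant A) by sampling Beta posteriors.

    Illustrative sketch only: assumes a uniform Beta(1, 1) prior on each
    variant's conversion rate; Firebuzz's internal calculation may differ.
    """
    rng = np.random.default_rng(42)
    # Posterior for each conversion rate: Beta(successes + 1, failures + 1)
    cvr_a = rng.beta(conversions_a + 1, visits_a - conversions_a + 1, samples)
    cvr_b = rng.beta(conversions_b + 1, visits_b - conversions_b + 1, samples)
    return (cvr_b > cvr_a).mean()

# Example: 4,000 visits per variant, 200 vs 260 conversions
print(f"Win probability for B: {win_probability(4000, 200, 4000, 260):.1%}")
```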

Confidence Intervals: The range where the true conversion rate likely falls. Narrower intervals mean more precise estimates.

  • Displayed as CVR CI Low and CVR CI High
  • Example: If a variant shows 5.2% conversion rate with CI of 4.8%-5.6%, the true rate is likely in that range
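A common way to approximate this range is the normal (Wald) interval shown below. The sketch reproduces the 4.8%-5.6% example above using a made-up visit count; Firebuzz's exact formula may differ.

```python
from math import sqrt

def cvr_confidence_interval(visits, conversions, z=1.96):
    """Approximate 95% confidence interval for a conversion rate.

    Uses the normal (Wald) approximation; a sketch of the idea, not
    necessarily the exact formula behind CVR CI Low / CVR CI High.
    """
    cvr = conversions / visits
    margin = z * sqrt(cvr * (1 - cvr) / visits)
    return max(0.0, cvr - margin), min(1.0, cvr + margin)

low, high = cvr_confidence_interval(visits=12_000, conversions=624)  # 5.2% CVR
print(f"CVR 5.2%, 95% CI: {low:.1%} - {high:.1%}")  # roughly 4.8% - 5.6%
```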

Z-Statistic: Measures how many standard deviations a variant's performance is from the baseline. Higher absolute values indicate stronger differences.

  • |z| > 1.96: Significant at 95% level
  • |z| > 2.58: Significant at 99% level
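For context, the sketch below shows the standard two-proportion z-test, which produces both the z-statistic and the two-sided p-value discussed above. It is a textbook illustration with made-up numbers, not necessarily Firebuzz's exact implementation.

```python
from math import erf, sqrt

def two_proportion_z_test(visits_a, conversions_a, visits_b, conversions_b):
    """Two-proportion z-test: returns (z, two-sided p-value).

    Standard textbook formula, shown for illustration; Firebuzz's internal
    calculation may handle edge cases or corrections differently.
    """
    p_a = conversions_a / visits_a
    p_b = conversions_b / visits_b
    # Pooled conversion rate under the null hypothesis of "no real difference"
    pooled = (conversions_a + conversions_b) / (visits_a + visits_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z_test(4000, 200, 4000, 260)
print(f"z = {z:.2f}, p = {p:.4f}")  # |z| > 1.96 and p < 0.05 => significant at 95%
```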

Interpreting Results

Understanding Lift

Relative Lift: The percentage improvement over the baseline (control) variant.

  • Example: "+15% relative lift" means the variant converts 15% better than the control
  • Positive lift = improvement, Negative lift = worse performance

Absolute Lift: The raw percentage-point difference in conversion rates.

  • Example: "2.5% absolute lift" means if control has 5% CVR, this variant has 7.5% CVR
  • More intuitive for understanding real-world impact
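Both figures are simple arithmetic on the two conversion rates. The minimal sketch below reproduces the example above (5% control vs 7.5% variant):

```python
def lift(control_cvr, variant_cvr):
    """Return (relative lift, absolute lift) between two conversion rates.

    Rates are fractions, e.g. 0.05 for a 5% CVR.
    """
    absolute = variant_cvr - control_cvr                   # 0.025 = 2.5 points
    relative = (variant_cvr - control_cvr) / control_cvr   # proportional change
    return relative, absolute

relative, absolute = lift(control_cvr=0.05, variant_cvr=0.075)
print(f"Relative lift: {relative:+.0%}")                      # +50%
print(f"Absolute lift: {absolute * 100:.1f} percentage points")  # 2.5
```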

When to Trust Your Results

Your test results are reliable when ALL these conditions are met:

  • p-value < 0.05 (95% confidence)
  • Win probability > 95% for the leading variant
  • At least 100-200 conversions per variant
  • Test ran for at least 1-2 weeks to account for day-of-week effects

Don't stop a test early just because you see a "winner"! Early results are often misleading due to small sample sizes and random variation. Let the test run until statistical significance is achieved.
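If you prefer to encode this checklist, a minimal sketch might look like the following. The thresholds come from the list above, taking the conservative end of the 100-200 conversion and 1-2 week ranges; adjust them to your own standards.

```python
def results_are_reliable(p_value, win_probability, conversions_per_variant, days_running):
    """Rough reliability checklist mirroring the conditions above.

    Thresholds use the conservative end of the ranges in this guide.
    """
    checks = {
        "p-value < 0.05": p_value < 0.05,
        "win probability > 95%": win_probability > 0.95,
        "enough conversions (>= 200 per variant)": conversions_per_variant >= 200,
        "ran long enough (>= 14 days)": days_running >= 14,
    }
    for name, passed in checks.items():
        print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return all(checks.values())

if results_are_reliable(p_value=0.003, win_probability=0.98,
                        conversions_per_variant=260, days_running=15):
    print("Safe to pick a winner.")
```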

Reading the Metrics

The A/B test analytics displays comprehensive metrics for decision-making:

Performance Metrics

Visits: Total number of unique visitors who viewed this variant. Ensure traffic is distributed according to your traffic weights.

Conversions: Total conversions (form submissions, CTA clicks, or custom events) completed on this variant.

Conversion Rate (CVR): The percentage of visitors who converted. This is the primary metric you're optimizing.

Average Conversion Value: If you've set up conversion values, this shows the average value per conversion for each variant.
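To show how these figures relate, here is a small sketch with made-up per-variant totals; the field names are illustrative, not an actual Firebuzz export format.

```python
# Hypothetical per-variant totals; field names are illustrative only.
variants = [
    {"name": "A", "visits": 4000, "conversions": 200, "conversion_value": 9000.0},
    {"name": "B", "visits": 4000, "conversions": 260, "conversion_value": 12480.0},
]

for v in variants:
    cvr = v["conversions"] / v["visits"]                  # Conversion Rate
    avg_value = v["conversion_value"] / v["conversions"]  # Average Conversion Value
    print(f'Variant {v["name"]}: CVR {cvr:.1%}, avg value ${avg_value:.2f}')
```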

Time Period View

Unlike other analytics screens, A/B Tests show all-time data by default. This ensures you're viewing the complete test history without filtering that could skew results.

There is no period selector on the A/B Tests screen because tests should be evaluated on their entire run, not arbitrary date ranges.

Control Bar Features

The A/B test screen includes:

  • Preview/Production Toggle: Switch between Preview (test data) and Production (live data)
  • Refresh Button: Manually update test results (rate-limited to once every 10 seconds)

Selecting a Winner

Step-by-Step Winner Selection

Check Statistical Significance

Look for:

  • p-value < 0.05
  • Win probability > 95%
  • Confidence intervals that don't overlap with other variants

Verify Sample Size

Ensure each variant has:

  • At least 100-200 conversions
  • At least 1-2 weeks of data collection
  • Consistent traffic distribution

Identify the Champion

The winning variant should have:

  • Highest conversion rate
  • Strong statistical significance
  • Positive relative lift over control
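Putting these checks together, the decision logic looks roughly like the sketch below. The dictionary fields are illustrative; in practice you would read these numbers from the variants table.

```python
def pick_champion(variants, control_name="A"):
    """Return the name of a winning variant, or None if no variant qualifies.

    `variants` maps variant name -> stats dict with the fields shown below;
    the structure is illustrative, mirroring the checks in this guide.
    """
    control_cvr = variants[control_name]["cvr"]
    candidates = []
    for name, stats in variants.items():
        if name == control_name:
            continue
        qualifies = (
            stats["p_value"] < 0.05              # statistically significant
            and stats["win_probability"] > 0.95  # strong confidence it's best
            and stats["conversions"] >= 200      # enough data collected
            and stats["cvr"] > control_cvr       # positive lift over control
        )
        if qualifies:
            candidates.append((stats["cvr"], name))
    # Highest-converting qualifying variant wins
    return max(candidates)[1] if candidates else None

stats = {
    "A": {"cvr": 0.050, "p_value": 1.0, "win_probability": 0.02, "conversions": 200},
    "B": {"cvr": 0.065, "p_value": 0.003, "win_probability": 0.98, "conversions": 260},
}
print(pick_champion(stats) or "Keep the test running")  # -> "B"
```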

Implement the Winner

  1. Navigate to Landing Pages → Editor → Variants
  2. Find the winning variant
  3. Set its Traffic Weight to 100%
  4. Save and publish changes

Test Status Workflow

Your tests progress through these stages:

  1. Draft: Test configured but not yet started
  2. Running: Actively collecting data
  3. Paused: Temporarily stopped (if needed)
  4. Completed: Test concluded, winner can be selected

Examples

Clear Winner Scenario

Variant B has 6.5% CVR vs Variant A at 5.0% CVR

  • p-value: 0.003 ✓ (highly significant)
  • Win probability: 98% ✓
  • Relative lift: +30%

Action: Variant B is the clear winner. Set traffic to 100%.

Insufficient Data Scenario

Variant B has 5.2% CVR vs Variant A at 4.8% CVR

  • p-value: 0.18 ✗ (not significant)
  • Win probability: 73% ✗
  • Only 50 conversions per variant

Action: Keep test running. Need more data for reliable conclusion.

No Clear Winner Scenario

Variant B: 5.1% CVR, Variant A: 5.0% CVR

  • p-value: 0.67 ✗ (no significant difference)
  • Win probability: 52% ✗
  • 200+ conversions per variant

Action: Variants perform equally. Keep control (A) or test more radical changes.

FAQ

Troubleshooting

If you encounter issues with A/B test analytics:

  • Not seeing test results: Ensure your test status is Running and traffic is being distributed to variants
  • Low exposure counts: Check traffic weights in the landing page editor
  • Significance not increasing: May need more traffic or longer test duration
  • Inconsistent data: Toggle between Preview and Production to verify data source

For more help, see Analytics Troubleshooting.