Troubleshooting Campaign Issues

Traffic Distribution Issues

Resolve A/B test traffic splitting and variant distribution problems.

Is your A/B test traffic not splitting as expected? Here's how to diagnose and fix distribution issues.

A/B tests use randomized traffic distribution to fairly compare landing page variants. Statistical variance is normal with small sample sizes.

Issues and Solutions

Uneven Traffic Distribution

One variant is receiving significantly more or less traffic than its configured weight suggests.

Solution:

  1. Allow for statistical variance:

    • Small sample sizes naturally produce uneven distributions
    • Wait for at least 100 visitors per variant before evaluating
    • Distribution normalizes over larger sample sizes
    • Think of it like coin flips: 50 flips rarely produce exactly 25 heads and 25 tails
  2. Verify weight configuration:

    • Navigate to the Flow tab
    • Click the A/B test node
    • Check that weights are set correctly for each variant
    • For 50/50 split, set equal weights like 1:1 or 50:50
  3. Understand weight normalization:

    • Weights are relative, not absolute percentages
    • 30/70 produces the same distribution as 3/7
    • Weights are automatically normalized to 100%

Statistical randomness means short-term distribution will vary. Over thousands of visitors, distribution closely matches your configured weights.
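
The behavior described above can be sketched in code. This is an illustrative model of weighted random assignment, not Firebuzz's actual implementation; the variant names and weights are made up.

```typescript
// Hypothetical sketch of weighted variant assignment. Weights are relative:
// they are normalized against their sum, so 30/70 and 3/7 behave identically.

type Variant = { id: string; weight: number };

function normalize(variants: Variant[]): { id: string; share: number }[] {
  const total = variants.reduce((sum, v) => sum + v.weight, 0);
  return variants.map((v) => ({ id: v.id, share: v.weight / total }));
}

// Pick a variant by walking the cumulative shares with a single random draw.
function pickVariant(variants: Variant[], rand: () => number = Math.random): string {
  const shares = normalize(variants);
  let r = rand();
  for (const s of shares) {
    if (r < s.share) return s.id;
    r -= s.share;
  }
  return shares[shares.length - 1].id; // guard against floating-point drift
}

// Small samples drift; large samples converge toward the configured split.
const variants = [{ id: "A", weight: 1 }, { id: "B", weight: 1 }];
const tally = (n: number) => {
  let a = 0;
  for (let i = 0; i < n; i++) if (pickVariant(variants) === "A") a++;
  return a / n;
};
```

Running `tally(50)` will often land noticeably off 0.5, while `tally(100000)` sits very close to it, which is exactly the coin-flip effect described above.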

Same Visitor Sees Different Variants

A returning visitor sees a different landing page variant than their previous visit.

Solution:

  1. Understand session consistency:

    • Firebuzz maintains variant assignment per session using cookies
    • Clearing browser cookies resets the assignment
    • Different browsers or devices create separate sessions
    • Each session gets independently assigned to a variant
  2. Check visitor identification method:

    • Assignments are stored in browser cookies
    • Private/incognito browsing doesn't persist assignments across sessions
    • Cookie blockers prevent consistent assignment storage
    • Ad blockers may interfere with session tracking
  3. Preview mode behavior:

    • Preview mode may not persist variant assignments
    • Test session consistency in production environment
    • Use standard browser mode (not incognito) when testing

For consistent visitor experience, ensure visitors have cookies enabled and use the same browser/device.
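
The cookie-based stickiness described above can be sketched as follows. The cookie name `ab_variant` and its attributes are assumptions for illustration; Firebuzz's actual cookie keys may differ.

```typescript
// Minimal sketch of cookie-based variant stickiness.

// Parse a raw Cookie header string into key/value pairs.
function parseCookies(header: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const part of header.split(";")) {
    const [key, ...rest] = part.trim().split("=");
    if (key) out[key] = rest.join("=");
  }
  return out;
}

// Reuse the stored assignment when present; otherwise assign fresh.
// When cookies are cleared or blocked, `header` carries no assignment and
// the visitor is re-randomized -- which is why they may see a new variant.
function resolveVariant(
  header: string,
  assign: () => string,
): { variant: string; setCookie: string | null } {
  const existing = parseCookies(header)["ab_variant"];
  if (existing) return { variant: existing, setCookie: null };
  const variant = assign();
  return { variant, setCookie: `ab_variant=${variant}; Path=/; Max-Age=2592000` };
}
```

A returning visitor with the cookie intact gets the same variant back; a visitor in incognito mode or with cookies blocked goes through `assign()` again on every session.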

Variant Not Receiving Any Traffic

One or more variants in your A/B test never receive any visitors.

Solution:

  1. Verify variant is enabled:

    • Navigate to the Flow tab
    • Click the A/B test node
    • Ensure the variant isn't paused or disabled
    • Check that the variant node is properly connected to the flow
  2. Check weight configuration:

    • A weight of 0 means no traffic is allocated to that variant
    • Set a positive weight (minimum 1) for the variant to receive traffic
    • Verify weights aren't accidentally set to null or undefined
  3. Review prerequisite segments:

    • If a segment node precedes the A/B test, it may filter out all traffic
    • Verify visitors are reaching the A/B test node
    • Check analytics to see traffic flow through your campaign
  4. Confirm landing page assignment:

    • Each variant must have a landing page assigned
    • Landing page must be published and not in draft state
    • Click the variant to verify landing page status is Published
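
The checks above amount to a small validation routine. The field names below (`enabled`, `weight`, `landingPageStatus`) are illustrative assumptions, not the real Firebuzz schema.

```typescript
// Illustrative pre-publish checks mirroring the troubleshooting steps above.

type VariantConfig = {
  id: string;
  enabled: boolean;
  weight: number | null | undefined;
  landingPageStatus: "Published" | "Draft" | null;
};

function validateVariant(v: VariantConfig): string[] {
  const errors: string[] = [];
  if (!v.enabled) errors.push(`${v.id}: variant is paused or disabled`);
  // null, undefined, and 0 weights all mean the variant gets no traffic
  if (v.weight == null || v.weight <= 0) {
    errors.push(`${v.id}: weight must be a positive number (minimum 1)`);
  }
  if (v.landingPageStatus !== "Published") {
    errors.push(`${v.id}: landing page must be assigned and Published`);
  }
  return errors;
}
```

A variant that passes all three checks can still receive zero traffic if an upstream segment node filters out all visitors, so check analytics for traffic flow as well.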

A/B Test Results Seem Wrong

Statistical significance isn't being reached, or results don't match expectations.

Solution:

  1. Wait for sufficient data:

    • A/B tests need significant sample size for reliable results
    • Navigate to Analytics to check confidence level
    • Don't make decisions below 95% statistical confidence
    • Typical tests need 1,000+ sessions per variant
  2. Verify conversion tracking:

    • Ensure your primary conversion goal is properly configured
    • Navigate to Goals to review settings
    • Check that conversions are being recorded for all variants
    • See Analytics Not Tracking if conversions aren't appearing
  3. Consider external factors:

    • Traffic quality varies by time of day, day of week, and source
    • External events (holidays, news, weather) can skew results
    • Run tests for at least 1-2 full business cycles (weeks)
    • Review traffic sources to ensure consistent quality across variants

Don't stop tests early based on preliminary results. Wait for 95%+ confidence to avoid false conclusions.
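
For intuition on what "95% confidence" means here, a back-of-the-envelope two-proportion z-test can be sketched as below. This is not how Firebuzz's analytics necessarily computes confidence; treat it as a sanity check only.

```typescript
// Standard normal CDF via the classic error-function polynomial approximation.
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const p =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

// Two-sided confidence (0..1) that the two conversion rates truly differ.
function abTestConfidence(
  sessionsA: number, conversionsA: number,
  sessionsB: number, conversionsB: number,
): number {
  const pA = conversionsA / sessionsA;
  const pB = conversionsB / sessionsB;
  const pooled = (conversionsA + conversionsB) / (sessionsA + sessionsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / sessionsA + 1 / sessionsB));
  const z = Math.abs(pA - pB) / se;
  return 2 * normalCdf(z) - 1;
}
```

With 1,000 sessions per variant, a 10% vs 15% conversion rate clears 95% confidence comfortably, while identical rates return confidence near zero, illustrating why both sample size and effect size matter before declaring a winner.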

Traffic Goes to Control But Not Test Variants

The control variant receives all or most traffic while test variants receive little to none.

Solution:

  1. Check variant landing pages:

    • All variants must have landing pages assigned
    • Navigate to the Flow tab
    • Click each variant node and verify landing page assignment
    • Ensure all landing pages are published (status: Published)
  2. Review flow connections:

    • Each variant output must be properly connected
    • Look for disconnected nodes or broken connections
    • Verify connections flow from A/B test node to variant nodes
  3. Verify variant configuration:

    • Click each variant node to open settings
    • Check for error or warning indicators
    • Ensure no configuration issues exist
    • Verify weights are set correctly
  4. Check for flow validation errors:

    • Look for warning icons in the flow editor
    • Resolve any validation errors before publishing
    • See Flow Validation Errors

Understanding A/B Test Metrics

Metric — Description

Sessions — Total visits per variant

Conversions — Goal completions per variant

Conversion Rate — Conversions divided by Sessions (percentage)

Confidence Level — Statistical certainty in results (aim for 95%+)

Winner — Variant with best performance at high confidence
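
The relationships between these metrics can be shown with a small computation; the variant data below is made up for illustration.

```typescript
// How the metrics above relate: conversion rate is conversions / sessions,
// and "winner" is the best-performing variant.

type VariantStats = { id: string; sessions: number; conversions: number };

function conversionRate(v: VariantStats): number {
  return v.sessions === 0 ? 0 : v.conversions / v.sessions;
}

// "Winner" here only means best conversion rate; a real report would also
// require a high confidence level before declaring one.
function bestPerformer(variants: VariantStats[]): string {
  return variants.reduce((best, v) =>
    conversionRate(v) > conversionRate(best) ? v : best,
  ).id;
}
```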

Best Practices

Run tests for at least 1-2 weeks — Allow time for traffic patterns to normalize

Aim for 1,000+ sessions per variant — Larger samples produce more reliable results

Define clear, measurable goals — Set conversion goals before starting tests

Test one change at a time — Isolate variables to understand what drives results

Wait for 95%+ confidence — Don't declare winners prematurely

FAQ