Dynamic Traffic Allocation

Dynamic traffic allocation is the experimentation pattern of changing how much traffic each live variation receives based on observed performance, instead of keeping the split fixed until the end of a test.
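As a minimal sketch of the idea, the snippet below recomputes per-variation traffic shares from observed conversion rates while guaranteeing each variation a floor of exploration traffic. The `reallocate` function, the 10 percent floor, and the variation stats are illustrative assumptions, not details from the cited sources.

```python
# Hypothetical sketch of KPI-driven reallocation: shares are set
# proportional to each variation's observed conversion rate, with a
# guaranteed exploration floor so no variation is starved of traffic.

def reallocate(conversions, sends, floor=0.10):
    """Return per-variation traffic shares based on observed performance."""
    rates = {v: conversions[v] / sends[v] for v in sends}
    total = sum(rates.values())
    # Traffic left to distribute after every variation gets its floor.
    spare = 1.0 - floor * len(sends)
    return {v: floor + spare * rates[v] / total for v in rates}

shares = reallocate(
    conversions={"A": 30, "B": 50, "C": 20},
    sends={"A": 1000, "B": 1000, "C": 1000},
)
# Every variation keeps at least 10% of traffic; B, the early
# leader, receives the largest share of the remainder.
```

In a live system this recomputation would run periodically as new KPI data arrives, so the split keeps drifting toward the current best performer instead of staying fixed.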

Key points

  • AB Tasty frames Multi-Armed Bandits as a machine-learning approach to dynamic traffic allocation: better-performing variations receive more traffic while weaker variations receive less [src-022].
  • The mechanism is KPI-driven. After choosing a primary KPI, traffic is reassigned based on that KPI’s observed performance across live variations [src-022].
  • The business value is reducing opportunity cost: fewer users are sent to underperforming variations once early evidence suggests a better option [src-022].
  • Dynamic traffic allocation is most compelling when the goal is short-term conversion maximisation, especially for limited-time offers, short-lived content, or tests with many variations [src-022].
  • The trade-off is interpretability and operational complexity. AB Tasty cautions that bandit experiments are harder to interpret and require more technical expertise to run than classic A/B tests [src-022].
  • Hightouch describes the same mechanism as send-volume allocation: start with equal exploration across options, shift more volume to early winners as data accumulates, refine as confidence grows, and keep some exploration to adapt if behaviour changes [src-025].
  • In the send-time example, allocation evolves from an even 20 percent across five send times, to 40 percent at 2 pm with 15 percent for each of the others, then to 40 percent at 2 pm, 30 percent at 5 pm, and 10 percent for each of the remaining times [src-025].
  • Braze applies dynamic allocation to campaign exposure: traffic can move toward the top-performing message, offer, channel, subject line, CTA, notification, onboarding path, or retention incentive while lower-performing options still receive some exploration [src-027].
  • The Braze article ties this allocation to real-time campaign rewards such as clicks and purchases, so each interaction can influence subsequent distribution decisions [src-027].
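The explore-then-concentrate behaviour described above can be sketched with Thompson sampling over Beta-Bernoulli arms, a common multi-armed bandit strategy. The five send times, their true conversion rates, and the simulation parameters below are illustrative assumptions; none of the cited vendors document this exact implementation.

```python
import random

# Hypothetical Thompson-sampling sketch: five send times compete for
# send volume; arms that convert more often are sampled higher more
# often and so accumulate a growing share of the traffic, while every
# arm keeps receiving occasional exploratory sends.

ARMS = ["10am", "12pm", "2pm", "5pm", "8pm"]
TRUE_RATES = {"10am": 0.04, "12pm": 0.05, "2pm": 0.09, "5pm": 0.07, "8pm": 0.03}

def run(n_sends=20000, seed=7):
    rng = random.Random(seed)
    successes = {a: 1 for a in ARMS}   # Beta(1, 1) uniform priors
    failures = {a: 1 for a in ARMS}
    counts = {a: 0 for a in ARMS}      # how many sends each arm received
    for _ in range(n_sends):
        # Sample a plausible conversion rate per arm and pick the best.
        samples = {a: rng.betavariate(successes[a], failures[a]) for a in ARMS}
        arm = max(samples, key=samples.get)
        counts[arm] += 1
        # Simulate the reward (a conversion) and update the posterior.
        if rng.random() < TRUE_RATES[arm]:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return counts
```

Early on the posteriors are flat, so sends spread roughly evenly; as evidence accumulates, the 2 pm arm dominates the allocation while the weaker send times retain a small residual share, mirroring the Hightouch progression.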

Source references

  • [src-022] AB Tasty — “Multi-Armed Bandits: A/B Testing with Fewer Regrets”
  • [src-025] Hightouch — “Under the hood of AI Decisioning, part three: Multi-armed bandits”
  • [src-027] Team Braze — “What is a multi-armed bandit? Smarter experimentation for real-time marketing”