
Overview

Scoring drives budget allocation at every level of the three-tier architecture. The platform combines three distinct scores into a weighted formula that guides optimization at each tier:
  • Campaign Level: Budget flows to high-scoring tactics
  • Tactic Level: Resources allocated to best-performing media buys
  • Media Buy Level: Individual publisher performance tracking
You provide normalized outcome scores while Scope3 calculates quality and story affinity scores.

Scoring Algorithm Breakdown

1. Quality Score (Scope3-Provided)

Media quality metrics assessed by Scope3:
  • Impression quality: Valid ad serving and visibility
  • View metrics: Actual human viewability rates
  • Attention score: User engagement and interaction depth
  • Completion rates: Video/audio content completion percentages
  • IVT detection: Invalid traffic and fraud prevention
Scale: 0-100 where 100 represents premium media quality.

2. Outcome Score (User-Provided)

Outcome scores are your normalized downstream measurement data, provided on a standard scale:
  • 0 = No measurable value
  • 100 = Meets defined objective
  • 1000 = 10x target performance
What you measure is up to you:
  • ROAS from attribution platforms
  • Brand lift from research studies
  • Conversion rates from your CRM
  • Any KPI that matters to your business
Critical: You define the normalization, not us. We just use your scores.
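As an illustration, a small helper (hypothetical, not part of the platform API) could normalize a measured ROAS onto this scale:

```typescript
// Hypothetical helper (not part of the Scope3 API): converts a measured
// ROAS into the 0/100/1000 outcome scale, where 100 = target met.
function roasToPerformanceIndex(measuredRoas: number, targetRoas: number): number {
  if (targetRoas <= 0) throw new Error("targetRoas must be positive");
  // 0 = no measurable value, 100 = on target, 1000 = 10x target
  return Math.max(0, (measuredRoas / targetRoas) * 100);
}
```

A campaign with a 4x ROAS target that measures a 6x ROAS would report a performance index of 150, the same 1.5x value shown in the `provideScoringOutcomes` example later on this page.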

3. Story Affinity Score (Scope3-Provided)

How well tactics align with your selected brand stories:
  • Uses the brand stories you’ve created and selected for the campaign
  • Assesses tactic performance against audience profiles you defined
  • Measures how effectively tactics reach your intended brand narrative
Scale: 0-100 where 100 represents perfect alignment with your brand stories.
Story Affinity requires you to have created and assigned brand stories to your campaign. Without brand stories, this component will be 0.

The Weighted Formula

Campaign reward calculation:
Tactic Reward = (a × Quality Score) + (b × Outcome Score) + (c × Story Affinity Score)
Where a, b, and c are weights you configure.
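The reward computation can be sketched directly from the formula; the field names below are illustrative, not the platform API:

```typescript
// Illustrative only: the weighted tactic reward, computed exactly as the
// formula above. Field names for scores and weights are assumptions.
interface Scores { quality: number; outcome: number; affinity: number }

function tacticReward(scores: Scores, weights: Scores): number {
  return (
    weights.quality * scores.quality +
    weights.outcome * scores.outcome +
    weights.affinity * scores.affinity
  );
}
```

With quality 80, outcome 150, and affinity 60 under a performance-focused weighting (0.2 / 0.7 / 0.1), the reward is 16 + 105 + 6 = 127.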

Example Weight Configurations

Performance-Focused Campaign (prioritize conversions)
Quality Weight: 0.2
Outcome Weight: 0.7  ← Emphasize your measurement results
Story Affinity Weight: 0.1
Brand-Safe Premium Campaign (prioritize quality inventory)
Quality Weight: 0.6  ← Emphasize media quality
Outcome Weight: 0.2
Story Affinity Weight: 0.2
Brand Narrative Campaign (align with brand stories)
Quality Weight: 0.3
Outcome Weight: 0.2
Story Affinity Weight: 0.5  ← Emphasize story alignment

Outcome Score Window Days

Critical for RL algorithm performance: you must specify how many days elapse before your outcome scores become available:
  • Immediate data (same day): 0-1 days
  • Attribution data (conversion tracking): 1-7 days
  • Brand studies (lift measurement): 7-30 days
  • MMM data (incrementality): 30+ days
Why this matters: The RL algorithm won’t penalize tactics for “poor performance” before outcome data arrives. Without proper window configuration, tactics get incorrectly downgraded.
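A minimal sketch of the window check, assuming the algorithm simply compares elapsed time since the end of exposure against the configured window (the actual implementation is not documented here):

```typescript
// Sketch, not the platform's implementation: a tactic's outcome score
// should only be treated as "missing" once outcomeScoreWindowDays have
// elapsed since the end of the exposure period.
function outcomeExpected(exposureEnd: Date, windowDays: number, now: Date): boolean {
  const msPerDay = 24 * 60 * 60 * 1000;
  return now.getTime() - exposureEnd.getTime() >= windowDays * msPerDay;
}
```

With a 7-day window, a tactic whose exposure ended 4 days ago is not yet expected to have outcome data, so its missing score should not count against it.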

Campaign Configuration

Setting Up Scoring

const campaign = await createCampaign({
  brandAgentId: "ba_123",
  name: "Q4 Performance Campaign",
  prompt: "Target high-value customers with premium inventory",
  
  // Configure the three-component scoring
  scoringWeights: {
    quality: 0.2,      // 20% media quality
    outcome: 0.7,      // 70% your measurement results  
    affinity: 0.1      // 10% brand story alignment
  },
  
  // Tell algorithm when your outcome data arrives
  outcomeScoreWindowDays: 7,  // Conversion data available after 7 days
  
  budget: { total: 50000, currency: "USD" }
});

Providing Outcome Scores

// Scores can be provided at any level of the hierarchy
await provideScoringOutcomes({
  campaignId: "campaign_123",
  tacticId: "tactic_456",        // Optional - tactic-level measurement
  mediaBuyId: "mb_789",          // Optional - publisher-specific measurement
  creativeId: "creative_abc",    // Optional - creative performance
  exposureRange: {
    start: "2024-01-08",
    end: "2024-01-15"
  },
  performanceIndex: 150  // 1.5x your target performance
});

// Scores automatically roll up the hierarchy:
// Media Buy scores → Tactic scores → Campaign optimization

How Scores Drive Three-Tier Optimization

Discovery Phase (All Levels)

Campaign → Tactic:
  • New tactics created with strategic targeting
  • Initial budget allocation based on expected performance
  • Quality and Affinity scores available immediately
Tactic → Media Buy:
  • Publisher products discovered via ADCP
  • Media buys created with negotiated CPMs
  • Performance tracking begins on execution
Tactic Seed Data Cooperative: Brand agents that have opted into the cooperative (tacticSeedDataCoop: true) benefit from:
  1. Historical pricing data: Realistic CPMs for media buy creation
  2. Performance quintiles: Skip poor-performing publisher products
  3. Better initial allocation: Start media buys with proven inventory
This dramatically improves the Discovery Phase across all three tiers.

Learning Phase (Performance Flows Up)

Media Buy Level:
  • Real-time delivery data from publishers
  • CPM efficiency and pacing tracked
  • Quality scores calculated per publisher
Tactic Level:
  • Aggregates media buy performance
  • Adjusts allocation between publishers
  • Combined scores inform tactic value
Campaign Level:
  • Tactics compete for budget via multi-armed bandit
  • High-scoring tactics get increased allocation
  • Poor performers reduced or paused
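The platform's bandit implementation is not specified here; an epsilon-greedy sketch illustrates how tactic rewards could translate into budget shares (the function, field names, and epsilon parameter are all assumptions):

```typescript
// Illustrative epsilon-greedy allocation: a small exploration share is
// split evenly across tactics, and the remainder is distributed in
// proportion to reward. Not the platform's actual algorithm.
interface TacticReward { id: string; reward: number }

function allocateBudget(
  tactics: TacticReward[],
  total: number,
  epsilon = 0.1
): Map<string, number> {
  const explore = (total * epsilon) / tactics.length; // even exploration share
  const exploit = total * (1 - epsilon);              // performance-weighted share
  const rewardSum = tactics.reduce((sum, t) => sum + t.reward, 0);
  const allocation = new Map<string, number>();
  for (const t of tactics) {
    const share = rewardSum > 0 ? t.reward / rewardSum : 1 / tactics.length;
    allocation.set(t.id, explore + exploit * share);
  }
  return allocation;
}
```

The exploration share keeps low-scoring tactics alive with a small budget so the algorithm can detect recoveries, which matches the behavior described above: poor performers are reduced, not necessarily eliminated.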

Optimization Phase (Continuous Refinement)

  • Campaign optimizes tactic budget distribution
  • Tactics optimize media buy allocation
  • Media buys optimize delivery pacing
  • All three tiers working in concert

Common Weight Strategies

E-commerce/Performance

Focus on conversion results:
Quality: 0.2, Outcome: 0.8, Affinity: 0.0
No brand stories needed - pure performance focus

Brand Building

Balance quality and story alignment:
Quality: 0.6, Outcome: 0.0, Affinity: 0.4  
No measurable outcomes - focus on quality placement and narrative

Premium/Luxury

Prioritize quality inventory:
Quality: 0.6, Outcome: 0.3, Affinity: 0.1

Audience Development

Test story affinity effectiveness:
Quality: 0.2, Outcome: 0.3, Affinity: 0.5

Key Concepts

You Control What “Success” Means

  • Quality Score: We assess media quality
  • Outcome Score: You define and provide success metrics
  • Story Affinity: We measure against your selected brand stories
  • Weights: You determine the importance of each component

Window Days Are Critical

Set outcomeScoreWindowDays accurately:
  • Too short = penalize tactics before data arrives
  • Too long = delay optimization decisions
  • Match your actual measurement lag for best results

Scoring in the Three-Tier Architecture

Score Application by Tier

| Tier | What Gets Scored | How Scores Are Used | Optimization Result |
| --- | --- | --- | --- |
| Campaign | Tactics | Allocate budget between tactics | High-scoring tactics get more budget |
| Tactic | Media Buys | Distribute budget to publishers | Best publishers get increased allocation |
| Media Buy | Individual delivery | Track publisher performance | Pacing and budget adjustments |

Score Aggregation

Scores flow up the hierarchy:
  1. Media Buys generate quality scores from delivery
  2. Tactics aggregate media buy scores
  3. Campaigns optimize based on tactic performance
Your outcome scores can be provided at any level and automatically influence the entire hierarchy.
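One plausible roll-up is an impression-weighted average of media-buy scores (the platform's actual aggregation method is not documented here; field names are illustrative):

```typescript
// Sketch: media-buy scores roll up to a tactic-level score, weighted by
// delivered impressions so large buys influence the tactic more.
interface MediaBuyScore { score: number; impressions: number }

function rollUpScores(buys: MediaBuyScore[]): number {
  const totalImpressions = buys.reduce((sum, b) => sum + b.impressions, 0);
  if (totalImpressions === 0) return 0; // no delivery yet, nothing to score
  return (
    buys.reduce((sum, b) => sum + b.score * b.impressions, 0) / totalImpressions
  );
}
```

For example, a buy scoring 80 on 3,000 impressions and a buy scoring 40 on 1,000 impressions would roll up to a tactic score of 70.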

The Bottom Line

  • Three-tier optimization uses scores at every level
  • Scores flow up from Media Buys → Tactics → Campaigns
  • Budget flows down based on performance scores
  • You control weights to match your business objectives
  • Outcome window ensures fair performance evaluation