The Three Scoring Components

Tactic optimization uses three distinct scores combined with user-defined weights. You provide normalized outcome scores, while Scope3 calculates the quality and story affinity scores. The platform uses a weighted formula to determine which tactics receive more budget.

Scoring Algorithm Breakdown

1. Quality Score (Scope3-Provided)

Media quality metrics assessed by Scope3:
  • Impression quality: Valid ad serving and visibility
  • View metrics: Actual human viewability rates
  • Attention score: User engagement and interaction depth
  • Completion rates: Video/audio content completion percentages
  • IVT detection: Invalid traffic and fraud prevention
Scale: 0-100 where 100 represents premium media quality.

2. Outcome Score (User-Provided)

Your normalized downstream measurement data, provided on the standard scale:
  • 0 = No measurable value
  • 100 = Meets defined objective
  • 1000 = 10x target performance
What you measure is up to you:
  • ROAS from attribution platforms
  • Brand lift from research studies
  • Conversion rates from your CRM
  • Any KPI that matters to your business
Critical: You define the normalization, not us. We just use your scores.
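For example, a ROAS-based normalization might look like the sketch below. The helper name and target are illustrative, not part of the Scope3 API; the normalization logic is entirely yours to define.

// A minimal normalization sketch, assuming a ROAS target you define.
// 100 = meets your target, 1000 = 10x your target.
function normalizeRoas(actualRoas, targetRoas) {
  if (targetRoas <= 0) throw new Error("targetRoas must be positive");
  return Math.round((actualRoas / targetRoas) * 100);
}

normalizeRoas(6.0, 4.0);  // => 150, i.e. 1.5x your target performance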

3. Story Affinity Score (Scope3-Provided)

How well tactics align with your selected brand stories:
  • Uses the brand stories you’ve created and selected for the campaign
  • Assesses tactic performance against audience profiles you defined
  • Measures how effectively tactics deliver against your intended brand narrative
Scale: 0-100 where 100 represents perfect alignment with your brand stories.
Story Affinity requires you to have created and assigned brand stories to your campaign. Without brand stories, this component will be 0.

The Weighted Formula

Campaign reward calculation:
Tactic Reward = (a × Quality Score) + (b × Outcome Score) + (c × Story Affinity Score)
Where a, b, and c are weights you configure.
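For illustration only, the calculation could be sketched as below. Scope3 performs this computation internally; the function shape is not part of the API.

// Illustrative only - Scope3 computes the reward internally.
// Weights correspond to a (quality), b (outcome), and c (affinity).
function tacticReward(scores, weights) {
  return (
    weights.quality * scores.quality +
    weights.outcome * scores.outcome +
    weights.affinity * scores.affinity
  );
}

// Performance-focused weights: the outcome score dominates the reward.
tacticReward(
  { quality: 80, outcome: 150, affinity: 40 },
  { quality: 0.2, outcome: 0.7, affinity: 0.1 }
);  // => 0.2*80 + 0.7*150 + 0.1*40 = 125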

Example Weight Configurations

Performance-Focused Campaign (prioritize conversions)
Quality Weight: 0.2
Outcome Weight: 0.7  ← Emphasize your measurement results
Story Affinity Weight: 0.1

Brand-Safe Premium Campaign (prioritize quality inventory)
Quality Weight: 0.6  ← Emphasize media quality
Outcome Weight: 0.2
Story Affinity Weight: 0.2

Brand Narrative Campaign (align with brand stories)
Quality Weight: 0.3
Outcome Weight: 0.2
Story Affinity Weight: 0.5  ← Emphasize story alignment
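Expressed in the scoringWeights shape used at campaign creation (shown under Campaign Configuration below), these presets map to:

// Performance-Focused Campaign
scoringWeights: { quality: 0.2, outcome: 0.7, affinity: 0.1 }

// Brand-Safe Premium Campaign
scoringWeights: { quality: 0.6, outcome: 0.2, affinity: 0.2 }

// Brand Narrative Campaign
scoringWeights: { quality: 0.3, outcome: 0.2, affinity: 0.5 }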

Outcome Score Window Days

Critical for RL algorithm performance: you must specify how many days it takes before your outcome scores become available:
  • Immediate data (same day): 0-1 days
  • Attribution data (conversion tracking): 1-7 days
  • Brand studies (lift measurement): 7-30 days
  • MMM data (incrementality): 30+ days
Why this matters: The RL algorithm won’t penalize tactics for “poor performance” before outcome data arrives. Without proper window configuration, tactics get incorrectly downgraded.
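A minimal sketch of the gating idea (illustrative only; the actual logic is internal to the platform):

// Illustrative only: should the algorithm treat outcome data as available yet?
function outcomeDataExpected(daysSinceExposure, outcomeScoreWindowDays) {
  return daysSinceExposure >= outcomeScoreWindowDays;
}

// Before the window closes, a zero outcome score means "no data yet",
// not "poor performance", so allocation relies on Quality + Affinity alone.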

Campaign Configuration

Setting Up Scoring

const campaign = await createCampaign({
  brandAgentId: "ba_123",
  name: "Q4 Performance Campaign",
  prompt: "Target high-value customers with premium inventory",
  
  // Configure the three-component scoring
  scoringWeights: {
    quality: 0.2,      // 20% media quality
    outcome: 0.7,      // 70% your measurement results  
    affinity: 0.1      // 10% brand story alignment
  },
  
  // Tell algorithm when your outcome data arrives
  outcomeScoreWindowDays: 7,  // Conversion data available after 7 days
  
  budget: { total: 50000, currency: "USD" }
});

Providing Outcome Scores

// You send normalized scores via your measurement integration
await provideScoringOutcomes({
  campaignId: "campaign_123",
  tacticId: "tactic_456",        // Optional - can measure campaign-level
  creativeId: "creative_789",    // Optional - can measure creative-level
  exposureRange: {
    start: "2024-01-08",
    end: "2024-01-15"
  },
  performanceIndex: 150  // 1.5x your target performance
});
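For campaign-level measurement, simply omit the optional identifiers:

// Campaign-level measurement: no tacticId or creativeId
await provideScoringOutcomes({
  campaignId: "campaign_123",
  exposureRange: {
    start: "2024-01-08",
    end: "2024-01-15"
  },
  performanceIndex: 95  // Slightly below target across the whole campaign
});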

How Scores Drive Optimization

Discovery Phase

  • New tactics start with Quality and Affinity scores immediately
  • Outcome scores remain at 0 until your measurement data arrives
  • Budget allocation uses only Quality + Affinity during window period
Tactic Seed Data Cooperative: Brand agents that have opted into the tactic seed data cooperative (tacticSeedDataCoop: true) benefit from better initial tactic selection based on:
  1. Historical delivery data: Actual CPM and impression volumes for realistic starting budgets
  2. Performance quintiles: Category-specific rankings (top 20% vs bottom 20% of tactics) to prioritize proven inventory
This improves the Discovery Phase by starting with higher-confidence tactic selections.

Learning Phase

  • As your outcome scores arrive, they’re incorporated into the weighted formula
  • Multi-armed bandit adjusts based on complete scoring picture
  • Tactics with high combined scores get more budget
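As a toy illustration of the idea (a simple proportional allocation, not Scope3's actual multi-armed bandit):

// Toy sketch only: allocate budget in proportion to each tactic's
// combined (weighted) reward once outcome data has arrived.
function allocateBudget(totalBudget, tacticRewards) {
  const total = tacticRewards.reduce((sum, r) => sum + r.reward, 0);
  return tacticRewards.map(r => ({
    tacticId: r.tacticId,
    budget: total > 0 ? totalBudget * (r.reward / total) : totalBudget / tacticRewards.length
  }));
}

allocateBudget(50000, [
  { tacticId: "tactic_456", reward: 125 },
  { tacticId: "tactic_789", reward: 75 }
]);
// => tactic_456 gets 31,250; tactic_789 gets 18,750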

Optimization Phase

  • All three components inform budget allocation decisions
  • Your weighting determines which factors matter most
  • Platform optimizes for your specific definition of success

Common Weight Strategies

E-commerce/Performance

Focus on conversion results:
Quality: 0.2, Outcome: 0.8, Affinity: 0.0
No brand stories needed - pure performance focus

Brand Building

Balance quality and story alignment:
Quality: 0.6, Outcome: 0.0, Affinity: 0.4  
No measurable outcomes - focus on quality placement and narrative

Premium/Luxury

Prioritize quality inventory:
Quality: 0.6, Outcome: 0.3, Affinity: 0.1

Audience Development

Test story affinity effectiveness:
Quality: 0.2, Outcome: 0.3, Affinity: 0.5

Key Concepts

You Control What “Success” Means

  • Quality Score: We assess media quality
  • Outcome Score: You define and provide success metrics
  • Story Affinity: We measure against your selected brand stories
  • Weights: You determine the importance of each component

Window Days Are Critical

Set outcomeScoreWindowDays accurately:
  • Too short = penalize tactics before data arrives
  • Too long = delay optimization decisions
  • Match your actual measurement lag for best results

The Bottom Line

  • Three distinct scoring components with different data sources
  • You provide normalized outcome scores based on your measurement
  • Weighted formula combines all three according to your priorities
  • Outcome window prevents premature penalization of tactics
  • Your weights determine optimization behavior - not our assumptions