Overview
The Scope3 Measurement Engine turns raw advertising signals into causal evidence about what is working. Instead of treating every conversion as proof, it asks the harder question — would that conversion have happened anyway? — and updates a running set of beliefs about each campaign’s incremental impact as new data arrives. The pipeline has four layers:
- Event sources — buyer-registered pixels, SDKs, MMPs, CRMs, or measurement partners that send conversions, impressions, or outcome data.
- Measurement data — privacy-safe outcome records (revenue, conversions, LTV) attached to a campaign, media buy, package, or creative.
- Belief state — a Bayesian summary of what the engine currently believes about each hypothesis (e.g. “audience segment A drives more incremental revenue than segment B”), expressed as posterior distributions with confidence intervals.
- Incrementality tests — explicit treatment / control / observation cohorts and test plans that produce stronger causal estimates than passive observation.
How belief updating works (high level)
The engine starts each hypothesis with a prior — your initial guess about
size and confidence. As measurement records arrive, the learning cycle
updates that prior into a posterior using Bayesian inference: high-quality,
fresh data shifts beliefs faster; sparse or noisy data shifts them less.
Running an A/B test with proper test and control cohorts produces the
strongest evidence and tightens the posterior fastest.
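The intuition above can be sketched with a textbook Normal-Normal conjugate update. This is purely illustrative of "high-quality data shifts beliefs faster"; it is not the engine's actual model, and the variances and effect sizes are made up:

```python
# Illustrative only: a minimal Normal-Normal conjugate update showing how
# evidence quantity and quality control how far a belief moves. This is NOT
# Scope3's actual inference model.

def update_belief(prior_mean, prior_var, obs_mean, obs_var, n_obs):
    """Combine a prior belief with n_obs observations of known noise variance."""
    prior_precision = 1.0 / prior_var
    data_precision = n_obs / obs_var  # more / cleaner data => more weight
    post_var = 1.0 / (prior_precision + data_precision)
    post_mean = post_var * (prior_precision * prior_mean
                            + data_precision * obs_mean)
    return post_mean, post_var

# Sparse, noisy observations barely move a prior belief of 0.05 lift...
m1, v1 = update_belief(0.05, 0.02, obs_mean=0.20, obs_var=1.0, n_obs=5)
# ...while a clean A/B readout pulls the posterior toward the data and
# tightens its variance sharply.
m2, v2 = update_belief(0.05, 0.02, obs_mean=0.20, obs_var=0.05, n_obs=200)
```

The same mechanism explains why a proper test-and-control split "tightens the posterior fastest": it lowers the effective noise of each observation.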
Prerequisites
Scope3 API key
Generate a key at agentic.scope3.com/user-api-keys.
See Authentication for setup.
Advertiser ID
All measurement endpoints are scoped to an advertiser. You’ll use this in
every URL:
/api/buyer/advertisers/:advertiserId/...
A campaign or media buy (recommended)
Measurement data is most useful when attached to in-flight campaigns.
See Campaigns for setup.
Step 1: Register Event Sources
An event source is a logical channel through which measurement events flow — a website pixel, a mobile SDK, a CRM export, or an MMP feed. Every event you send must reference a registered event_source_id.
Use the ADCP-spec sync endpoint to upsert event sources for an advertiser:
Request fields
| Field | Type | Required | Description |
|---|---|---|---|
| account.account_id | string | yes | Must match the :advertiserId in the URL |
| event_sources[].event_source_id | string | yes | Buyer-assigned ID, max 255 chars |
| event_sources[].name | string | no | Human-readable label |
| event_sources[].event_types | string[] | no | Restricts which event types this source may send. Omit to accept all. |
| event_sources[].allowed_domains | string[] | no | Origin domains authorized for this source |
| delete_missing | boolean | no | Archive any buyer-managed sources not in this payload (default false) |
action is one of created, updated, unchanged, failed, or deleted.
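A request body following the field table above can be assembled like this. The helper and the specific IDs (adv_123, web-pixel-main) are hypothetical; only the field names come from the table:

```python
# Sketch of a sync_event_sources request body per the field table above.
# build_event_source_sync and all example IDs are illustrative, not part of
# the Scope3 API itself.
import json

def build_event_source_sync(advertiser_id, sources, delete_missing=False):
    for s in sources:
        # Buyer-assigned ID, max 255 chars
        assert len(s["event_source_id"]) <= 255
    return {
        "account": {"account_id": advertiser_id},  # must match :advertiserId in the URL
        "event_sources": sources,
        "delete_missing": delete_missing,
    }

payload = build_event_source_sync("adv_123", [{
    "event_source_id": "web-pixel-main",
    "name": "Main site pixel",
    "event_types": ["conversion"],            # omit to accept all event types
    "allowed_domains": ["shop.example.com"],
}])
body = json.dumps(payload)  # POST this to .../event-sources/sync
```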
List configured sources
Step 2: Configure Measurement
Measurement configuration controls which measurement features are active for an advertiser — Marketing Mix Modeling (MMM), incrementality testing, and brand lift — plus any provider-specific MMM settings.
Fields
| Field | Type | Description |
|---|---|---|
| mmmEnabled | boolean | Enable Marketing Mix Modeling |
| mmmConfig.provider | string | MMM partner name (e.g. measured) |
| mmmConfig.dataSourceIds | string[] | IDs of upstream data feeds powering MMM |
| mmmConfig.reportingFrequency | enum | weekly, monthly, or quarterly |
| incrementalityTestingEnabled | boolean | Enable A/B incrementality tests |
| brandLiftEnabled | boolean | Enable brand-lift study integration |
| settings | object | Free-form key/value advertiser-level overrides |
PUT is upsert — pass only the fields you want to set; missing fields fall back to defaults / prior values. Read the current config with GET /advertisers/:advertiserId/measurement-config.
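A partial upsert body might look like the sketch below. The field names follow the table above; the provider value and data-source ID are examples, and omitting a key leaves its current value untouched:

```python
# Sketch of a PUT measurement-config body. Because PUT is an upsert, only the
# keys present here change; "ds_weekly_sales" is a hypothetical feed ID.
config = {
    "mmmEnabled": True,
    "mmmConfig": {
        "provider": "measured",
        "dataSourceIds": ["ds_weekly_sales"],
        "reportingFrequency": "weekly",       # weekly | monthly | quarterly
    },
    "incrementalityTestingEnabled": True,
    # brandLiftEnabled omitted -> falls back to default / prior value
}
```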
Step 3: Validate Configuration
Before relying on incrementality estimates, assess whether your planned spend, geos, and flight length can actually move the needle on a hypothesis. The testability assessment returns power-analysis-style guidance.
Step 4: Send Measurement Data
There are two complementary ways to feed the engine:
- Conversion events — fine-grained, per-user actions (purchases, leads, sign-ups). Use the Conversion API — same identity rules apply.
- Measurement records — pre-aggregated outcomes for a time window and geo (e.g. “incremental revenue, US-CA, week of 2026-03-01 = $8,450”). Use the sync endpoint below.
Sync aggregated measurement data
| Field | Type | Required | Notes |
|---|---|---|---|
| start_time / end_time | ISO 8601 | yes | Must include offset; start_time < end_time |
| metric_id | enum | yes | revenue, incremental_revenue, conversions, incremental_conversions, page_view_count, add_to_cart_count, purchase_count, ltv_1d, ltv_7d, ltv_30d |
| metric_value | number | yes | The measured value |
| unit | enum | yes | currency, count, ratio, percentage |
| currency | string | conditional | ISO 4217 — required when unit is currency |
| campaign_id / media_buy_id / package_id / creative_id | string | one required | Attaches the measurement to an entity |
| source | enum | no | advertiser, mmp, or measurement_partner |
| external_row_id | string | no | Idempotency key for re-syncs |
action is one of created, updated, unchanged, or failed.
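The conditional rules in the table (one entity reference, currency only when unit is currency) can be checked client-side before syncing. The validator and the example IDs below are illustrative; the field names and constraints come from the table:

```python
# Illustrative pre-flight validation of one measurement-data/sync row against
# the documented rules. validate_row is a local helper, not part of the API.
def validate_row(row):
    entity_keys = ("campaign_id", "media_buy_id", "package_id", "creative_id")
    if not any(k in row for k in entity_keys):
        raise ValueError("one entity reference is required")
    if row["unit"] == "currency" and "currency" not in row:
        raise ValueError("currency (ISO 4217) is required when unit is currency")
    # Same-offset ISO 8601 strings compare correctly lexicographically.
    if row["start_time"] >= row["end_time"]:
        raise ValueError("start_time must be before end_time")
    return row

row = validate_row({
    "start_time": "2026-03-01T00:00:00+00:00",
    "end_time": "2026-03-08T00:00:00+00:00",
    "metric_id": "incremental_revenue",
    "metric_value": 8450.0,
    "unit": "currency",
    "currency": "USD",
    "campaign_id": "camp_42",               # hypothetical campaign ID
    "external_row_id": "us-ca-2026-03-01",  # idempotency key for safe re-syncs
})
```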
Upload raw measurement records (advanced)
For research-style flows that already produce per-geo outcomes, the learning engine accepts batched records directly:
Upload context records
Context records describe market conditions that the learning engine should partial out — promos, weather, competitor activity, seasonality:
Step 5: Inspect the Event Summary
Once events are flowing, the event-summary endpoint returns hourly counts per event type so you can confirm ingestion before depending on downstream attribution:
Response
| Query param | Notes |
|---|---|
| eventType | One of conversion, click, impression, measurement, mmp. Omit for all types. |
| startHour / endHour | Hour-aligned ISO 8601 timestamps. Defaults to the last completed UTC hour. |
Step 6: Set Up Test Cohorts and Test Plans
Incrementality tests work by comparing well-defined groups. The v2 model has three layers:
- Hypotheses — what you’re testing: a falsifiable claim about a treatment vs. a comparison (“premium CTV drives more incremental revenue per impression than general CTV”). Hypotheses are persistent — each test plan accumulates evidence against the same hypothesis over time.
- Test cohorts — who is in each arm (treatment, control, observation). A cohort is a flexible audience definition (geo, segment, zip code, custom).
- Test plans — how the comparison runs against a hypothesis: which conditions, which dimensions to match on, how cells are sized.
Create a hypothesis
Every test plan is anchored to a hypothesis. Create one before linking test plans, cohorts, or allocation entries to it.
Response
| Field | Type | Required | Notes |
|---|---|---|---|
| category | enum | yes | audience, creative, channel, context, timing, tactic |
| statement | string | yes | The falsifiable claim being tested |
| treatment | string | yes | Description of the treatment arm |
| comparison | string | yes | Description of the comparison / control arm |
| outcomeMetric | string | yes | Primary metric (e.g. incremental_revenue, purchase_count) |
| priorConfidence | number (0–1) | yes | Initial confidence before any data |
| priorMagnitude | number | yes | Expected effect size before any data |
| icon | string | no | Emoji for UI display, defaults to 💡 |
| incrementalCostCpm | number | no | Cost premium of treatment vs. comparison |
| measurementSource | string | no | Source key tying ground-truth records to this hypothesis |
| measurementLagWeeks | integer | no | Expected reporting lag, defaults to 1 |
| minimumTestCells | integer | no | Minimum cells required for a meaningful test, defaults to 6 |
| proxyMetrics | array | no | { outcomeType, weight } proxies that update beliefs alongside the primary metric |
| groundTruth | object | no | Free-form expected outcome for sanity checks |
id is the hypothesisId you’ll plug into the test-plan creation step below — and into testability, learning-records, and belief-state queries.
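A hypothesis-creation body under the field table above might look like the following. The statement, prior values, and cost premium are examples, not recommendations:

```python
# Sketch of a hypothesis-creation request body; field names follow the table
# above, all values are illustrative.
hypothesis = {
    "category": "channel",
    "statement": ("Premium CTV drives more incremental revenue "
                  "per impression than general CTV"),
    "treatment": "Premium CTV inventory",
    "comparison": "General CTV inventory",
    "outcomeMetric": "incremental_revenue",
    "priorConfidence": 0.55,    # belief before any data, in [0, 1]
    "priorMagnitude": 0.10,     # expected effect size before any data
    "incrementalCostCpm": 4.0,  # optional cost premium of the treatment
    "minimumTestCells": 6,      # matches the documented default
}
```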
Hypothesis status lifecycle
Every hypothesis carries a status that the engine advances as evidence accrues. New hypotheses always start at no_buys; the rest are reached automatically as media buys link, fire, and produce measurable outcomes.
| Status | Meaning | Triggered when |
|---|---|---|
| no_buys | Hypothesis exists but no media is running against it | Hypothesis is created and no allocation entries are linked yet |
| in_market | At least one linked buy is delivering | A linked allocation entry / media buy goes active |
| partially_measured | Some data has arrived, but coverage is thin | Measurement records start landing but cells / geos are below minimumTestCells or coverage gaps remain |
| well_measured | Enough data to update beliefs with confidence | Coverage hits minimumTestCells across the matched dimensions and the posterior tightens |
| proven | Posterior supports the treatment hypothesis | Belief converges with sufficient confidence in the hypothesized direction |
| disproven | Posterior rejects the treatment hypothesis | Belief converges with sufficient confidence against the hypothesized direction |
List hypotheses
Filter by category, status, or flightId. Paginate with take (max 100) and skip.
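Walking all pages with take/skip follows the usual offset pattern. The loop below is a generic sketch; fetch_page stands in for your HTTP client and is not part of the API:

```python
# Generic take/skip pagination sketch for the list endpoints. fetch_page is a
# placeholder for an HTTP call returning one page of results.
def iter_all(fetch_page, take=100):  # take is capped at 100 by the API
    skip = 0
    while True:
        page = fetch_page(take=take, skip=skip)
        if not page:
            return
        yield from page
        skip += take
```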
Create test cohorts
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | yes | Cohort label, max 255 chars |
| cohortType | string | yes | Free-form classification (e.g. geographic, demographic, behavioral) |
| role | enum | no | TREATMENT, CONTROL, or OBSERVATION (default TREATMENT) |
| definition.type | string | yes | Discriminator — zip_code, user_segment, geo_region, custom |
| definition.* | any | — | Additional fields per type |
| estimatedSize | integer | no | Size estimate for power analysis |
| Method | Path | Purpose |
|---|---|---|
| GET | /advertisers/:advertiserId/test-cohorts | List (filter by role, isActive) |
| POST | /advertisers/:advertiserId/test-cohorts | Create |
| GET | /advertisers/:advertiserId/test-cohorts/:id | Get one |
| PUT | /advertisers/:advertiserId/test-cohorts/:id | Update |
| DELETE | /advertisers/:advertiserId/test-cohorts/:id | Archive |
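A matched treatment/control pair under the cohort schema above might look like this. The zip codes, names, and the zip_codes key inside definition are assumptions (the doc only specifies the type discriminator and says additional fields vary per type):

```python
# Illustrative treatment/control cohort bodies. The "zip_codes" field inside
# definition is a hypothetical per-type field, not confirmed by the docs.
treatment = {
    "name": "CA treatment zips",
    "cohortType": "geographic",
    "role": "TREATMENT",
    "definition": {"type": "zip_code", "zip_codes": ["94103", "94107"]},
    "estimatedSize": 120000,   # feeds power analysis
}
control = {
    **treatment,
    "name": "CA control zips",
    "role": "CONTROL",
    "definition": {"type": "zip_code", "zip_codes": ["94110", "94112"]},
}
```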
Create a test plan against a hypothesis
A hypothesis is the question being tested (e.g. “Does running on premium CTV inventory drive incremental purchases versus general CTV?”). Test plans are nested under the hypothesis they test:
| Field | Type | Description |
|---|---|---|
| testCondition | object | What defines the treatment arm |
| controlCondition | object | What defines the control arm |
| matchDimensions | string[] | Dimensions to balance across cells (at least one) |
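A test-plan body could then look like the sketch below. The doc leaves the condition objects' internal schema open, so the keys inside them are assumptions; matchDimensions values are examples:

```python
# Sketch of a test-plan creation body. The keys inside the condition objects
# ("inventory") are hypothetical; only the top-level field names are from
# the docs.
test_plan = {
    "testCondition": {"inventory": "premium_ctv"},
    "controlCondition": {"inventory": "general_ctv"},
    "matchDimensions": ["geo", "daypart"],  # balance cells on these; need >= 1
}
```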
Activate a test plan and link media buys
Once a plan is created (status designed), patch it to active and link the allocation entries (media buys) that fall under each role:
status transitions: designed → active → complete. Updating to complete (or letting the engine auto-complete on flight end) closes the test for inference.
Step 7: Trigger Learning and Read Belief State
The learning cycle ingests new measurement records, fits the Bayesian update, and refreshes the belief state. It typically runs on a schedule, but you can trigger it on demand after a large data load:
The belief state reports, per hypothesis, the current posterior, its status (no_buys, in_market, partially_measured, well_measured, proven, disproven), and which sources contributed evidence.
To inspect the records that fed the most recent updates:
Belief state is read-only. To change beliefs, send better data — more
records, fewer gaps, properly designed test plans — and let the next
learning cycle re-fit.
Step 8: Review and Acknowledge Test Results
When a test plan reaches complete, the engine produces an incrementality result attached to the test plan and hypothesis. Read it via the test plan endpoint:
Mark results as reviewed
Acknowledge a result so it stops surfacing in unread-results queues and so the audit trail records who signed off. Use the test plan update endpoint to transition status (e.g. recording observed test/control cell counts and final coverage gaps):
Best Practices
Register event sources before turning on a pixel
Events sent to an unknown event_source_id are rejected. Always run sync_event_sources first; for new advertisers, do this in your onboarding automation.
Send aggregated measurement records when you have them
If your MMP / CRM / data warehouse can produce weekly per-geo outcomes, use POST /measurement-data/sync (or /measurement-records). Aggregated records arrive faster, are cheaper to ingest, and are immune to user-level identity-resolution drift.
Send raw events when you need fine-grained attribution
For per-user attribution and click-id matching, use the Conversion API. It’s the only path that can attribute back to a specific impression or click.
Set good priors
Hypothesis priorConfidence and priorMagnitude are not “the answer” — they reflect what you’d believe before the test. Calibrated priors make early-flight estimates much more useful than uncalibrated ones.
Match cohorts on the dimensions that move the metric
matchDimensions should include any factor that meaningfully drives the outcome (geo, daypart, segment). Forgetting a strong driver introduces confounding even with a clean A/B split.
Use external_row_id for idempotency
measurement-data/sync deduplicates on external_row_id. Re-running a daily export is safe — only changed rows update.
Hash before sending
hashed_email and hashed_phone must be SHA-256 hex (lowercase, 64 chars). Normalize email to lowercase / trimmed and phone to E.164 before hashing. The engine will not match malformed hashes.
Run testability before launch, freshness during flight
POST /testability answers “is this design likely to learn anything?” before media starts. GET /measurement-freshness answers “is data actually arriving?” once it has.
Keep human feedback in the loop
Posterior deltas are a guide, not a verdict. When stakeholders disagree with a result, capture the disagreement as feedback so the next cycle can spawn refined hypotheses rather than re-litigating the old one.
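The normalize-then-hash rule from "Hash before sending" above can be sketched as follows. The E.164 normalization here is a deliberately naive, US-centric fallback; use a real phone-number library in production:

```python
# Sketch of hashed_email / hashed_phone preparation: normalize first, then
# SHA-256 to lowercase 64-char hex. hash_phone's "+1" fallback is a naive
# assumption; real E.164 normalization needs a proper library.
import hashlib
import re

def hash_email(email: str) -> str:
    normalized = email.strip().lower()          # trim + lowercase before hashing
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def hash_phone(phone: str, default_country: str = "+1") -> str:
    digits = re.sub(r"[^\d+]", "", phone)       # strip punctuation and spaces
    if not digits.startswith("+"):
        digits = default_country + digits       # naive E.164 fallback
    return hashlib.sha256(digits.encode("utf-8")).hexdigest()

h = hash_email("  Buyer@Example.COM ")
```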
Endpoint Reference
All paths are relative to https://api.agentic.scope3.com/api/buyer.
Event sources
| Method | Path |
|---|---|
| POST | /advertisers/:advertiserId/event-sources/sync |
| GET | /advertisers/:advertiserId/event-sources |
Measurement config
| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/measurement-config |
| PUT | /advertisers/:advertiserId/measurement-config |
Measurement data
| Method | Path |
|---|---|
| POST | /advertisers/:advertiserId/measurement-data/sync |
| POST | /advertisers/:advertiserId/measurement-records |
| GET | /advertisers/:advertiserId/measurement-records |
| POST | /advertisers/:advertiserId/context-records |
Events and freshness
| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/events/summary |
| GET | /advertisers/:advertiserId/measurement-freshness |
Learning engine
| Method | Path |
|---|---|
| POST | /advertisers/:advertiserId/learning-cycle/run |
| GET | /advertisers/:advertiserId/belief-state |
| GET | /advertisers/:advertiserId/learning-records |
| POST | /advertisers/:advertiserId/testability |
Measurement sources
| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/measurement-sources |
| POST | /advertisers/:advertiserId/measurement-sources |
| GET | /advertisers/:advertiserId/measurement-sources/:id |
| PATCH | /advertisers/:advertiserId/measurement-sources/:id |
Hypotheses
| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/hypotheses |
| POST | /advertisers/:advertiserId/hypotheses |
| GET | /advertisers/:advertiserId/hypotheses/:id |
| PATCH | /advertisers/:advertiserId/hypotheses/:id |
| DELETE | /advertisers/:advertiserId/hypotheses/:id |
Test cohorts
| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/test-cohorts |
| POST | /advertisers/:advertiserId/test-cohorts |
| GET | /advertisers/:advertiserId/test-cohorts/:id |
| PUT | /advertisers/:advertiserId/test-cohorts/:id |
| DELETE | /advertisers/:advertiserId/test-cohorts/:id |
Test plans
| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/hypotheses/:hypothesisId/test-plans |
| POST | /advertisers/:advertiserId/hypotheses/:hypothesisId/test-plans |
| GET | /advertisers/:advertiserId/test-plans/:id |
| PATCH | /advertisers/:advertiserId/test-plans/:id |
| POST | /advertisers/:advertiserId/test-plans/:id/link-buys |
Related
Conversion API
Send per-user purchase, lead, and engagement events with click and identity
matching.
Campaigns
Set up the campaigns and media buys that measurement data attaches to.
Authentication
Generate and manage API keys for measurement requests.
Reporting
Pull aggregated performance once measurement data is flowing.