
Overview

The Scope3 Measurement Engine turns raw advertising signals into causal evidence about what is working. Instead of treating every conversion as proof, it asks the harder question — would that conversion have happened anyway? — and updates a running set of beliefs about each campaign’s incremental impact as new data arrives. The pipeline has four layers:
  1. Event sources — buyer-registered pixels, SDKs, MMPs, CRMs, or measurement partners that send conversions, impressions, or outcome data.
  2. Measurement data — privacy-safe outcome records (revenue, conversions, LTV) attached to a campaign, media buy, package, or creative.
  3. Belief state — a Bayesian summary of what the engine currently believes about each hypothesis (e.g. “audience segment A drives more incremental revenue than segment B”), expressed as posterior distributions with confidence intervals.
  4. Incrementality tests — explicit treatment / control / observation cohorts and test plans that produce stronger causal estimates than passive observation.
How belief updating works (high level)

The engine starts each hypothesis with a prior — your initial guess about size and confidence. As measurement records arrive, the learning cycle updates that prior into a posterior using Bayesian inference: high-quality, fresh data shifts beliefs faster; sparse or noisy data shifts them less. Running an A/B test with proper test and control cohorts produces the strongest evidence and tightens the posterior fastest.
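As a mental model, this can be sketched as a conjugate normal-normal Bayesian update. This is purely illustrative (the engine's actual model and noise handling are internal and not documented here); the shrinking posterior standard deviation is what the docs describe as a tightening confidence interval:

```python
import math

def update_normal_belief(prior_mean, prior_sd, observations, obs_sd):
    """Conjugate normal update: combine a prior belief about lift with
    observed measurements of a known noise level (obs_sd). More data,
    or cleaner data (smaller obs_sd), moves the posterior further."""
    n = len(observations)
    prior_prec = 1.0 / prior_sd ** 2          # precision = 1 / variance
    data_prec = n / obs_sd ** 2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean
                 + data_prec * (sum(observations) / n)) / post_prec
    return post_mean, math.sqrt(1.0 / post_prec)

# Prior: +15% lift expected, low confidence (wide sd of 0.30).
# Four weekly lift measurements around +10% pull the belief toward the
# data and shrink the uncertainty.
mean, sd = update_normal_belief(0.15, 0.30, [0.08, 0.11, 0.10, 0.09], 0.05)
```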
Privacy-first: the measurement engine never accepts raw PII (plain emails, phone numbers, addresses). Send pre-hashed identifiers (SHA-256) or pre-resolved identity tokens (RampID, UID2, ID5, etc.) — see the Conversion API guide for hashing rules.

Prerequisites

1. Scope3 API key — Generate a key at agentic.scope3.com/user-api-keys. See Authentication for setup.
2. Advertiser ID — All measurement endpoints are scoped to an advertiser. You’ll use this in every URL: /api/buyer/advertisers/:advertiserId/....
3. A campaign or media buy (recommended) — Measurement data is most useful when attached to in-flight campaigns. See Campaigns for setup.
All examples below use:
BASE = https://api.agentic.scope3.com/api/buyer
AUTH = Authorization: Bearer scope3_<your_api_key>

Step 1: Register Event Sources

An event source is a logical channel through which measurement events flow — a website pixel, a mobile SDK, a CRM export, or an MMP feed. Every event you send must reference a registered event_source_id. Use the ADCP-spec sync endpoint to upsert event sources for an advertiser:
curl -X POST "$BASE/advertisers/12345/event-sources/sync" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "account": { "account_id": "12345" },
    "event_sources": [
      {
        "event_source_id": "website_pixel",
        "name": "Website Pixel",
        "event_types": ["purchase", "add_to_cart", "lead"],
        "allowed_domains": ["shop.example.com", "checkout.example.com"],
        "example_event": {
          "event_type": "purchase",
          "hashed_email": "5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8",
          "value": 89.99,
          "currency": "USD"
        }
      },
      {
        "event_source_id": "crm_import",
        "name": "Salesforce CRM Export",
        "event_types": ["purchase", "qualify_lead"],
        "allowed_domains": []
      }
    ],
    "delete_missing": false
  }'

Request fields

| Field | Type | Required | Description |
|---|---|---|---|
| account.account_id | string | yes | Must match the :advertiserId in the URL |
| event_sources[].event_source_id | string | yes | Buyer-assigned ID, max 255 chars |
| event_sources[].name | string | no | Human-readable label |
| event_sources[].event_types | string[] | no | Restricts which event types this source may send. Omit to accept all. |
| event_sources[].allowed_domains | string[] | no | Origin domains authorized for this source |
| delete_missing | boolean | no | Archive any buyer-managed sources not in this payload (default false) |
Each result’s action is one of created, updated, unchanged, failed, or deleted.
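A sync caller typically wants to tally those actions and fail loudly on any rejected source. A minimal sketch, assuming each result carries the source ID alongside its action (the exact response shape is inferred from the documented action values, not confirmed here):

```python
from collections import Counter

def summarize_sync_results(results):
    """Tally per-source actions from a sync response and collect failures.
    Assumes results look like {"event_source_id": ..., "action": ...}."""
    counts = Counter(r["action"] for r in results)
    failures = [r["event_source_id"] for r in results if r["action"] == "failed"]
    return counts, failures

counts, failures = summarize_sync_results([
    {"event_source_id": "website_pixel", "action": "created"},
    {"event_source_id": "crm_import", "action": "unchanged"},
])
```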

List configured sources

curl "$BASE/advertisers/12345/event-sources?take=50&skip=0" \
  -H "Authorization: Bearer scope3_<your_api_key>"
Events sent to an unregistered event_source_id are rejected. Always sync sources before turning on a pixel or CRM job.

Step 2: Configure Measurement

Measurement configuration controls which measurement features are active for an advertiser — Marketing Mix Modeling (MMM), incrementality testing, and brand lift — plus any provider-specific MMM settings.
curl -X PUT "$BASE/advertisers/12345/measurement-config" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "mmmEnabled": true,
    "mmmConfig": {
      "provider": "measured",
      "dataSourceIds": ["ds_revenue", "ds_orders"],
      "reportingFrequency": "weekly"
    },
    "incrementalityTestingEnabled": true,
    "brandLiftEnabled": false,
    "settings": {
      "defaultLookbackDays": 28
    }
  }'

Fields

| Field | Type | Description |
|---|---|---|
| mmmEnabled | boolean | Enable Marketing Mix Modeling |
| mmmConfig.provider | string | MMM partner name (e.g. measured) |
| mmmConfig.dataSourceIds | string[] | IDs of upstream data feeds powering MMM |
| mmmConfig.reportingFrequency | enum | weekly, monthly, or quarterly |
| incrementalityTestingEnabled | boolean | Enable A/B incrementality tests |
| brandLiftEnabled | boolean | Enable brand-lift study integration |
| settings | object | Free-form key/value advertiser-level overrides |
PUT is upsert — pass only the fields you want to set; missing fields fall back to defaults / prior values. Read the current config with GET /advertisers/:advertiserId/measurement-config.
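The documented upsert semantics amount to a merge: fields present in the payload win, everything else keeps its prior value. A shallow-merge sketch for illustration (nested mmmConfig handling may differ server-side):

```python
def upsert_measurement_config(current, patch):
    """PUT semantics as documented: payload fields are set, missing
    fields retain their prior values (or defaults). Shallow merge only."""
    merged = dict(current)
    merged.update(patch)
    return merged

current = {"mmmEnabled": False, "brandLiftEnabled": False}
# Sending only mmmEnabled leaves brandLiftEnabled at its prior value.
new_config = upsert_measurement_config(current, {"mmmEnabled": True})
```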

Step 3: Validate Configuration

Before relying on incrementality estimates, assess whether your planned spend, geos, and flight length can actually move the needle on a hypothesis. The testability assessment returns power-analysis-style guidance.
curl -X POST "$BASE/advertisers/12345/testability" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "hypothesisIds": ["1f5c0c4e-2c9f-4f0a-9e31-2d4f3c7b1a01"],
    "budget": 250000,
    "weeklyBudget": 25000,
    "flightWeeks": 10,
    "testGeos": ["US-CA", "US-OR", "US-WA"],
    "controlGeos": ["US-AZ", "US-NV", "US-NM"]
  }'
The response surfaces, per hypothesis, whether the design has enough cells, whether geos are well-matched, and which gaps would weaken inference.
Run testability before launching media. It’s far cheaper to widen your geo list or extend the flight than to discover post-flight that the test was underpowered.

Step 4: Send Measurement Data

There are two complementary ways to feed the engine:
  • Conversion events — fine-grained, per-user actions (purchases, leads, sign-ups). Use the Conversion API — same identity rules apply.
  • Measurement records — pre-aggregated outcomes for a time window and geo (e.g. “incremental revenue, US-CA, week of 2026-03-01 = $8,450”). Use the sync endpoint below.

Sync aggregated measurement data

curl -X POST "$BASE/advertisers/12345/measurement-data/sync" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "measurements": [
      {
        "start_time": "2026-03-01T00:00:00-05:00",
        "end_time":   "2026-03-07T23:59:59-05:00",
        "metric_id":  "incremental_revenue",
        "metric_value": 8450.75,
        "unit": "currency",
        "currency": "USD",
        "campaign_id": "camp_456",
        "media_buy_id": "mb_789",
        "source": "advertiser",
        "source_platform": "billy_grace",
        "source_metric_name": "Incremental Revenue",
        "external_row_id": "bg_row_001"
      },
      {
        "start_time": "2026-03-01T00:00:00-05:00",
        "end_time":   "2026-03-07T23:59:59-05:00",
        "metric_id":  "purchase_count",
        "metric_value": 142,
        "unit": "count",
        "campaign_id": "camp_456",
        "external_row_id": "bg_row_002"
      }
    ]
  }'
| Field | Type | Required | Notes |
|---|---|---|---|
| start_time / end_time | ISO 8601 | yes | Must include offset; start_time < end_time |
| metric_id | enum | yes | revenue, incremental_revenue, conversions, incremental_conversions, page_view_count, add_to_cart_count, purchase_count, ltv_1d, ltv_7d, ltv_30d |
| metric_value | number | yes | The measured value |
| unit | enum | yes | currency, count, ratio, percentage |
| currency | string | conditional | ISO 4217 — required when unit is currency |
| campaign_id / media_buy_id / package_id / creative_id | string | one required | Attaches the measurement to an entity |
| source | enum | no | advertiser, mmp, or measurement_partner |
| external_row_id | string | no | Idempotency key for re-syncs |
Up to 1,000 measurements per call. Each result reports action: created | updated | unchanged | failed.
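Exports larger than the per-call limit need to be chunked before sending. A small helper for that, relying on the documented external_row_id idempotency to make re-sends safe:

```python
def chunk_measurements(measurements, batch_size=1000):
    """Split a large export into sync-sized batches (the endpoint accepts
    up to 1,000 measurements per call). Because rows carry an
    external_row_id, re-sending a batch is safe: unchanged rows come
    back with action "unchanged"."""
    for i in range(0, len(measurements), batch_size):
        yield measurements[i:i + batch_size]

rows = [{"external_row_id": f"bg_row_{n:04d}"} for n in range(2500)]
batches = list(chunk_measurements(rows))  # batches of 1000, 1000, 500
```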

Upload raw measurement records (advanced)

For research-style flows that already produce per-geo outcomes, the learning engine accepts batched records directly:
curl -X POST "$BASE/advertisers/12345/measurement-records" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {
        "outcomeType": "incremental_revenue",
        "geo": "US-CA",
        "timeWindowStart": "2026-03-01",
        "timeWindowEnd":   "2026-03-07",
        "value": 8450.75,
        "baselineValue": 7100.00,
        "confidenceInterval": 0.92,
        "source": "measured",
        "lagDays": 7
      }
    ]
  }'
Up to 5,000 records per call.

Upload context records

Context records describe market conditions that the learning engine should partial out — promos, weather, competitor activity, seasonality:
curl -X POST "$BASE/advertisers/12345/context-records" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "records": [
      {
        "geo": "US-CA",
        "timeWindowStart": "2026-03-01",
        "timeWindowEnd":   "2026-03-07",
        "promoActive": true,
        "promoType": "site_wide_15_off",
        "seasonalityIndex": 1.12,
        "flightStatus": "active"
      }
    ]
  }'
Never include raw user identifiers (emails, phone numbers, names) in measurement or context records. These endpoints accept aggregated outcomes only — per-user events go through the Conversion API.

Step 5: Inspect the Event Summary

Once events are flowing, the event-summary endpoint returns hourly counts per event type so you can confirm ingestion before depending on downstream attribution:
curl "$BASE/advertisers/12345/events/summary?\
eventType=conversion&\
startHour=2026-03-27T14:00:00Z&\
endHour=2026-03-27T20:00:00Z" \
  -H "Authorization: Bearer scope3_<your_api_key>"
Response
{
  "periodStart": "2026-03-27T14:00:00.000Z",
  "periodEnd":   "2026-03-27T20:00:00.000Z",
  "entries": [
    { "eventHour": "2026-03-27T14:00:00.000Z", "eventType": "conversion", "eventCount": 1500 },
    { "eventHour": "2026-03-27T15:00:00.000Z", "eventType": "conversion", "eventCount": 1612 },
    { "eventHour": "2026-03-27T16:00:00.000Z", "eventType": "conversion", "eventCount": 1483 }
  ],
  "totalEventCount": 4595
}
| Query param | Notes |
|---|---|
| eventType | One of conversion, click, impression, measurement, mmp. Omit for all types. |
| startHour / endHour | Hour-aligned ISO 8601 timestamps. Defaults to the last completed UTC hour. |
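A quick consistency check on a summary response catches truncated ingestion windows early. Field names are taken from the example response above:

```python
def check_summary(summary):
    """Sanity-check an event-summary response: hourly entries should
    sum to totalEventCount. A mismatch or missing hours in the entries
    list hints at an ingestion problem worth investigating."""
    total = sum(e["eventCount"] for e in summary["entries"])
    assert total == summary["totalEventCount"], "hourly counts do not sum to total"
    return total

total = check_summary({
    "entries": [
        {"eventHour": "2026-03-27T14:00:00.000Z", "eventCount": 1500},
        {"eventHour": "2026-03-27T15:00:00.000Z", "eventCount": 1612},
        {"eventHour": "2026-03-27T16:00:00.000Z", "eventCount": 1483},
    ],
    "totalEventCount": 4595,
})
```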
You can also check measurement freshness — gaps in expected geo/time coverage:
curl "$BASE/advertisers/12345/measurement-freshness?\
flightStart=2026-03-01&\
flightEnd=2026-04-30&\
geos=US-CA,US-OR,US-WA" \
  -H "Authorization: Bearer scope3_<your_api_key>"

Step 6: Set Up Test Cohorts and Test Plans

Incrementality tests work by comparing well-defined groups. The v2 model has three layers:
  • Hypotheses — what you’re testing: a falsifiable claim about a treatment vs. a comparison (“premium CTV drives more incremental revenue per impression than general CTV”). Hypotheses are persistent — each test plan accumulates evidence against the same hypothesis over time.
  • Test cohorts — who is in each arm (treatment, control, observation). A cohort is a flexible audience definition (geo, segment, zip code, custom).
  • Test plans — how the comparison runs against a hypothesis: which conditions, which dimensions to match on, how cells are sized.

Create a hypothesis

Every test plan is anchored to a hypothesis. Create one before linking test plans, cohorts, or allocation entries to it.
curl -X POST "$BASE/advertisers/12345/hypotheses" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "category": "channel",
    "statement": "Premium CTV inventory drives more incremental purchases than general CTV at the same CPM.",
    "treatment": "Premium CTV (top-tier publisher list)",
    "comparison": "General CTV (open marketplace)",
    "outcomeMetric": "incremental_revenue",
    "icon": "📺",
    "priorConfidence": 0.4,
    "priorMagnitude": 0.15,
    "incrementalCostCpm": 8.0,
    "measurementSource": "billy_grace",
    "measurementLagWeeks": 1,
    "minimumTestCells": 6,
    "proxyMetrics": [
      { "outcomeType": "add_to_cart_count", "weight": 0.3 },
      { "outcomeType": "purchase_count", "weight": 0.7 }
    ]
  }'
Response
{
  "id": "1f5c0c4e-2c9f-4f0a-9e31-2d4f3c7b1a01",
  "advertiserId": "12345",
  "category": "channel",
  "statement": "Premium CTV inventory drives more incremental purchases than general CTV at the same CPM.",
  "treatment": "Premium CTV (top-tier publisher list)",
  "comparison": "General CTV (open marketplace)",
  "outcomeMetric": "incremental_revenue",
  "icon": "📺",
  "priorConfidence": 0.4,
  "currentConfidence": 0.4,
  "priorMagnitude": 0.15,
  "currentMagnitude": 0.15,
  "incrementalCostCpm": 8.0,
  "measurementSource": "billy_grace",
  "measurementLagWeeks": 1,
  "minimumTestCells": 6,
  "proxyMetrics": [
    { "outcomeType": "add_to_cart_count", "weight": 0.3 },
    { "outcomeType": "purchase_count", "weight": 0.7 }
  ],
  "groundTruth": null,
  "status": "no_buys",
  "confidenceHistory": [],
  "createdBy": "user_98765",
  "sourceName": null,
  "createdAt": "2026-04-26T16:00:00.000Z",
  "updatedAt": "2026-04-26T16:00:00.000Z"
}
| Field | Type | Required | Notes |
|---|---|---|---|
| category | enum | yes | audience, creative, channel, context, timing, tactic |
| statement | string | yes | The falsifiable claim being tested |
| treatment | string | yes | Description of the treatment arm |
| comparison | string | yes | Description of the comparison / control arm |
| outcomeMetric | string | yes | Primary metric (e.g. incremental_revenue, purchase_count) |
| priorConfidence | number (0–1) | yes | Initial confidence before any data |
| priorMagnitude | number | yes | Expected effect size before any data |
| icon | string | no | Emoji for UI display, defaults to 💡 |
| incrementalCostCpm | number | no | Cost premium of treatment vs. comparison |
| measurementSource | string | no | Source key tying ground-truth records to this hypothesis |
| measurementLagWeeks | integer | no | Expected reporting lag, defaults to 1 |
| minimumTestCells | integer | no | Minimum cells required for a meaningful test, defaults to 6 |
| proxyMetrics | array | no | { outcomeType, weight } proxies that update beliefs alongside the primary metric |
| groundTruth | object | no | Free-form expected outcome for sanity checks |
The returned id is the hypothesisId you’ll plug into the test-plan creation step below — and into testability, learning-records, and belief-state queries.
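How the engine actually weights proxies against the primary metric is internal, but the declared weights imply a simple blended signal. A sketch mirroring the proxyMetrics shape from the request above:

```python
def blended_proxy_outcome(proxy_metrics, observed):
    """Combine proxy outcomes into one weighted signal, mirroring the
    documented proxyMetrics shape [{outcomeType, weight}]. Illustrative
    only; the engine's real proxy handling is not specified here."""
    return sum(p["weight"] * observed[p["outcomeType"]] for p in proxy_metrics)

proxies = [
    {"outcomeType": "add_to_cart_count", "weight": 0.3},
    {"outcomeType": "purchase_count", "weight": 0.7},
]
score = blended_proxy_outcome(
    proxies, {"add_to_cart_count": 200, "purchase_count": 40}
)  # 0.3*200 + 0.7*40 = 88.0
```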

Hypothesis status lifecycle

Every hypothesis carries a status that the engine advances as evidence accrues. New hypotheses always start at no_buys; the rest are reached automatically as media buys link, fire, and produce measurable outcomes.
| Status | Meaning | Triggered when |
|---|---|---|
| no_buys | Hypothesis exists but no media is running against it | Hypothesis is created and no allocation entries are linked yet |
| in_market | At least one linked buy is delivering | A linked allocation entry / media buy goes active |
| partially_measured | Some data has arrived, but coverage is thin | Measurement records start landing but cells / geos are below minimumTestCells or coverage gaps remain |
| well_measured | Enough data to update beliefs with confidence | Coverage hits minimumTestCells across the matched dimensions and the posterior tightens |
| proven | Posterior supports the treatment hypothesis | Belief converges with sufficient confidence in the hypothesized direction |
| disproven | Posterior rejects the treatment hypothesis | Belief converges with sufficient confidence against the hypothesized direction |
You can filter hypotheses by status when listing — GET /advertisers/:advertiserId/hypotheses?status=well_measured is a useful default for “what’s worth a stakeholder review this week.”

List hypotheses

curl "$BASE/advertisers/12345/hypotheses?take=50&skip=0" \
  -H "Authorization: Bearer scope3_<your_api_key>"
Optional filters: category, status, flightId. Pagination via take (max 100) and skip.

Create test cohorts

curl -X POST "$BASE/advertisers/12345/test-cohorts" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "West Coast Treatment",
    "description": "Users in CA, OR, WA receiving full ad exposure",
    "cohortType": "geographic",
    "role": "TREATMENT",
    "definition": {
      "type": "geo_region",
      "regions": ["US-CA", "US-OR", "US-WA"]
    },
    "estimatedSize": 50000
  }'
curl -X POST "$BASE/advertisers/12345/test-cohorts" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Mountain West Control",
    "cohortType": "geographic",
    "role": "CONTROL",
    "definition": {
      "type": "geo_region",
      "regions": ["US-AZ", "US-NV", "US-NM"]
    },
    "estimatedSize": 48000
  }'
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | yes | Cohort label, max 255 chars |
| cohortType | string | yes | Free-form classification (e.g. geographic, demographic, behavioral) |
| role | enum | no | TREATMENT, CONTROL, or OBSERVATION (default TREATMENT) |
| definition.type | string | yes | Discriminator — zip_code, user_segment, geo_region, custom |
| definition.* | any | — | Additional fields per type |
| estimatedSize | integer | no | Size estimate for power analysis |
Available cohort operations:
| Method | Path | Purpose |
|---|---|---|
| GET | /advertisers/:advertiserId/test-cohorts | List (filter by role, isActive) |
| POST | /advertisers/:advertiserId/test-cohorts | Create |
| GET | /advertisers/:advertiserId/test-cohorts/:id | Get one |
| PUT | /advertisers/:advertiserId/test-cohorts/:id | Update |
| DELETE | /advertisers/:advertiserId/test-cohorts/:id | Archive |

Create a test plan against a hypothesis

A hypothesis is the question being tested (e.g. “Does running on premium CTV inventory drive incremental purchases versus general CTV?”). Test plans are nested under the hypothesis they test:
curl -X POST "$BASE/advertisers/12345/hypotheses/1f5c0c4e-2c9f-4f0a-9e31-2d4f3c7b1a01/test-plans" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "testCondition":    { "inventoryTier": "premium_ctv" },
    "controlCondition": { "inventoryTier": "general_ctv" },
    "matchDimensions":  ["geo", "daypart", "audienceSegment"]
  }'
| Field | Type | Description |
|---|---|---|
| testCondition | object | What defines the treatment arm |
| controlCondition | object | What defines the control arm |
| matchDimensions | string[] | Dimensions to balance across cells (at least one) |
Once a plan is created (status designed), patch it to active and link the allocation entries (media buys) that fall under each role:
# Link allocations
curl -X POST "$BASE/advertisers/12345/test-plans/<test_plan_id>/link-buys" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "allocationEntryIds": [
      "a1111111-1111-1111-1111-111111111111",
      "a2222222-2222-2222-2222-222222222222"
    ],
    "role": "test"
  }'

# Activate
curl -X PATCH "$BASE/advertisers/12345/test-plans/<test_plan_id>" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{ "status": "active" }'
status transitions: designed → active → complete. Updating to complete (or letting the engine auto-complete on flight end) closes the test for inference.
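Since the lifecycle is strictly forward-only, it's worth guarding PATCH calls client-side. A minimal transition check based on the documented lifecycle:

```python
# Documented test-plan lifecycle: designed -> active -> complete.
VALID_TRANSITIONS = {
    "designed": {"active"},
    "active": {"complete"},
    "complete": set(),  # terminal: the test is closed for inference
}

def can_transition(current, target):
    """Return True if PATCHing a test plan from `current` to `target`
    follows the documented forward-only lifecycle."""
    return target in VALID_TRANSITIONS.get(current, set())

ok = can_transition("designed", "active")    # allowed
bad = can_transition("complete", "active")   # reopening is not allowed
```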

Step 7: Trigger Learning and Read Belief State

The learning cycle ingests new measurement records, fits the Bayesian update, and refreshes the belief state. It typically runs on a schedule, but you can trigger it on demand after a large data load:
curl -X POST "$BASE/advertisers/12345/learning-cycle/run" \
  -H "Authorization: Bearer scope3_<your_api_key>"
Read the current belief state for an advertiser:
curl "$BASE/advertisers/12345/belief-state" \
  -H "Authorization: Bearer scope3_<your_api_key>"
The response summarizes each tracked hypothesis: posterior magnitude, confidence interval, status (no_buys, in_market, partially_measured, well_measured, proven, disproven), and which sources contributed evidence. To inspect the records that fed the most recent updates:
curl "$BASE/advertisers/12345/learning-records?\
hypothesisId=1f5c0c4e-2c9f-4f0a-9e31-2d4f3c7b1a01&\
take=50&skip=0" \
  -H "Authorization: Bearer scope3_<your_api_key>"
You can also query raw measurement records by outcome / geo / date range:
curl "$BASE/advertisers/12345/measurement-records?\
outcomeType=incremental_revenue&\
geos=US-CA,US-OR&\
startDate=2026-03-01&endDate=2026-03-31" \
  -H "Authorization: Bearer scope3_<your_api_key>"
Belief state is read-only. To change beliefs, send better data — more records, fewer gaps, properly designed test plans — and let the next learning cycle re-fit.

Step 8: Review and Acknowledge Test Results

When a test plan reaches complete, the engine produces an incrementality result attached to the test plan and hypothesis. Read it via the test plan endpoint:
curl "$BASE/advertisers/12345/test-plans/<test_plan_id>" \
  -H "Authorization: Bearer scope3_<your_api_key>"
The response includes the resolved test cell counts, control cell counts, coverage gaps (geos / dimensions where matching was weak), and the posterior delta on the hypothesis. List all plans for a hypothesis with:
curl "$BASE/advertisers/12345/hypotheses/<hypothesis_id>/test-plans?take=50" \
  -H "Authorization: Bearer scope3_<your_api_key>"

Mark results as reviewed

Acknowledge a result so it stops surfacing in unread-results queues and so the audit trail records who signed off. Use the test plan update endpoint to transition status (e.g. recording observed test/control cell counts and final coverage gaps):
curl -X PATCH "$BASE/advertisers/12345/test-plans/<test_plan_id>" \
  -H "Authorization: Bearer scope3_<your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "status": "complete",
    "testCellsCount": 12,
    "controlCellsCount": 11,
    "coverageGaps": []
  }'
If the result motivates a new line of inquiry, capture it as human feedback or as a new hypothesis on the advertiser — that becomes the prior for the next round of testing.

Best Practices

  • Events sent to an unknown event_source_id are rejected. Always run sync_event_sources first; for new advertisers, do this in your onboarding automation.
  • If your MMP / CRM / data warehouse can produce weekly per-geo outcomes, use POST /measurement-data/sync (or /measurement-records). Aggregated records arrive faster, are cheaper to ingest, and are immune to user-level identity-resolution drift.
  • For per-user attribution and click-id matching, use the Conversion API. It’s the only path that can attribute back to a specific impression or click.
  • Hypothesis priorConfidence and priorMagnitude are not “the answer” — they reflect what you’d believe before the test. Calibrated priors make early-flight estimates much more useful than uncalibrated ones.
  • matchDimensions should include any factor that meaningfully drives the outcome (geo, daypart, segment). Forgetting a strong driver introduces confounding even with a clean A/B split.
  • measurement-data/sync deduplicates on external_row_id. Re-running a daily export is safe — only changed rows update.
  • hashed_email and hashed_phone must be SHA-256 hex (lowercase, 64 chars). Normalize email to lowercase / trimmed and phone to E.164 before hashing. The engine will not match malformed hashes.
  • POST /testability answers “is this design likely to learn anything?” before media starts. GET /measurement-freshness answers “is data actually arriving?” once it has.
  • Posterior deltas are a guide, not a verdict. When stakeholders disagree with a result, capture the disagreement as feedback so the next cycle can spawn refined hypotheses rather than re-litigating the old one.
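The hashing rule above (normalize, then SHA-256, lowercase hex) can be implemented in a few lines. This sketch covers email normalization; phone numbers should be converted to E.164 before being passed in:

```python
import hashlib

def hash_identifier(value):
    """Normalize then SHA-256 an identifier per the documented rules:
    trim whitespace and lowercase (sufficient for emails; phones must
    already be E.164). Output is 64 lowercase hex characters; the
    engine will not match malformed hashes."""
    normalized = value.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

h = hash_identifier("  User@Example.COM ")
# Identical normalized inputs always produce identical digests,
# which is what makes server-side matching possible.
```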

Endpoint Reference

All paths are relative to https://api.agentic.scope3.com/api/buyer.

Event sources

| Method | Path |
|---|---|
| POST | /advertisers/:advertiserId/event-sources/sync |
| GET | /advertisers/:advertiserId/event-sources |

Measurement config

| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/measurement-config |
| PUT | /advertisers/:advertiserId/measurement-config |

Measurement data

| Method | Path |
|---|---|
| POST | /advertisers/:advertiserId/measurement-data/sync |
| POST | /advertisers/:advertiserId/measurement-records |
| GET | /advertisers/:advertiserId/measurement-records |
| POST | /advertisers/:advertiserId/context-records |

Events and freshness

| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/events/summary |
| GET | /advertisers/:advertiserId/measurement-freshness |

Learning engine

| Method | Path |
|---|---|
| POST | /advertisers/:advertiserId/learning-cycle/run |
| GET | /advertisers/:advertiserId/belief-state |
| GET | /advertisers/:advertiserId/learning-records |
| POST | /advertisers/:advertiserId/testability |

Measurement sources

| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/measurement-sources |
| POST | /advertisers/:advertiserId/measurement-sources |
| GET | /advertisers/:advertiserId/measurement-sources/:id |
| PATCH | /advertisers/:advertiserId/measurement-sources/:id |

Hypotheses

| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/hypotheses |
| POST | /advertisers/:advertiserId/hypotheses |
| GET | /advertisers/:advertiserId/hypotheses/:id |
| PATCH | /advertisers/:advertiserId/hypotheses/:id |
| DELETE | /advertisers/:advertiserId/hypotheses/:id |

Test cohorts

| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/test-cohorts |
| POST | /advertisers/:advertiserId/test-cohorts |
| GET | /advertisers/:advertiserId/test-cohorts/:id |
| PUT | /advertisers/:advertiserId/test-cohorts/:id |
| DELETE | /advertisers/:advertiserId/test-cohorts/:id |

Test plans

| Method | Path |
|---|---|
| GET | /advertisers/:advertiserId/hypotheses/:hypothesisId/test-plans |
| POST | /advertisers/:advertiserId/hypotheses/:hypothesisId/test-plans |
| GET | /advertisers/:advertiserId/test-plans/:id |
| PATCH | /advertisers/:advertiserId/test-plans/:id |
| POST | /advertisers/:advertiserId/test-plans/:id/link-buys |

Conversion API

Send per-user purchase, lead, and engagement events with click and identity matching.

Campaigns

Set up the campaigns and media buys that measurement data attaches to.

Authentication

Generate and manage API keys for measurement requests.

Reporting

Pull aggregated performance once measurement data is flowing.