Scope3 Campaign API - Development Guidelines
Project Overview
This is the Scope3 Campaign API MCP server with comprehensive brand agent management and dual-mode reporting capabilities. The API provides both conversational AI interactions and enterprise data integration.
Architecture
Core Components
- Brand Agents: Central advertiser accounts that own all resources
- Campaigns: Marketing initiatives with budgets and targeting
- Creatives: Reusable ad content assets
- Reporting: Dual-mode (conversational + structured data export)
Tech Stack
- Runtime: Node.js with TypeScript
- Protocol: MCP (Model Context Protocol)
- Documentation: Mintlify with OpenAPI auto-generation
- Backend: Hybrid GraphQL + BigQuery architecture
Backend Architecture
The server uses a GraphQL-primary with BigQuery enhancement approach:
- GraphQL (`https://api.scope3.com/api/graphql`): Primary data source for all core entities
  - Brand agents (`public_agent` table) - base brand agent data
  - Brand stories, brand standards, PMPs, measurement sources
  - Reliable, always-available API with authentication
- BigQuery (`bok-playground.agenticapi`): Customer-scoped extensions and advanced features
  - `brand_agent_extensions` - extends `public_agent` with customer-scoped fields
  - `campaigns` - campaign management with budget tracking
  - `creatives` - creative assets with format/content metadata
  - Assignment mappings and relationships
Routing rules (sketched below):
- GraphQL First: Query GraphQL for core entity data (reliable, authenticated)
- BigQuery Enhancement: Add customer-scoped fields and advanced features when available
- No Fallbacks: Each backend serves its specific architectural purpose - don’t treat as backups
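A minimal sketch of this flow. The client objects, method names (`graphql.getAgent`, `bigqueryExt.getBrandAgentExtensions`), and extension columns are illustrative, not the real Scope3ApiClient API:

```typescript
interface BrandAgent {
  id: string;
  name: string;
}

interface BrandAgentExtensions {
  // Customer-scoped fields from brand_agent_extensions (columns assumed).
  externalId?: string;
  advertiserDomains?: string[];
}

declare const graphql: {
  getAgent(id: string): Promise<BrandAgent>;
};
declare const bigqueryExt: {
  getBrandAgentExtensions(
    id: string,
    customerId: number,
  ): Promise<BrandAgentExtensions | null>;
};

async function getBrandAgent(
  id: string,
  customerId: number,
): Promise<BrandAgent & BrandAgentExtensions> {
  // GraphQL first: authoritative, always-available core entity data.
  const agent = await graphql.getAgent(id);

  // BigQuery enhancement: merge customer-scoped fields when present.
  // A missing extension row is a normal case -- this is enhancement,
  // not a fallback, and core GraphQL data is never replaced.
  const ext = await bigqueryExt.getBrandAgentExtensions(id, customerId);
  return { ...agent, ...(ext ?? {}) };
}
```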
BigQuery Tables
- `brand_agent_extensions` - extends GraphQL `public_agent` with customer-scoped fields
- `campaigns` - campaign management with budget tracking
- `creatives` - creative assets with format/content metadata
- `campaign_creatives` - campaign-creative assignment mapping
- `campaign_brand_stories` - campaign-brand story assignment mapping
Caching Layer
The server implements a comprehensive in-memory caching system to reduce BigQuery costs and improve response times.
Architecture:
- Transparent Caching: Drop-in replacement for BigQuery with the same interface
- TTL-Based Invalidation: Configurable time-to-live for different data types
- Race Condition Prevention: Promise deduplication prevents duplicate queries (see the sketch after this list)
- Customer Scoping: Cache keys include customer identification for isolation
- Background Preloading: Common queries preloaded on customer authentication
Performance features (implemented in `src/server.ts`):
- Cache Hits: Sub-millisecond responses instead of full BigQuery round trips
- Memory Management: Automatic cleanup on TTL expiration
- Hit Rate Tracking: Monitoring for cache effectiveness
- Pattern Invalidation: Clear related entries on updates
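A minimal sketch of the TTL + promise-deduplication approach (class and method names are illustrative, not the actual CachedBigQuery interface):

```typescript
type CacheEntry = { value: unknown; expiresAt: number };

class QueryCache {
  private entries = new Map<string, CacheEntry>();
  private inFlight = new Map<string, Promise<unknown>>();

  async get<T>(key: string, ttlMs: number, load: () => Promise<T>): Promise<T> {
    // Cache hit: return immediately if the entry has not expired.
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value as T;

    // Promise deduplication: concurrent callers share one in-flight query
    // instead of issuing duplicate BigQuery requests (race condition prevention).
    const pending = this.inFlight.get(key);
    if (pending) return pending as Promise<T>;

    const promise = load()
      .then((value) => {
        this.entries.set(key, { value, expiresAt: Date.now() + ttlMs });
        return value;
      })
      .finally(() => this.inFlight.delete(key));

    this.inFlight.set(key, promise);
    return promise;
  }

  // Pattern invalidation: clear related entries on updates.
  invalidate(pattern: RegExp): void {
    for (const key of this.entries.keys()) {
      if (pattern.test(key)) this.entries.delete(key);
    }
  }
}
```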
Key Services
- `CachedBigQuery` (`src/services/cache/cached-bigquery.ts`) - in-memory caching wrapper for BigQuery
- `PreloadService` (`src/services/cache/preload-service.ts`) - background preloading of common queries
- `CampaignBigQueryService` (`src/services/campaign-bigquery-service.ts`) - CRUD operations for BigQuery entities
- `BrandAgentService` (`src/services/brand-agent-service.ts`) - BigQuery extensions for brand agents
- `Scope3ApiClient` (`src/client/scope3-client.ts`) - GraphQL-primary with BigQuery enhancement
Development Standards
Code Quality
- Full TypeScript coverage - no `any` types
- Use `Record<string, unknown>` for flexible object types
- Zod schemas for parameter validation
- Consistent error handling patterns
- Human-readable response formatting
Tool Patterns
All MCP tools should follow these patterns (see the sketch after this list):
- Clear `verb_noun` naming (e.g., `create_brand_agent`)
- Comprehensive parameter validation
- Auth checking with environment fallback
- Rich text responses with formatting and insights
- Consistent error codes and messages
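A sketch of the pattern. The `addTool()` shape, the `SCOPE3_API_KEY` variable, and the client method are assumptions standing in for the project's real signatures:

```typescript
import { z } from "zod";

// Hypothetical stand-ins for the real server and client instances.
declare const server: {
  addTool(tool: {
    name: string;
    description: string;
    parameters: z.ZodTypeAny;
    execute: (args: { name: string; description?: string }) => Promise<string>;
  }): void;
};
declare const client: {
  createBrandAgent(
    apiKey: string,
    input: { name: string; description?: string },
  ): Promise<{ id: string; name: string }>;
};

server.addTool({
  name: "create_brand_agent", // verb_noun naming
  description: "Create a new brand agent (advertiser account)",
  // Zod schema for parameter validation.
  parameters: z.object({
    name: z.string().min(1).describe("Brand agent name"),
    description: z.string().optional(),
  }),
  execute: async (args) => {
    // Auth checking with environment fallback.
    const apiKey = process.env.SCOPE3_API_KEY;
    if (!apiKey) {
      throw new Error("No API key: set SCOPE3_API_KEY or authenticate");
    }
    const agent = await client.createBrandAgent(apiKey, args);
    // Human-readable response with formatting and an actionable insight.
    return `Created brand agent "${agent.name}" (ID: ${agent.id}). Campaigns and creatives can now be attached to it.`;
  },
});
```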
API Integration
- Use existing GraphQL client patterns
- Separate brand agent queries in dedicated files
- Follow resource ownership model (brand agents own campaigns/creatives)
- Support both create/update patterns for assignments
ADCP Integration
CRITICAL: Always use the `@adcp/client` library for ADCP operations (see the sketch after this list).
- Use `ADCPProductDiscoveryService`: For product discovery, use the service layer that wraps `ADCPMultiAgentClient`
- Never manually call MCP tools: The ADCP client library handles MCP protocol details internally
- No custom MCP clients: Don’t create custom MCP client services - use the established ADCP patterns
- Service initialization patterns:
  - Database-scoped: `ADCPProductDiscoveryService.fromDatabase(customerId)`
  - Environment fallback: `ADCPProductDiscoveryService.fromEnv(config)`
- Parameter handling: Pass parameters directly to the ADCP service - don’t wrap in MCP-specific formats
Anti-patterns:
- ❌ Manual `client.callTool()` with ADCP requests
- ❌ Wrapping parameters in `{ req: request }` objects
- ❌ Custom MCP client services for ADCP operations
- ❌ Manual progress tracking and `Promise.allSettled` for agent calls
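A usage sketch. `fromDatabase()` is documented above, while the import path and the `discoverProducts()` method name are assumptions:

```typescript
// Correct pattern sketch (import path and method name are illustrative).
import { ADCPProductDiscoveryService } from "../services/adcp-product-discovery-service";

declare const customerId: number;

// Database-scoped initialization: agents configured for this customer.
const discovery = ADCPProductDiscoveryService.fromDatabase(customerId);

// Parameters go straight to the service: no { req: ... } wrapper, no manual
// client.callTool(), no hand-rolled Promise.allSettled across agents --
// @adcp/client handles the MCP protocol details internally.
const products = await discovery.discoverProducts({
  brief: "CTV inventory for a sustainable footwear campaign",
});
```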
BigQuery Integration
- Hybrid Routing: Always try BigQuery first, fall back to GraphQL on failure
- Type Safety: Use proper TypeScript interfaces for BigQuery row structures (see the sketch after this list)
- Error Handling: Log BigQuery failures and gracefully fall back
- Schema Changes: Update both BigQuery tables and TypeScript interfaces
- Setup Scripts: Use `scripts/create-bigquery-tables.sql` for table creation
- Testing: Use `scripts/test-bigquery-integration.ts` for integration validation
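A sketch of typed row handling with the official `@google-cloud/bigquery` client; the column set is illustrative and should mirror `scripts/create-bigquery-tables.sql`:

```typescript
import { BigQuery } from "@google-cloud/bigquery";

// Typed row interface for the campaigns table (columns assumed).
interface CampaignRow {
  id: string;
  brand_agent_id: string;
  name: string;
  budget_total: number | null;
}

const bigquery = new BigQuery();

async function listCampaigns(brandAgentId: string): Promise<CampaignRow[]> {
  // Parameterized query against the dataset named above.
  const [rows] = await bigquery.query({
    query: `
      SELECT id, brand_agent_id, name, budget_total
      FROM \`bok-playground.agenticapi.campaigns\`
      WHERE brand_agent_id = @brandAgentId`,
    params: { brandAgentId },
  });
  return rows as CampaignRow[];
}
```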
Caching Integration
- Dependency Injection: Services accept optional BigQuery instance for transparent caching
- Fire-and-Forget Preloading: Triggered on customer authentication, doesn’t block responses
- Cache Key Strategy: Base64-encoded JSON with customer and query parameters (see the sketch after this list)
- Invalidation Patterns: Pattern-based cache clearing for updates (e.g., `brand_agent:123:*`)
- Contract Compliance: CachedBigQuery implements the CacheService interface for testing
- Environment Configuration: TTL values configurable via environment variables
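A sketch reconciling the two conventions above: a readable `entity:customer` prefix that pattern invalidation can match, plus Base64-encoded JSON parameters. The exact layout is an assumption:

```typescript
// Key layout is assumed; only the encoding and prefix conventions come
// from the notes above.
function buildCacheKey(
  entity: string, // e.g. "brand_agent"
  customerId: number,
  params: Record<string, unknown>,
): string {
  const encoded = Buffer.from(JSON.stringify(params)).toString("base64");
  return `${entity}:${customerId}:${encoded}`;
}

// buildCacheKey("brand_agent", 123, { id: "42" })
// => "brand_agent:123:eyJpZCI6IjQyIn0="  (matches pattern brand_agent:123:*)
```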
Mintlify Documentation Standards
Documentation Audience and Accuracy
- Target audience: Developers and coding agents
- Tone: Matter-of-fact, technical, precise
- Accuracy requirement: All information must be factual and verifiable
- Push back on ideas with reasoning - this leads to better documentation
- ALWAYS ask for clarification rather than making assumptions
- NEVER lie, guess, or make up information
- NEVER reference non-existent packages, tools, or features
- NEVER include inaccurate tool lists or capabilities
Project Context
- Format: MDX files with YAML frontmatter
- Config: docs.json for navigation, theme, settings
- Components: Mintlify components (Note, Warning, Tip, CardGroup)
- File Location: All public documentation MUST be in the `mintlify/` directory
Content Strategy
- Document just enough for user success - not too much, not too little
- Prioritize accuracy and usability of information
- Make content evergreen when possible
- Search for existing information before adding new content
- Check existing patterns for consistency
- Start by making the smallest reasonable changes
docs.json Configuration
- Refer to the docs.json schema when building navigation
- Use tabs for major sections (“Guides”, “API Reference”)
- Organize content into logical groups within tabs
- Leverage OpenAPI auto-generation for API documentation - DON’T duplicate it
Writing Standards
- Second-person voice (“you”)
- Prerequisites at start of procedural content
- Test all code examples before publishing
- Match style and formatting of existing pages
- Include both basic and advanced use cases
- Language tags on all code blocks
- Alt text on all images
- Relative paths for internal links
MCP Endpoint Guidelines
CRITICAL: There are TWO different MCP endpoints with different purposes:
- Documentation MCP Server: `https://docs.agentic.scope3.com/mcp`
  - For interactive documentation and learning experiences
  - Use in tutorials, examples, and “try this out” scenarios
  - Provides demo tools and educational content
  - Safe for public examples and screenshots
- Production API MCP Server: `https://api.agentic.scope3.com/mcp`
  - For actual campaign management and production use
  - Use in setup instructions and configuration examples
  - Requires proper authentication and API keys
  - Used for real campaign operations
When to use which:
- Documentation endpoint: Tutorial examples, demo scenarios, learning content
- API endpoint: Production setup guides, real configuration instructions, actual usage
Documentation Don’ts
- Skip frontmatter on any MDX file
- Use absolute URLs for internal links
- Include untested code examples
- Make assumptions - always ask for clarification
- Duplicate API reference content (use OpenAPI auto-generation)
- Create excessive navigation depth
- Confuse the two MCP endpoints - always use the right one for the context
- Reference non-existent npm packages or dependencies
- Include inaccurate tool lists or feature claims
- Use marketing language or subjective claims without evidence
Git Workflow
Commit Standards
- NEVER use `--no-verify` when committing
- NEVER skip or disable pre-commit hooks
- Commit frequently throughout development
- Use descriptive commit messages explaining the “why”
- Include co-authoring for AI assistance
Branch Management
- NEVER push directly to the main branch - always create feature branches
- Create new branches for feature work: `git checkout -b feature/description`
- Ask how to handle uncommitted changes before starting
- Use rebase for clean history when merging
- Test locally before pushing
- Always create pull requests for code review before merging to main
- Production fixes require proper testing before deployment
Brand Agent Architecture
Resource Hierarchy
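The diagram below is a sketch assembled from the ownership model described under Backend Architecture and the patterns that follow:

```text
Brand Agent (advertiser account)
├── Campaigns (budgets, targeting, creative/brand story assignments)
├── Creatives (reusable across campaigns)
├── Brand Stories
├── Brand Standards
├── PMPs
└── Measurement Sources
```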
Key Patterns
- Advertiser-Centric: Brand agents own all resources
- Resource Sharing: Creatives/audiences reused across campaigns
- Create/Update Pattern: Assignments via campaign creation/updates
- Dual-Mode Reporting: Conversational summaries + structured exports
Testing & Validation
Testing Strategy
Backend-Independent Contract Testing (Current Approach)
We use a contract testing pattern that ensures tests remain valid across backend technology changes (e.g., BigQuery → PostgreSQL). This approach provides:
- Technology Independence: Tests focus on service behavior, not implementation
- Future-Proof: Backend migrations don’t require test rewrites
- Fast Feedback: In-memory test doubles enable rapid development cycles
- Contract Validation: Ensures all implementations adhere to the same behavioral contract
Contract Testing Architecture
1. Service Contracts (`src/contracts/`) - define the interfaces that any backend implementation must satisfy
2. Contract Test Suites (`src/__tests__/contracts/`) - generic test suites that validate any implementation against the contract
3. Test Doubles (`src/test-doubles/`) - in-memory implementations for fast, isolated testing (a combined sketch of 1 and 3 follows)
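A minimal sketch of a contract and its in-memory double, with assumed entity shapes:

```typescript
export interface Campaign {
  id: string;
  brandAgentId: string;
  name: string;
}

// The contract: any backend (BigQuery today, PostgreSQL tomorrow)
// must satisfy this interface.
export interface CampaignRepository {
  create(input: { brandAgentId: string; name: string }): Promise<Campaign>;
  getById(id: string): Promise<Campaign | null>;
}

// The double: fast, isolated, no external dependencies.
export class InMemoryCampaignRepository implements CampaignRepository {
  private campaigns = new Map<string, Campaign>();
  private nextId = 1;

  async create(input: { brandAgentId: string; name: string }): Promise<Campaign> {
    const campaign: Campaign = { id: String(this.nextId++), ...input };
    this.campaigns.set(campaign.id, campaign);
    return campaign;
  }

  async getById(id: string): Promise<Campaign | null> {
    return this.campaigns.get(id) ?? null;
  }
}
```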
Why This Testing Strategy
Problem Solved: Traditional testing approaches couple tests to specific backend technologies, making backend migrations expensive and risky.
Our Solution:
- Define Contracts: Explicit interfaces for all backend services
- Test Contracts: Generic test suites that validate behavior, not implementation (see the sketch after this list)
- Use Test Doubles: Fast, controlled implementations for development and CI
- Validate Real Services: Run the same contract tests against actual backend services
Benefits:
- Migration Safety: When switching BigQuery → PostgreSQL, the same contract tests validate the new implementation
- Development Speed: Test doubles provide instant feedback without external dependencies
- Behavioral Focus: Tests validate what the service does, not how it does it
- Regression Prevention: Contract tests catch breaking changes in service behavior
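A sketch of a generic suite (Vitest, per the project's test stack); it reuses the `CampaignRepository` contract from the sketch above, with an illustrative import path:

```typescript
import { describe, expect, it } from "vitest";
import type { CampaignRepository } from "../../contracts/campaign-repository";

// One generic suite validates ANY implementation -- the in-memory double in
// CI, the BigQuery service for verification, a PostgreSQL service later.
export function runCampaignRepositoryContract(
  makeRepo: () => CampaignRepository,
): void {
  describe("CampaignRepository contract", () => {
    it("returns created campaigns by id", async () => {
      const repo = makeRepo();
      const { id } = await repo.create({ brandAgentId: "ba_1", name: "Q4 Launch" });
      await expect(repo.getById(id)).resolves.toMatchObject({ name: "Q4 Launch" });
    });

    it("returns null for unknown ids", async () => {
      const repo = makeRepo();
      await expect(repo.getById("missing")).resolves.toBeNull();
    });
  });
}

// Usage: same contract, different implementations.
// runCampaignRepositoryContract(() => new InMemoryCampaignRepository());
```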
Test Levels
- Contract Tests (`src/__tests__/contracts/*.contract.test.ts`) - validate service interfaces and behavior
- Caching Tests (`src/__tests__/caching/*.test.ts`) - cache behavior, TTL handling, race conditions
- Tool-Level Tests (`*-tool-level.test.ts`) - test complete MCP tool execution
- Integration Tests (`test-*.js`) - end-to-end validation with real backends (for verification)
- Improved Tests (`src/__tests__/improved-testing/*.test.ts`) - using the new dependency injection architecture
Running Contract Tests
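The first command below is the one documented under “Before Committing”; the single-file form is standard Vitest invocation with an illustrative file name:

```bash
# Run all contract test suites
npm test -- contracts

# Run one suite by path (file name illustrative)
npm test -- src/__tests__/contracts/campaign-repository.contract.test.ts
```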
Before Committing
- Run linters and formatters
- Ensure all TypeScript compiles
- Run contract tests: `npm test -- contracts`
- Test any code examples in documentation
- Validate docs.json structure if modified
Test Commands
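A plausible command set, assuming conventional npm script names (verify against package.json); only `npm test -- contracts` is documented above:

```bash
npm test                  # full Vitest suite
npm test -- contracts     # contract tests
npm test -- caching       # caching tests (TTL, race conditions)
npm run lint              # linters/formatters (assumed script name)
npm run build             # TypeScript compilation check (assumed script name)
```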
Documentation Testing
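For the Mintlify docs under `mintlify/`, the standard Mintlify CLI workflow applies (using this CLI here is an assumption; the commands themselves are the published Mintlify ones):

```bash
npm i -g mintlify        # install the Mintlify CLI
cd mintlify
mintlify dev             # local preview of the docs site
mintlify broken-links    # check that internal links resolve
```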
Improved Testing Architecture
The Problem with Traditional Testing
The traditional `vi.mock()` approach had several issues that made tests unreliable:
- Over-Mocking: Module-level mocks with incomplete method coverage, causing “Cannot read properties of undefined” errors
- Mixed Strategies: Inconsistent mocking approaches (prototype vs mockImplementation) causing conflicts
- Global State: Tests interfering with each other through shared mocks and circuit breakers
- Implementation Coupling: Tests breaking when services add new methods or change structure
The Solution: Dependency Injection + Mock Factories
We’ve implemented a new testing architecture that solves these problems:
1. Mock Factories (`src/test-utilities/mock-factories.ts`)
   - Complete, consistent mocks that match real service interfaces
   - Scenario-based configuration (success, failure, timeout, etc.)
   - Zero “undefined property” errors
2. Test Helpers (`src/test-utilities/test-helpers.ts`)
   - Standardized setup and teardown utilities
   - Assertion helpers for common patterns
   - Test data factories for consistent test inputs
3. Dependency Injection
   - Tools accept dependencies as constructor parameters
   - Makes testing explicit and reliable
   - Enables easy mock substitution
Using the New Pattern
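The contrast below is a sketch: `createMockCreativeService` and `UploadCreativeTool` are hypothetical names standing in for the project's factories and injectable tools.
Old Way (Problematic):

```typescript
import { vi } from "vitest";

// Module-level mock with incomplete method coverage. Every test file shares
// this global state, and any service method the mock omits fails at runtime
// with "Cannot read properties of undefined".
vi.mock("../services/creative-service", () => ({
  CreativeService: vi.fn().mockImplementation(() => ({
    uploadAsset: vi.fn().mockResolvedValue({ id: "c1" }),
    // listAssets, deleteAsset, ... not mocked -> undefined at call sites
  })),
}));
```

New Way (Dependency Injection + Mock Factories):

```typescript
import { describe, expect, it } from "vitest";
// Hypothetical factory and tool names illustrating the documented pattern.
import { createMockCreativeService } from "../test-utilities/mock-factories";
import { UploadCreativeTool } from "../tools/creatives/upload-injectable";

describe("upload_creative tool", () => {
  it("surfaces backend failures", async () => {
    // Complete, scenario-configured mock: no undefined methods, no shared state.
    const creativeService = createMockCreativeService({ scenario: "failure" });

    // Dependency injection: the tool takes its services via the constructor.
    const tool = new UploadCreativeTool(creativeService);

    await expect(tool.execute({ assetName: "banner.png" })).rejects.toThrow();
  });
});
```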
Benefits Achieved
- 98.3% → 100% test pass rate for new architecture
- 80% reduction in test setup complexity
- 100% elimination of undefined property errors
- Zero test interference - each test is fully isolated
- Easy scenario testing - configure success/failure/timeout with one parameter
Migration Strategy
Phase 1: New Tests (✅ Complete)
- Use new pattern for all new tests
- Establish as standard approach
- Build confidence with working examples
Phase 2: Migrate Problematic Tests
- Convert frequently failing tests first
- Focus on critical path functionality
- Maintain backwards compatibility during transition
Phase 3: Complete Migration
- Convert remaining test suites
- Retire old testing patterns
- Update documentation and training
Examples
See `src/__tests__/improved-testing/upload-injectable.test.ts` for a complete example showing:
- Simple success scenarios
- Easy failure configuration
- Custom behavior testing
- Partial failure handling
- Metrics and monitoring validation
When to Use Which Approach
Use New Pattern For:
- All new test files
- Tests with complex service dependencies
- Tests that need multiple failure scenarios
- Flaky or hard-to-maintain existing tests
Use Old Pattern For:
- Simple tests that are already stable
- Tests with minimal external dependencies
- Tests scheduled for deprecation
Common Pitfalls to Avoid
- Don’t duplicate OpenAPI docs - Use auto-generation instead
- Don’t create excessive navigation depth - Keep it simple
- Don’t skip frontmatter - Every MDX file needs title/description
- Don’t use absolute URLs - Use relative paths for internal links
- Don’t bypass git hooks - They exist for good reasons
- Don’t make assumptions - Ask for clarification when uncertain
- Don’t assume GraphQL field names match code concepts - Always verify actual schema field names
Refactoring Best Practices
Code-Documentation Alignment
When refactoring code to match updated documentation terminology:
1. Plan Systematic Changes: Break large refactoring into logical phases
   - Directory/file renames first
   - Update file contents systematically
   - Update tool registrations and exports
   - Fix type definitions across all files
   - Test compilation at each major step
2. Update All References: Terminology changes require updates across:
   - File names and paths: directory names, file names
   - Function/tool names: export names, tool registrations
   - Content strings: user-facing text, error messages, descriptions
   - Type definitions: interface names, enum values, parameter types
   - Documentation: all references in docs and code comments
3. Maintain Consistency:
   - Use consistent naming patterns (e.g., `brand_stories` not `brandStories` for API endpoints)
   - Update both human-readable text AND programmatic references
   - Test that OpenAPI generation works with renamed tools
   - Verify all imports/exports resolve correctly
4. Common Gotchas:
   - Enum values: update both the definition and its usage in switch statements
   - Duplicate type definitions: check multiple files for the same interface
   - Tool registration: update both the import and the `addTool()` call
   - Export lists: long export lists at the end of index files
   - API client calls: backend API method names may still use old terminology
Lessons Learned from “Synthetic Audience” → “Brand Story” Refactoring
- MultiEdit is powerful but fails if old_string and new_string are identical
- Type definitions may exist in multiple files (mcp.ts, reporting.ts)
- Tool names appear in 4+ places: file name, export name, tool registration, export list
- Prettier formatting should be run after bulk changes
- Testing early and often prevents cascade failures
Lessons Learned from GraphQL Schema Mismatch Investigation
When troubleshooting “Request failed” errors that bypass authentication:
The Problem: assumed GraphQL field names matched code concepts
- Code: `brandAgents`, `brandAgent`
- Actual API: `agents`, `agent`
Symptoms:
- Authentication works (validates the API key exists)
- Data queries fail (wrong field names = 400 Bad Request)
Debugging process:
- Verify endpoint connectivity - test basic HTTP requests
- Test authentication separately - confirm the API key works for simple queries
- Test actual GraphQL queries directly - use curl with a real API key to test the exact queries (example below)
- Verify field names with working queries - don’t assume, test systematically
Required fixes:
- LIST: `brandAgents` → `agents` (plural field)
- GET: `brandAgent(id)` → `agent(id)` (singular field)
- CREATE/UPDATE: parameter structure (`input` object → direct parameters, `ID!` → `BigInt!`)
- Type interfaces: update response data structures to match actual API fields
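A direct query check of this kind. The endpoint and the `agents` field are documented above; the auth header format and selected subfields are assumptions to verify against the client code:

```bash
# Test the exact query against the live schema (requires a real API key).
curl -s https://api.scope3.com/api/graphql \
  -H "Authorization: Bearer $SCOPE3_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ agents { id name } }"}'
```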
Resources
- Mintlify Documentation
- docs.json Schema
- MCP Protocol Spec
- Project OpenAPI Spec: `openapi.yaml`