AI Agent Harness for Marketing: Build Autonomous Campaign Production Systems
Build an AI agent harness that produces campaigns autonomously. Complete architecture, prompts, and tool connections included.
yfxmarketer
February 3, 2026
AI agent harnesses let you run multi-session autonomous workflows. Engineering teams use them to build entire applications with zero human intervention between sessions. Marketing teams need the same capability. One campaign brief goes in. Launch-ready assets come out. This guide shows you how to build it.
The harness pattern solves the biggest limitation of AI assistants: context windows. Large campaigns overwhelm any single session. The harness breaks work into discrete tasks, manages state between sessions, and strings together as many agent runs as needed. The result is a system that produces landing pages, email sequences, ad variants, and blog posts in parallel while tracking everything in your project management tool.
TL;DR
AI agent harnesses wrap your coding or marketing agents with persistence and progress tracking. They break large projects into discrete tasks, run each task in a fresh context window, and hand off state between sessions. For marketers, this means feeding in a campaign brief and receiving complete assets without babysitting each step. Time saved: 15-25 hours per campaign launch.
Key Takeaways
- Agent harnesses solve context window limits by stringing together multiple sessions with state management
- The harness pattern has four components: initializer, task list, execution loop, and progress tracker
- Marketing harnesses replace Linear with Monday.com or Asana, GitHub with Google Drive, and Playwright with tracking validation
- Sub-agents provide context isolation so your main orchestrator stays lean
- MCP connections let agents read and write to external tools automatically
- Start with a single sub-agent (email writer), validate it works, then add more
What Is an AI Agent Harness?
An agent harness is a wrapper that provides persistence and progress tracking over your AI agent. Without a harness, you give Claude a large request, it fills its context window, and quality degrades. With a harness, you give Claude a large request, the harness breaks it into tasks, and Claude executes each task in a fresh context window.
The harness maintains state between sessions. Session one creates the task list. Session two completes task one. Session three completes task two. Each session starts fresh, reads state from an external system, does its work, and updates state before ending. The external system (Linear for engineers, Monday.com for marketers) becomes the source of truth.
Anthropic open-sourced their engineering harness in late 2025. It uses Linear for task management, GitHub for code storage, and Slack for progress updates. The architecture translates directly to marketing with different tools.
Action item: Open your last campaign production timeline. Count the hours from brief to launch-ready assets. This is your baseline to beat.
Why Do Marketers Need Agent Harnesses?
Marketing campaigns require multiple asset types created in sequence. Landing page copy depends on messaging strategy. Email sequences depend on the landing page. Ad variants depend on both. Traditional production chains these dependencies into weeks of handoffs.
Agent harnesses parallelize what they can and sequence what they must. The harness reads your campaign brief, generates the messaging strategy, then spawns five sub-agents at once: landing page writer, email writer, blog writer, social writer, and ad writer. When they finish, a validation agent checks each output. Total production time equals the slowest sub-agent plus validation, not the sum of every asset.
Context isolation matters for quality. Your landing page writer sub-agent only sees the messaging strategy and landing page requirements. It does not see the email templates, blog outlines, or ad specs. Clean context produces better output than a bloated context window containing everything.
Action item: List every asset type your campaigns require. Each type becomes a potential sub-agent in your harness.
What Does the Harness Architecture Look Like?
The harness has four core components that work together:
Component 1: Initializer Agent
The initializer runs once at the start of each campaign. It reads the campaign brief, creates tasks in your project management tool, sets up the folder structure for deliverables, and establishes the progress tracking mechanism.
For engineering harnesses, the initializer creates Linear issues and initializes a Git repository. For marketing harnesses, the initializer creates Monday.com tasks and sets up Google Drive folders.
Component 2: Task List (External System)
The task list lives outside Claude’s context window. Engineering harnesses use Linear. Marketing harnesses use Monday.com, Asana, or ClickUp. The external system becomes the source of truth for what needs to be built and what has been completed.
Storing tasks externally serves two purposes. First, it survives session resets. When a new agent session starts, it reads the task list from the external system rather than relying on conversation history. Second, it provides visibility. You see progress in your project management tool without opening Claude.
Component 3: Execution Loop
The execution loop runs repeatedly until all tasks are complete. Each iteration starts a fresh context window, reads the task list, picks the next task, executes it, saves output, marks the task complete, and updates progress. The loop continues until no tasks remain.
Sub-agents handle individual tasks. The landing page writer sub-agent only activates when the execution loop assigns a landing page task. This keeps each agent’s context focused on one deliverable type.
Component 4: Progress Tracker
The progress tracker maintains handoff state between sessions. Engineering harnesses use a meta Linear issue called “Progress Tracker” that summarizes what each session accomplished. Marketing harnesses use a pinned Monday.com update or a progress document in Google Drive.
The progress tracker answers: What did the previous session complete? What should this session work on? Are there any blockers or issues? Each new session reads this before starting work.
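A progress tracker can be as simple as a short markdown document the orchestrator rewrites at the end of every session. A sketch of the structure (campaign name, task names, and paths are illustrative, not a required schema):

```markdown
# Progress Tracker: Spring Launch Campaign

## Last session
- Completed: messaging strategy, landing page variant A
- Outputs saved to: campaigns/spring-launch/

## Next session
- Start: email sequence (dependencies now met)
- Blocked: ad variants (waiting on landing page variant B)

## Blockers and notes
- Hero copy flagged for tone; revision added as a new Monday.com task
```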
Action item: Decide which tools will serve each component. Monday.com for tasks? Google Drive for storage? Slack for updates? Write down your tool mapping.
How Do You Build the Initializer Agent?
The initializer agent transforms your campaign brief into an actionable task list. It runs once per campaign and sets up everything the execution loop needs.
The Initializer Prompt
Copy this prompt into your agent definition file:
SYSTEM: You are a marketing campaign initializer. You read campaign briefs and create structured task lists for autonomous production.
<context>
Campaign brief: {{CAMPAIGN_BRIEF}}
Available sub-agents: landing-page-writer, email-writer, blog-writer, social-writer, ad-writer
Output folder: {{OUTPUT_FOLDER_PATH}}
</context>
Your job:
1. Parse the campaign brief to identify required deliverables
2. Create one task per deliverable with clear acceptance criteria
3. Set dependencies (messaging strategy must complete before channel assets)
4. Estimate time per task based on complexity
5. Create the folder structure for outputs
MUST include these fields for each task:
- Task name (descriptive, under 10 words)
- Assigned sub-agent
- Dependencies (which tasks must complete first)
- Acceptance criteria (how to verify completion)
- Output path (where to save the deliverable)
Output: JSON array of tasks, then create tasks in Monday.com via MCP.
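For reference, the JSON array the initializer returns before writing to Monday.com might look like the sketch below. Field names and paths are illustrative; the messaging task is assigned to the orchestrator here because the sub-agent list in the brief covers channel assets only.

```json
[
  {
    "task_name": "Write messaging strategy",
    "sub_agent": "orchestrator",
    "dependencies": [],
    "acceptance_criteria": "Positioning statement, three key messages, tone guidance",
    "output_path": "campaigns/spring-launch/strategy/messaging.md"
  },
  {
    "task_name": "Write landing page copy",
    "sub_agent": "landing-page-writer",
    "dependencies": ["Write messaging strategy"],
    "acceptance_criteria": "All required sections present, no banned words, primary CTA defined",
    "output_path": "campaigns/spring-launch/landing-page/index.md"
  },
  {
    "task_name": "Write 5-email nurture sequence",
    "sub_agent": "email-writer",
    "dependencies": ["Write landing page copy"],
    "acceptance_criteria": "Five emails, subject lines under 50 characters, varied CTAs",
    "output_path": "campaigns/spring-launch/email/nurture-sequence.md"
  }
]
```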
What the Initializer Creates
A typical campaign brief produces 8-15 tasks. The initializer creates:
- One messaging strategy task (no dependencies, runs first)
- One task per landing page variant
- One task per email in the sequence
- One task for blog content
- One task per social platform
- One task per ad platform
- One validation task (depends on all content tasks)
Dependencies ensure the messaging strategy completes before channel assets begin. The landing page task depends on the messaging strategy. Email tasks depend on the landing page. This sequencing happens automatically based on the brief.
Action item: Write a sample campaign brief with 3-5 deliverables. Run it through the initializer prompt manually to see the task structure it creates.
How Do You Build the Execution Loop?
The execution loop is the core of your harness. It runs repeatedly, picking up tasks and assigning them to sub-agents until the campaign is complete.
The Orchestrator Prompt
This prompt controls the main execution loop:
SYSTEM: You are the marketing campaign orchestrator. You coordinate sub-agents to produce campaign assets.
<context>
Project ID: {{MONDAY_PROJECT_ID}}
Progress tracker: {{PROGRESS_TRACKER_PATH}}
Available sub-agents: landing-page-writer, email-writer, blog-writer, social-writer, ad-writer, content-validator
</context>
Each session, follow this sequence:
1. Read the progress tracker to understand current state
2. Query Monday.com for incomplete tasks via MCP
3. Check task dependencies (skip tasks with incomplete dependencies)
4. Select the next available task
5. Spawn the appropriate sub-agent with task context
6. Wait for sub-agent to complete and save output
7. Mark task complete in Monday.com via MCP
8. Update the progress tracker with session summary
9. If more tasks remain and context allows, continue to step 3
10. If context is filling up, end session with handoff notes
MUST update progress tracker before ending every session.
MUST include specific handoff notes for the next session.
NEVER leave a task in progress without saving partial work.
Output: Session summary with completed tasks and next steps.
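The orchestrator prompt runs inside each session, but something still has to start fresh sessions until the board is empty. A minimal driver sketch, assuming the Claude Code CLI is installed (non-interactive runs via `claude -p`) and a `count_open_tasks()` helper you implement against Monday.com; both are assumptions to adapt to your stack:

```python
import subprocess
import time
from pathlib import Path

ORCHESTRATOR_PROMPT = Path("prompts/orchestrator.md").read_text()  # the prompt above
MAX_SESSIONS = 20  # safety net so a stuck harness cannot loop forever


def count_open_tasks() -> int:
    """Placeholder: query Monday.com (via MCP or its API) for tasks not marked Done."""
    raise NotImplementedError


def run_session(prompt: str) -> None:
    """Start one fresh-context orchestrator session via the Claude Code CLI."""
    subprocess.run(["claude", "-p", prompt], check=True)


def main() -> None:
    for session in range(1, MAX_SESSIONS + 1):
        remaining = count_open_tasks()
        if remaining == 0:
            print("All campaign tasks complete.")
            return
        print(f"Session {session}: {remaining} tasks remaining")
        run_session(ORCHESTRATOR_PROMPT)
        time.sleep(5)  # brief pause before the next fresh session
    print("Hit MAX_SESSIONS before finishing; check the progress tracker.")


if __name__ == "__main__":
    main()
```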
How Parallel Execution Works
The orchestrator spawns sub-agents for tasks without dependencies simultaneously. When the messaging strategy completes, the orchestrator sees that landing page, email, blog, social, and ad tasks all have their dependencies met. It spawns all five sub-agents at once.
Parallel execution requires Claude Code or Cowork. Claude.ai runs skills sequentially. If you use Claude.ai, the orchestrator completes one task at a time. Still faster than manual production, but not as fast as true parallel execution.
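If you drive sub-agents through the Anthropic API yourself instead of Claude Code, the fan-out described above is a plain async gather. A sketch, assuming system prompts are loaded elsewhere and `SUBAGENT_MODEL` matches the environment variable defined later in this guide:

```python
import asyncio
import os

from anthropic import AsyncAnthropic

client = AsyncAnthropic()  # reads ANTHROPIC_API_KEY from the environment
SUBAGENT_MODEL = os.environ.get("SUBAGENT_MODEL", "claude-haiku-4-5-20251001")


async def run_subagent(system_prompt: str, task_context: str) -> str:
    """Run one sub-agent with only its own task context (context isolation)."""
    response = await client.messages.create(
        model=SUBAGENT_MODEL,
        max_tokens=4096,
        system=system_prompt,
        messages=[{"role": "user", "content": task_context}],
    )
    return response.content[0].text


async def fan_out(messaging_strategy: str, system_prompts: dict[str, str]) -> dict[str, str]:
    """Spawn every channel writer at once after the messaging strategy completes."""
    jobs = {
        name: run_subagent(prompt, f"Messaging strategy:\n\n{messaging_strategy}")
        for name, prompt in system_prompts.items()
    }
    results = await asyncio.gather(*jobs.values())  # wall time is roughly the slowest writer
    return dict(zip(jobs.keys(), results))
```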
Action item: Install Claude Code if you have not already. Parallel sub-agent execution cuts campaign production time by 60-70%.
How Do You Build Sub-Agents?
Sub-agents are specialists for each content type. They receive task context from the orchestrator, produce one deliverable, and return the output path. Each sub-agent has its own prompt optimized for its content type.
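In Claude Code, sub-agents are typically defined as markdown files with frontmatter under `.claude/agents/`. A sketch of an email writer definition is below; the frontmatter fields (the `model` field in particular) may differ by version, so check the current Claude Code docs:

```markdown
---
name: email-writer
description: Writes nurture and welcome email sequences from a messaging strategy. Use for any email task.
model: haiku
---

You are an email copywriter specializing in nurture sequences. Read the
messaging strategy and task context provided, write the sequence to the
output path in the task, and report that path back when finished.
```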
Landing Page Writer Sub-Agent
SYSTEM: You are a conversion-focused landing page copywriter.
<context>
Messaging strategy: {{MESSAGING_STRATEGY}}
Page purpose: {{PAGE_PURPOSE}}
Target audience: {{TARGET_AUDIENCE}}
Primary CTA: {{PRIMARY_CTA}}
</context>
Write landing page copy with these sections:
HEADLINE: Under 10 words. Lead with outcome, not product name.
SUBHEADLINE: One sentence expanding the benefit.
HERO COPY: 2-3 sentences addressing the primary pain point.
BENEFITS: 4-6 bullets starting with action verbs.
SOCIAL PROOF: Placeholder for testimonial with format guidance.
PRIMARY CTA: Button text (under 5 words) + supporting microcopy.
SECONDARY CTA: Alternative action for visitors not ready to convert.
MUST lead every section with the outcome, not the feature.
MUST keep all paragraphs under 80 words for AEO optimization.
NEVER use: unlock, revolutionize, seamless, cutting-edge, game-changer.
Output: Markdown file saved to {{OUTPUT_PATH}}.
Email Writer Sub-Agent
SYSTEM: You are an email copywriter specializing in nurture sequences.
<context>
Messaging strategy: {{MESSAGING_STRATEGY}}
Sequence type: {{SEQUENCE_TYPE}}
Email count: {{EMAIL_COUNT}}
Landing page URL: {{LANDING_PAGE_URL}}
</context>
Write the email sequence with these specifications per email:
SUBJECT LINE: Under 50 characters. No spam trigger words.
PREVIEW TEXT: Complements subject, does not repeat it.
OPENING HOOK: Personal, references pain point or previous email.
BODY COPY: Under 200 words for nurture, 300 for welcome.
CTA: One primary action. Link text under 5 words.
PS LINE: Optional. Use for urgency, social proof, or secondary offer.
Sequence structure:
- Email 1 (Day 0): Welcome, set expectations, deliver promised value
- Email 2 (Day 2): Educational content, build credibility
- Email 3 (Day 4): Address common objection
- Email 4 (Day 6): Case study or social proof
- Email 5 (Day 8): Direct offer with urgency
MUST include {{FIRST_NAME}} personalization token in opening.
MUST vary CTA text across emails (not all "Learn More").
NEVER use urgency in more than 2 emails.
Output: Markdown file with all emails saved to {{OUTPUT_PATH}}.
Ad Writer Sub-Agent
SYSTEM: You are a performance marketing copywriter for paid ads.
<context>
Messaging strategy: {{MESSAGING_STRATEGY}}
Platform: {{AD_PLATFORM}}
Variant count: {{VARIANT_COUNT}}
Landing page headline: {{LP_HEADLINE}}
</context>
Write ad variants following platform specifications:
GOOGLE ADS:
- Headline 1: 30 characters max, include primary keyword
- Headline 2: 30 characters max, differentiate from H1
- Headline 3: 30 characters max, include CTA
- Description 1: 90 characters max, expand on benefit
- Description 2: 90 characters max, address objection or add proof
META ADS:
- Primary text: 125 characters for optimal display
- Headline: 40 characters max
- Description: 30 characters max
- CTA button: Select from platform options
LINKEDIN ADS:
- Intro text: 150 characters for single image
- Headline: 70 characters max
- Description: 100 characters max
MUST create {{VARIANT_COUNT}} distinct angles, not rewrites of the same copy.
MUST align with landing page messaging for scent continuity.
NEVER exceed character limits (ads will be rejected).
Output: Markdown file with all variants saved to {{OUTPUT_PATH}}.
Content Validator Sub-Agent
SYSTEM: You are a quality reviewer for marketing content.
<context>
Brand guidelines: {{BRAND_GUIDELINES}}
Content to review: {{CONTENT_PATH}}
Content type: {{CONTENT_TYPE}}
</context>
Validate content against these criteria:
BRAND CHECK:
- Tone matches guidelines (confident, direct, no fluff)
- No banned words present
- Terminology follows brand standards
- Voice is consistent throughout
FORMAT CHECK:
- Word counts within limits for content type
- Required sections present
- Headings follow hierarchy
- CTAs are clear and actionable
AEO CHECK:
- Paragraphs under 80 words
- First sentence contains main point
- Keywords in first 10 words of paragraphs
- Content is quotable by AI assistants
COMPLIANCE CHECK:
- Required disclosures present (if applicable)
- Claims are substantiated
- No competitor disparagement
For each criterion, output: PASS or FAIL with specific evidence.
If any critical failures, provide exact corrected text.
Output: Validation report with APPROVED or NEEDS REVISION status.
Action item: Create your first sub-agent prompt. Start with the email writer since email sequences are the most common campaign asset.
How Do You Connect External Tools?
The harness needs to read and write to external tools. MCP (Model Context Protocol) provides the connection layer. You connect once, and your agents interact with Monday.com, Google Drive, and Slack automatically.
Setting Up MCP Connections
Arcade provides an MCP gateway that handles authentication for multiple services. Create an Arcade account, set up your gateway, and add the tools your harness needs:
- Monday.com for task management
- Google Drive for asset storage
- Slack for progress updates
- HubSpot for email deployment (optional)
- Google Analytics for tracking validation (optional)
The Arcade gateway handles OAuth flows. Your team members connect their accounts once. The harness uses those connections for every campaign.
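If your agents run in Claude Code, the gateway can be registered as a remote MCP server. A sketch of a project-level `.mcp.json`, assuming Arcade exposes an HTTP endpoint and accepts a bearer token; key names and whether your client expands `${ARCADE_API_KEY}` from the environment vary by client version, so verify against your client's MCP docs:

```json
{
  "mcpServers": {
    "arcade": {
      "type": "http",
      "url": "https://api.arcade.dev/v1/mcp/your-gateway-id",
      "headers": {
        "Authorization": "Bearer ${ARCADE_API_KEY}"
      }
    }
  }
}
```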
Tool Mapping for Marketing
Here is how engineering harness tools map to marketing equivalents:
| Engineering Tool | Marketing Tool | Purpose |
|---|---|---|
| Linear | Monday.com or Asana | Task source of truth |
| GitHub | Google Drive | Asset storage |
| Git commits | Folder versioning | Progress markers |
| Playwright | Link checker + GA4 validation | Quality checks |
| Slack | Slack | Progress updates |
Environment Variables
Set these environment variables for your harness:
# Arcade MCP Gateway
ARCADE_MCP_URL=https://api.arcade.dev/v1/mcp/your-gateway-id
ARCADE_API_KEY=your-api-key
# Tool Configuration
MONDAY_BOARD_ID=your-board-id
GDRIVE_FOLDER_ID=your-folder-id
SLACK_CHANNEL_ID=your-channel-id
# Model Selection
ORCHESTRATOR_MODEL=claude-sonnet-4-20250514
SUBAGENT_MODEL=claude-haiku-4-5-20251001
Use Sonnet for the orchestrator where coordination quality matters. Use Haiku for sub-agents where speed matters and tasks are well-defined.
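A small sketch of how harness code might consume these variables, keeping the Sonnet-for-orchestration, Haiku-for-sub-agents split in one place (variable names mirror the list above; defaults reuse the same model IDs):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class HarnessConfig:
    """Runtime settings pulled from the environment variables listed above."""
    monday_board_id: str
    gdrive_folder_id: str
    slack_channel_id: str
    orchestrator_model: str
    subagent_model: str

    @classmethod
    def from_env(cls) -> "HarnessConfig":
        return cls(
            monday_board_id=os.environ["MONDAY_BOARD_ID"],
            gdrive_folder_id=os.environ["GDRIVE_FOLDER_ID"],
            slack_channel_id=os.environ["SLACK_CHANNEL_ID"],
            orchestrator_model=os.environ.get("ORCHESTRATOR_MODEL", "claude-sonnet-4-20250514"),
            subagent_model=os.environ.get("SUBAGENT_MODEL", "claude-haiku-4-5-20251001"),
        )

    def model_for(self, role: str) -> str:
        """Return the model ID for a given agent role."""
        return self.orchestrator_model if role == "orchestrator" else self.subagent_model
```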
Action item: Create an Arcade account and set up your MCP gateway with Monday.com and Google Drive. Test the connection with a simple read operation.
What Does the Complete Workflow Look Like?
Here is the step-by-step workflow from campaign brief to launch-ready assets:
Phase 1: Setup (One-Time)
Complete these steps once before your first campaign:
- Create the folder structure for your harness
- Write your sub-agent prompts
- Set up MCP connections to Monday.com, Google Drive, Slack
- Create a campaign brief template
- Test each sub-agent individually with sample inputs
Phase 2: Initialize Campaign (15 Minutes)
Start each campaign with these steps:
- Complete the campaign brief template with objectives, audience, deliverables
- Run the initializer agent with the brief as input
- Review the task list created in Monday.com
- Approve or adjust tasks before execution begins
Phase 3: Autonomous Production (2-4 Hours)
The harness runs without intervention:
- Orchestrator reads task list from Monday.com
- Orchestrator spawns sub-agents for tasks with met dependencies
- Sub-agents produce deliverables and save to Google Drive
- Orchestrator marks tasks complete and updates progress
- Loop continues until all tasks complete
- Validator agent checks all outputs
- Slack notification indicates campaign is ready for review
Phase 4: Review and Launch (1-2 Hours)
You review the complete package:
- Open Google Drive folder with all campaign assets
- Review each deliverable against brand guidelines
- Request revisions for any content that needs adjustment
- Approve final versions
- Push to execution platforms (CMS, email tool, ad platforms)
Total Time Comparison
| Activity | Traditional | With Harness |
|---|---|---|
| Brief to task list | 2 hours | 15 minutes |
| Content production | 20-30 hours | 2-4 hours |
| Review and revisions | 5-10 hours | 1-2 hours |
| Total | 27-42 hours | 3-6 hours |
The harness saves 20-35 hours per campaign. For teams launching 4 campaigns per month, this equals 80-140 hours saved monthly.
Action item: Run your first campaign through the harness. Track actual time at each phase. Compare to your baseline from the first action item.
How Do You Handle Validation?
Validation replaces manual QA. Engineering harnesses use Playwright to test code functionality. Marketing harnesses need different validation approaches for different asset types.
Content Validation
The content validator sub-agent checks brand voice, word counts, required sections, and AEO optimization. Run it after all content sub-agents complete. The validator reads each output file and produces a pass/fail report.
Link Validation
Before launch, validate all links in your content:
SYSTEM: You are a QA specialist validating marketing content links.
<context>
Content files: {{CONTENT_FOLDER_PATH}}
Expected destinations: {{EXPECTED_URLS}}
</context>
For each link in the content:
1. Extract the URL
2. Verify the URL resolves (200 status)
3. Verify the destination matches expected page
4. Verify UTM parameters are present and correct
5. Flag any broken or mismatched links
Output: Link validation report with PASS/FAIL per link and overall status.
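The link checks are deterministic, so they can also run as a plain script outside the agent. A sketch using the Python standard library plus `requests`; the required UTM parameter set is an assumption, so match it to your tracking convention:

```python
import re
from urllib.parse import parse_qs, urlparse

import requests

REQUIRED_UTM_PARAMS = {"utm_source", "utm_medium", "utm_campaign"}  # adjust to your convention
URL_PATTERN = re.compile(r"https?://[^\s)\"'>]+")


def check_link(url: str) -> dict:
    """Return a per-link report: reachability plus UTM completeness."""
    report = {"url": url, "status": None, "utm_ok": False, "ok": False}
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
        report["status"] = resp.status_code
    except requests.RequestException as exc:
        report["error"] = str(exc)
        return report
    params = set(parse_qs(urlparse(url).query))
    report["utm_ok"] = REQUIRED_UTM_PARAMS.issubset(params)
    report["ok"] = resp.status_code == 200 and report["utm_ok"]
    return report


def validate_file(path: str) -> list[dict]:
    """Extract every URL from a markdown deliverable and check each one."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return [check_link(url) for url in sorted(set(URL_PATTERN.findall(text)))]
```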
Tracking Validation
For campaigns with conversion tracking, validate that tracking fires correctly:
SYSTEM: You are a marketing ops engineer validating tracking implementation.
<context>
Landing page URL: {{LANDING_PAGE_URL}}
Expected events: {{EXPECTED_EVENTS}}
GA4 property: {{GA4_PROPERTY_ID}}
</context>
Validation checklist:
1. Page loads without console errors
2. GA4 page_view event fires on load
3. Form submission event fires with correct parameters
4. Conversion event fires to Google Ads
5. Meta Pixel events fire correctly
6. UTM parameters captured in hidden fields
7. CRM receives test submission within 5 minutes
Output: Tracking validation report with evidence for each check.
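Most of this checklist needs a real browser to observe events firing. A partial sketch with Playwright covering only the first two checks (console errors and the GA4 page_view hit); the GA4 endpoint and `en=page_view` parameter are assumptions based on the standard `g/collect` request, so verify against your own property:

```python
from playwright.sync_api import sync_playwright

GA4_COLLECT_HINTS = ("google-analytics.com/g/collect", "analytics.google.com/g/collect")


def check_tracking(url: str) -> dict:
    """Load the page headlessly, capture console errors, and record GA4 collect hits."""
    console_errors: list[str] = []
    ga4_requests: list[str] = []

    def on_console(msg) -> None:
        if msg.type == "error":
            console_errors.append(msg.text)

    def on_request(request) -> None:
        if any(hint in request.url for hint in GA4_COLLECT_HINTS):
            ga4_requests.append(request.url)

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.on("console", on_console)
        page.on("request", on_request)
        page.goto(url, wait_until="networkidle")
        browser.close()

    return {
        "console_errors": console_errors,
        "ga4_page_view_seen": any("en=page_view" in u for u in ga4_requests),
        "ga4_request_count": len(ga4_requests),
    }
```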
Action item: Create a validation checklist specific to your campaign types. Include brand checks, link checks, and tracking checks relevant to your stack.
How Do You Extend the Harness?
The basic harness handles content production. Extensions add capabilities for more complex workflows.
Extension 1: Historical Data Analysis
Add a strategy agent that analyzes past campaign performance before production begins:
SYSTEM: You are a marketing strategist who builds data-driven recommendations.
<context>
Historical data: {{HISTORICAL_DATA_PATH}}
New campaign brief: {{CAMPAIGN_BRIEF}}
</context>
Analyze historical performance and provide:
1. Top 3 performing channels by ROAS
2. Best-performing subject line patterns
3. Optimal send times from email data
4. Landing page elements with highest conversion correlation
5. Audience segments with best engagement
Apply insights to the new campaign:
1. Recommend channel allocation based on historical ROAS
2. Suggest subject line structures based on past winners
3. Recommend landing page structure based on conversion data
Output: Data-driven strategy document that informs all sub-agents.
Extension 2: A/B Test Generation
Add a variant generator that creates testable alternatives:
SYSTEM: You are a CRO specialist who designs A/B tests.
<context>
Primary content: {{PRIMARY_CONTENT_PATH}}
Test element: {{TEST_ELEMENT}}
</context>
Create test variants following these principles:
1. Change one element per variant (isolate variables)
2. Make changes significant enough to detect differences
3. Align variants with different audience hypotheses
4. Include a control (original) in the test plan
For {{TEST_ELEMENT}}, generate:
- Control: Original version
- Variant A: Alternative approach 1 with hypothesis
- Variant B: Alternative approach 2 with hypothesis
Output: Test plan with variants, hypotheses, and success criteria.
Extension 3: Platform Deployment
Add deployment agents that push content directly to execution platforms:
SYSTEM: You are a marketing automation engineer.
<context>
Content: {{CONTENT_PATH}}
Platform: {{TARGET_PLATFORM}}
Platform credentials: (accessed via MCP)
</context>
Deploy content to {{TARGET_PLATFORM}}:
For HubSpot:
1. Create new email in the specified folder
2. Set subject line and preview text
3. Paste body content
4. Configure sender and reply-to
5. Set up A/B test if variants provided
6. Schedule for specified send time
For Google Ads:
1. Create new responsive search ad
2. Add headlines and descriptions from content
3. Set targeting parameters
4. Configure bidding strategy
5. Submit for review
Output: Deployment confirmation with platform IDs and status.
Action item: Identify one extension that would save you the most time. Build it after validating your core harness works.
What Are Common Failure Modes?
Harnesses fail in predictable ways. Know these patterns and build safeguards.
Failure 1: Context Overflow
The orchestrator tries to do too much in one session. Context fills up. Quality degrades. The fix: aggressive session handoffs. End sessions after 2-3 tasks, even if the context window has room. Fresh context produces better output than nearly-full context.
Failure 2: Lost State
A session ends without updating the progress tracker. The next session does not know what was completed. The fix: mandatory progress updates before session end. The orchestrator prompt includes “MUST update progress tracker before ending every session.”
Failure 3: Dependency Violations
A sub-agent starts before its dependencies complete. It produces content misaligned with the messaging strategy. The fix: explicit dependency checking in the orchestrator loop. Query Monday.com for task status before spawning sub-agents.
Failure 4: Validation Gaps
Content passes automated validation but fails human review. Common issues: brand voice drift, factual errors, awkward phrasing. The fix: tighten validation prompts. Add specific checks for your most common revision requests.
Failure 5: Tool Connection Failures
MCP connections drop. The harness cannot read Monday.com or write to Google Drive. The fix: error handling in the orchestrator. If a tool call fails, retry once, then pause and notify via Slack rather than continuing blind.
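When parts of the harness run as plain scripts around the agent (like the driver loop earlier), the same rule can be enforced in code. A sketch, with `notify_slack` left as a placeholder for your Slack MCP call, webhook, or SDK:

```python
import time


class ToolCallFailed(Exception):
    """Raised when an external tool call fails after all retries."""


def notify_slack(message: str) -> None:
    """Placeholder: post to your channel via the Slack MCP tool, a webhook, or the SDK."""
    print(f"[slack] {message}")


def with_retry(call, *, retries: int = 1, delay_seconds: float = 5.0):
    """Run a tool call, retry once on failure, then pause and escalate instead of continuing blind."""
    attempts = retries + 1
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except Exception as exc:  # narrow to your MCP client's error types in practice
            if attempt == attempts:
                notify_slack(f"Harness paused: tool call failed after {attempts} attempts: {exc}")
                raise ToolCallFailed(str(exc)) from exc
            time.sleep(delay_seconds)
```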
Action item: Add error handling to your orchestrator prompt. Include retry logic and Slack notifications for failures.
Final Takeaways
Agent harnesses solve context window limitations by breaking large projects into discrete tasks with state management between sessions. Marketing harnesses use the same architecture as engineering harnesses with different tools: Monday.com instead of Linear, Google Drive instead of GitHub.
Sub-agents provide context isolation. Your landing page writer only sees landing page context. Your email writer only sees email context. Clean context produces better output than bloated context windows containing everything.
MCP connections let agents read and write to external tools. Set up Arcade once, connect Monday.com and Google Drive, and your harness manages tasks and stores deliverables automatically. OAuth is handled for you.
Start simple. Build one sub-agent, validate it works, then add more. The email writer sub-agent alone saves 3-5 hours per campaign. Add landing pages, social, and ads once email is reliable.
The harness pattern compounds. Each campaign improves your prompts. Better prompts produce better outputs. Better outputs require fewer revisions. Time savings grow with every campaign you run through the system.
yfxmarketer
AI Growth Operator
Writing about AI marketing, growth, and the systems behind successful campaigns.