A new genre of "agentic" UGC ad workflows hit creator YouTube this month: point Claude's computer-use agent at an ad API, give it a product brief in a CLAUDE.md file, and let it generate 500 ads per month on autopilot. The numbers are real. The quality is more complicated. Here's what the workflow actually does, where it breaks, and what a production-grade version looks like.
Source inspiration
Credit: Youri van Hofwegen for the original workflow. This teardown dives into the agentic pattern, quality tradeoffs, and what breaks at ad-spend scale.
The Core Claim: $400/mo vs $3,000–$8,000/mo
The pitch is a straight cost comparison. A traditional UGC agency charges $3,000–$8,000 per month and delivers 10–15 ads, which works out to roughly $200–$800 per ad. The agent workflow, the claim goes, runs for under $400 per month and produces 500, under $1 per ad. On raw generation cost, that's a per-ad reduction of two to three orders of magnitude, well beyond the "40–50×" figure often quoted in the pitch.
The numbers check out on the generation side. Arcads API pricing plus Claude API credits does land under $400/month at the volumes described. What the comparison doesn't include: the strategist cost of picking which angles to test, the media buyer cost of actually running the ads, and the quality variance across 500 generated ads vs 15 hand-crafted ones.
Translation: the agent workflow isn't a replacement for an agency. It's a replacement for the production step inside an agency. Still a huge win — just not the whole thing.
The Stack: Claude Cowork + Arcads API
Two tools do all the work:
- Claude Cowork. Claude's computer-use agent — can read/write files on your machine, call APIs, and run multi-step workflows. Requires Claude Pro ($20/mo) plus API credits ($10–$20/mo for typical usage).
- Arcads API. A unified video-generation API that proxies to Seedance 2.0, Sora 2, Kling, and Veo 3.1 behind one endpoint. Pro plan required to unlock API access.
The setup is 15–20 minutes:
- Turn on "Browser use" in Claude Cowork settings so the agent can hit Arcads URLs.
- Get an Anthropic API key from console.anthropic.com and fund it with $10–$20.
- Get Arcads Public API credentials (Client ID, Client Secret, Authorization header).
- Ask Claude to create a project folder and drop in two files: .env with credentials, and CLAUDE.md with the system brief.
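As a concrete sketch of that last step, here is what reading the credentials file might look like if you scripted it yourself rather than letting the agent handle it. The variable names (`ARCADS_CLIENT_ID`, etc.) are assumptions for illustration, not Arcads' documented names, and the parser is deliberately minimal:

```python
import os

def load_env(path=".env"):
    """Load KEY=VALUE pairs from a simple .env file into os.environ.

    Minimal parser: skips blank lines, comments, and lines without
    an '=' sign. Does not handle quoting or multi-line values.
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# A hypothetical .env for this workflow might contain:
#   ANTHROPIC_API_KEY=sk-ant-...
#   ARCADS_CLIENT_ID=...
#   ARCADS_CLIENT_SECRET=...
```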
The Real Secret: The CLAUDE.md File
The thing that makes this workflow actually work — and the part most creators gloss over — is the CLAUDE.md file. Every Claude Cowork session reads this file first; it acts as the system prompt, carrying the agent's entire reasoning scaffold.
A good CLAUDE.md for UGC ad generation contains:
- Script structure template. Every script must follow a defined beat structure: hook (0–3s), problem statement (3–8s), product introduction (8–15s), proof/demo (15–22s), CTA (22–28s).
- Quality checklist. Minimum 9.5/10 on pacing, emotional authenticity, and CTA clarity. Scripts below threshold get regenerated automatically.
- Arcads API reference. Endpoint URLs, payload structure, poll intervals for render completion.
- Brand voice constraints. Words to avoid, tone guardrails, target objections to address.
Without CLAUDE.md, every session starts from scratch and you re-explain everything. With it, you send one prompt — "generate 20 ads for this brief" — and the agent has all the scaffolding it needs.
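The "Arcads API reference" item above boils down to submit-and-poll logic. The sketch below shows the general pattern under stated assumptions: the status values ("processing", "done", "failed") and response fields are illustrative, not Arcads' documented schema, and the status fetcher is injected so the loop stays API-agnostic:

```python
import time

def wait_for_render(get_status, job_id, poll_interval=30, timeout=1800):
    """Poll a render job until it completes, fails, or times out.

    get_status: callable(job_id) -> dict, e.g. {"status": "done",
    "url": "..."}. Status names here are assumptions, not the
    vendor's documented values.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        job = get_status(job_id)
        if job["status"] == "done":
            return job["url"]
        if job["status"] == "failed":
            raise RuntimeError(f"render {job_id} failed: {job.get('error')}")
        time.sleep(poll_interval)
    raise TimeoutError(f"render {job_id} not finished after {timeout}s")
```

Injecting `get_status` also makes the loop testable with a stub, which matters when the agent itself is this fragile (more on that below).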
The Product Brief: Six Sections That Determine Everything
The product-brief.md file is where you turn the agent from a generic script-writer into something that actually understands your product. The template has six sections, and you can fill it in 5–10 minutes:
- Product basics. Name, one-line description, price, URL.
- Customer profile. Demographics, psychographics, stated goal when buying.
- Customer voice. This is the section that matters most. Paste 10–20 real quotes from customer reviews, support tickets, or DMs. The agent matches its script voice to these quotes — which is how you get scripts that sound like your customers instead of ChatGPT's idea of your customers.
- Differentiation. Three things your product does better than the category default.
- Goals and audience temperature. Cold, warm, or retargeting — each gets a different script structure.
- Quantity and ad types. How many ads per run, split across cold/warm/retargeting.
Section 3 — the customer voice paste — is 80% of what determines whether the generated ads sound authentic. Treat it like the most important input in the whole workflow.
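Because the quality of section 3 dominates the outcome, it's worth sanity-checking the brief before a run. The validator below is a hypothetical sketch; the section and field names are my own stand-ins for the six sections above, not a fixed schema:

```python
def check_brief(brief: dict) -> list[str]:
    """Return warnings for a product brief represented as a dict.

    Field names (customer_voice, differentiation, ...) are
    illustrative mappings of the six brief sections.
    """
    warnings = []
    for section in ("product", "customer_profile", "customer_voice",
                    "differentiation", "goals", "quantity"):
        if not brief.get(section):
            warnings.append(f"missing section: {section}")
    quotes = brief.get("customer_voice", [])
    if len(quotes) < 10:
        warnings.append(f"only {len(quotes)} customer quotes; aim for 10-20")
    if len(brief.get("differentiation", [])) < 3:
        warnings.append("list three differentiators vs the category default")
    return warnings
```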
The Generation Run
Once the brief is loaded, the run is four prompts:
- "Read the brief and create a plan for 20 scripts." Agent outputs a plan (20 angles, hook types, awareness levels) before generating anything. This is a confirmation step — reject or adjust before moving on.
- "Generate the scripts." Agent writes 20 scripts. Each one is scored internally; only scripts ≥9.5/10 are kept. Outputs as a JSON file with awareness level, hook type, and emotion tagged per script.
- "Pick the actors for each script." Agent opens Arcads in the browser, filters actors by demographic match, picks 4–5 that fit the brief.
- "Generate all videos." Agent submits renders via the Arcads API and polls every 30s for completion. Walk away.
End-to-end time for 20 ads: about 40–60 minutes of agent runtime. Your human time: ~15 minutes on the brief, ~5 minutes reviewing the plan, ~5 minutes reviewing final outputs.
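Mechanically, the quality gate in step 2 is just a threshold filter over the tagged JSON output. A minimal sketch, assuming per-script fields like `score` and `hook_type` (the exact field names aren't documented in the workflow, so treat them as placeholders):

```python
import json

THRESHOLD = 9.5  # the gate defined in CLAUDE.md

def keep_passing(scripts_json: str) -> list[dict]:
    """Parse the scripts JSON and keep only scripts at or above
    the quality threshold. Scripts missing a score are dropped."""
    scripts = json.loads(scripts_json)
    return [s for s in scripts if s.get("score", 0) >= THRESHOLD]
```

Note that the score being filtered here is still self-reported by the model, which is exactly the weakness discussed in the next section.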
Where the Workflow Breaks
Now the honest part. Here's where this workflow struggles in practice:
1. Quality variance is huge.
The 9.5/10 internal quality gate is self-reported — Claude scoring its own output. Independent review of a batch of 20 generated ads typically flags 4–6 as usable, 8–10 as borderline (okay hook, weak execution), and 4–6 as obvious cuts. Net usable yield: 20–30%. So "500 ads per month" really means 100–150 usable ads per month, which is still great but not the headline number.
2. Persona inconsistency across runs.
The Arcads actor picker filters by demographic match, but the actual face and voice picked can differ run-to-run. If your brand needs consistent persona recognition (and it should — that's how parasocial trust is built), this workflow works against you.
3. No integrated feedback loop.
The "AI learning cycle" step — run ads for 48 hours, feed CTR/CPM data back to the agent — is a manual copy-paste from your ad platform into a Claude chat. There's no automated connection between ad performance and future generation. In practice, this loop breaks after one or two iterations for most users.
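If you do run the manual loop, you can at least make the copy-paste consistent. The helper below formats an ad-platform export into a text block to paste back into the chat; the row fields (`ad_id`, `ctr`, `cpm`) are assumptions about your export, not any platform's actual column names:

```python
def performance_summary(rows: list[dict]) -> str:
    """Format performance rows into a paste-able summary,
    ranked by CTR descending."""
    ranked = sorted(rows, key=lambda r: r["ctr"], reverse=True)
    lines = ["Ad performance after 48h (best CTR first):"]
    for r in ranked:
        lines.append(f"- {r['ad_id']}: CTR {r['ctr']:.2%}, CPM ${r['cpm']:.2f}")
    lines.append("Bias the next batch toward the top performers' hooks.")
    return "\n".join(lines)
```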
4. The agent pattern is fragile.
Computer-use agents work when every step behaves. When Arcads' UI changes, when the API rate-limits, when a render fails silently, the agent either retries forever or produces a partial batch. Debugging an agent that's halfway through generating 20 ads is an experience you only need to have once.
The Opinionated Alternative
The agentic pattern is the right direction. The DIY agent setup is not. UGC Copilot bakes the same pattern into a product with opinions:
- Structured briefs, not markdown files. Product info, customer quotes, objections, and differentiation are first-class fields that the script engine uses directly. No markdown editing, no brittle prompt scaffolding.
- AI Twins for consistent personas. Generate your persona once. Every ad across every campaign uses the same face and voice. Parasocial trust compounds instead of getting reset every batch.
- Quality gate calibrated against live ad performance. Not self-reported 9.5/10 — actual CTR and hold-rate benchmarks from ads that converted.
- Native feedback loop. Connect your ad account; the script engine learns which hooks, structures, and emotional beats actually converted for your product, not the average.
- Multi-model routing under one workflow. Seedance 2.0, Sora 2, Veo 3.1, and Kling O3 pick per-scene based on what each model handles best — without you managing four API keys.
Conclusion
The Claude Cowork + Arcads workflow is a great demonstration of where agentic ad production is heading. The cost math is real, and the scaffolding — CLAUDE.md as system brain, product brief as structured input, quality gate as filter — is the right pattern. What it's missing is the opinionated wrapper that makes the pattern reliable in production. That's the layer UGC Copilot builds.
Frequently Asked Questions
Is the "500 ads per month" number real?
Yes — but read the fine print. The workflow can generate 500 ads per month for under $400 in API costs. The usable yield after quality review is typically 20–30%, so expect 100–150 ads you would actually run. Still an enormous improvement over manual production, just not the headline number.
Do I need Claude Pro, Claude Max, or just the API?
For Claude Cowork (the computer-use agent) you need Claude Pro at minimum ($20/mo). You also need a separate API key with $10–$20 of credits for the agent to call Claude's generation endpoints. Total: roughly $30–$40/month on the Claude side.
Can I use this workflow without Arcads?
In principle yes — any video generation API with a documented endpoint (Seedance, fal.ai, Replicate, Veo direct) works. In practice, the CLAUDE.md file has to describe the new API's payload structure, and Arcads' consolidated model routing is part of why the original workflow is fast. Swapping in individual model APIs means re-writing the agent brain each time.
Will every ad look like a different person?
With the basic Arcads actor-picker approach, yes — each run may select a different actor. For a professional ad account this works against you; consistent persona recognition is how parasocial trust builds over time. This is the specific problem AI Twins in UGC Copilot solve: one persona, reused across every ad, every campaign, every month.