
Building an AI Content Operation from Zero

Zero to 4,000+ Articles in 12 Months

The Situation

The Motley Fool needed to dramatically expand its financial content coverage to thousands of publicly listed companies, at a cost structure human writing couldn't support. No AI content operation existed. I built one from scratch: the system, the team, the process, and the output. Twelve months later, the operation had published over 4,000 articles, achieved quality scores within a tenth of the human baseline, and delivered ~$1M in cost savings.

The Numbers

4,000+ articles published (Mar 2024 - Apr 2025)
AI content quality score within a tenth of the 8.5/10 human baseline
Cost per AI article far below the $250 for a human-written piece
~$1M in total cost savings achieved

How the Operation Works

Five components work together to move from raw data to published article with minimal human intervention (a simplified code sketch follows the list):

1. Data Sources: SEC filings, proprietary investing content, and external financial APIs provide the input data that feeds the system.

2. LLM Processing: Expert-crafted prompts, built by people with decades of investing experience, transform raw data using language models with domain context baked in.

3. Content Generation: Automated article creation with proper formatting and structure for financial analysis output.

4. Fact Checking: An LLM-based verification system validates every claim against source materials before publication; this is the infrastructure that made legal sign-off possible.

5. CMS Delivery: Direct integration with the content management system for a fully automated publishing workflow.
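
To make the flow concrete, here is a minimal Python sketch of the five stages. Every function is an illustrative stub so the skeleton runs end to end; none of the names, prompts, or logic come from the production system.

```python
# Sketch of the five-stage flow. Every function is an illustrative stub
# so the skeleton runs end to end; none of this is the production code.

def fetch_latest_filing(ticker: str) -> dict:
    # Stage 1, data sources: stand-in for SEC filings and financial APIs.
    return {"ticker": ticker, "excerpts": "Revenue grew 12% year over year."}

def call_llm(prompt: str) -> str:
    # Placeholder for whichever model API the pipeline uses.
    return f"[draft generated from a {len(prompt)}-character prompt]"

def generate_draft(filing: dict) -> str:
    # Stages 2-3, LLM processing and content generation: a domain-expert
    # prompt wraps the raw source text before it reaches the model.
    prompt = (
        "You are a veteran equity analyst. Using only the filing excerpts "
        f"below, draft a short analysis of {filing['ticker']}.\n\n"
        f"{filing['excerpts']}"
    )
    return call_llm(prompt)

def fact_check(draft: str, source: str) -> bool:
    # Stage 4: every claim must verify against the source material
    # (a fuller sketch appears later on this page).
    return bool(draft) and bool(source)

def publish_to_cms(ticker: str, draft: str) -> None:
    # Stage 5: direct CMS delivery, stubbed out as a print.
    print(f"published {ticker}: {draft[:60]}")

def run_pipeline(ticker: str) -> None:
    filing = fetch_latest_filing(ticker)
    draft = generate_draft(filing)
    if fact_check(draft, source=filing["excerpts"]):
        publish_to_cms(ticker, draft)
    # On a failed check, the draft routes to human review instead.

run_pipeline("AAPL")
```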

What Makes It Work

Multi-Source Data Integration

SEC filings, proprietary investing content, and external financial APIs — the input quality determines the output quality

Domain-Expert Prompts

Templates built by investors with decades of experience — not generic prompts, but prompts that encode how analysts actually think about companies
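
To show what "encoding how analysts think" means in practice, here is a template in the same spirit. The wording, checklist, and constraints are invented for this example, not taken from the production prompts.

```python
# Illustrative shape of a domain-expert prompt template. The framing,
# ordering, and constraints encode analyst judgment; this exact wording
# is invented for the example, not copied from the production prompts.

EARNINGS_PROMPT = """You are an experienced equity analyst reviewing a quarterly report.

Before writing, assess in order:
1. Revenue and margin trends versus the prior year, not just the prior quarter.
2. Whether guidance changed, and what management attributed the change to.
3. Balance-sheet items a long-term investor cares about: cash, debt, buybacks.

Write 400-500 words for a retail investor. Cite only figures that appear
in the excerpts below; if a figure is not present, do not estimate it.

Excerpts:
{excerpts}
"""
```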

Automated Triggering

WebSocket integration with the SEC's filing feed for real-time processing: articles are triggered by filings, not by human queues
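
A minimal sketch of the trigger pattern, assuming a WebSocket feed of new filings. The endpoint URL and message schema below are hypothetical, and the `websockets` library stands in for whatever client the production system uses; only the pattern (connect, parse, filter, dispatch) reflects the design.

```python
# Sketch of filing-triggered generation, assuming a WebSocket feed of new
# SEC filings. The URL and message schema are hypothetical; only the
# pattern (connect, parse, filter, dispatch) reflects the design.

import asyncio
import json

import websockets  # pip install websockets

FEED_URL = "wss://example.com/sec-filings"  # hypothetical feed endpoint

async def handle_filing(filing: dict) -> None:
    # Hand off to the generation pipeline (sketched earlier on this page).
    print(f"triggering article for {filing.get('ticker')} ({filing.get('form_type')})")

async def listen_for_filings() -> None:
    async with websockets.connect(FEED_URL) as ws:
        async for message in ws:
            filing = json.loads(message)
            # Only certain filing types warrant an automated article.
            if filing.get("form_type") in {"10-K", "10-Q", "8-K"}:
                # Dispatch without blocking the feed loop.
                asyncio.create_task(handle_filing(filing))

asyncio.run(listen_for_filings())
```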

LLM-Based Fact Checking

Every claim verified against source materials before publication — the system that got legal approval and made scale possible
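
The pattern, in sketch form: split a draft into claims, ask a model whether each claim is supported by the source text, and block publication on any failure. The prompt wording and `call_llm` stub are illustrative stand-ins, not the production system.

```python
# Sketch of LLM-based claim verification: extract claims from a draft,
# check each against the source text, and block publication on any
# failure. The prompt and call_llm stub are illustrative stand-ins.

def call_llm(prompt: str) -> str:
    return "SUPPORTED"  # placeholder for a real model call

def extract_claims(draft: str) -> list[str]:
    # Naive stand-in: treat each sentence as one checkable claim.
    return [s.strip() for s in draft.split(".") if s.strip()]

def verify_claim(claim: str, source: str) -> bool:
    prompt = (
        "Does the source text fully support the claim? "
        "Answer SUPPORTED or UNSUPPORTED.\n\n"
        f"Claim: {claim}\n\nSource:\n{source}"
    )
    return call_llm(prompt).strip().upper().startswith("SUPPORTED")

def fact_check(draft: str, source: str) -> tuple[bool, list[str]]:
    failures = [c for c in extract_claims(draft) if not verify_claim(c, source)]
    # The article ships only if every claim verifies; anything that
    # fails goes to human review rather than publication.
    return (not failures, failures)
```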

Direct CMS Integration

Fully automated delivery to the content management system — no human handoff required in the publishing workflow
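
A minimal sketch of the handoff, assuming the CMS exposes a REST endpoint for article creation. The URL, auth scheme, and payload fields are all hypothetical, and `requests` stands in for whatever HTTP client is actually used.

```python
# Sketch of direct CMS delivery, assuming the CMS exposes a REST endpoint
# for article creation. URL, auth scheme, and payload fields are all
# hypothetical; requests stands in for whatever HTTP client is used.

import requests  # pip install requests

CMS_URL = "https://cms.example.com/api/articles"  # hypothetical endpoint

def publish_to_cms(title: str, body: str, api_token: str) -> str:
    payload = {"title": title, "body": body, "status": "published"}
    resp = requests.post(
        CMS_URL,
        json=payload,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    # Fail loudly rather than silently dropping an article.
    resp.raise_for_status()
    return resp.json()["id"]  # hypothetical response field
```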

Continuous Evaluation

Rigorous testing protocols and human quality assessments built into the operation from the start — not bolted on after problems
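
In sketch form, a recurring evaluation run looks something like this: score a sample of published articles on the same 1-10 rubric used for human content and compare the mean to the baseline. The random grader here is a stand-in for LLM or human rubric scoring.

```python
# Sketch of a recurring quality evaluation: score a sample of published
# articles on a 1-10 rubric and compare the mean to the human baseline.
# The random grader is a stand-in for LLM or human rubric scoring.

import random
import statistics

HUMAN_BASELINE = 8.5  # the human-written quality score cited above

def grade_article(article: str) -> float:
    # Stand-in for a grader applying the quality rubric.
    return random.uniform(7.5, 9.5)

def evaluate(sample: list[str]) -> float:
    scores = [grade_article(a) for a in sample]
    mean = statistics.mean(scores)
    print(f"sample mean {mean:.2f} vs human baseline {HUMAN_BASELINE}")
    return mean

evaluate(["first article text", "second article text"])
```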

Why It Actually Worked

Most AI content experiments fail because they treat the model as the product. This operation worked because it combined deep investing domain expertise with rigorous evaluation processes. The prompts encoded how experienced analysts think. The fact-checking system made every output auditable. The quality bar was set against human content, not against nothing. This wasn't automation for its own sake — it was capturing expert knowledge and applying it at a scale no team of humans could reach.

What I Built and Led

System Architecture

Built the initial Python proof-of-concept and designed the end-to-end pipeline architecture — from data ingestion through publication

Team Assembly and Leadership

Built and led the cross-functional team: external developer, project manager, internal operations, and editorial staff

Legal and Compliance Navigation

Secured legal approval for automated publishing — the critical unlock that made scale possible in a regulated financial publisher

Quality Evaluation

Designed the evaluation framework, personally performed quality assessments, and led continuous testing — the work that kept the output defensible

Strategic Execution

Took the company from no AI content strategy to 34% of premium content being AI-generated — in 12 months, from zero

Building an AI Operation?

I've done this in a regulated industry where the margin for error was zero. If you're trying to move from AI experimentation to a production system that actually ships, let's talk.