Matt Koppenheffer

The Infrastructure That Made AI Publishing Legal

Removing the Bottleneck to AI Content at Scale

The Situation

The Motley Fool was publishing AI-generated articles for premium subscribers. The ambition was to scale to thousands of articles per month — comprehensive coverage across all US-listed companies. But there was a hard constraint: every article required a human fact-checker to catch LLM hallucinations. At that volume, human fact-checking was economically impossible and operationally unscalable. We couldn't hire fast enough.

The real problem wasn't just operational — it was existential for the whole AI content strategy. Without solving fact-checking at scale, comprehensive coverage didn't exist. And in financial services, publishing inaccurate information wasn't just embarrassing. It was legally risky and would destroy subscriber trust. This had to be solved before anything else could scale.

How I Built It

Build a Proof of Concept First

Built a prototype LLM-based fact-checking system in Python that reverse-engineered the content creation process. The system extracted individual factual statements from each article, then systematically checked those facts against the original source material used to generate the content. Incorrect statements were corrected in the final output or, if correction wasn't possible with available sources, removed entirely.

This created a verifiable audit trail for every claim in every article — which turned out to be exactly what legal needed to see.
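A minimal sketch of that extract-verify-correct loop, assuming a verifier that returns a verdict per claim. The names here are illustrative, not the production code, and a toy exact-match function stands in for the actual LLM verification call:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CheckedClaim:
    text: str
    verdict: str              # "supported" | "corrected" | "removed"
    corrected: Optional[str]  # replacement text when verdict == "corrected"

def fact_check(claims: list[str], sources: list[str],
               verify: Callable) -> list[CheckedClaim]:
    """Run each extracted claim through a verifier; the returned
    list doubles as the per-claim audit trail."""
    return [CheckedClaim(c, *verify(c, sources)) for c in claims]

def rebuild(checked: list[CheckedClaim]) -> str:
    """Reassemble the article: keep supported claims, swap in
    corrections, drop anything that could not be verified."""
    kept = []
    for c in checked:
        if c.verdict == "supported":
            kept.append(c.text)
        elif c.verdict == "corrected":
            kept.append(c.corrected)
    return " ".join(kept)

def toy_verify(claim: str, sources: list[str]) -> tuple:
    """Stand-in for the LLM verification call: exact substring
    match against source material, no correction attempted."""
    if any(claim in s for s in sources):
        return ("supported", None)
    return ("removed", None)
```

The audit trail falls out of the data model: every claim carries its verdict, so a reviewer can trace exactly why each sentence survived, changed, or disappeared.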

Bring Legal In Early — as a Design Partner

Presented the prototype to cross-functional teams: legal, tech infrastructure, executive leadership. This wasn't a demo to get approval. It was the start of a collaborative process to understand what "good enough" actually meant from legal, editorial, and technical perspectives before committing to production architecture.

Legal's involvement from this early stage was what made eventual approval possible. By the time we requested sign-off, they'd already shaped the system they were evaluating.

Build the Production System

Assembled a focused team: a product manager with prompt engineering expertise and an AI developer. They transformed the prototype into a production-ready "generic fact checker" — modular, so it could integrate into any content pipeline for any LLM-generated content type, not just this one use case.

Over several weeks and hundreds of test runs, we systematically evaluated outputs, identified edge cases, and refined the system against defined quality thresholds.
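The modularity comes down to a narrow interface: each content pipeline supplies its own claim extractor and verifier, and the checker itself stays agnostic to content type. A hypothetical sketch of that shape (class and method names are mine, not the production API), with toy components standing in for the LLM-backed ones:

```python
from typing import Protocol

class ClaimExtractor(Protocol):
    def extract(self, article: str) -> list[str]: ...

class ClaimVerifier(Protocol):
    def verify(self, claim: str, sources: list[str]) -> bool: ...

class GenericFactChecker:
    """Content-type-agnostic checker: plug in whichever extractor
    and verifier suit the pipeline at hand."""
    def __init__(self, extractor: ClaimExtractor, verifier: ClaimVerifier):
        self.extractor, self.verifier = extractor, verifier

    def run(self, article: str, sources: list[str]) -> dict[str, bool]:
        claims = self.extractor.extract(article)
        return {c: self.verifier.verify(c, sources) for c in claims}

# Toy implementations for illustration:
class SentenceExtractor:
    def extract(self, article: str) -> list[str]:
        return [s.strip() for s in article.split(".") if s.strip()]

class ExactMatchVerifier:
    def verify(self, claim: str, sources: list[str]) -> bool:
        return any(claim in src for src in sources)
```

Because the pipeline only depends on the two protocols, swapping in a new content type means writing a new extractor or verifier, not a new checker.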

Prove Statistical Equivalence, Then Launch

Worked directly with legal to create a phased launch timeline with explicit guardrails. The process was designed to prove — not argue — that the system was as reliable as human fact-checkers:

  • Ran AI fact-checker outputs in parallel with human fact-checkers to build a statistical comparison dataset
  • Established clear thresholds for when human review was still required
  • Defined ongoing monitoring systems for quality control post-launch

We didn't ask legal to take a risk on AI. We showed them the data and let them draw the conclusion.
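The equivalence case can be made with ordinary interval estimation: run both checkers over the same articles, count residual errors, and compute a confidence interval on the difference in error rates. A hedged illustration of that comparison (the counts in the test are invented, not our actual results):

```python
import math

def error_rate_diff_ci(errs_ai: int, n_ai: int,
                       errs_human: int, n_human: int,
                       z: float = 1.96) -> tuple[float, float]:
    """95% Wald confidence interval for the difference in error
    rates (AI minus human) from a parallel-run comparison."""
    p_ai, p_h = errs_ai / n_ai, errs_human / n_human
    diff = p_ai - p_h
    se = math.sqrt(p_ai * (1 - p_ai) / n_ai + p_h * (1 - p_h) / n_human)
    return (diff - z * se, diff + z * se)
```

If the whole interval sits below an agreed tolerance margin, "as reliable as a human fact-checker" becomes a checkable claim rather than an argument, which is exactly what a phased-launch guardrail needs.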

The Results

What It Unlocked

Legal Approval Achieved

First fully automated content publishing system to get legal sign-off at The Motley Fool

End-to-End Automation

Complete pipeline from SEC filing detection through fact-checking to CMS publication — no human handoff in the chain

Scale That Wasn't Otherwise Possible

Positioned the operation to cover thousands of companies monthly — a coverage footprint impossible with human writers

Reusable Infrastructure

Modular system used across multiple content pipelines and adopted by the AI division of the Tech Team

Quality Validation

  • 8.4/10 AI content quality (vs. 8.5/10 human baseline)
  • 100% of legal standards met after testing
  • Multiple content types supported

Why It Mattered

Removed the Primary Bottleneck

This was the constraint holding everything else back. Solving it unlocked the full potential of the AI content operation; without it, the operation couldn't exist at scale

Competitive Moat

Comprehensive company coverage became economically feasible at The Motley Fool — and economically impossible for competitors still relying on human-written content

Template for Regulated AI

Demonstrated that AI could handle high-stakes content in a regulated industry when the system was built with accountability — not just accuracy — as the design constraint

What This Taught Me

Three things that apply to any AI system being built for high-stakes or regulated environments:

Build for Accountability, Not Just Accuracy

The key innovation wasn't better content generation — it was the audit trail. By extracting claims, checking them against sources, and documenting every verification step, we gave legal and editorial a system they could be accountable for. Accuracy gets you in the room. Accountability gets you approval.

Legal as Design Partner, Not Gatekeeper

Bringing legal in at the prototype stage transformed them from a potential blocker into a collaborator. They helped define what "provably reliable" meant, which shaped the technical architecture. By the time we sought approval, there were no surprises: they were evaluating a system they had helped design.

Statistical Proof Beats Theoretical Arguments

We didn't convince legal with arguments about what LLMs could do. We proved equivalence by running hundreds of parallel comparisons between AI and human fact-checkers. When the data showed comparable performance, approval followed naturally. The data made the decision — not the pitch.

Need to Get AI Over the Legal and Compliance Line?

Building AI systems that have to work in regulated industries — where accuracy, accountability, and legal sign-off are all required — is a specific problem. I've solved it before. Let's talk.