Alfonso Morcuende

Verification Gap

In God we trust. All others must bring data.

Photo by Denisbin

On January 27, 1967, Gus Grissom, Ed White, and Roger Chaffee were strapped into Apollo 1 on Launch Pad 34. It was a routine plugs-out test: no fuel, no liftoff. At 6:31 PM, a spark ignited the pure oxygen atmosphere inside the cabin. In 17 seconds, all three astronauts were dead.

NASA stopped. Entirely.

The investigation board that convened in the following days included eight specialists and one astronaut: Frank Borman. He was the only person who had flown in an Apollo-era spacecraft, and the first human being to enter the burned capsule after the fire. What he saw there — the melted Velcro, the charred wiring, the evidence of a system that had grown too fast and too blind to what it didn’t know — stayed with him.

When the investigation was done, Borman wasn't sent back to training. He was sent to Downey, California, to the North American Aviation factory where the command modules were built, with a single mandate: nothing leaves this building unless I say so. He became the on-site manager of the most exhaustive redesign in spaceflight history, a systematic verification that every change was traceable to an identified failure, reviewed by the right people, and signed off before proceeding.

He described it later in his own words: “I finally had to put my foot down. Nobody could come out there unless they had approval of Low, Slayton, or me.”

Frank Borman, Apollo Program Resident Manager — NASA

Eighteen months after the fire that killed three astronauts, Frank Borman, Jim Lovell, and Bill Anders climbed into a completely rebuilt Apollo 8 and traveled to the Moon. On Christmas Eve 1968, Borman read from Genesis to half a billion people. A stranger sent him a telegram afterward. It read: “Thank you Apollo 8. You saved 1968.”

The mission didn’t succeed because NASA moved fast. It succeeded because, after the worst failure in its history, someone had the discipline to verify everything before it flew.

Frank Borman, Jim Lovell and Bill Anders after Apollo 8 recovery, December 1968 — NASA / Wikipedia

The same mistake, fifty years later

In my last article I described Problem-Driven AI, a methodology built around one conviction: the bottleneck was never construction. It was always understanding the problem with enough depth to deserve a solution.

That piece drew more response than anything else I've written. Many of you wrote variations of the same question: this makes sense as a philosophy, but how do you enforce it when teams are under pressure to ship?

The answer is: you build a gate. The same thing Borman built in Downey. Not a slow-down. A verification system that runs before the machine executes, checks that what’s about to be built is traceable to a real problem and a real decision, and refuses to let anything through that isn’t.

The gap nobody is talking about

When a team uses AI to build, there is a precise moment where human thinking ends and machine execution begins. That handoff is the most dangerous moment in the process.

Not because the AI makes mistakes. Because it doesn’t. It executes with perfect fidelity whatever it receives. A poorly defined problem, an assumption treated as a fact, a solution that nobody validated with anyone: the AI builds all of that. Precisely. At scale. Without warning.

Teams talk about the model, the prompt, the speed. What they rarely talk about is whether the thinking behind the prompt was reviewed by anyone before it reached the machine. Whether the research we did actually supports what we decided to build. Whether what we agreed on in the room is coherent with what we’re asking the system to produce.

That’s the verification gap. The same gap that killed Grissom, White, and Chaffee. Not malice. Not incompetence. The absence of a systematic check between what the team believed was true and what was actually true, before the system was activated.

Problem-Driven AI is not a methodology on paper

Problem-Driven AI was never meant to be a framework that gets read and filed away. Its ambition is to be an actual design process: one where human thinking does the work that only humans can do, and AI executes at the full extent of its potential. The methodology has five phases.

Problem-Driven AI — Five-phase methodology

The first three — Problem Discovery, Solution Alignment, and Context Engineering — are entirely human. Research with the people who live the problem. Decisions about what to build and why. Translation of that thinking into something the AI can process with fidelity. The last two — AI Build and Market Iteration — are where the machine builds and the system learns.
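To make the handoff concrete, here is a minimal sketch in Python of the five phases and who owns each. The Phase structure and the names of the variables are illustrative only; the methodology does not prescribe any code for this. The sketch simply marks the point where human work ends and machine execution begins.

```python
# Illustrative sketch only: the PDA methodology is not expressed as code.
# It lists the five phases and who owns each, and marks the handoff point.
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    name: str
    owner: str  # "human" or "machine"

PDA_PHASES = [
    Phase("Problem Discovery", "human"),    # research with the people who live the problem
    Phase("Solution Alignment", "human"),   # decide what to build and why
    Phase("Context Engineering", "human"),  # translate that thinking for the AI
    Phase("AI Build", "machine"),           # the machine builds
    Phase("Market Iteration", "machine"),   # the system learns from real use
]

# The handoff is the boundary between the last human phase and the first
# machine phase. Nothing in this list, by itself, verifies that boundary.
HANDOFF_INDEX = next(i for i, p in enumerate(PDA_PHASES) if p.owner == "machine")
```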

The question many of you raised was a good one. Between the human thinking and the machine execution, who checks that the handoff was done right?

The PDA Toolkit

The PDA Toolkit is the open source answer. A system of four agents that sits exactly at that handoff and enforces a rule Borman would have recognized immediately: nothing proceeds until it passes verification.

The first three agents act before the AI builds anything. They are quality controls for thinking, not for production.

  • /pda-problem is the first control. Before anything else, the team must have researched the real problem with the people who live it and documented it. This agent reads that document and checks that every claim has evidence behind it, that what hasn’t been verified is marked as an assumption, and that there are no unresolved internal contradictions. It doesn’t suggest improvements. It doesn’t rewrite. It passes or fails — and when it fails, it says exactly why.
  • /pda-solution comes in when the team has decided what to build and why. It checks that every part of the solution responds to something concrete in the problem that was researched. If there are features the team wants to add but that don’t trace back to any real detected need, the agent flags them as speculation. Not as a bad idea. As something nobody validated.
  • /pda-context verifies the technical document describing how to build the solution. It checks that the engineering decisions are coherent with the design and product decisions made before. This control exists for a specific reason: an AI that receives incomplete instructions doesn’t stop to ask. It invents what’s missing. And it invents with complete confidence — which is exactly the kind of error that’s hardest to detect afterward.
  • /pda-ai-build runs only when the three previous controls have passed. At that point, the AI has everything it needs to build: the problem defined, the solution agreed upon, the technical instructions. No important decision remains. The humans already made them. The AI executes.

The first three agents are validators, not generators. If the AI writes the problem research, the team skips the hardest and most valuable work in the design process. If the AI defines the solution, it skips the second hardest. The agents don't replace that work. They guarantee it was done.
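For readers who want to picture how the gate behaves, here is a minimal sketch, assuming a hypothetical Verdict shape and a hypothetical run_gate function. The real agents are slash commands, not a Python API; the sketch only illustrates the rule they enforce: every validator must pass, and a failure reports exactly why before anything reaches /pda-ai-build.

```python
# Hypothetical sketch of the verification gate. The Verdict class and run_gate
# are invented for illustration; the actual agents are slash commands.
from dataclasses import dataclass, field

@dataclass
class Verdict:
    agent: str
    passed: bool
    failures: list[str] = field(default_factory=list)  # exact reasons, not rewrites

def run_gate(verdicts: list[Verdict]) -> None:
    """Refuse to proceed to AI Build unless every validator passed."""
    blocking = [v for v in verdicts if not v.passed]
    if blocking:
        for v in blocking:
            print(f"{v.agent} failed:")
            for reason in v.failures:
                print(f"  - {reason}")
        raise SystemExit("Gate closed: fix the thinking before the machine builds.")
    print("Gate open: /pda-ai-build may run.")

# Example: the solution document contains a feature nobody validated.
run_gate([
    Verdict("/pda-problem", passed=True),
    Verdict("/pda-solution", passed=False,
            failures=["'Export to PDF' does not trace back to any researched need"]),
    Verdict("/pda-context", passed=True),
])
```

The sketch makes the same point the agents make: a failure is a list of reasons, not a rewrite, and nothing downstream runs until that list is empty.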

PDA Toolkit — open source verification layer for Problem-Driven AI

What comes next

The toolkit is in Beta. That’s not a disclaimer — it’s a design principle. Problem-Driven AI is built to evolve from real use, not from anticipation. The four current agents cover the core verification loop and are fully functional today. Future versions will deepen the controls as real projects reveal where the next gaps are.

The thinking remains yours. The AI executes at its full potential. The verification is what guarantees the two are actually connected.

That’s the gate Borman built. The one the methodology needs — and what the toolkit enforces.