
Speed Without Structure Is Just Expensive Guessing — Guild's CEO on the Real Founder Edge in the Agent Era
Most founders treat velocity like a moat. Ship fast, demo fast, raise fast. But James Everingham has watched that playbook collapse before — at Netscape, at a startup where he raised too much money without conviction, and across a career that spans five companies and a decade inside Meta's engineering machine. His read on the current AI moment is blunt: if your progress depends on memory instead of method, you are building on sand.
On AI for Founders with Ryan Estus, the Guild.ai CEO laid out a thesis aimed squarely at non-technical founders who are excited about agents but shipping without guardrails. The conversation covered why agents are prosthetics rather than replacements, why the iPhone's app chaos is replaying inside enterprises right now, and why the counterintuitive move is to open-source your agents as a security strategy. It also produced one of the clearest framings of the Jevons Paradox in AI that any podcast has aired this year.
"Bionics, not robotics"
Everingham gets uncomfortable when people talk about agents as replacements for people. He has a line he used to use at Meta, and he repeated it here without hesitation.
"We're building bionics here, not robotics."
The distinction matters because it changes what you build agents to do. LLMs have a specific strength profile: they are excellent at taking complex information and drawing statistical insights from it. The Guild CEO calls these "complex simple tasks" — work that involves wading through enormous volumes of information and surfacing patterns. But deep algorithmic problem-solving is a different story.
At Meta's Dev Infra org, they had a system called Dubdubdub (WWW) — a mindbendingly complex stack that runs most of Meta's software, so hyper-tuned for efficiency that attempting to vibe code inside the kernel would be catastrophic. Senior engineers working on HHVM, LLVM, and the languages team had good reason to keep agents out of certain areas. The capability boundary is real.
But the flip side is equally real. Everingham described an engineer who used an agent to fix tens of thousands of accessibility issues across one of the world's largest source code repositories — all at once. That is the "ditch digging" of software engineering, the monotonous work that slows teams down and prevents them from doing creative problem-solving, feature development, and invention.
The accountant analogy drives the point further. When the first spreadsheet came out, accountants panicked. They thought it was the end of accounting. Instead, accountants became dramatically more efficient, and there are now ten times more of them — all using spreadsheets. Jevons Paradox: when using a resource becomes more efficient, total demand for that resource grows rather than shrinks.
"The best engineers, the most productive engineers, delete more code than they write."
That line landed in the context of productivity metrics. A lot of CEOs and leaders are mandating more code output, but Everingham pushed back hard. Lines of code and diffs are interesting short-term velocity metrics, but they do not translate to feature velocity. You might be generating a lot more action without traction. The bionic framing forces a different question: not how much output are my engineers producing, but how much of the ditch digging have we removed so they can focus on the work that actually compounds.
The iPhone problem is happening again
When Everingham describes the current state of agents in enterprise, he reaches for a specific analogy that anyone who lived through the early smartphone era will recognize immediately.
"When the iPhone first came out and there were no enterprise controls and people were just installing apps on their phones. Nobody knew what was happening. We think that's sort of happening with agents right now."
The parallel is precise. In the early iPhone days, employees started installing apps on their phones with zero enterprise oversight. IT departments had no visibility into what was running, what data was being accessed, or what security risks existed. It took years for mobile device management to catch up.
Agents are following the same pattern. Engineers and employees are spinning up agents on personal machines, giving them access to internal systems, and nobody has a centralized view of what is happening. The recent security issues with open-source coding agents have only made the problem more visible. As Everingham put it, these are small problems — for now. But without a centralized system, they compound.
The solution he described is essentially an internal app store for agents. A centralized platform where you can see what agents exist, what they have access to, what they did, and maintain auditable logs for both debugging and compliance. Enterprises in regulated industries cannot function without governance layers, and even unregulated companies need to know what their agents touched when something goes wrong.
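To make the idea concrete, here is a minimal sketch of what such a platform's core bookkeeping might look like. Everything here is hypothetical — the class name, scope strings, and log schema are illustrative, not Guild's actual design — but it shows the three pieces Everingham describes: a registry of which agents exist, access control over what they can touch, and an append-only audit log of what they did.

```python
import json
import time
import uuid


class AgentRegistry:
    """Hypothetical in-memory control plane: tracks which agents exist,
    what they may access, and keeps an auditable log of what they did."""

    def __init__(self):
        self.agents = {}     # agent_id -> metadata
        self.audit_log = []  # append-only event list

    def register(self, name, scopes):
        """Add an agent with an explicit list of allowed resources."""
        agent_id = str(uuid.uuid4())
        self.agents[agent_id] = {"name": name, "scopes": set(scopes)}
        self._record(agent_id, "registered", {"scopes": sorted(scopes)})
        return agent_id

    def authorize(self, agent_id, resource):
        """Every access attempt is checked and logged, allowed or not."""
        allowed = resource in self.agents[agent_id]["scopes"]
        event = "access_granted" if allowed else "access_denied"
        self._record(agent_id, event, {"resource": resource})
        return allowed

    def _record(self, agent_id, event, detail):
        self.audit_log.append(
            {"ts": time.time(), "agent": agent_id, "event": event, "detail": detail}
        )

    def export_log(self):
        # An auditable trail for both debugging and compliance reviews
        return json.dumps(self.audit_log, indent=2)
```

The key property is that denied attempts are logged just like granted ones — when something goes wrong, you can answer "what did this agent touch, and what did it try to touch?"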
This also connects to the vendor-neutrality point that keeps surfacing across Everingham's interviews. Enterprises do not want to be locked to one model or one vendor. They want to bring their own keys, bring their own model, and run agents on enterprise servers rather than employee laptops. A control plane that is model-agnostic gives them that flexibility.
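Vendor neutrality is an adapter pattern at heart. A rough sketch, with entirely hypothetical class names: the control plane codes against a small provider interface, so swapping a hosted model for a self-hosted one (or bringing your own API key) never touches the agent logic itself.

```python
from abc import ABC, abstractmethod


class ModelProvider(ABC):
    """Minimal provider interface: the control plane depends on this,
    never on any single vendor's SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class HostedProvider(ModelProvider):
    # Hypothetical wrapper; a real one would call a vendor SDK
    # using a customer-supplied key ("bring your own keys").
    def __init__(self, api_key: str):
        self.api_key = api_key

    def complete(self, prompt: str) -> str:
        return f"[hosted] {prompt}"


class LocalProvider(ModelProvider):
    # A self-hosted model running on enterprise servers,
    # rather than on employee laptops.
    def complete(self, prompt: str) -> str:
        return f"[local] {prompt}"


def run_agent(provider: ModelProvider, task: str) -> str:
    # Agent logic is unchanged when the enterprise swaps providers.
    return provider.complete(task)
```

The model landscape changes every quarter; an interface this thin is what lets an organization switch providers without rebuilding its agent infrastructure.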
The cost of intelligence is collapsing
There is a pattern Everingham keeps returning to across multiple podcast appearances, but on this episode he stated it with a clarity that deserves its own section.
"Every time we've seen a core input sort of go down in value, society reorganizes around it."
The internet collapsed the cost of distribution. Commerce, media, communication — everything reorganized. Electricity did it before that. The industrial revolution before that. Each time a core input drops dramatically in price, the entire economy restructures around the new reality.
AI is doing the same thing to intelligence. What companies used to pay for — human cognitive labor — is getting cheaper at a rate that compresses the usual adoption cycle. And these cycles are accelerating. The internet moved fast. This is moving faster than anything Everingham has seen, and he was at Netscape for the browser wars.
The question that matters for founders is not which model wins. Everingham is explicit: model capability will converge. The deltas will shrink. Models will become a commodity, the same way browsers eventually became a commodity. The value migrates to the services built on top. The Amazons and Googles of the agent era have not been built yet.
This is why he tells college students not to study AI. It is going to be a commodity. Study first principles instead — math, physics, economics, human psychology. Study the things that will compound on top of collapsing intelligence costs, not the collapsing input itself.
Open-source your agents — it is a security strategy
This was the most counterintuitive argument in the conversation, and Everingham delivered it with the confidence of someone who has watched the open-source security debate play out across multiple technology cycles.
The instinct for most companies is to keep agent code closed. Proprietary logic, competitive advantage, security through obscurity. But the former Meta Dev Infra leader pushed back on that instinct directly.
"You're basically just hiding it from the people that would help make it more secure."
The logic: skilled hackers will reverse engineer your agent code regardless. Keeping it closed does not protect you from adversaries with real capability. What it does is prevent security researchers, community contributors, and your broader ecosystem from finding and fixing vulnerabilities before they are exploited. Open-sourcing invites more eyes, and more eyes surface more problems faster.
This is not a theoretical position. It connects directly to Guild's approach — building a managed software center for agents where engineers can browse, fork, and extend agents, with a public layer for community contribution. The centralization and governance layer sits underneath, controlling access and audit trails, while the agent code itself benefits from open scrutiny.
The argument also ties back to the iPhone analogy. When everyone is installing agents without visibility, the security problem is already there. Hiding the code does not make it go away. A centralized, open, governed system does more for security than a closed, fragmented one.
"Don't start a company to start a company"
Everingham has started five companies. One of them — Luminate — was, by his own telling, a mistake born from the wrong motivation. He raised too much money without product-market fit, without the conviction that the problem was one he had to solve, and spent six and a half years grinding against a wall.
"Don't start a company to start a company. Go wait for that observation that you can't not go do."
The distinction he draws is between burnout and lack of inspiration. He used to confuse them. In hindsight, the periods where he felt burned out were really periods where he was not inspired by the problem. Luminate was that. He wanted to start a company. He did not have the compulsion that comes from seeing something you cannot walk away from.
Guild was different. At Meta, running Dev Infra, he watched what happened when agents were centralized and governed inside the developer infrastructure. He saw the community dynamics, the viral adoption, the engineers becoming internal celebrities for the agents they built. And he could not stop thinking about what that would look like as a product outside of Meta.
The former Instagram head of engineering is candid about the failure's silver lining. The timing of Luminate's end put him at Instagram, which became one of the defining experiences of his career. Kevin Weil, his product partner at both Instagram and Libra, is now an investor in Guild through Scribble Ventures. The network compounds, even through the failures.
He offered one of his favorite lines — a joke poster he once saw that read, "The purpose of your life could be to serve as a warning to others." The self-deprecation masks a real lesson. He has the scar tissue to know the difference between wanting to build a company and needing to build a specific one. Guild is the latter.
The investors reflect the conviction. Google Ventures led the round — they were also the lead on his first company, the very first company GV ever funded. NFX, Lobby Capital, and Scribble Ventures rounded it out, every firm connected to Everingham through deep personal and professional relationships built over decades.
Speed without structure is fragility
This is the thesis of the entire episode, the thread that runs underneath every other section. The host opened with it. Everingham validated it from every angle.
The current agent landscape rewards speed. Founders are shipping demos, closing pilots, watching usage spike. But underneath, there is no clean experiment trail, no controlled variables, no reliable way to rewind the tape. If the engineer who built your core agent left tomorrow, could you recreate what they built from scratch?
That is the reproducibility problem. And it is not a nice-to-have. It is the difference between a company that compounds and a company that gets lucky for a while.
Everingham sees this pattern in how companies measure productivity. Writing more code is not the same as being more productive. Speed metrics — diffs shipped, lines written, agents deployed — can look like progress while masking fragility. The real question is whether your system produces the same results under controlled conditions, whether you can trace every decision, and whether your governance layer can withstand a compliance audit.
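What does a reproducible agent run actually require you to record? A minimal sketch, with hypothetical function and field names: log every controlled variable of a run (agent, model, parameters, prompt) under one fingerprint, plus a hash of the output. Identical inputs share an input key, so when outputs diverge across runs — as they will with non-deterministic models — the divergence is visible and traceable.

```python
import hashlib
import json


def log_run(store, agent, model, params, prompt, output):
    """Hypothetical sketch: record everything needed to replay a run.
    The fingerprint covers inputs and settings only, so two runs with
    identical inputs but different outputs are easy to spot."""
    record = {
        "agent": agent,
        "model": model,
        "params": params,  # temperature, seed, tool versions, etc.
        "prompt": prompt,
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }
    # Canonical fingerprint of the controlled variables (not the output)
    controlled = {k: record[k] for k in ("agent", "model", "params", "prompt")}
    record["input_key"] = hashlib.sha256(
        json.dumps(controlled, sort_keys=True).encode()
    ).hexdigest()
    store.append(record)  # append-only: institutional memory, not tribal memory
    return record
```

This is the "method instead of memory" point in miniature: if the engineer who built your core agent left tomorrow, the run log is what lets you rewind the tape.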
At Meta, the code review problem illustrated the scale of this challenge. One code review agent could not possibly cover the breadth of concerns — SOC 2 compliance, GDPR requirements, graphics, hardware, and more. They estimated needing probably 500 different specialized agents just for code review. Across all industries, that number is probably millions. Managing that many agents without a control plane is not a scaling problem. It is an impossibility.
Protocols are part of the answer but not the whole answer. Everingham acknowledged that MCP is already falling short on some things people need. The protocols have to evolve. Orchestration layers need to mature. Human intervention remains critical — AI gives a different answer every time, unlike a calculator that produces the same output for the same input. Humans in the loop are not a concession. They are a requirement.
The founder advantage is not who ships first. It is who can compound correctly — with reproducibility, governance, and institutional memory baked into the infrastructure from day one. Everything else is fragility disguised as progress.
Your agents need *structure* before they need speed.
Guild is the agent control plane for teams that want reproducibility, governance, and centralized management — not just another demo. If you are a founder building with agents and want to compound correctly, not just ship fast, Guild is built for you.
Frequently asked questions
**What is an agent control plane?**
A centralized platform for running, managing, and governing AI agents in an enterprise. It provides access controls, auditable logs, debugging visibility, and compliance layers. Think of it as the equivalent of mobile device management for the iPhone era, but for agents — a system that gives organizations visibility and control over what agents exist, what they can access, and what they have done.
**Why should companies open-source their agents?**
Keeping agent code closed creates a false sense of security. Skilled adversaries will reverse engineer it regardless. Open-sourcing invites security researchers and community contributors to find and fix vulnerabilities before they are exploited. You are "basically just hiding it from the people that would help make it more secure."
**How many specialized agents will enterprises need?**
Far more than most people expect. At Meta, code review alone required an estimated 500 specialized agents — SOC 2 specialists, GDPR specialists, graphics and hardware specialists. Across all industries, that number is probably millions of specialized agents. A single general-purpose agent cannot handle the breadth of domains in any sufficiently complex organization.
**What should college students study in the AI era?**
Not AI. Study first principles — math, physics, economics, human psychology. These are the disciplines that compound on top of collapsing intelligence costs. Studying AI itself is like studying electricity or browser development — the input that is becoming cheap, not the services that will be built on top of it.
**What is the difference between burnout and lack of inspiration?**
They feel similar but have different root causes. Burnout comes from overwork on something that matters to you. Lack of inspiration comes from grinding on something you do not have deep conviction about. Confusing the two leads founders to push through when they should pivot, or to start companies just to start companies rather than waiting for the observation they cannot walk away from.
**Why does reproducibility matter for founders building with agents?**
If your progress depends on memory instead of method, you are building on sand. Reproducibility means every test is deliberate, every run is logged, every improvement is traceable. It is the difference between a company that compounds correctly and one that just gets lucky. When your agents produce different results every time, you need institutional memory and controlled experiments to know whether you are actually improving.
**Does an agent control plane need to be model-agnostic?**
Yes. Enterprises want to bring their own keys and their own models. The model landscape changes every quarter, and betting your infrastructure on one provider creates unnecessary risk. A model-agnostic control plane lets organizations switch between providers without rebuilding their agent infrastructure.