When Generating Code Becomes Cheaper Than Reading It
The economic inversion that's reshaping software development
Last week I watched an AI agent write, execute, and discard 47 different scripts in under three minutes to solve a single data transformation problem. None of that code was saved. None of it needed to be. The agent generated exactly what it needed, used it, and moved on.
That moment crystallized something I'd been sensing for months: we're approaching a world where the act of compiling code creates instant legacy. Where the traditional virtues of maintainability, readability, and documentation become relics of an era when code was expensive to produce.
Welcome to the age of Just-In-Time Software.
The Economics That Change Everything
Here's the math that's reshaping software development: code generation costs are dropping exponentially while human developer costs remain flat or increase. Anthropic's recent engineering blog revealed that their code execution approach with MCP achieves a 98.7% reduction in token usage. Cloudflare's "Code Mode" demonstrates similar efficiency gains. When generating code becomes this cheap, the calculus of software development fundamentally shifts.
Think about what we optimize for today:
- Maintainability - because code lives for years and multiple developers touch it
- Readability - because humans need to understand what other humans wrote
- Documentation - because institutional knowledge must persist
- Testing - because bugs in long-lived code compound over time
- Code review - because mistakes are expensive to fix later
But what if code doesn't live for years? What if it lives for seconds?
The Zerg AI Thesis: Software as Biology
Idan Beck at Zerg AI has been articulating this vision through what he calls "Just-in-Time Software" - the idea that as code generation speed increases and cost approaches zero, software starts behaving more like biological systems than engineered artifacts.
In biology, cells don't maintain themselves indefinitely. They're generated, they serve their purpose, and they're replaced. The system's resilience comes not from the durability of individual components but from the ability to rapidly regenerate them.
Apply this to software: instead of maintaining a codebase for years, you regenerate the code you need at runtime. The "source of truth" shifts from the code itself to the intent specification - the prompt, the requirements, the desired outcome.
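To make that shift concrete, here's a minimal sketch in Python with invented names: the IntentSpec is the durable artifact, while the generated script exists only for the duration of one call. The generate_code stub stands in for a real code model, and the exec call stands in for a proper sandbox.
from dataclasses import dataclass, field

@dataclass
class IntentSpec:
    """The durable artifact: what the code must accomplish, not how."""
    goal: str
    inputs: dict
    constraints: list[str] = field(default_factory=list)

def generate_code(spec: IntentSpec) -> str:
    # Placeholder: a real system would send the spec to a code model here.
    # Hard-coded so the sketch runs end to end.
    return "result = sum(inputs['values'])"

def run_once(spec: IntentSpec):
    source = generate_code(spec)   # code is regenerated on every call
    scope = {"inputs": spec.inputs}
    exec(source, scope)            # toy execution; a real system sandboxes this
    return scope["result"]         # the script and its scope are then discarded

spec = IntentSpec(goal="Sum the incoming values",
                  inputs={"values": [1, 2, 3]},
                  constraints=["no network access"])
print(run_once(spec))  # 6 - only the spec persists between runs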
This isn't science fiction. It's already happening in production systems.
MCP Code Mode: The Infrastructure Is Here
Both Anthropic and Cloudflare have independently arrived at the same architectural insight: it's more efficient to let AI agents write and execute code than to hide everything behind pre-defined tool interfaces.
Anthropic's approach, detailed in their November 2025 engineering blog, presents MCP servers as code APIs in a filesystem structure. Instead of loading thousands of tool definitions upfront, agents explore a directory tree, read only the tool definitions they need, and write code that orchestrates multiple tools in a single execution.
The results are striking:
- Progressive tool loading - agents discover tools on-demand rather than loading everything upfront
- Data filtering in execution - process 10,000 rows in the sandbox, return only the 50 that matter
- Control flow without token waste - loops and conditionals execute in code, not through repeated model calls
- Privacy-preserving data flows - sensitive data moves between systems without ever entering the model's context
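Here's a hypothetical sketch of what such an agent-written script could look like. The tool wrappers are stubbed inline so it runs on its own; in a real code-mode setup they would be imported from the generated filesystem tree (something like from servers.crm import list_contacts), and every name below is invented for illustration.
def list_contacts(limit: int) -> list[dict]:
    # Stand-in for a generated MCP tool wrapper (normally imported, not defined here).
    return [{"id": i, "last_touch_days": i % 200} for i in range(limit)]

def append_rows(sheet: str, rows: list[dict]) -> None:
    # Stand-in for a second tool wrapper backed by a different MCP server.
    print(f"queued {len(rows)} rows for {sheet}")

# Pull 10,000 rows inside the sandbox...
contacts = list_contacts(limit=10_000)

# ...filter them with ordinary control flow, no extra model round-trips...
stale = [c for c in contacts if c["last_touch_days"] > 90]

# ...and pass along only the handful of results that need to reach the model.
append_rows(sheet="q3-follow-ups", rows=stale[:50])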
Cloudflare's "Code Mode" reaches similar conclusions from a different angle. Their insight: LLMs are trained on billions of lines of code. Asking them to generate JSON tool calls is like asking a novelist to communicate only through form submissions. Let them write code - it's what they're good at.
What Changes When Code Is Disposable
If we accept that code generation is becoming cheap enough to be disposable, several traditional software engineering concerns get inverted:
Maintainability Becomes Irrelevant (Mostly)
Why optimize for maintainability when you can regenerate? The 47 scripts my agent wrote last week would have failed every code review for style, documentation, and structure. They also worked perfectly and were gone before anyone could complain.
But here's the nuance: some code still needs to be maintained. The orchestration layer, the intent specifications, the security boundaries - these persist. The shift is from maintaining implementation code to maintaining specification code.
Security Becomes Paramount
When code is generated at runtime by AI systems, the attack surface transforms. You're no longer just worried about vulnerabilities in your codebase - you're worried about prompt injection, sandbox escapes, and malicious code generation.
This is why every serious implementation of runtime code generation emphasizes sandboxing:
- Amazon Bedrock AgentCore Code Interpreter - AWS's fully managed service for secure code execution in isolated sandbox environments, with containerized execution, dynamic resource allocation, and enterprise compliance
- AWS Lambda with Firecracker - microVM-based isolation using AWS's open-source Firecracker virtualization technology, providing hardware-level isolation for each function execution with tenant isolation mode for multi-tenant applications
- Vercel Sandbox - ephemeral compute primitives for untrusted code
- WebAssembly - capability-based security with memory isolation
- Modal's execution environments - 50,000+ simultaneous sandboxes for coding agents
- Daytona's Runtime for OpenHands - secure AI code execution
The research backs this up. A 2025 arXiv paper on "Security Vulnerabilities in AI-Generated Code" found that AI-generated code contains vulnerabilities at rates comparable to human-written code - but the velocity of generation means more total vulnerable code unless sandboxing is rigorous.
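Production systems rely on the isolation layers listed above. As a bare-bones illustration of the principle, here's a sketch that runs a generated snippet in a separate Python process with a CPU cap and a wall-clock timeout - a toy, not a substitute for microVM or WASM isolation, and the generated code is invented for the example. The point is that the execution layer, not the generated code, carries the security guarantees.
import resource
import subprocess
import sys

GENERATED = "print(sum(i * i for i in range(1000)))"  # stand-in for model output

def limit_resources():
    # Cap CPU seconds and address space before the child starts running.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
    resource.setrlimit(resource.RLIMIT_AS, (256 * 1024 * 1024,) * 2)

result = subprocess.run(
    [sys.executable, "-I", "-c", GENERATED],  # -I: isolated mode, ignores user site-packages
    capture_output=True, text=True, timeout=5,
    preexec_fn=limit_resources,               # POSIX-only; use a container or microVM in practice
)
print(result.stdout.strip())  # 332833500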
Dependencies Become Liabilities
Here's a prediction that might be controversial: the era of massive dependency trees is ending.
When code is generated on-demand, you want minimal dependencies. Every dependency is:
- A potential security vulnerability
- A compatibility constraint
- A cold-start penalty
- A supply chain risk
The languages and runtimes that will thrive in the JIT code era are those that can execute quickly with minimal external dependencies. This has implications for which programming languages win.
Which Languages Win in the JIT Code Era?
Not all programming languages are equally suited for disposable, runtime-generated code. The winners will optimize for:
- Fast cold start - no lengthy compilation or JIT warmup
- Minimal runtime - small memory footprint, quick initialization
- Strong sandboxing - memory safety, capability-based security
- AI training data abundance - models need to be fluent in the language
Based on these criteria, here's my speculation:
Tier 1: Clear Winners
Python - Despite its performance limitations, Python dominates AI training data and has excellent sandboxing options. The ecosystem is unmatched for data manipulation and API integration. Expect Python to remain the default for JIT code generation.
JavaScript/TypeScript - V8's isolation capabilities, ubiquitous runtime availability, and massive training corpus make JS/TS ideal for browser and edge execution. Cloudflare's entire Code Mode infrastructure runs on Workers.
WebAssembly - Not a language per se, but the compilation target that enables secure execution of code from any source language. NVIDIA's research on "Sandboxing Agentic AI Workflows with WebAssembly" and Microsoft's new "Wassette" project for WebAssembly-based AI agent tools point to WASM as the universal secure runtime.
Tier 2: Strong Contenders
Rust - Memory safety without garbage collection makes Rust ideal for the secure runtimes that execute JIT code. The Lyric project ("A Rust-powered secure runtime for AI-Agent") exemplifies this. Rust won't be the language agents write in, but it will be the language that runs the sandboxes.
Go - Fast compilation, simple deployment, strong concurrency. Go's simplicity makes it easy for AI to generate correct code, and its static binaries eliminate dependency hell.
Tier 3: Declining Relevance
Java/C# - Heavy runtimes, slow cold starts, complex ecosystems. These languages were built for long-running enterprise applications, not ephemeral execution.
C/C++ - Memory unsafety is disqualifying for untrusted code execution. The security risks of AI-generated C code are too high for production use.
The New Software Stack
If this vision plays out, the software stack of 2030 looks radically different:
┌─────────────────────────────────────────┐
│              Intent Layer               │
│     (Prompts, Specs, Requirements)      │
├─────────────────────────────────────────┤
│           Orchestration Layer           │
│     (Agent frameworks, MCP servers)     │
├─────────────────────────────────────────┤
│            Generation Layer             │
│           (LLMs, Code models)           │
├─────────────────────────────────────────┤
│             Execution Layer             │
│     (WASM, V8 Isolates, Sandboxes)      │
├─────────────────────────────────────────┤
│             Security Layer              │
│    (Capability systems, Audit logs)     │
└─────────────────────────────────────────┘
The code that humans write and maintain lives in the Intent and Orchestration layers. Everything below is increasingly generated, executed, and discarded.
The Counterarguments
I should be honest about the limitations of this thesis:
Not all software is disposable. Operating systems, databases, compilers - these require the kind of deep optimization and long-term maintenance that JIT generation can't provide. The question is what percentage of code falls into this category. My guess: less than 10%.
Debugging becomes harder. When code is generated and discarded, traditional debugging approaches fail. You can't set a breakpoint in code that no longer exists. New observability patterns will need to emerge.
Regulatory and compliance challenges. Many industries require code audits, change tracking, and approval processes. Ephemeral code generation doesn't fit neatly into these frameworks. Expect regulatory adaptation to lag technical capability.
The training data problem. AI models are trained on human-written code that follows maintainability best practices. If we stop writing maintainable code, what do future models train on? This could create a quality degradation spiral.
What This Means for Developers
If you're a developer reading this, here's my practical advice:
Learn to specify, not just implement. The skill that matters increasingly is the ability to clearly articulate what you want, not the ability to write the code that does it. Prompt engineering is just the beginning - expect more sophisticated specification languages to emerge.
Understand security deeply. As code generation democratizes, security expertise becomes more valuable, not less. Someone needs to design the sandboxes, audit the outputs, and catch the edge cases.
Master the orchestration layer. Frameworks like MCP, LangGraph, and agent orchestration tools are where human developers add value. Learn to compose AI capabilities, not compete with them.
Stay curious about the edges. The 10% of software that can't be JIT-generated is where deep technical expertise remains essential. Compilers, databases, security infrastructure, performance-critical systems - these domains reward traditional software engineering skills.
The Uncomfortable Question
Here's what keeps me up at night: if code becomes disposable, what happens to the craft of programming?
I learned to code by reading other people's code. By studying elegant solutions. By understanding why certain patterns emerged. If code is generated and discarded, where does the next generation learn?
Maybe the answer is that "programming" as we know it becomes a specialized skill, like blacksmithing - respected, occasionally necessary, but not the mainstream way things get built.
Or maybe the craft evolves. Maybe the new craft is in the specification, the orchestration, the architecture of systems that generate and execute code. Maybe we're not losing programming - we're abstracting it.
I don't know. But I do know that the economics are inexorable. When something becomes cheap enough, usage patterns change. Code generation is becoming cheap enough.
Compile = Instant Legacy isn't a prediction. It's a description of what's already happening at the frontier. The question is how fast it spreads.
References and Further Reading
- Code Execution with MCP: Building More Efficient Agents - Anthropic Engineering, November 2025. The technical foundation for code-based tool orchestration.
- Code Mode: The Better Way to Use MCP - Cloudflare Blog, September 2025. Cloudflare's parallel discovery of code execution benefits.
- Just-in-Time Software: When Code Writes Itself - Zerg AI Whitepaper. The biological metaphor for disposable software.
- Sandboxing Agentic AI Workflows with WebAssembly - NVIDIA Technical Blog. Security architecture for AI code execution.
- Introducing Wassette: WebAssembly-based Tools for AI Agents - Microsoft Open Source Blog. WASM as the universal secure runtime.
- Security Vulnerabilities in AI-Generated Code - arXiv, October 2025. Large-scale analysis of security issues in AI-generated code.
- Vercel Sandbox Documentation - Ephemeral compute for untrusted code execution.
- On the Future of Software Reuse in the Era of AI Native Software Engineering - arXiv, 2025. Academic perspective on how AI changes software reuse patterns.
- Provably-Safe Multilingual Software Sandboxing using WebAssembly - USENIX. The security foundations of WASM sandboxing.
- Dynamic Code Orchestration: Harnessing LLMs for Adaptive Script Execution - arXiv, 2024. Early academic work on runtime code generation patterns.