A look under the hood of Codex
OpenAI has taken an unusually transparent step by publishing a detailed technical breakdown of how its Codex CLI coding agent operates under the hood. Authored by OpenAI engineer Michael Bolin, the post offers one of the clearest looks yet at how a production-grade AI agent orchestrates large language models, tools, and user input to perform real software development tasks.
At the core of Codex is what OpenAI calls the agent loop: a repeating cycle that alternates between model inference and tool execution. Each cycle begins when Codex constructs a prompt from structured inputs (system instructions, developer constraints, user messages, environment context, and the list of available tools) and sends it to OpenAI’s Responses API for inference.
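To make that concrete, here is a rough sketch of what one such prompt could look like when expressed against the OpenAI Python SDK. The instruction text, the environment-context item, and the shell tool schema are invented for illustration; only the overall shape follows the Responses API.

```python
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()

# Structured inputs assembled into one prompt: developer instructions,
# the user's request, and environment context (all text here is made up).
input_items = [
    {"role": "developer", "content": "You are a coding agent. Only modify files inside the workspace."},
    {"role": "user", "content": "Fix the failing unit test in tests/test_parser.py"},
    {"role": "user", "content": "<environment_context>cwd=/repo sandbox=workspace-write</environment_context>"},
]

# Tools the model is allowed to call; a single shell tool with an illustrative schema.
tools = [
    {
        "type": "function",
        "name": "shell",
        "description": "Run a shell command in the sandbox and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    }
]

# One inference step: send the assembled prompt to the Responses API.
response = client.responses.create(model="gpt-5-codex", input=input_items, tools=tools)
```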
The model’s output can take one of two forms. It may produce an assistant message intended for the user, or it may request a tool call, such as running a shell command, reading a file, or invoking a planning or search utility. When a tool call is requested, Codex executes it locally (within defined sandbox limits), appends the result to the prompt, and queries the model again. This loop continues until the model emits a final assistant message, signaling the end of a conversation turn.
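Put together, the loop reduces to something like the following sketch, continuing the snippet above. The run_in_sandbox helper, the single shell tool, and the turn-ending check are simplifications for illustration; the real agent adds far more bookkeeping around sandboxing, approvals, and streaming.

```python
import json
import subprocess

def run_in_sandbox(command: str) -> str:
    """Hypothetical stand-in for Codex's sandboxed shell execution."""
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=60)
    return result.stdout + result.stderr

def agent_turn(client, input_items, tools, model="gpt-5-codex"):
    while True:
        # 1. Inference: send the current prompt to the Responses API.
        response = client.responses.create(model=model, input=input_items, tools=tools)

        tool_called = False
        for item in response.output:
            # 2a. Tool call requested: execute it locally, append both the call
            # and its result to the prompt, then loop around for another step.
            if item.type == "function_call" and item.name == "shell":
                args = json.loads(item.arguments)
                output = run_in_sandbox(args["command"])
                input_items.append(item)  # keep the tool call itself in the history
                input_items.append({
                    "type": "function_call_output",
                    "call_id": item.call_id,
                    "output": output,
                })
                tool_called = True

        # 2b. No tool call: the final assistant message ends the conversation turn.
        if not tool_called:
            return response.output_text
```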
While this high-level pattern is common across many AI agents, OpenAI’s documentation stands out for its specificity. Bolin walks through how prompts are assembled item by item, how roles (system, developer, user, assistant) determine priority, and how even small design choices, such as the order of tools in a list, can have major performance implications.
One of the most notable architectural decisions is Codex’s fully stateless interaction model. Rather than relying on server-side conversation memory via the optional previous_response_id parameter, Codex resends the entire conversation history with every request. This approach simplifies infrastructure and enables Zero Data Retention (ZDR) for customers who require strict privacy guarantees.
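The difference between the two styles is easiest to see side by side. In the hedged sketch below, the stateful variant leans on previous_response_id while the stateless one resends the full transcript and opts out of server-side storage with store=False; whether Codex sets that exact flag is an inference from the ZDR requirement, not something the post spells out.

```python
# Stateful style (NOT what Codex does): let the server keep the history and
# point each new request at the previous response.
first = client.responses.create(model="gpt-5-codex", input="Run the test suite.")
followup = client.responses.create(
    model="gpt-5-codex",
    previous_response_id=first.id,
    input="Now fix the first failure.",
)

# Stateless style (what the post describes): resend the whole transcript
# every time and ask the API not to retain anything server-side.
history = [{"role": "user", "content": "Run the test suite."}]
first = client.responses.create(model="gpt-5-codex", input=history, store=False)
history += first.output  # append the model's output items to the transcript
history.append({"role": "user", "content": "Now fix the first failure."})
followup = client.responses.create(model="gpt-5-codex", input=history, store=False)
```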
The downside is obvious: prompts grow with every interaction, so the total data transmitted over the course of a conversation grows quadratically. OpenAI mitigates this through aggressive prompt caching, which allows the model to reuse computation as long as each new prompt is an exact prefix extension of the previous one. When caching works, inference cost scales linearly instead of quadratically.
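A back-of-the-envelope calculation makes the scaling argument concrete (the per-turn token count and number of turns below are made-up numbers, not figures from the post):

```python
# If each turn adds roughly t new tokens and the full history is resent every
# time, the total transmitted over n turns is
#   t + 2t + 3t + ... + nt = t * n * (n + 1) / 2   (quadratic in n).
# With prefix caching, only the uncached suffix (~t tokens) needs fresh
# compute each turn, so inference work grows roughly like t * n (linear).
t, n = 2_000, 100
transmitted = t * n * (n + 1) // 2     # ~10.1M tokens sent over the wire
fresh_compute = t * n                  # ~200K tokens of uncached prefill
print(transmitted, fresh_compute)
```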
That constraint, however, imposes tight discipline on the system. Changing tools mid-conversation, switching models, modifying sandbox permissions, or even reordering tool definitions can trigger cache misses and sharply degrade performance. Bolin notes that early support for Model Context Protocol (MCP) tools exposed exactly this kind of fragility, forcing the team to carefully redesign how dynamic tool updates are handled.
Prompt growth also collides with another hard limit: the model’s context window. Since both input and output tokens count against this limit, a long-running agent that performs hundreds of tool calls risks exhausting its usable context.
To address this, Codex employs automatic conversation compaction. When token counts exceed a configurable threshold, Codex replaces the full conversation history with a condensed representation generated via a special responses/compact API endpoint. Crucially, this compacted context includes an encrypted payload that preserves the model’s latent understanding of prior interactions, allowing it to continue reasoning coherently without access to the full raw history.
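The post does not document the endpoint's schema, but the overall flow presumably looks something like this sketch, in which the threshold, request body, and response fields are guesses for illustration only:

```python
import os
import httpx

COMPACTION_THRESHOLD = 200_000  # assumed token budget; the real threshold is configurable

def maybe_compact(history, token_count, model="gpt-5-codex"):
    """Swap the full history for a compacted representation once it grows too large.

    The endpoint path comes from the post; the request and response fields
    here are guesses for illustration, not a documented schema.
    """
    if token_count < COMPACTION_THRESHOLD:
        return history

    resp = httpx.post(
        "https://api.openai.com/v1/responses/compact",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": model, "input": history},
        timeout=60,
    )
    resp.raise_for_status()
    # The compacted result (including the encrypted payload carrying the model's
    # prior state) becomes the new, much shorter conversation prefix.
    return resp.json()["output"]
```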
Earlier versions of Codex required users to manually trigger compaction; today, the process is automatic and largely invisible – an important usability improvement as agents take on longer, more complex tasks.
OpenAI has historically been reluctant to publish deep technical details about flagship products like ChatGPT. Codex, however, is treated differently. The result is a rare, candid account of the trade-offs involved in building a real-world AI agent: performance versus privacy, flexibility versus cache efficiency, autonomy versus safety. Bolin does not shy away from describing bugs, inefficiencies, or hard-earned lessons, reinforcing the message that today’s AI agents are powerful but far from magical.
Beyond Codex itself, the post serves as a blueprint for anyone building agents on top of modern LLM APIs. It highlights emerging best practices (stateless design, prefix-stable prompts, explicit context management) that are quickly becoming industry standards.