OpenAI Speeds Up Codex Workflows with WebSockets in Responses API
OpenAI has introduced WebSocket support in its Responses API, cutting latency in Codex agent workflows by up to 40%. The upgrade lets the latest GPT-5.3-Codex-Spark model reach speeds of over 1,000 tokens per second (TPS), a large leap from the 65 TPS of earlier versions, and underpins the faster, more efficient AI-driven coding assistance at the core of Codex.
Codex, OpenAI's AI-powered coding assistant, operates by scanning codebases, building context, making edits, and running tests in rapid succession. Historically, these tasks involved numerous back-and-forth API calls, which added significant latency. The issue became more pronounced as model inference speeds improved, exposing API overhead as the bottleneck. Addressing this, OpenAI reengineered the Responses API, introducing a persistent WebSocket connection to minimize redundant processing.
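To make the old pattern concrete, here is a minimal sketch of that stateless request loop in Python. The endpoint URL, payload shape, and `call_model` helper are illustrative placeholders rather than OpenAI's actual API; the point is only that every turn re-sends, and the server re-processes, the entire accumulated history.

```python
# Illustrative sketch of the stateless pattern: each turn POSTs the full
# accumulated history, so per-call overhead grows as the session does.
# The endpoint and payload shape are hypothetical, not OpenAI's API.
import httpx

history: list[dict] = []

def call_model(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    resp = httpx.post(
        "https://example.com/v1/responses",         # placeholder endpoint
        json={"model": "codex", "input": history},  # full history every time
    )
    reply = resp.json()["output_text"]
    history.append({"role": "assistant", "content": reply})
    return reply

# A typical agent loop (scan, edit, test): one full round trip per step.
for step in ["scan the codebase", "apply the edit", "run the tests"]:
    call_model(step)
```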
Previously, the Responses API treated each request as independent, requiring full conversation histories to be sent and processed with every call. The new WebSocket implementation caches reusable state in memory, allowing only incremental changes to be sent during follow-up requests. This streamlined approach reduces overhead and makes better use of both server-side GPUs and client-side resources.
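Under the new mode, a session might look like the sketch below, using Python's `websockets` package. The URL and message fields are hypothetical placeholders, not OpenAI's published wire protocol; what matters is that one connection persists across turns and follow-ups carry only the new input.

```python
# Illustrative sketch of the persistent-connection pattern. The URL and
# message schema are hypothetical, not OpenAI's published protocol.
import asyncio
import json
import websockets

async def main():
    # One long-lived connection for the whole agent session, instead of a
    # fresh HTTP request (with full history re-sent) per step.
    async with websockets.connect("wss://example.com/v1/responses") as ws:
        # First turn: establish the conversation state once.
        await ws.send(json.dumps({
            "type": "response.create",
            "input": [{"role": "user", "content": "Scan the repo and list failing tests."}],
        }))
        print(json.loads(await ws.recv()))

        # Follow-up turn: the server has cached the prior state, so only
        # the incremental change crosses the wire.
        await ws.send(json.dumps({
            "type": "response.create",
            "input": [{"role": "user", "content": "Fix the first failure and rerun."}],
        }))
        print(json.loads(await ws.recv()))

asyncio.run(main())
```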
Key optimizations included:
- Caching tokens and model configurations to bypass repetitive tokenization.
- Eliminating unnecessary network hops to streamline API-to-inference communication.
- Enhancing safety classifiers to process only new inputs instead of entire histories.
- Overlapping non-blocking processes, such as billing, with subsequent requests (see the sketch after this list).
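That last item is the kind of overlap asyncio expresses naturally: bookkeeping is launched as a background task while the next request proceeds, and is only awaited when the session winds down. The function names and sleep timings below are invented stand-ins for the pattern, not OpenAI internals.

```python
# Sketch of taking non-blocking work (e.g. usage metering) off the
# critical path: fire it as a background task and settle it later.
import asyncio

async def record_usage(request_id: str) -> None:
    await asyncio.sleep(0.05)  # stand-in for billing/metering I/O

async def handle_request(request_id: str) -> str:
    await asyncio.sleep(0.1)   # stand-in for model inference
    return f"response for {request_id}"

async def serve_session(request_ids: list[str]) -> None:
    pending: set[asyncio.Task] = set()
    for rid in request_ids:
        print(await handle_request(rid))                     # critical path
        pending.add(asyncio.create_task(record_usage(rid)))  # overlapped
    await asyncio.gather(*pending)  # settle bookkeeping at session end

asyncio.run(serve_session(["r1", "r2", "r3"]))
```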
The decision to adopt WebSockets over alternatives such as gRPC bidirectional streaming hinged on simplicity and minimal disruption: WebSocket mode preserves the Responses API's familiar interaction patterns, so developers can adopt it without major changes to their workflows.
The results have been striking. In alpha tests with key partners, startups reported up to a 40% improvement in agentic workflow speeds, and Codex users, including those on platforms such as Vercel and Cursor, saw significant latency reductions. Codex also hit its 1,000 TPS target for GPT-5.3-Codex-Spark, with bursts of up to 4,000 TPS recorded in production traffic.
For AI developers and enterprises, these changes underscore the growing importance of optimizing API infrastructure to keep pace with accelerating model inference speeds. As OpenAI's work demonstrates, improving the systems around AI models matters as much as improving the models themselves. By removing API bottlenecks, OpenAI ensures that the gains from faster inference hardware, such as the Cerebras systems that serve GPT-5.3-Codex-Spark, actually reach end users.
Looking ahead, the implementation of WebSocket support in the Responses API sets a new benchmark for speed in AI agent workflows. It positions OpenAI strongly as demand for high-performance, real-time AI capabilities continues to grow across industries. With the API updates now in full production, developers can expect smoother, faster integrations—an essential step as AI tools become foundational to coding, productivity, and beyond.