OpenAI Symphony kanban board with autonomous AI coding agents filing pull requests automatically, powered by GPT-5.3-Codex and Elixir. Symphony converts Linear tickets into merged pull requests without a developer ever touching the keyboard.

OpenAI’s Symphony Turns Linear Tickets Into Merged PRs — No Developer Required

Six weeks after its quiet release, Symphony’s 15,400 GitHub stars tell one story. The engineering teams frantically reading its SPEC.md tell another: autonomous coding agents have arrived, and they’re watching your Jira board.

On March 4, 2026, OpenAI pushed a repository to GitHub called Symphony with almost no fanfare. No keynote. No splashy blog post. Just a SPEC.md, a reference implementation written almost entirely in Elixir, and an Apache 2.0 license. Within four days, the repo had 8,700 stars. By late April it had crossed 15,400, landing it inside the top 3,000 repositories on all of GitHub.

What people were racing to read was a specification for something the AI coding space has been promising for years but hadn’t quite delivered: a system that watches your project management board, claims tickets automatically, runs isolated coding agents to completion, and files pull requests back to your repository without a human ever touching a keyboard. Not a copilot. Not a suggestion engine. An autonomous engineer.

The speed of community interest wasn’t accidental. Engineering managers have spent two years stuck in what practitioners now call “AI pilot purgatory”: tools that help but don’t eliminate the supervision bottleneck. Symphony’s bet is that the bottleneck isn’t AI capability. It’s the workflow. Fix the workflow, and the capability is already there.


What Symphony Actually Does

Strip away the hype and Symphony is, at its core, a ticket-to-pull-request pipeline. It polls a Linear board every 30 seconds, looks for eligible issues, claims them, spins up isolated coding agents powered by gpt-5.3-codex, runs each agent through implementation, and surfaces a finished pull request with CI status and a walkthrough video proving the work was done.
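
The mechanics are straightforward to picture. Here’s a conceptual sketch of that poll-and-dispatch loop in Elixir; the module and function names (Orchestrator.Poller, LinearClient.eligible_issues/0, AgentSupervisor.start_agent/1) are illustrative stand-ins, not Symphony’s actual code.

```elixir
# Conceptual sketch only: module and function names are hypothetical,
# not taken from Symphony's repository.
defmodule Orchestrator.Poller do
  use GenServer

  @poll_interval 30_000  # 30 seconds, matching the WORKFLOW.md default

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(state) do
    schedule_poll()
    {:ok, state}
  end

  @impl true
  def handle_info(:poll, state) do
    # Claim eligible tickets and hand each one to an isolated agent process.
    for issue <- LinearClient.eligible_issues() do
      AgentSupervisor.start_agent(issue)
    end

    schedule_poll()
    {:noreply, state}
  end

  defp schedule_poll, do: Process.send_after(self(), :poll, @poll_interval)
end
```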

That last part, the proof-of-work video, is worth pausing on. It’s not a diff. It’s a screen recording. The agent shows its work the same way a contractor would: here’s what I built, here’s it running. That’s a deliberate design choice, not a feature tacked on for demos.

“A ticket moves across the board, agents implement, a verified PR appears.”

Nirant, AI Engineer, LinkedIn, March 8, 2026

Nirant’s framing is precise. The board moves. The PR appears. The developer never touched the ticket. That’s the entire value proposition, written in twelve words.

The system isn’t meant to handle every ticket in your backlog. Symphony’s WORKFLOW.md configuration caps concurrent agents at 10 by default and limits each agent to 20 turns per run. These aren’t hard limits; they’re tunable. But they’re sensible defaults that prevent a runaway agent from burning through your Codex API budget on a single misbehaving issue. The framework OpenAI shipped is an engineering preview, and those guardrails reflect a team that’s thought carefully about what happens when things go wrong.

Engineering Preview status: Symphony’s GitHub repository carries 6 total contributors as of late April 2026, with 4 active committers. The latest commit, on March 27, was a GitHub Actions workflow pin. The small team size signals that OpenAI is leading development directly, not handing it off to the community yet.

The Linear-First Design

Symphony’s reference implementation is built around Linear, the project management tool popular with fast-moving engineering teams. That’s not an arbitrary choice. Linear’s data model is structured, its API is stable, and its issue states map cleanly onto the ticket lifecycle Symphony needs to manage: open, in-progress, verified, closed. The SPEC.md suggests the orchestration layer is abstract enough that other issue trackers could plug in, but Linear is the only confirmed integration in the current release.

Unconfirmed: Some early coverage has reported Jira support as a near-term addition. As of late April 2026, this hasn’t appeared in the official repository or specification documents. Treat Jira integration claims as speculative until OpenAI confirms.
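
To make the abstraction idea concrete, here’s a minimal sketch of what a tracker-agnostic interface could look like in Elixir. The behaviour and callback names below are assumptions for illustration, not definitions from Symphony’s SPEC.md.

```elixir
# Hypothetical tracker-agnostic interface; callback names are illustrative.
defmodule Orchestrator.IssueTracker do
  @type issue :: %{id: String.t(), title: String.t(), description: String.t()}
  @type issue_state :: :open | :in_progress | :verified | :closed

  @callback list_eligible_issues() :: {:ok, [issue()]} | {:error, term()}
  @callback claim_issue(issue_id :: String.t()) :: :ok | {:error, term()}
  @callback update_state(issue_id :: String.t(), issue_state()) :: :ok | {:error, term()}
end

# A Linear adapter would implement the behaviour; a Jira adapter, if one ever
# ships, would implement the same callbacks against a different API.
# defmodule Orchestrator.IssueTracker.Linear do
#   @behaviour Orchestrator.IssueTracker
#   ...
# end
```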

Under the Hood: Why Elixir?

The choice of programming language here is the most technically interesting decision OpenAI made, and it’s the one that got the most attention from practitioners who looked past the headline. 95.4% of Symphony’s codebase is Elixir. Not Python. Not TypeScript. Elixir.

If you haven’t spent time in the functional programming world, that might read as an exotic choice. It isn’t. It’s a very deliberate engineering decision that says a lot about what OpenAI thinks the real challenge of agent orchestration is.

“Symphony’s core challenge is not computation, it’s managing many long-lived, concurrent, failure-prone agents.”

Saran Menon, AI/Software Engineering Analyst, LinkedIn, March 9, 2026

Elixir runs on the BEAM virtual machine, the same runtime as Erlang. BEAM was built to power telecom switching systems that couldn’t go down. The core design principle baked into the runtime is this: when something fails, it fails in isolation and restarts cleanly, without taking anything else with it. In telecom that means a dropped call doesn’t crash the switch. In Symphony’s case, it means a hallucinating agent doesn’t kill the other nine agents working in parallel.

“When one agent crashes, and they will, it triggers a supervised restart with full error context while every other agent continues working. This is the kind of thing you’d spend months building in Python or TypeScript — process isolation, supervision strategies, graceful degradation. In Elixir, it’s a first-class language feature.”

sjramblings, Independent Developer — sjramblings.io, March 11, 2026

That’s the crux of it. The Erlang/BEAM supervision tree model, which Elixir inherits natively, solves the hardest operational problem in running autonomous agents at scale: graceful failure. You don’t want your orchestration layer to be a house of cards where one bad LLM response brings down the whole system. Symphony’s runtime choice means it isn’t.
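
For readers who haven’t worked with OTP, here’s a minimal sketch of the pattern being described, assuming a hypothetical AgentWorker module for the per-ticket agent process. It illustrates the OTP idiom, not Symphony’s actual implementation.

```elixir
# Minimal OTP sketch; AgentWorker is a hypothetical per-ticket agent process.
defmodule Orchestrator.AgentSupervisor do
  use DynamicSupervisor

  def start_link(opts) do
    DynamicSupervisor.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(_opts) do
    # :one_for_one restarts only the agent that crashed; the other concurrent
    # agents keep running untouched.
    DynamicSupervisor.init(strategy: :one_for_one, max_children: 10)
  end

  def start_agent(issue) do
    # :transient restarts an agent after a crash but not after it finishes
    # its ticket normally.
    spec = %{
      id: AgentWorker,
      start: {AgentWorker, :start_link, [issue]},
      restart: :transient
    }

    DynamicSupervisor.start_child(__MODULE__, spec)
  end
end
```

The distinction between a crash and a normal exit is the one that matters here: a hallucinating agent gets a supervised restart with its error context, while a finished agent simply goes away.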

Concurrency: BEAM’s lightweight processes handle hundreds of simultaneous agent runs without thread-management overhead.

Fault Isolation: OTP supervision trees restart failed agents automatically, preserving all other concurrent runs.

Long-lived Processes: BEAM excels at processes that run for minutes or hours, exactly the profile of an autonomous coding session.

SSH Worker Support: A March 11 commit added SSH worker support to the Elixir reference implementation, expanding deployment options.

Key Configuration Specs: The Numbers That Matter

Symphony’s WORKFLOW.md is worth reading closely if you’re evaluating deployment. The configuration parameters tell you exactly how OpenAI sized the system and where the costs live. Here’s what the current spec shows:

Parameter | Default Value | What It Controls | Why It Matters
Max Concurrent Agents | 10 | Agents running simultaneously | Caps API cost burn; tunable for larger teams
Max Agent Turns | 20 per run | LLM calls before agent stops | Prevents infinite-loop agents on ambiguous tickets
Polling Interval | 30,000 ms (30 sec) | How often Linear board is checked | Determines ticket pickup latency
Turn Timeout | 900,000 ms (15 min) | Max time per individual turn | Allows complex reasoning without hanging processes
Read Timeout | 300,000 ms (5 min) | Max time per I/O read | Prevents stuck file or network operations
Default Model | gpt-5.3-codex | LLM powering each agent | Tight Codex integration; not model-agnostic by default
License | Apache 2.0 | Usage rights | Permissive; enterprise use without copyleft concerns

The 15-minute turn timeout is the number that surprises most people encountering it for the first time. It’s long. But consider what an autonomous agent actually does in a turn: it reads context, reasons about architecture, writes code, runs tests, interprets failures, and retries. Against that profile, 15 minutes per reasoning step is conservative, not generous. These aren’t chatbot responses. They’re engineering sessions.
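
For a sense of how those numbers fit together, here’s an illustrative Elixir config expressing the defaults from the table above. The key names are assumptions; Symphony’s actual WORKFLOW.md format may differ.

```elixir
# Illustrative only: key names are assumed, values match the documented defaults.
import Config

config :symphony,
  max_concurrent_agents: 10,
  max_agent_turns: 20,
  polling_interval_ms: 30_000,   # 30 seconds
  turn_timeout_ms: 900_000,      # 15 minutes
  read_timeout_ms: 300_000,      # 5 minutes
  default_model: "gpt-5.3-codex"
```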

Who Wins, Who Worries

Every new infrastructure layer reshuffles who benefits and who’s exposed. Symphony is no different, and it’s worth being clear-eyed about both sides of that ledger.

Engineering Managers

The upside is obvious: a team of 10 developers with Symphony running 10 concurrent agents is, in theory, shipping work that used to require 20 people. The risk is subtler. When Symphony’s Linear integration becomes the de facto entry point for all implementation work, the board becomes a single point of failure. An ambiguous ticket description no longer stalls one developer; it wastes 15 minutes of Codex API time and produces a PR that needs to be thrown out. Ticket quality suddenly matters in a way it didn’t before.

Individual Developers

Senior developers who spend their time on architecture, system design, and code review are probably fine. The work Symphony automates, picking up a clearly-scoped ticket and implementing it to spec, is disproportionately the work of junior developers. That’s not a neutral observation. The industry needs junior roles to exist, both for the work they do and as a pipeline for the senior engineers of tomorrow. Symphony doesn’t resolve that tension. It sharpens it.

OpenAI

Apache 2.0 licensing looks generous. But Symphony is built to use gpt-5.3-codex by default, and every agent run is a Codex API call. The open-source release is also a distribution strategy. The more teams adopt Symphony’s orchestration model, the deeper Codex becomes embedded in their development workflows. That’s worth more, long term, than keeping the orchestration layer proprietary.

“Unlike traditional AI coding tools that act as co-pilots requiring constant human supervision, Symphony introduces a fully autonomous pipeline. Within four days of its release, the repository amassed 8.7K stars, swiftly scaling past 15.2K stars on GitHub.”

Epsilla Engineering Team — epsilla.com, April 18, 2026

Enterprise Security Teams

This is where the honest conversation gets uncomfortable. An autonomous agent that reads your codebase, interprets tickets, writes production code, and files pull requests has access to a lot of sensitive surface area. Symphony’s current documentation acknowledges security as an open challenge. Prompt injection, where a maliciously crafted ticket description manipulates an agent into doing something unintended, is a real attack vector. So is secret leakage: an agent that logs its reasoning steps could inadvertently expose environment variables or credentials it encountered during a run.

Security note: OpenAI has not published a formal threat model or security audit for Symphony as of late April 2026. Engineering teams evaluating deployment should conduct their own security review, particularly around agent execution sandboxing, secret handling, and PR review gating before any automated merge capability is considered.

The Market Symphony Is Entering

Symphony didn’t arrive in a vacuum. The autonomous coding agent market was valued at $6.4 billion in 2025, with projections putting it at $91.2 billion by 2034, a 38.5% compound annual growth rate. That’s not a niche. That’s one of the fastest-growing segments in enterprise software.

The competitive picture is equally crowded. Anthropic’s Claude Code, Microsoft’s Copilot Agents, and Google’s various AI development tools are all chasing the same prize. But Symphony’s approach differs in one important architectural respect: it treats the issue tracker, not the IDE, as the primary interface. That’s a different bet about where enterprise AI will live.

The autonomous AI coding agent market is growing at 38.5% CAGR, projected to reach $91.2 billion by 2034. Symphony entered this market in March 2026 with open-source licensing, immediately gaining 8,700 GitHub stars in its first four days, an adoption velocity rare for infrastructure tooling.

The key question isn’t whether Symphony works. The GitHub star count and community interest suggest it works well enough to attract serious attention. The question is where it breaks — which classes of tickets produce wasted runs, which codebases confuse the agents, which team workflows don’t map cleanly onto a Linear-centric model. Those answers will come from the teams now forking the repository and running their own experiments. The agent orchestration space is moving fast enough that a six-week-old framework is already prompting architectural decisions at production engineering teams.

What Symphony does to the broader agent framework conversation is force a vocabulary shift. AI developer tools that require engineers to prompt, supervise, and review every AI action are starting to look like a transitional technology. Symphony’s model, “manage work, not agents,” is a clean articulation of where the category is heading. Whether Symphony itself becomes the standard or gets leapfrogged by something built on its spec is an open question. The spec is the part that matters.

Frequently Asked Questions

What is OpenAI Symphony?

OpenAI Symphony is an open-source agent orchestration framework released in March 2026. It watches a Linear issue board, automatically claims eligible tickets, spawns isolated AI coding agents using GPT-5.3-Codex, and files pull requests upon completion, without requiring a developer to supervise the process.

When was OpenAI Symphony released?

Symphony was open-sourced on March 4, 2026, when OpenAI published the repository at github.com/openai/symphony under an Apache 2.0 license. Major media coverage followed on March 5, and the repository reached 8,700 stars within its first four days.

Why is Symphony written in Elixir?

Symphony uses Elixir (95.4% of the codebase) because Elixir runs on the BEAM virtual machine, which provides OTP supervision trees for fault-tolerant process management. When an individual coding agent fails, the BEAM runtime restarts it in isolation without disrupting other concurrent agent runs, a critical property for reliable autonomous orchestration.

How many concurrent agents can Symphony run?

Symphony’s default WORKFLOW.md configuration caps concurrent agents at 10, with each agent limited to 20 turns per run. Both limits are configurable parameters, not hard ceilings. The defaults are designed to balance throughput with cost control on the underlying Codex API.

Does Symphony work with Jira?

As of late April 2026, Symphony’s confirmed integration is with Linear. The SPEC.md suggests the orchestration layer is designed to be abstract enough for other issue trackers, but Jira support has not been confirmed in official documentation or repository commits. Some early coverage has claimed Jira integration is planned, but this should be treated as unverified until OpenAI confirms it.

Is OpenAI Symphony free to use?

The Symphony framework itself is free and open-source under the Apache 2.0 license, which permits enterprise use without copyleft restrictions. However, Symphony’s reference implementation is configured to use OpenAI’s GPT-5.3-Codex model by default, which requires a paid Codex API subscription. Agent runs generate API costs proportional to usage.

What are the security risks of using Symphony?

The main security concerns include prompt injection via maliciously crafted ticket descriptions, potential exposure of secrets or credentials encountered during agent execution, and the risk of autonomous code being merged without adequate human review. OpenAI has not published a formal threat model for Symphony. Enterprises should implement PR review gating and audit agent execution sandboxing before production deployment.

How popular is Symphony on GitHub?

As of April 25, 2026, Symphony had 15,400 GitHub stars, 1,300 forks, and a global repository rank of approximately 2,913, placing it in the top 3,000 of all repositories on the platform. It reached 8,700 stars in its first four days after release, an adoption velocity considered exceptional for infrastructure tooling.

The Bottom Line

Symphony is the clearest signal yet that the AI coding assistant era is giving way to something structurally different. For two years, “AI-assisted development” meant a developer with a better autocomplete. Symphony means a developer managing a queue. The code still gets reviewed. The PRs still get merged by humans, for now. But the middle step, picking up a ticket, understanding the scope, writing the implementation, running the tests — that step is now optional for a human to perform.

That’s not a claim about the future. It’s a description of what Symphony’s GitHub stars represent: thousands of engineering teams reading the spec and thinking, seriously, about how their workflows would have to change for this to run in production. Some of them are already running it. The commit history and fork count say so.

Whether Symphony specifically becomes the dominant standard or gets absorbed into a larger platform, Microsoft’s or Google’s or OpenAI’s own, matters less than what it proves. The agent orchestration layer for software development now exists. It’s open-source, it’s in Elixir, and it’s already watching your Linear board.

Watch For
01 OpenAI’s first major Symphony update post-engineering preview, specifically whether gpt-5.3-codex remains the only supported model or the spec opens to third-party LLMs. Expected Q2-Q3 2026.
02 Enterprise security audits and formal threat models for autonomous coding agents: Symphony’s deployment in production environments will likely trigger the first published security research on prompt injection via issue trackers.
03 Competitor responses from Anthropic, Google, and Microsoft — each will need an answer to the “manage work, not agents” framing that Symphony has introduced to the enterprise AI conversation.
04 Labor market data on junior engineering hiring at companies that have adopted autonomous coding agents at scale; the displacement question won’t be theoretical for much longer.