Ben Donnelly
AI coding agents are powerful. But without understanding your system, they're flying blind.
Every engineering team has felt it. You fire up Cursor, Claude, or Windsurf, feed it a task, and watch it confidently generate code that misses the point entirely. It doesn't know that service A talks to service B through a message queue. It doesn't know that the function it's refactoring is called by three other services in production. It doesn't know that the last time someone touched that endpoint, it caused a P1 incident.
AI coding agents are remarkable at writing code. They're terrible at understanding your system.
That's the problem Opentrace solves.
The current wave of AI-assisted development has a blind spot. Tools like Cursor and Windsurf give AI agents access to your codebase — files, functions, maybe a few open tabs. But a codebase is not a system. A system is code, infrastructure, runtime behavior, deployment state, ticket history, and the institutional knowledge that lives in your team's heads.
When an AI agent suggests a "simple refactor" without knowing that function handles 50,000 requests per second in production, or that the service it depends on was flagged in last week's incident review — that's not intelligence. That's autocomplete with extra steps.
Opentrace builds a living, multi-layered knowledge graph of your entire engineering ecosystem. Not just code — everything.
Source code and structure. Every repo, file, class, function, and dependency is mapped and connected. Opentrace doesn't just index files — it understands how your system is built. Which services call which. What depends on what. Where the boundaries are.
Infrastructure and deployments. Clusters, namespaces, and deployments from AWS, GCP, and Kubernetes. The real topology of your system as it actually runs — not just what's committed to Git.
Runtime observability. Traces, logs, and metrics from Grafana, Datadog, and Dash0 are tied directly back to the code and services that produce them. When something is slow, you can trace the latency path right back to the function responsible.
Change and project management. Issues, comments, and activity from GitHub, GitLab, Linear, and Jira. Opentrace captures why decisions were made — not just what changed.
All of these layers are unified into a single, queryable graph. And that graph is exposed to your AI tools via MCP (Model Context Protocol), which means Claude, Cursor, and other AI agents can query it natively.
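To make the idea concrete, here is a toy sketch of what a cross-layer graph can look like. This is not Opentrace's actual data model; every node and edge name below is invented for illustration. The point is that a single node type system lets one hop cross from code into infrastructure, runtime, and tickets.

```python
# Toy cross-layer knowledge graph: nodes from different layers (code,
# infra, runtime, tickets) connected by typed edges. All names invented.
graph = {
    "fn:process_payment":     [("defined_in", "file:payments/api.py")],
    "file:payments/api.py":   [("belongs_to", "repo:payments")],
    "repo:payments":          [("deploys_as", "svc:payments")],
    "svc:payments":           [("runs_in", "k8s:ns/prod"),
                               ("emits", "trace:checkout-latency")],
    "trace:checkout-latency": [("linked_to", "ticket:PAY-142")],
}

def neighbors(node):
    """Return (edge_type, target) pairs one hop away from node."""
    return graph.get(node, [])

# One hop from the service node already spans infra and observability.
print(neighbors("svc:payments"))
```

A single hop from `svc:payments` reaches both a Kubernetes namespace and a trace, which is exactly the kind of cross-layer adjacency siloed tools can't express.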
Here's a real example from our own workflow. We recently needed to migrate away from Clerk's discontinued Firebase integration in our API. In a traditional setup, an AI agent might look at the auth files and suggest a straightforward swap. But by querying Opentrace, the agent could see the full picture: which services depended on Firebase tokens, what middleware was involved, which endpoints would be affected, and what the test coverage looked like across the entire authentication chain.
The result was a migration plan that accounted for backward compatibility, zero downtime, and a multi-authenticator approach that supported both token types during the transition. The kind of plan that normally requires a senior engineer with deep institutional knowledge — generated in minutes because the AI had the context it needed.
Another example: need to understand the blast radius of changing a shared utility function? Instead of grep and prayer, you query the Opentrace graph. It shows you every service that calls that function, the deployment state of each, their current error rates, and any related open tickets. Impact analysis that used to take hours of Slack pinging and code archaeology takes seconds.
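At its core, that blast-radius question is a graph traversal over reverse edges. Here is a minimal sketch, assuming a toy reverse call graph with invented service names; Opentrace's real traversal covers far more node and edge types, but the shape of the computation is the same.

```python
from collections import deque

# Toy reverse call graph: each key maps to the things that call it.
# All names invented for illustration.
callers = {
    "util:format_money": ["svc:billing", "svc:invoices", "svc:checkout"],
    "svc:billing":       ["svc:reports"],
}

def blast_radius(node):
    """BFS over reverse edges: everything that transitively calls node."""
    seen, queue = set(), deque([node])
    while queue:
        for caller in callers.get(queue.popleft(), []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return sorted(seen)

print(blast_radius("util:format_money"))
# Note: svc:reports appears even though it never calls the utility
# directly -- it calls svc:billing, which does.
```

The transitive step is what grep misses: a text search finds direct callers, while the traversal also surfaces services that are affected only through intermediaries.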
We chose a graph-based approach deliberately. Systems are inherently relational. A function is defined in a file, which belongs to a repo, which deploys as a service, which runs in a namespace, which serves traffic that generates traces that link to incidents that reference tickets. These connections matter.
Traditional observability tools collect data in silos — metrics here, logs there, traces somewhere else. Traditional code intelligence tools understand syntax but not architecture. Opentrace connects all of it into a single model where relationships are first-class citizens.
This means you can ask questions that span layers: "Show me all services that depend on this database table and had error rate spikes in the last 24 hours." Or: "What changed in the last deploy of this service, and are there any related open issues?" These aren't hypotheticals — they're the kinds of queries that flow naturally through a connected graph.
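A query like the first one is, structurally, an intersection of facts from two layers. The sketch below uses invented data and plain Python sets rather than Opentrace's actual query interface, purely to show why a unified graph makes such questions cheap to answer.

```python
# "Services that depend on this table AND had error spikes in the last
# 24 hours" as a set intersection across layers. Data invented.
depends_on_table = {"svc:billing", "svc:reports", "svc:auth"}  # code layer
error_spike_24h  = {"svc:reports", "svc:search"}               # runtime layer

affected = sorted(depends_on_table & error_spike_24h)
print(affected)
```

When both layers live in one graph, the join is a lookup; when they live in separate tools, it is an afternoon of copy-pasting service names between dashboards.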
Opentrace exposes its entire knowledge graph through MCP, making it a natural extension of any AI workflow. When Claude or another MCP-capable agent connects to Opentrace, it gains access to tools like search_nodes, traverse_dependencies, find_path, and get_neighbors — allowing it to explore your system's architecture, trace dependency chains, and assess impact before writing a single line of code.
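Under MCP, a tool invocation is a JSON-RPC 2.0 request with the method "tools/call". The sketch below shows that request shape for the traverse_dependencies tool named above; the tool name comes from this post, but the argument fields (node, direction) are invented for illustration and may not match Opentrace's actual schema.

```python
import json

# Shape of an MCP tool invocation: a JSON-RPC 2.0 "tools/call" request.
# Tool name from the post; argument fields are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "traverse_dependencies",
        "arguments": {"node": "svc:payments", "direction": "upstream"},
    },
}
print(json.dumps(request, indent=2))
```

Because this is the same envelope every MCP tool uses, any MCP-capable client can discover and call these tools without Opentrace-specific glue code.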
This isn't a bolted-on integration. The MCP interface is how we use Opentrace ourselves. Our own AI agent uses the same graph to analyze codebases, investigate incidents, and plan changes. The same tools available to the agent are available to you in Claude, in your IDE, or in any MCP-compatible client.
We call it vibe engineering — and we mean that seriously. When AI truly understands your system, engineering changes from a cautious, archaeology-heavy process to a fluid, high-confidence one.
Ship complex changes safely. Large refactors, new services, migrations — they stop being gambles. Opentrace shows exactly what will change and how behavior will shift before anything hits production.
Eliminate tribal knowledge bottlenecks. The context that used to live only in a senior engineer's head is now in the graph. New team members, AI agents, and cross-functional collaborators all have access to the same deep system understanding.
Reduce operational incidents. By mapping how your entire system behaves, Opentrace spots weak links and failure chains before they become pages. Context-aware engineering means fewer surprises in production.
Opentrace is currently in early access. We're working with teams who want to be at the forefront of AI-driven engineering — teams who are tired of their AI tools operating with partial context and are ready to give them the full picture.
If that sounds like your team, we'd love to hear from you.
Opentrace integrates with GitHub, GitLab, Bitbucket, AWS, GCP, Kubernetes, Grafana, Datadog, Dash0, Linear, Jira, and more. Learn more at opentrace.com.