OpsLevel's new MCP Server powers your AI Assistant with real-time context
As AI assistants become integral to developer workflows, the need for real-time access to accurate service information has never been greater. That's why we've built the OpsLevel MCP (Model Context Protocol) server.
Imagine an on-call engineer at 2 AM asking their AI assistant: "Which team owns the payment service that's throwing 500 errors, and what's their escalation path?" Instead of hunting through Slack, PagerDuty, and internal wikis, they get an instant, accurate answer.
Our goal is to meet developers where they work and spend their time, rather than forcing them to navigate away from their IDEs or chat tools to find the information they need. Our MCP server serves as a vital link, allowing Large Language Models (LLMs) such as those behind GitHub Copilot, Cursor, and Claude to seamlessly integrate with your OpsLevel developer portal. This integration unlocks an AI assistant that truly understands your unique software ecosystem, capable of answering complex questions about services, health, ownership, and more.
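As a sketch of what this integration looks like in practice: MCP-capable assistants such as Claude Desktop and Cursor are typically pointed at an MCP server through a small JSON configuration entry. The command name `opslevel-mcp` and the `OPSLEVEL_API_TOKEN` environment variable below are illustrative assumptions; refer to our documentation for the exact setup for your tool.

```json
{
  "mcpServers": {
    "opslevel": {
      "command": "opslevel-mcp",
      "env": {
        "OPSLEVEL_API_TOKEN": "<your-api-token>"
      }
    }
  }
}
```

Once registered, the assistant discovers the server's available tools automatically and can call them on the developer's behalf during a conversation.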
The challenge: critical information fragmented across systems
Developers face a persistent challenge: service information is scattered across monitoring dashboards, wiki pages, chat histories, and configuration files. Whether troubleshooting a production issue, integrating with a new API, or simply understanding service dependencies, every minute spent hunting for service documentation, owner contact details, or architectural context interrupts the development flow and slows progress.
Our MCP server eliminates this friction, allowing developers to access critical information and take action within their coding assistant or chat tool, staying focused on resolution rather than information gathering.
By integrating OpsLevel data, you'll significantly enhance what's possible within these AI assistants, leading to increased efficiency, faster task completion, and an overall improved developer experience.
Our differentiator: data quality and a holistic AI strategy
While the proliferation of MCP servers is exciting, we also recognize that the ecosystem is still evolving and standards are forming in real time. A critical insight we've gained is that an MCP, by itself, is not a silver bullet. The effectiveness of any AI assistant powered by an MCP is directly tied to the quality of the data in the underlying Internal Developer Portal (IDP). The old saying applies: garbage in, garbage out. If your software catalog is incomplete, outdated, or misaligned with your team's actual work, even the most sophisticated AI model won't deliver helpful results.
Our customers report significantly faster incident resolution when their service catalog data is complete and current, which is why our AI Engine automatically validates catalog accuracy.
This understanding forms the bedrock of our holistic AI strategy at OpsLevel. We view the MCP server as one piece of a broader vision to make your developer experience more intelligent and reliable. A primary focus for us is foundational: ensuring the data in your IDP is accurate, comprehensive, and consistently synchronized. Our AI Engine already assists customers by automating catalog completeness and identifying inconsistencies, and we are continually expanding these capabilities. That commitment to trustworthy IDP data is what makes our MCP's answers reliable.
The power of an IDP-backed MCP: connected context
This commitment to data quality also highlights a crucial advantage of having an MCP on top of an IDP, rather than individual MCPs for each underlying system. While standalone MCPs for tools like Git, Jira, or security scanners might provide context within their specific domain, they operate in isolation. OpsLevel's IDP, however, connects and enriches data from across all of these disparate systems. This means our MCP can surface richer, connected, and holistic context, resulting in more actionable insights for developers.
The OpsLevel MCP leverages this rich, unified data to provide developers with immediate context on services, APIs, documentation, standards, and relevant contacts needed to build new features.
Building with our customer community
Our journey with the MCP server has been, and continues to be, a collaborative effort with our customers. We've been actively engaging with them to understand where MCP truly delivers value, which use cases are most impactful, and where there are still opportunities for improvement.
Through these conversations and alpha testing, we've identified and refined key use cases:
Operational Assistant: For on-call engineers and SREs, the MCP can quickly surface component ownership and metadata to help triage production issues, and can scan and summarize runbooks and troubleshooting guides for rapid incident response. During a recent customer incident, an SRE used our MCP to instantly identify that a failing service had 12 downstream dependencies and to surface the on-call rotation for each affected team, all within their Claude conversation. Customers have also expressed a desire for a "universal search" to answer engineers' questions about services and docs, and for help with creating campaigns and actions in OpsLevel itself. It's proving valuable for understanding service ownership and on-call rotations, too, often faster than navigating a UI.
Coding Assistant: For developers, our MCP can provide in-IDE context around API docs and other component metadata essential for building against different services. It can help with API usage and best practices, generate service-specific boilerplate code, enhance dependency awareness and versioning, and even assist with campaigns by proposing code changes. Customers are particularly interested in getting context around standards directly in their IDE.
These real-world applications are critical to our development process, ensuring that the OpsLevel MCP server delivers tangible value where it's needed most.
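Under the hood, MCP interactions like the triage examples above are JSON-RPC 2.0 messages exchanged between the assistant's MCP client and the server. The minimal Python sketch below builds the `tools/call` request an assistant might send; the tool name `get_service` and the `alias` argument are hypothetical illustrations, not OpsLevel's actual tool schema.

```python
import json


def make_tool_call(request_id: int, tool: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 request for the MCP `tools/call` method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": tool,            # hypothetical tool name
            "arguments": arguments,  # hypothetical argument schema
        },
    }


# Hypothetical example: ask the server about the payment service's metadata.
request = make_tool_call(1, "get_service", {"alias": "payment-service"})
print(json.dumps(request))
```

In practice the assistant's MCP client handles this exchange transparently over stdio or HTTP; the sketch only shows the message shape the protocol defines, which is why the same server works across Copilot, Cursor, and Claude.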
What's next?
We're just getting started with what's possible with MCP. Our roadmap includes expanding operational use cases, deeper integration with incident management workflows, and enhanced automation capabilities. We'd love to hear how you envision using AI-powered service catalogs in your organization.
If you're exploring MCP or curious about how AI can make your developer portal smarter and more effective, reach out and help us shape the future of developer tools.
Ready to get started? Check out our demo videos to see the MCP server in action, explore our documentation for setup instructions, or dive into the code on our GitHub repository.