All The Ways to Work with Data in OpsLevel

Let's review the ways you can ingest and manipulate data in OpsLevel. Whether you're new to OpsLevel or a wily veteran, you're certain to learn something along the way!

Methods for Ingesting and Manipulating Data in OpsLevel

Here are the six tools you can use to perform CRUD (create, read, update, delete) operations in OpsLevel:

  1. OpsLevel UI
  2. OpsLevel YAML
  3. OpsLevel CLI
  4. OpsLevel GraphQL API
  5. OpsLevel Terraform provider
  6. OpsLevel Kubernetes Sync

Let’s briefly break down what each one can do and when each is best used:

OpsLevel UI

The lowest-friction way of adding and editing information is interacting directly with the OpsLevel UI. This option lets you click a button and fill in basic information about your new service or repository. All updating and deletion is performed by navigating the UI.


Pros:

  • easy to get started quickly (no special setup to use beyond username and password)
  • most accessible for those who aren’t engineers
  • great way to answer quick one-off questions, consume reports, or track service maturity levels


Cons:

  • this won’t scale if you have a large number of services to maintain
  • not the quickest way to bulk-add information or services
  • not great if multiple teams are having to catalog all of their services
  • editable by any user with edit permissions (not stored in code)

OpsLevel YAML

Everyone loves some YAML, am I right!? :) But seriously, this is a great way to store all of your service metadata as code (aka config-as-code or GitOps) as your single source of truth. (You must use one of OpsLevel’s git integrations to use YAML.)

You create a YAML file, and once your repository is integrated, OpsLevel scrapes it automatically so your service has all of its information populated and ready to go. This is ideal for central platform teams that are onboarding lots of different product teams and need a single, easy path that supports adding information in an asynchronous, independent fashion.
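As a minimal sketch, here is what such a file might look like. The field names follow OpsLevel's config-as-code schema, but check the docs for the full set of supported fields; all values below are placeholders:

```yaml
# opslevel.yml — placed at the root of the repository
version: 1
service:
  name: shopping-cart
  description: Handles customer shopping carts
  owner: order-team            # team alias in OpsLevel (placeholder)
  lifecycle: generally_available
  tier: tier_2
  tags:
    - key: environment
      value: production
```

Once the repo is integrated, OpsLevel picks this file up on its own; no extra registration step is needed.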


Pros:

  • single file can be templated for use by all teams
  • just include the file in the root of your repo and it will be scraped
  • once this file is registered, the UI is locked so that this file in code is the single source of truth about the service
  • generate and download a YAML file from OpsLevel for any service, no matter how it was originally created


Cons:

  • have to deploy this file to all repos
  • needs clear documentation and support to ensure the fields your org cares about get populated
  • with many people submitting these files at once, naming conventions will be hard to enforce or standardize

OpsLevel CLI

Power users, this is for you! OpsLevel has a command-line tool–built on top of our GraphQL API–that gives you the ability to perform basic CRUD functions on all OpsLevel items.

This tool can be used to create resources like services, groups, teams, and checks–all programmatically.

For instance, you can query your authentication directories, extract the list of users and groups there, and feed that into our CLI so it can loop through them all and create them in OpsLevel! The CLI can be installed via direct download of a binary for your OS or via Homebrew.
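As an illustration of that looping pattern, here is a small shell sketch. The service names are hypothetical, and the subcommand shape is based on the CLI's kubectl-style verbs (verify against the CLI's help output); it dry-runs by echoing each command rather than executing it:

```shell
# Names pulled from, say, your identity provider (placeholder values).
services="cart payments catalog"

for svc in $services; do
  # Dry run: print the command instead of running it.
  # Remove 'echo' to actually create each service via the opslevel CLI.
  echo "opslevel create service ${svc}"
done
```

The same loop works for teams, groups, or checks: swap in the relevant subcommand and feed it whatever list your directory export produces.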

The CLI is useful not just for doing things programmatically, but also for fixing one-off things that need to be corrected or added in special scenarios. For teams that aren’t yet ready for full-on Terraform functionality, this is a great intermediary tool to use to get there.

And of course, it’s built for those of you who would rather live in your terminal than a UI.


Pros:

  • able to perform CRUD operations on all OpsLevel objects
  • good way to programmatically script complex operations in OpsLevel
  • easy setup: install with Homebrew and grab a token
  • commands are constructed very similarly to k8s patterns, so there's a small learning curve


Cons:

  • with great power comes great responsibility… :)
  • not idempotent like our Terraform provider

OpsLevel GraphQL API

If you’re interested in capabilities of the CLI, but want more language flexibility, you can interact with our GraphQL API directly.



Pros:

  • the same CRUD capabilities as the CLI, usable from any language with an HTTP client

Cons:

  • you don’t get to benefit from the abstractions built into the CLI

OpsLevel Terraform Provider

If you keep all of your infrastructure-as-code as we do, then we have great news for you. We have an OpsLevel Terraform provider so that you can provision all OpsLevel objects via Terraform.

This unlocks the ability to get really creative in how you provision and track all of your metadata with OpsLevel.

We recommend using the OpsLevel Terraform module as part of your base service templates so that you get instant tracking of your new service and the code to maintain that information in OpsLevel lives directly alongside the code you provisioned the infrastructure with!
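As a rough sketch of what that looks like, a service registration might live next to its infrastructure like this. The resource and attribute names should be checked against the provider's registry docs, and the values are placeholders:

```hcl
terraform {
  required_providers {
    opslevel = {
      source = "OpsLevel/opslevel"
    }
  }
}

# Register the new service in OpsLevel alongside the rest of its infrastructure.
resource "opslevel_service" "shopping_cart" {
  name            = "shopping-cart"
  description     = "Handles customer shopping carts"
  owner           = "order-team"          # team alias (placeholder)
  lifecycle_alias = "generally_available"
  tier_alias      = "tier_2"
}
```

Drop a block like this into your base service template and every new service is cataloged the moment `terraform apply` runs.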

Using Terraform is a pattern that we highly recommend (though there is a learning curve involved) and use ourselves to make sure we have complete auditability. Using Terraform for both creation and cataloging ensures that nothing is missed.

This option also has the benefit of taking a task off of developers’ plates.

No more nagging and pressing teams to get their services into OpsLevel. This also allows naming and ownership conventions that are typically clearly defined in Terraform to be used to construct the metadata to be passed to OpsLevel. It doesn’t get much cleaner or more concise than doing it this way.


Pros:

  • OpsLevel is informed of your new service at creation time
  • code to track your services lives alongside your infrastructure code
  • infrastructure team can use clearly defined names and ownership roles in Terraform to compose the metadata for OpsLevel
  • naming and ownership conventions for the org can be enforced at time of creation
  • no longer a developer concern to make sure they get their services into OpsLevel


Cons:

  • requires a high level of infrastructure-as-code discipline
  • harder to use this option if you already have a lot of things manually added to OpsLevel. You will have some work to do in order to switch over.

OpsLevel Kubernetes Sync

Our final tool is a unique one, but very useful if you use Kubernetes. We've worked on teams where, despite the best intentions and policies, things got deployed into our cluster(s) without the SRE or Platform teams knowing, which later caused cost problems and security incidents. In those situations, this option can be a lifesaver.

We built the Kubernetes Sync tool so that everything in the cluster gets sent to OpsLevel. All services are discovered and there are ways to configure the tool to scrape certain namespaces, de-duplicate found services, etc.

We have documentation that covers this in depth on our docs page, but it’s an ideal way to see what is running in your cluster. This gives you visibility into all of those cluster tools and services that aren’t normally thought of as “services,” such as CoreDNS, kube-proxy, and the kubelet.

Tracking those tools is helpful for Platform/SRE teams that manage clusters, giving them full visibility into any security, versioning, or compatibility issues that develop over time.

This tool is the only one of the group that gives you a snapshot of how things actually are, not how they are intended to be. It helps teams track maturity continuously and gives the most realistic picture of what their org is actually running at any given time.

The k8s sync tool isn’t just meant as a means to play “catch up” on what is actually out there in the world. If you are intentional about starting off with it and using it instead of one of the tools above, then this is a perfect way to keep your OpsLevel information fresh–like, real-time fresh!

There are a few things to consider before using this tool. First, if you use it, it will be THE WAY all things k8s get tracked in OpsLevel. You'll need to consider carefully how you use the other tools so they don't overlap or conflict with this one.

Second, your mileage will definitely vary based on the complexity and setup of your cluster. There is a configuration file that will need some trial-and-error testing before the scraping is just right. Let us know if we can help with this process.
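For a sense of what that trial and error involves, here is an illustrative sketch of the kind of selector config the sync tool works with. The field names and expression syntax here are assumptions for illustration only; consult the tool's docs for the real schema:

```yaml
# Illustrative only: import Deployments as services, skipping system namespaces.
service:
  import:
    - selector:
        apiVersion: apps/v1
        kind: Deployment
        excludes:
          # Skip workloads in kube-system (filter expression; illustrative)
          - .metadata.namespace == "kube-system"
```

Expect a few passes of tightening the selectors and exclusions before the set of discovered services matches what you actually want cataloged.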

Overall, when this tool gets set up and is running, it gives you an unparalleled view of what is running in your cluster. Combined with some precisely crafted checks in OpsLevel, you will quickly discover things on the Service Maturity rubric that may be “unauthorized” for deployment in your cluster.


Pros:

  • syncs all services in k8s to OpsLevel
  • discovers drift or unauthorized things deployed to k8s
  • another way to avoid developer responsibility for getting their services into OpsLevel


Cons:

  • takes time to tinker with and fully craft the config file around your cluster's nuances
  • need a plan in place for what may overlap/conflict with this tool

Putting It All Together

Ok, so now that we’ve seen what each tool can do and its accompanying pros and cons, let’s wrap up with a discussion on putting the different pieces together.

We suggest being intentional about your data ingest strategy so that you can most efficiently capture and maintain all of your organization’s services and metadata within OpsLevel.

The largest consideration when choosing a tool to anchor your strategy around is where your company/team stands in terms of tooling maturity. There is no right or wrong answer here, and we’ve worked our hardest to make tooling that meets you wherever you are on this spectrum.

Let’s get into the weeds a bit and talk specific scenarios:

  1. Smaller org or not fully into Infra-as-Code: the place to start is the CLI (assuming you have the expertise and time to learn it). It really does get you the most bang for your buck, as it lets you do some power moves like programmatically working with data and OpsLevel entities.

  2. Command line tools not your thing: then the UI and using OpsLevel YAML is your best path forward. Utilizing the UI to add some services and repos and putting a few YAML files in your repos and then writing up some documentation (e.g. naming conventions) about how you’d like others in your org to follow your pattern is a good methodology to get started.

  3. Don’t forget: You can also download a YAML file for any service, regardless of how it was first created–a very useful pattern if you’d like to migrate towards a full config-as-code model.

  4. Familiar with Terraform: use our provider to create all of your metadata in OpsLevel in one giant Terraform plan and apply!

  5. Discovered services: If you’re sending deploy events or custom event check payloads to OpsLevel, you can use these data sources to bootstrap your catalog–or keep it up to date over time.

No matter which of these scenarios you find yourself in (or even a mix of a few of them), if you use Kubernetes at all, definitely check out the capabilities of the Kubernetes sync tool and see if it's the right fit. It really is a quick way to get some big wins. I especially recommend it for finding and reporting on those pesky things that eat cluster resources: services someone deployed just to try something out, or that got mistyped, and are now sitting idle in your cluster doing nothing.

Here at OpsLevel, we have tooling for you to really mix-and-match and “choose-your-own-adventure” as you start using OpsLevel to catalog and track the maturity of your services. As always, we’d love to talk shop with you about our tooling and are happy to make recommendations specific to your needs.
