Codify your SaaS Apps: The Answer to the Unmanaged SaaS Jungle

Eran Bibi

When we adopt SaaS tools, we tend to see them as just tools, not what they eventually become: additional siloed, unmanaged clouds with their own proprietary inventory of services, objects, and resources.

Infrastructure drift, unmanaged resources, ghost assets: these are all well-known "silent killers" in our clouds. Whether on AWS, GCP, Azure, Kubernetes, or anything else, when deploying our services to multiple clouds we know that unified inventory and management of cloud resources is complicated, and there are many great tools out there looking to help solve this growing complexity.

One thing that is often overlooked, though, is where our SaaS tooling comes into the mix.

A phenomenon we have encountered often when helping companies overcome drift is a common neglect of cloud infrastructure tooling such as Cloudflare, Okta, MongoDB Atlas, Datadog, Git, and many other popular SaaS platforms and tools that are part of our core operations. How can we make these SaaS clouds immutable, versioned, scalable, and monitored if they aren't codified? Is state drift in Okta any less troubling than drift in your IAM roles, for example? How can we guarantee proper monitoring if anyone can change our Datadog dashboards and cause drift?

These are just some of the questions that come to mind when we see this recurring anti-pattern in cloud operations today. But you may ask, why does this matter?

A growing understanding is dawning on DevOps engineers that it is much safer and less error-prone to codify cloud resources, with all the inherent benefits of managing these resources like any other code: git history, peer reviews, PR automation, and policy enforcement. SaaS services, however, have not yet undergone a similar evolution and epiphany. While clickops for cloud configuration has mostly been abandoned in favor of IaC practices, SaaS tooling is still predominantly configured manually via the UI, with minimal codification. Not surprisingly, this leads to many of the same problems you'd find in your cloud operations.
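To make this concrete, here is a minimal sketch of what codifying a single SaaS resource can look like, using the DataDog/datadog Terraform provider; the monitor name, query, and threshold are illustrative placeholders rather than a prescribed setup, and credentials are assumed to come from environment variables.

```hcl
terraform {
  required_providers {
    datadog = {
      source = "DataDog/datadog"
    }
  }
}

# Credentials are assumed to be supplied via DD_API_KEY / DD_APP_KEY
provider "datadog" {}

# A monitor that would otherwise be clicked together in the Datadog UI.
# Once it lives in git, any change goes through review, and manual edits
# in the UI show up as drift against this definition.
resource "datadog_monitor" "high_cpu" {
  name    = "High CPU on web hosts"
  type    = "metric alert"
  message = "CPU above threshold, notify the on-call channel"
  query   = "avg(last_5m):avg:system.cpu.user{role:web} by {host} > 90"

  monitor_thresholds {
    critical = 90
  }
}
```

The same pattern applies to dashboards, Okta groups, Cloudflare DNS records, and so on: the SaaS object becomes a versioned, reviewable resource instead of a UI setting.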

Putting the Git in GitOps

Kubernetes and cloud native systems are commonly associated with GitOps practices, widely considered the best-practice, modern way to manage complex Kubernetes operations. Yet the git part of GitOps is all but neglected when it comes to the tooling that surrounds these systems. I'll explain.

If we look at the top downloaded Terraform providers for SaaS applications that are not clouds, the data is extremely compelling:

  • DataDog/datadog 32.4M+
  • integrations/github 16.9M+
  • cloudflare/cloudflare 16.8M+
  • newrelic/newrelic 12.5M+
  • hashicorp/consul 9.7M+
  • PagerDuty/pagerduty 8.6M+
  • grafana/grafana 5.4M+
  • gitlabhq/gitlab 4.8M+
  • mongodb/mongodbatlas 4.2M+
  • okta/okta 4M+
  • elastic/ec 3.2M+


While there is an increasing trend towards codifying these resources through Terraform providers, compare these figures with the AWS Terraform provider at 750M+ downloads, or the next most popular cloud, Azure, at 127M+. That places the most popular non-cloud provider at roughly 5% of that adoption (and ultimately codification).

This is because not only does this codification need to be done for each SaaS tool individually; once these tools have been configured through clickops, just translating that configuration into IaC is an extremely complex undertaking (particularly in large organizations with multiple dashboards, clouds, services, and other dependencies and resources).

If we come back to thinking about how we convert our git operations to be GitOps native, we'd likely need to follow a post similar to this one that walks you through the process of managing your GitHub organization with Terraform. And this is just one tool of many in a huge stack of SaaS tooling that would need to undergo a similar transformation; this is another example of a post that walks you through a similar transition for Datadog. And the list goes on. Now imagine having tens of tools, and in large organizations, multiple teams and clouds. The task is daunting just to think about. Until now.
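As a sketch of what that GitHub transformation looks like in practice, assuming the integrations/github provider with a token and organization supplied via GITHUB_TOKEN / GITHUB_OWNER; the repository and team names here are hypothetical.

```hcl
terraform {
  required_providers {
    github = {
      source = "integrations/github"
    }
  }
}

# Authentication is assumed via GITHUB_TOKEN / GITHUB_OWNER env vars
provider "github" {}

# A repository and a team grant that would otherwise exist only as
# settings in the GitHub UI, invisible to review and version history.
resource "github_repository" "platform" {
  name       = "platform"
  visibility = "private"
}

resource "github_team" "devops" {
  name = "devops"
}

resource "github_team_repository" "devops_platform" {
  team_id    = github_team.devops.id
  repository = github_repository.platform.name
  permission = "push"
}
```

Multiply this by every repository, team, webhook, and branch rule in the organization, and the scale of the manual translation effort becomes clear.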

Codifying Your SaaS

When thinking about the critical aspects of codifying your SaaS, there are a few angles it was important for Firefly to focus on to make this transition truly valuable for all DevOps teams. The first layer of value is a unified inventory of both cloud assets and SaaS assets in a single place. This alone enables DevOps teams to search, understand, and classify assets across all clouds, the operational ones and the tooling clouds alike, something that wasn't possible before from a single dashboard or tool.


The next aspect is actually getting all of these tools and assets managed as code. If this has become the cloud standard, it's not clear why it hasn't happened for SaaS apps too. We've spoken about the benefits of managing everything as code; once resources are managed as code, with the relevant guidelines and internal engineering practices applied, they can be automated as part of CI/CD processes, with the relevant gating and guardrails applied there too.
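One way such guardrails can be expressed directly in code is with Terraform's variable validation, sketched here against the integrations/github provider; the repository name and review count are assumptions for illustration, not a recommended policy.

```hcl
# A guardrail in code: the plan fails if someone tries to weaken
# the review requirement below one approving review.
variable "required_approving_reviews" {
  type    = number
  default = 2

  validation {
    condition     = var.required_approving_reviews >= 1
    error_message = "At least one approving review must be required."
  }
}

# Look up an existing repository (the name is a hypothetical example)
data "github_repository" "platform" {
  name = "platform"
}

# Branch protection becomes a reviewable, enforced configuration
resource "github_branch_protection" "main" {
  repository_id = data.github_repository.platform.node_id
  pattern       = "main"

  required_pull_request_reviews {
    required_approving_review_count = var.required_approving_reviews
  }
}
```

Run in CI, a plan that violates the validation never reaches apply, which is exactly the kind of gating clickops configuration can't offer.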


Doing so manually would require engineers to translate all of their manual configurations (which are not always found in a single place in the UI, and span the many layers of their application) into the relevant code configurations, often many times over when there are multiple applications, dashboards, or tools. This is now possible at the click of a single button, for all SaaS tools, in one place.
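For comparison, the manual route has become somewhat easier with Terraform 1.5+ import blocks, which adopt an existing clickops-created resource into state; the resource address and ID below are hypothetical placeholders.

```hcl
# Adopt a UI-created Datadog dashboard into Terraform state.
# Both the resource address and the ID are hypothetical placeholders.
import {
  to = datadog_dashboard.service_overview
  id = "abc-123-def"
}
```

Running `terraform plan -generate-config-out=generated.tf` then drafts matching configuration for the imported resource, but repeating this tool by tool, dashboard by dashboard, across a large stack is still the daunting part.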


If we take a look at a typical Firefly dashboard, we can see that typical SaaS tools have as little as 20% of their resources codified, versus roughly 50% for cloud service providers.

The companies that have flipped this number and codified these resources were able to enjoy the IaC advantages of faster deployment cycles and standby configuration templates for disaster recovery scenarios.

Photo by Josh Hill on Unsplash