Containerization, orchestration and open standards for metrics, logs, traces, and flows have ushered in a new era of open source monitoring tools that will become the foundation for instrumentation in the enterprise.
Open source monitoring tools are nothing new. In the distributed application era, projects like Nagios gained a measure of adoption. In the cloud native era, however, we see much more momentum, driven by a few factors:
1. An open source tech stack. Modern apps are built with open source components such as MySQL, Kafka, and Elastic, running on Linux, containerized with Docker, and orchestrated with Kubernetes. With the entire stack open source, an open source monitoring plane is a natural extension.
2. Open instrumentation standards. Historically, hardware and operating systems were instrumented for metrics and control through proprietary configuration mechanisms. Over time, application virtual machines such as the JVM provided structured instrumentation frameworks (JMX), which were adopted, albeit slowly. Containers and Kubernetes have dramatically accelerated the trend toward instrumentation standards. Containers publish metrics in a consistent format regardless of the VM on which they run, and Kubernetes has spawned several supporting telemetry projects, including Prometheus for time-series metrics, Loki and Fluentd for logs, and Jaeger for tracing. Portability across private, public, and hybrid cloud environments also improves when orchestration and its associated instrumentation are standardized.
3. Collect telemetry once to support a broad set of use cases. Instrumentation serves many use cases beyond monitoring and observability. Forward-leaning organizations are building their own telemetry practices: taking ownership of their data, collecting it once in a central location, and providing it as a service for multiple business purposes, including security, capacity planning, customer experience management, and operational analytics. The days of monitoring vendors charging customers for access to ‘their’ instrumentation are numbered.
4. Migration to the cloud has changed priorities. As production-level deployments increase, the focus has shifted from basic monitoring of infrastructure and network to CI/CD, orchestration, and application performance.
5. Transparency and cost of proprietary tools. Finally, proprietary tools, especially those delivered in a SaaS/subscription model, have become a material operating expense for cloud applications. If monitoring your applications costs nearly as much as running and hosting them, you have to find another way.
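The consistent metric format mentioned in point 2 is worth making concrete. Below is a minimal, dependency-free Python sketch of the Prometheus text exposition format; the metric name and sample values are invented for illustration, and a real exporter would use an official client library such as prometheus_client rather than hand-rolling the output.

```python
def render_counter(name, help_text, samples):
    """Render one counter metric in the Prometheus text exposition format.

    samples is a list of (labels_dict, value) pairs. Labels are sorted
    by key so the output is deterministic.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)


# Hypothetical samples, purely for illustration.
print(render_counter(
    "http_requests_total",
    "Total HTTP requests served.",
    [({"method": "GET", "code": "200"}, 1027),
     ({"method": "POST", "code": "500"}, 3)],
))
```

Any scraper that speaks this format, whether Prometheus itself or another backend, can consume the same endpoint, which is exactly the interoperability the open standards trend is about.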
We are not alone in this thinking. In their April 2020 APM Magic Quadrant, Gartner declared “by 2025, 50% of new cloud-native application monitoring will use open-source instrumentation instead of vendor-specific agents for improved interoperability, up from 5% in 2019.”
Like all open source tools, these need hardened, supported distributions, scale, and value-added technology. That's where OpsCruise comes in: we've built a technology layer that visualizes the topology of cloud applications and combines it with instrumentation from open source monitoring tools to deliver powerful insights into what's happening with your applications. Our next-generation, topology-aware ML builds a behavioral profile of your applications, eliminating the need to select metrics, tune thresholds, or sit in war rooms trying to isolate faults across multiple tools. What's more, we embed and provide supported distributions of the popular CNCF monitoring tools. Finally, you're not locked in: you can keep that open source instrumentation layer in place and feed it to other tools down the road.