Observability vs Monitoring: Mythbusting
Key Points
- Myth 1: APM and observability are not interchangeable; APM focuses on visibility inside monolithic runtimes, while observability is designed for complex micro‑service ecosystems and must cover every component, from front‑ends to legacy back‑ends.
- Myth 2: “Log love” – relying solely on logs for diagnostics – is an anti‑pattern because it eliminates real‑time monitoring, causing issues to be detected only after they impact users.
- Effective observability combines metrics, traces, and logs with proactive monitoring to detect and address problems before they affect end‑users.
- Integrating monitoring data with log information accelerates troubleshooting and prevents the destructive consequences of a logs‑only strategy.
Source: https://www.youtube.com/watch?v=IQn3W8EedvA
Duration: 00:10:29
Sections
- 00:00:00 – Untitled Section (https://www.youtube.com/watch?v=IQn3W8EedvA&t=0s)
- 00:03:29 – Observability Pricing Myths Debunked (https://www.youtube.com/watch?v=IQn3W8EedvA&t=209s): The speaker highlights how proactive monitoring reduces incident impact, then dispels the "sticker shock" myth by comparing flat per-host pricing with usage-based models for observability tools.
Full Transcript
Observability? Monitoring? Aren't they the same thing? That's what a lot of people think. But
we're going to debunk that myth and five others today in this video. Let's get to it.
Myth number one: Just a name. What do I mean by that? I'm actually talking about how a lot of
people think that there's not much difference between application performance monitoring and
observability. But the truth is, they're built for two completely different problem
sets. Application performance monitoring, or APM, was built around the concept of seeing inside
runtimes – things like Java or .NET. The reason it works for APM is that in the world of
runtimes, especially monolithic runtimes, all your backend systems are tied to that same runtime.
And all your front end requests are coming in to that same runtime. So if you have visibility here,
then you can see everything that's going on in your system. But observability is built
for modern applications that are built on top of microservices environments with much more
complex systems. And while there are some runtimes that might be in those systems,
that's not enough for you to understand exactly what's going on throughout the
entire microservice application. So the only way to really do that is with observability,
which has the ability to monitor all parts of the system--even going back to your backend systems
like your mainframe, so that you see a full picture of everything that's going on in the world
of your applications. Well, that was myth number one. Now, let's take a look at myth number two:
Log love. What's log love? Log love is actually referring to something I found out a few years
ago from an analyst who I was talking to, and he asked me if I had heard about this situation where
people were taking their metrics, traces, and logs – which everyone thinks of as observability –
and not doing any monitoring, but rather just writing all that information into log files. And when
a problem occurred, going to the logs to solve the problem. This isn't just a myth; this is an actual
anti-pattern. An anti-pattern is something that seems like a good idea but produces the exact
opposite result. And in this case, that result can be destructive to your
environment, to your applications, to your business. Let me explain why. If you're not
monitoring, that is, looking at all the different pieces of your environment, plus seeing how your
end users are being affected by monitoring their performance as well, and doing this in real time as
it's happening, then by the time you find out there's a problem – say, through a trouble ticket –
you're already too late to help yourself. But by monitoring, you have the ability
to catch things before they happen. And the other nice thing is by tying all the monitoring pieces
together with your log information that you do have, you actually speed up your troubleshooting.
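The idea of tying the monitoring pieces together with your log information can be sketched in a few lines. This is a hypothetical illustration, not a real vendor API – the metric and log stores here are plain lists – but it shows the mechanism: stamp the same trace ID on both the real-time metric and the log line, so that when an alert fires on the metric you can pull exactly the log entries that matter.

```python
import uuid

# Hypothetical stand-ins for a metrics backend and a log store.
metrics = []
logs = []

def handle_request(latency_ms: float) -> str:
    """Record one request in both the monitoring and logging worlds."""
    trace_id = str(uuid.uuid4())
    # Monitoring side: a real-time metric that a threshold alert can watch.
    metrics.append({"name": "request.latency_ms", "value": latency_ms,
                    "trace_id": trace_id})
    # Logging side: the same trace_id ties the log line back to the metric.
    logs.append({"trace_id": trace_id,
                 "msg": "slow request" if latency_ms > 500 else "ok"})
    return trace_id

# A slow request comes in; an alert on the latency metric would fire.
tid = handle_request(750.0)

# Troubleshooting: given the alerting metric's trace_id, fetch only its logs
# instead of grepping everything after the fact.
related = [entry for entry in logs if entry["trace_id"] == tid]
```

In a logs-only world you would start from the whole log store and work backwards after a user complains; with the shared trace ID, the alert itself points at the handful of relevant lines.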
So not only do you have the chance to get in front of incidents before they impact your users, but
when an incident does occur, you have the ability to solve it much faster. So when I think of M/T/L
in the world of monitoring versus logging, I just like to put a little "2" on the M, so it stands
for both metrics and monitoring, plus traces and logs. Now that we've looked at two myths, let's look at myth number three:
Sticker shock. What are we talking about? Pricing and cost
for observability tools. It's something we should look at because the reality is that
observability tools can be expensive, but they don't have to be expensive. Let me explain why.
There is one way of pricing where everything – all the features you get and all the things
you do – is inclusive, forecastable, and known, such as charging per host in your environment.
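As a toy illustration of the contrast (all rates and volumes here are hypothetical, not any vendor's actual prices), per-host cost depends only on how many hosts you monitor, while the usage-based charges described next depend on whatever you happen to send through the system:

```python
def per_host_cost(hosts: int, rate_per_host: float) -> float:
    # Forecastable: depends only on how many hosts you monitor.
    return hosts * rate_per_host

def usage_cost(gb_ingested: float, rate_per_gb: float) -> float:
    # Not forecastable up front: depends on how much data you send.
    return gb_ingested * rate_per_gb

flat_q1 = per_host_cost(200, 30.0)    # hypothetical flat rate, same every quarter
flat_q2 = per_host_cost(200, 30.0)
usage_q1 = usage_cost(1_000, 2.5)     # a quiet quarter
usage_q2 = usage_cost(40_000, 2.5)    # an incident-heavy quarter: surprise bill
```

The flat model produces the same number every quarter; the usage model can jump by an order of magnitude in a bad quarter, which is exactly the "quarterly surprise charge" the speaker warns about.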
This allows you to have a very steady price quarter by quarter, year by year, based on the
number of things that you're actually monitoring. But there is another way that some observability
solutions and monitoring solutions charge, and that is to charge you for other things around the
system, such as the number of applications that you're running, or maybe the number of users
who are actually using the observability tool itself. Maybe
they're just looking at the amount of data that you're sending through the system. They might even
charge you just for debugging. The problem with this model is that you don't know ahead of time
what your bill will be. So you're going along at a fair clip, and then one or more of these things
happen and you end up with a quarterly surprise charge. How big is that surprise charge going to
be? There's evidence out in the marketplace that it could be $50 million or more. So when you're
looking at your observability solutions, you have to keep this myth in mind, not because it's
definitely going to be there, but because there are ways around it. You want to be looking for
solutions that have all-inclusive pricing and a fair, forecastable way of giving you that
price. Okay. We're halfway through the myths. Now we're about to talk about myth number four: Who, me?
That sounds really weird. Let's talk about it. A lot of people think that observability
is built to be used only by site reliability engineers or SREs. But the situation with modern
applications-- and this is actually something that goes beyond traditional monitoring capabilities
--is that observability allows us to give the information from the systems to the individual
people and organizations that need it. So, for example, you could get end user information to
your marketing team. You can get performance of different runtimes to your development
organization. You can send a view of the system as a whole to your DevOps team. Or, of course,
your SRE team and other IT personnel that need to see what's going on. You can even include
your business users and give them the information they need. The fact is that observability takes
all the data that traditional monitoring funnels only through your Ops power users and
democratizes it, giving everyone a view of the data they need so that they can do
their job as an application stakeholder. Now, we've made it to myth number five:
Pick favorites. Pick favorites? What do you mean by that, Chris? Well, we've talked about the fact
that we have all kinds of ways of pricing observability tools. But one of the things
that happens is that it takes a lot of effort to get traditional monitoring tools working. So
while a lot of organizations have anywhere from 8 to maybe 20 or even hundreds of applications, the
truth is that traditional monitoring tools require way too much effort and work and cost to give you
all this information. So you usually have to draw a line and pick your favorite applications that
you're going to monitor and the rest don't get any monitoring at all. Why do I not like this? I don't
like this because if you have an application, it's important to somebody. Or as I like to say,
every application is important to somebody. And that means that there are stakeholders of those
applications, including business owners and application owners and developers,
that need the information that comes from observability. You shouldn't have to pick. And
that's why observability gives you this broader, better view of the entire system, as opposed to
making you pick just a few applications "just in case". Okay, we're at the last myth. Myth number six:
DIY. The truth of the matter is you can build monitoring yourself, but you shouldn't
build monitoring yourself. Let me explain why. Think of everything monitoring has to cover: all
the pieces of your backend, plus the ability to measure everything down to the front end. Oh,
and don't forget, you need to detect changes, like when a service disappears or a new
service appears within the system. Doing all that manually forces you to slow things down. And as
you're trying to accelerate your development, as you're trying to get better as an IT organization,
have better performing applications-- slowing things down is a bad idea. In fact, it leads
to lower quality applications. You want to speed things up, and the only way to speed things up is
to automate. And that's why you want to look for an observability solution that automates things,
that automates discovery, that automates mapping the system, that automates monitoring end users,
and does that for all of the different users you're going to bring on board, across
all the different applications you have to monitor, while seeing fully across the entire
system and tracing everything that needs to be traced. It needs to do this automatically, or else you're
going to slow down your development and ultimately end up with lower-quality applications
and unhappy customers. And that's not what we want. So look for automation and stay away from
manual observability. Thanks for watching. Before you leave, make sure you hit like and subscribe.