Monitoring across the entire technical world is terrible and continues to be a giant, confusing mess. How do you monitor? Are you monitoring things the wrong way? Why not hire a monitoring consultant?
Today, we’re talking to monitoring consultant Mike Julian, who is the editor of the Monitoring Weekly newsletter and author of O’Reilly’s Practical Monitoring. He is the voice of monitoring.
Some of the highlights of the show include:
Observability comes from control theory and monitoring is for what we can anticipate
The industry's lack of interest in and focus on monitoring
When there's an outage, why doesn't monitoring catch it? Unforeseen things happen
The cost of monitoring, and whether it's a failure of our tools and systems that they're so obtuse to monitor
Outsourcing monitoring instead of devoting time, energy, and personnel to it
Outsourcing infrastructure means giving up some control; how you monitor and manage systems changes in the cloud
CloudWatch: Where metrics go to die
Distributed Tracing: Tracing calls as they move through a system
Serverless Functions: Difficulties experienced and techniques to use
Warm vs. Cold Start: If a container isn't up and running, it has to set up database connections
Monitoring can't fix a bad architecture or anything else; improve the application architecture instead
Visibility of outages and how pain is perceived; different services have different availability levels
Copy Construct on Twitter
Baron Schwartz on Twitter
Charity Majors on Twitter
DigitalOcean
Full Episode Transcript
This week’s episode of Screaming In The Cloud is generously sponsored by DigitalOcean. I’m going to argue that every cloud platform out there biases for different things. Some bias for having every feature you could possibly want offered as a managed service at varying degrees of maturity. Others bias for, “Hey, there’s some money to be made in the cloud space. Can you give us some of it?”
DigitalOcean biases for neither. To me, they optimize for simplicity. I asked some friends of mine who are avid DigitalOcean supporters why they're using it for various things, and they all said more or less the same thing. Other offerings involve a bunch of shenanigans just to get root access and IP addresses. DigitalOcean makes it all simple. "In 60 seconds, you have root access to a Linux box with an IP." That's a direct quote, albeit with the profanity about other providers taken out.
DigitalOcean also offers fixed-price offerings. You always know what you're going to wind up paying this month, so you don't wind up having a minor heart issue when the bill comes in. Their services are also understandable without spending three months going to cloud school. You don't have to worry about going very deep to understand what you're doing. It's click a button or make an API call, and you receive a cloud resource. They also include very understandable monitoring and alerting.
Lastly, they’re not exactly what I call small-time. Over 150,000 businesses are using them today. Go ahead and give them a try. Visit do.co/screaming and they’ll give you a free $100 credit to try that. That’s do.co/screaming. Thanks again to DigitalOcean for their support in Screaming In The Cloud.
Corey: Welcome to Screaming In The Cloud. I'm Corey Quinn. Joining me today is Mike Julian, who is the editor of the Monitoring Weekly newsletter, the author of O'Reilly's Practical Monitoring, and a strong, fierce, independent monitoring consultant.
Welcome to the show, Mike.
Mike: Hey, Corey. Thanks for having me.
Corey: Always a pleasure to talk with you. You've done a lot of things over the past year. You've been working on a monitoring newsletter. You've written a book, and you've been telling those of us who bring you into engagements that we're monitoring things wrong. What brought you to a place of being, effectively, the voice of monitoring for 2018?
Mike: I don't know that I'd go that far. However, I started up Monitoring Weekly, and now everyone knows me as the guy who runs Monitoring Weekly, I guess.
That was cool, but I started it because there was nothing out there like it. There's DevOps Weekly, SRE Weekly, and Cron Weekly. Why are all these newsletters named Weekly? That's kind of weird, now that I think about it.
I started this and it's been good. I have a lot of followers now, and then the book launched. My book, Practical Monitoring, came out last December after two years in the works, and it was a lot of work. Really, it's more that all of these things just kind of came together, but I've been working in monitoring forever. 2006 is when I really started getting into it.
The past year has been a culmination of many years of work and now I’m finally telling people about it.
Corey: Wonderful. Before we dive too far into that, let’s get something out of the way and irritate at least half of anyone listening to this. Is it monitoring or is it observability? Whatever you answer, by the way, you’re going to give rise to a thousand “well, actually”s.
Mike: Why not both? There's been some really great discussion among a bunch of people online. Charity Majors comes to mind; Copy Construct on Twitter and Baron Schwartz have also had fantastic takes on this.
Observability really comes from control theory; you can hit Wikipedia and look it up. But the idea, to go with Charity's take on it, which I think is probably the most concise: monitoring is for the things that we can anticipate, or at least can reasonably anticipate.
If I know that, let's say, when I'm monitoring Redis, I really care about key evictions, then I'm just going to set some alerts up on that and have a dashboard ready to go so I can pay attention to that sort of thing.
Once you start getting into really complex infrastructures, especially microservice architectures, there aren't a lot of things you can anticipate. At that point, you have to have an application that is actually observable: a way for you to really dig into the application and the infrastructure to understand what's actually going on with it, and to ask questions of it that you couldn't anticipate.
Observability versus monitoring: it's not an either/or to me, it's not a versus; we need both. Let's see, did I just upset 100% of people? We'll find out.
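As a rough sketch of the anticipated-failure monitoring Mike describes for Redis, the check below polls the evicted_keys counter and raises an alert when evictions start happening. The host, polling interval, threshold, and alerting hook are all hypothetical placeholders, not anything from the conversation.

```python
# A minimal sketch, assuming key evictions are the anticipated failure mode:
# poll Redis INFO stats and alert when keys start getting evicted.
import time
import redis  # pip install redis

r = redis.Redis(host="localhost", port=6379)
last_evicted = r.info("stats")["evicted_keys"]

while True:
    time.sleep(60)
    evicted = r.info("stats")["evicted_keys"]
    delta = evicted - last_evicted
    last_evicted = evicted
    if delta > 0:
        # Hand this off to whatever pager or alerting system is actually in use.
        print(f"ALERT: Redis evicted {delta} keys in the last minute")
```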
Corey: Exactly. The only consensus is that you’re wrong. Wonderful. I love it when we can bring people together like that, but let's back up a second. In the abstract, we take a look at the entire technical world as it stands today and I think we can reasonably say that monitoring is terrible, in general.
As soon as I ask a question like, "How do I monitor?" I can't get the rest of the sentence out before I have a pile of vendors who are jumping all over me trying to sell me something. It’s overwhelming. It’s just not something that I am equipped to or want to deal with, so I shrug, I give up, and I hire monitoring consultants like you.
Why is that the way that things have evolved? Every other area of technology has more or less gotten better over the past decade, but it feels like monitoring is still a giant confusing mess.
Mike: I've wondered that myself. It's really weird to me. We have ops engineers who will spend entire days thinking hard and deep about deployment methodologies or config management or Kubernetes, take your pick, and these people will spend days and weeks thinking about that one problem.
Yet, as an industry, we gloss over monitoring and we think, "Oh, God, it's that thing. I don't want to do that." We just spend as little time as possible on it, which is really weird considering that, at some point, we're going to get woken up in the middle of the night. It's just an inevitability, and better monitoring can help with that, so why not focus more on it?
Why are all of these vendors pushing everything? It's because, as an industry, we pushed monitoring out of our own sphere of influence and into someone else's, because, historically, we didn't want to deal with it. Our tools sucked and we didn't build better ones, so we said implicitly, "Hey, vendors, come and build this for us." As vendors do, they ran with it, and now you end up with hundreds of different tools all doing roughly the same thing, and things are more confusing than when we started.
Corey: When I first got to the point of giving up on my old approach of using things like Nagios or bolting things together myself, and reached out to have someone else start building these things for me, it came from a place of not wanting to be responsible for a game in which there was no way to win. Anytime there's an outage, there's always the question of, "Why didn't the monitoring catch that?"
In the event that monitoring does catch an issue before it becomes a big issue, great, you fix it and you feel good. But it certainly doesn't get the visibility of three hours of downtime and the entire C-suite taking up residence behind you in your open-plan office while you get a stress test.
Mike: Yes. That whole question, that whole scenario of, "Hey, we just had an outage, why didn't monitoring catch it?" It's kind of a counterfactual. Maybe the absurdity of the question should be called out: "Why did we have an outage to begin with? Why didn't anyone anticipate that unforeseen event?" Wait a minute, it was unforeseen for a reason. There are a lot of things in complex systems that we just don't foresee.
Computers go sideways in the middle of the night because they just do. Monitoring is never going to be perfect. The applications, the infrastructure, they always change. Monitoring always has to evolve with them. There’s going to be a lot of things that we just can't anticipate.
Corey: A question I've always wondered about, too: if I take my understanding of monitoring and I start rolling things out, okay, so I build a web app. It's 2018, so I'm probably not going to roll Nagios out for a modern architecture. Nothing against them; it's just not how we roll anymore.
But I'll bolt in Datadog, and I'll probably put New Relic in for application performance monitoring. I'll bring in something like Sumo Logic to do the logging work. I'll have some form of tracing involved. Maybe I pay someone else. Maybe I do it myself. Maybe I run Prometheus for time series. Maybe I can pass that buck.
I'll bring Honeycomb in to look at high-cardinality events, and very quickly, given that my personal positioning is that I fix the horrifying AWS bill, I've just built a monitoring system that costs more to run than the application it's supposed to care about if I continue down that path. At what point is this almost a failure of the tools and systems that we're running, in that they're this obtuse to monitor?
Mike: Yes, I agree with you, because monitoring tools tend to be fairly specialized. Once you start thinking about all of the different specializations that you need to cover, you end up with eight to a dozen different tools that all do something very specific. They're really good at that one thing, but now the bills for monitoring are obscene; that's a ton of stuff, and all of that data going in is not cheap.
With that said, it’s usually more expensive to run it yourself anyways, so maybe it’s a good idea to outsource it all. I just upset another half of your audience, so now we're at 150% of people upset. Outsource everything, folks, it’s great.
Corey: Past a certain point, it makes sense. If it’s not your core competency, why devote time, energy, and personnel to it?
Mike: Absolutely. I've talked to companies here in San Francisco, and when I ask them, "How many people do you have building out all of these monitoring tools that you've custom-built?" they say, "Well, I've got this many people." I say, "You're spending $10 million a year on salaries alone to run a monitoring infrastructure. Why don't you go spend half of that on paying someone else to do it for you?" Sometimes their response is, "I never actually thought of that."
Corey: You’re the exact opposite of what a job creator is.
Mike: Exactly. This stuff is expensive to do, but at the same time can you really afford not to? If you’re running Twitter for pets, Mr. Corey Quinn, you’re probably not doing a whole lot of revenue. I imagine that makes a few cents a year, so it probably doesn't make sense to spend several thousand a year to monitor it.
With that said, if your company is making several hundred million a year, then why don't you spend a couple of million to monitor it and just stop paying attention to the bill? It's fine. At some level, this becomes risk mitigation.
Corey: To that end, as we move in a direction where everything is cloudier than it used to be, monitoring seems to have gotten orders of magnitude worse. The things that historically we could peg with a relatively high degree of certainty have now become non-deterministic. Latency between two given instances can now be all over the map.
It seems that by moving into AWS or Azure or GCP or any of the other large-scale players, you're getting an awful lot in terms of business capability for that migration. But monitoring has somehow managed to get even worse than it used to be. Is that just me being terrible at my job?
Mike: We can't take that off the table.
Corey: Very fair.
Mike: Yes and no. Things have gotten different. Once a company starts moving infrastructure to Azure and Amazon and GCP and all of these other cloudy environments, the way we have to think about infrastructure is different. When I'm running on-prem and I have a datacenter a couple of floors below me, I can generally expect that latency between any two servers is going to be sub-millisecond, and that latency is probably going to be pretty static.
If it starts to rise, I probably have a network problem and I can go fix it right then, or at least I'll know where it is. But when you outsource infrastructure to someone else like, say, Amazon, you give up some control. In exchange, you get a lot of stuff, so it's probably not a bad trade. But that means that how you monitor your systems and how you manage them changes. I think this is what a lot of companies don't really get.
They overlook that moving to Amazon is not a forklift operation; it's a re-architecture. What that means is that how I monitor changes. Instead of looking at latency and saying, "Oh, well, one millisecond is fine," generally expecting it to stay at one millisecond, and caring about this individual server, now I actually care about the service.
The service could have any number of different servers or any number of different resources behind it. Rather than monitoring at the individual resource level, I should start monitoring at the service level, and that gives me a much better leading indicator of what's going on with the service I'm providing to customers.
Yeah, maybe the underlying cause is rising latency between instances, but how do I know that latency actually impacted anyone? You have to completely change how you think about monitoring and how you think about management once you move to cloud infrastructure.
Yes, monitoring is more complex than it was but we gained a lot in exchange for that. It’s not that it’s worse, it’s just that it's different.
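To make the service-level idea concrete, here is a minimal sketch of judging a service by what its customers experience (error rate and p95 latency across every server behind it) rather than by per-server metrics. The request records, field names, and thresholds are all hypothetical.

```python
# A minimal sketch of service-level monitoring: alert on what customers see
# (error rate, p95 latency) for the service as a whole, not on any one server.
from statistics import quantiles

# Imagine these were pulled from load balancer logs or a metrics store for the
# last five minutes, aggregated across every server behind the service.
requests = [
    {"status": 200, "latency_ms": 42},
    {"status": 200, "latency_ms": 55},
    {"status": 500, "latency_ms": 310},
    {"status": 200, "latency_ms": 48},
]

error_rate = sum(r["status"] >= 500 for r in requests) / len(requests)
p95_latency = quantiles([r["latency_ms"] for r in requests], n=100)[94]

# Hypothetical service-level objectives; tune to what the service promises.
if error_rate > 0.01 or p95_latency > 250:
    print(f"ALERT: error_rate={error_rate:.1%}, p95={p95_latency:.0f}ms")
```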
Corey: That makes a fair bit of sense. Taking a step back from a service-level perspective, and to be fair, I'm going to pick on Amazon here just because that's the cloudy environment in which I have the most experience.
If Azure or GCP somehow has a revolutionary amazing counterpoint to this, please call it out, but today, for example, I can shove metrics from instances, and services, and applications into CloudWatch relatively easily, which is generally, from my experience, where metrics go to die.
The dashboards are not intuitive, it's not easy to get a terrific viewpoint into things, and while I get the sense that it's incredibly powerful, I've never yet seen an environment where CloudWatch metrics were set up in such a way as to give actionable and meaningful insight into the environment.
Mike: It's quite possibly one of the worst UIs I've ever seen, though it's gotten so much better. What it looked like even two years ago was unusable. Nowadays, at least you can actually use it and understand what you're trying to do, but you're right, it's still a little weird.
Metrics do tend to go there to die. A lot of people don't realize that there's a ton of stuff in there that they didn't ask to be put there; it just is. It comes for free. I mean, for some value of free.
Corey: That’s the problem with CloudWatch metrics, we’ll figure that part out later.
Mike: Right, and then if you want to pull them out into, say, some other third-party tool, those API calls cost money, so have fun. Your bill, depending on how often, say, Datadog is hitting that API, could actually change.
A lot of these tools throttle all the stuff on the back-end, so you don't actually see this, but if you have a lot of metrics being pulled out, then you may have to pay more for those API calls in order to get the data quicker and have it more up to date.
Corey: I had a warning come through once from a monitoring provider, who I won't name, complaining about throttled API calls: go ahead and request a limit increase, or otherwise we're going to have delayed output for you in our time series database. Okay, so I made the request and, bless their hearts, AWS support came back and said, "We can do that, but it's going to cost you over 30 grand a month if we do. Are you sure?"
At which point, yeah, I put up with delayed graphs, that seemed to be the better answer for most of the problems I was looking at.
Something else that Amazon has been getting into lately is distributed tracing. With the rise of serverless and container workloads comes the idea of tracing calls as they move throughout a system. There are open source offerings in this space like Zipkin, and then there's Amazon's offering, which they term X-Ray. Have you done any deep dives into that yet that you can speak to?
Mike: I have not run into a single person that’s actually using X-Ray.
Corey: I have, but I’m not sure that it counts, because they were Amazon employees.
Mike: I’m pretty sure they're the only ones using it and I can't imagine that they are doing so willingly.
Corey: Is that because tracing itself is immature, or is that because X-Ray as a tracing implementation still has a long road ahead of it?
Mike: Looking at the product, I get the sense that Amazon released that tool, released that service, mostly to say they had something. Everyone I know using tracing tends to fall into two camps. One is people who are using tracing and have spent not just dozens of person-hours, but dozens upon dozens, hundreds and thousands of hours, making tracing work for them.
Then there's another group of people who have some sort of tracing tool and say, "Yeah, we have it, but we don't actually find much value in it," which makes me wonder what exactly the purpose is here, because all of the people talking about tracing are giving the same demos, and the demos look awesome. But I haven't really found anyone who's actually using the stuff in production successfully.
Corey: For this podcast and for my newsletter, Last Week in AWS, which is written in the same vein as a lot of the stuff you did for Monitoring Weekly, just a few weeks behind it. Thanks for the tips on that, by the way.
Mike: My pleasure.
Corey: A lot of this is done by a series of Lambda functions that I’ve stitched together and as I do this…
Mike: My God, what have you built?
Corey: It's a Frankenstein architecture. One of these days I'll take people through it in a blog post or a podcast and everyone can look at my secret shame. As I build these Lambda functions, it always asks me, "Would you like to enable tracing on this function?" My response is, "Hahaha, no," and then I move on. But the way these functions are built, each one serves a specific purpose. I have one that validates my links. I have another one that copies all of my content from my Pinboard account and shoves it into DynamoDB, primarily due to poor life choices on my part.
If one of those functions fails, I don't have any alerting or awareness into it until someone reaches out and says, "Hey, your archive hasn't updated itself in six months. You want to take a look at that?" Then, "Oh, crap," I figure out what the problem is, I jump in, and I fix it. But relying on other people to tell me when my stuff is broken feels borderline awful. What should I do if I care about the availability of specific Lambda functions, or any type of serverless function in that context? I haven't found a good answer yet.
Mike: No. The fun thing about monitoring Lambda, and really monitoring serverless, is that the answer is seemingly, "Good luck, sucker," so I'm sorry to have to say it, but good luck. With that said, there are some techniques you can use. They're not great, but they work pretty well.
I was reading an article earlier today from the Honeycomb blog, where someone from [...], which we'll throw in the show notes, logs at minimum the start of the request and the end of the request. It's not tracing, although it really sounds like it; it's actually straight logs.
This would actually apply really well to a Lambda function or a series of Lambda functions. That way, every time a request comes in and every time it leaves, you make a call out to another logging system that says, "This Lambda started, this Lambda ended," and you keep passing those messages along so you can reconstruct the entire path through your series of Lambda functions.
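As a rough illustration of the start/end logging Mike describes, here is a minimal sketch of a handler wrapper that emits structured start and end log lines and passes a correlation ID along, so the path through a chain of functions can be pieced back together from the logs. The field names and handler body are hypothetical.

```python
# A minimal sketch, not a prescription: log a structured "start" and "end"
# record for every invocation, carrying a correlation ID through the chain.
import functools
import json
import time
import uuid

def log_start_end(handler):
    @functools.wraps(handler)
    def wrapper(event, context):
        # Reuse the caller's correlation ID if one was passed along in the event.
        correlation_id = event.get("correlation_id", str(uuid.uuid4()))
        record = {"function": context.function_name, "correlation_id": correlation_id}
        print(json.dumps({**record, "event": "start", "ts": time.time()}))
        try:
            return handler(event, context)
        finally:
            print(json.dumps({**record, "event": "end", "ts": time.time()}))
    return wrapper

@log_start_end
def handler(event, context):
    # Do the real work here; pass correlation_id along to any downstream calls.
    return {"ok": True}
```

Printed JSON like this lands in CloudWatch Logs, where it can be searched or shipped elsewhere to reconstruct the request path.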
Corey: For better or worse, those start, stop, and invocation duration records show up in CloudWatch by default, in CloudWatch Logs specifically. Oh, heavens, I've looped myself, and here we are, stuck back in that conversation.
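Lambda also publishes per-function Errors and Invocations metrics to CloudWatch by default, so one low-effort way to get the alerting Corey says he is missing is a CloudWatch alarm on the Errors metric. A hedged sketch with boto3 follows; the function name and SNS topic ARN are hypothetical placeholders.

```python
# A minimal sketch: page when a specific Lambda function records any errors.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="link-validator-errors",           # hypothetical alarm name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "link-validator"}],
    Statistic="Sum",
    Period=300,                                   # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",              # no invocations is not a failure here
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:page-me"],
)
```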
Corey: It seems to me that a number of different Lambda use cases presuppose that you're inline with user requests. A lot of what I'm using them for is back-end processing, so it doesn't matter as much for me, but there's the idea of a warm start versus a cold start: if the container isn't already up and running, having been triggered recently, it has to set up database connections and other things that might otherwise already exist.
You wind up with a widely variable start time for an invocation of a function depending on whether or not it's been invoked recently. In some cases, you wind up with a 95th percentile of function behavior that has wild outliers in either direction as a direct result. That just seems to be adding another log to the fire of monitoring problems.
Mike: Yes, I can see that. This is where I come back to one of the things that I was really adamant about in my book and that I tell all of my clients: monitoring can't fix a bad architecture. If your application sucks, then adding more monitoring to it isn't going to suddenly make it better. If things keep breaking and you say, "Well, my solution is to add more monitoring," it's still going to keep breaking. You haven't fixed anything. Monitoring can't actually fix anything.
In situations like that, instead of looking at more monitoring, you should be looking at improving the application architecture. Yes, monitoring may help you, but it’s not going to solve that sort of problem.
Corey: I guess we've sort of been dancing around this a little bit in some of our previous conversations, but I've noticed that whenever something breaks and I'm seeing behavior that I can't quite explain in an AWS context, the first thing I do is pull up Twitter. I take a look, not for my own crappy application, because frankly no one besides me generally tends to care about that, but rather to see whether this is something that's going on globally.
The Amazon status dashboard is invariably going to be saying, "Green. Everything is fine. I’m gaslighting you. It’s your terrible code. Our platform is terrific." But at 3:00 in the morning, particularly, DevOps Twitter comes to life during one of these moments.
The solidarity is incredibly touching as everyone scrambles to figure out what the heck just happened. Is that the best we’ve got today for global awareness of these platforms that are increasingly becoming, I guess, a monolith?
Mike: Yeah, that's a tough problem, because Amazon is never going to be completely forthright with us on the state of their infrastructure. They're never going to tell us exactly when S3 goes down or doesn't go down, or, "Hey, we actually host the status page on the thing that's down and we can't update it."
They're not going to tell us these things until someone says, "Hey, maybe they're lying to us." Once you start relying on all that stuff, it gets really difficult, because they're not going to tell you all this, which means you're left looking at your own monitoring, which may or may not rely on their monitoring, which is another problem, to understand the state of their infrastructure.
I know that there are some companies out there that rely so heavily on Amazon, and have such a level of traffic, that they can see when Amazon services start going sideways before Amazon knows, or at least before Amazon tells anyone. I guess really what it comes down to is trust but verify, and hope that your system isn't going to be hugely impacted by Amazon doing wonky stuff.
Corey: We'd like to hope so, anyway. The other side of it, too, and there is safety in this: when Amazon takes an outage, which, credit where due, is becoming far less frequent than it used to be, that's the day the internet is broken. It's not your site that gets called out on the front page of the New York Times. The internet is just in a terrible state, and suddenly the blame passes over you and lands at the feet of the cloud provider in question. Is that a viable business strategy?
Mike: This really comes down to what level of risk you're willing to accept. Those events happen once in a blue moon; are you willing to accept that? Amazon has their SLA that they provide to customers. For the most part, they keep it, but they tend to have a habit of breaking the SLA all at once, with, say, a four-hour outage.
Whereas when we think about those SLAs and reason about the expected downtime they might have, we're thinking about it in terms of some sort of regularity. We expect them to maybe have an outage of 15 minutes every month. That's not actually true. They're much better at running infrastructure than we are, so their outages tend to be really big ones that last for hours at a time.
Can you really afford to be down for four to six hours once every two years?
Corey: Credit where due, not only have the outages gotten better, but there’s an incredible amount of engineering and intelligence that goes into all of these providers' platforms. There’s also been an improvement in the messaging coming out of them.
"We are experiencing an issue across less than 1% of our load balancer fleet" is reassuring to the general internet, but if you're in that less than 1% of the load balancer fleet and none of your systems are responding, it feels like they're trivializing your pain. To their credit, I don't see the messaging taking that tone anymore.
Mike: I would really love to know exactly how many load balancers 1% is.
Corey: I get the sneaking suspicion that not only will they never tell us, but that the number would be almost completely irrelevant within days or weeks of them giving one.
Mike: Oh, I’m sure.
Corey: It’s dynamic.
Mike: Right, and the interesting thing about all of that is that Amazon has such a large fleet, and so many customers doing tons and tons of traffic and lots of resource utilization, that 1% is an impossibly large number, I'm sure. 1% of the load balancer fleet is probably larger than the entire infrastructure most of us are used to thinking about.
Mike: Yeah, if you’re within that 1%, well, that’s still a pretty big number.
Corey: There's also the question of the visibility of outages. I freely admit that I've bought monitoring tools in the past that, if they took a two-week outage, I don't think anyone at the company I was at would have noticed or cared, because that was how long we would go, in some cases, between looking at certain dashboards.
Whereas other companies, a great example of this being Slack, given the way they tend to be used across so many different places, it's almost like everyone has a monitoring client open and focused on their desktop or mobile device at all times. If they drop more or less a single packet, it feels like the world is ending. There's a definite perception bias as far as how outages and pain are perceived.
Mike: Absolutely. I expect different services to have different availability levels, depending on what that service is. Take the system I use to send out emails for Monitoring Weekly: I only log into it once a week. If it's down for six days out of the week, I don't really care, but if Slack is down for five seconds, well, I'm freaking out.
Corey: Have you ever had that moment where, "Oh, Slack is down. That’s unfortunate. I should tell someone," and you pull up Slack to do it?
Mike: Yes, I have. It is terrible. It is awful and I feel bad.
Corey: Yes, at least I’m not alone in that particular neurotic convulsion. Thank you.
Mike: Yeah, after a while I put the computer down and I just stare at a wall, because I'm not sure what to do with myself anymore.
Corey: In conclusion, to sum up what we've just spent half an hour talking about: monitoring is painful, and it is likely to remain so for the foreseeable future. But the path forward, as best I can tell, is to come to a realistic understanding and assessment of what it's going to take to get proper visibility into your application given its risk tolerance. Would that be a fair summation?
Mike: Yeah. Monitoring has improved dramatically over even the past five years, and it improves every day. There are so many incredibly smart people thinking hard about this problem. They're actively improving things, so it's getting better every single day. I'm looking forward to what it's going to look like in another five years.
Corey: It's strange; I was about to argue that point, because I remember back when I first started at a university many years ago. Our monitoring system when I got there was the help desk letting us know if there was a problem. It sounds horrifying, but no one ever decided not to enroll at a university for another semester because the website was broken, at least not 10 years ago. Then I wound up rolling out Nagios.
It felt like a better monitoring story than anything I've had in recent memory, but the other side of the coin, from a fairness perspective, is that today I have hundreds or thousands of instances, depending upon which environment I'm talking about, in auto-scaling groups, in containers, and whatnot.
Back then I had three mail servers, four web servers, and a database. That was it. Nagios was pretty good at keeping an eye on systems that never changed in any meaningful way. If Nagios went off, there was an actual problem. At one point, it was a 120-degree server room. Other times, it was that something had gotten unplugged. The set of problems likely to occur was much smaller and less nuanced than it is today.
It isn't that I think monitoring has gotten worse. I think you're right, it has probably gotten better. It's just that our environments have gotten so much more complex and have found new and interesting ways to catch fire.
Mike: Yep, I agree.
Corey: Well, thank you so much for joining me today, Mike.
Mike: Yes, thank you for having me.
Corey: Is there anything that you got going on that you want to mention or tell our listeners about?
Mike: Yes, as I was telling you before we started, I'm working on a new project. By the time this is published, it will be live. It's a new project to help engineering teams hire great people. You can find out all about it at mikejulian.com, which will also be in the show notes.
Corey: Perfect. Thank you so much for your time and I'm sure we’ll speak in future days.
Mike: Absolutely, I'm looking forward to it.
Corey: My name is Corey Quinn and this is Screaming in the Cloud.