It is easy to pick apart the general premise that cloud agnosticism is a myth. What about reasonable use cases? Generally, though, when you have a workload that you want to put on multiple cloud providers, it is a bad idea. It's difficult to build and maintain. Providers change, some more than others, and the ability to work with them becomes more complex. Yet cloud providers rarely disappoint you enough to make you hurry to another provider.
Today, we're talking to Jay Gordon, cloud developer advocate for MongoDB, about databases, distribution of databases, and multi-cloud strategies. MongoDB is a good option for people who want to build applications more quickly without doing a lot of infrastructure work.
Some of the highlights of the show include:
It is now easier to make distributed data reliable and available than not
People spend time buying an option that doesn't work, at the cost of feature velocity
If a cloud provider goes down, is it the end of the world?
Cloud offers greater flexibility, but no matter what, there should be a secondary option for when a critical path comes to a breaking point
A handoff from one provider to another is more likely to cause an outage than a multi-region, single-provider failure
Explosion of cloud-agnostic tooling: the more we create tools that do the same thing regardless of provider, the more agnosticism we'll see from implementers
Workload-dependent cases where data gravity dictates choices; bandwidth isn't free
Certain services are only available on one cloud due to licensing, but tools can help with migration
Major service providers handle persistent parts of architecture, and other companies offer database services and tools for those providers
Cost may or may not be a factor in why businesses stay with one cloud instead of going multi-cloud
How much RPO and RTO play into a multi-cloud decision
Selecting a database or data store when building; consider security and encryption
Corey: This week’s episode of Screaming In The Cloud is generously sponsored by DigitalOcean. I’m going to argue that every cloud platform out there biases for different things. Some bias for having every feature you could possibly want offered as an added service at varying degrees of maturity. Others bias for, “Hey, we heard there’s some money to be made in the cloud space. Can you give us some of it?”
DigitalOcean biases for neither. To me, they optimize for simplicity. I polled some friends of mine who are avid DigitalOcean supporters about why they're using it for various things, and they all said more or less the same thing. Other offerings have a bunch of shenanigans with root access and IP addresses. DigitalOcean makes it all simple: "In 60 seconds, you have root access to a Linux box with an IP." That's a direct quote, albeit with profanity about other providers taken out.
DigitalOcean also offers fixed-price offerings. You always know what you're going to wind up paying this month, so you don't wind up having a minor heart issue when the bill comes in. Their services are also understandable without spending three months going to cloud school. You don't have to worry about going very deep to understand what you're doing. It's click a button or make an API call, and you receive a cloud resource. They also include very understandable monitoring and alerting.
Lastly, they're not exactly what I would call small-time. Over 150,000 businesses are using them today. Go ahead and give them a try. Visit do.co/screaming and they'll give you a free $100 credit to try it out. That's do.co/screaming. Thanks again to DigitalOcean for their support of Screaming in the Cloud.
Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Jay Gordon who's a cloud developer advocate at MongoDB. Welcome to the show, Jay.
Jay: Hey, Corey. It's good to talk to you.
Corey: Good to talk to you, too. It's always good to see you at conferences and have conversations. Of course, on Twitter, you have a verified account which is how we know that you're important. Anytime you're willing to spend time talking with me, I'm thrilled to give you a platform for it.
Jay: I don’t know how important I am as much as I filled out a form and it worked out, but it's kind of cool to have a verified account. However, certain people find it is a really, really easy way to say, "Oh, yeah. Whatever. Blue check," and pass you along.
Corey: I hear you. I filled out a form once and, now, people think I know what I'm talking about with cloud computing. It's amazing how the world tends to unfold that way.
Jay: Yes. On the subject of seeing you in conferences, one of the things that I really do feel is lucky being part of a DevOps, cloud whatever community, is there's so many cool people I kind of spend time talking with so it's one of those lucky things that I can say I've been able to meet so many awesome people. Corey, you're one of them so thank you for having me on today.
Corey: No, thank you for the flattery. It'll get you everywhere. What should we talk about today?
Jay: One of the things that I want to talk about, obviously, because I work at MongoDB is I like talking about databases. Distribution of databases, I think, is a really interesting kind of subject because we're kind of in a situation now where it is far easier to consider distributed data to be something that's reliable and available than distributed data not being reliable and available. One of the things that I've been thinking about is multi-cloud strategies. I know you've planned on doing a talk soon about it and about cloud agnosticism so I thought it would be a really cool subject to talk about today.
Corey: Absolutely. In fact, when you brought that subject up to me earlier, you weren't aware that I was the person who was giving that talk. It was, "What is this?" and, now, of course, you're taking a much kinder tone than the axe you were bringing to grind, originally. Please, hit me where it hurts. The talk has been given in a couple of conferences so far and it's going to be given a few more, and it's called the Myth of Cloud Agnosticism.
It started off as a blog post that I put up on ReactiveOps' blog, and it's probably the talk I've given that is the most likely to be misunderstood so far. I say that having given a talk called Heresy in the Church of Docker that more or less slaughtered a technology everyone loves. The challenge is that it's easy to pick apart the general premise of cloud agnosticism being a myth. If you come at it from a perspective of, "Well, what about this following list of very reasonable use cases?" I agree with that.
The point of the talk is that, in the general case, when you have a workload that you decide that you want to put on multiple cloud providers, it is usually a crappy idea in that you want to be able to push a button and deploy this application to GCP, press the button again and now it's on Azure, press it again and it's on AWS, press it again and it's on Ted's Taxidermy and Cloud Hosting, and you wind up in a scenario where trying to build that, first off, is not exactly trivial because these providers all implement things very differently but also in trying to maintain that as you continue to build because providers change, some more than others.
The ability to continue working with these things becomes more and more complex as your application gains in complexity, and it's a non-trivial amount of effort. If you take a step back and look at what the stated objective of this is, it's, "Well, what if our primary cloud provider disappoints us and we have to move in a hurry?" Okay, that's a fair question, but if you take a look at the competitive landscape, companies don't do that very often.
When they do, it makes headline news on whatever provider they're moving to. They talk about it on the keynote stages. They have press releases about it. It's not something that companies tend to do on an ongoing basis. Instead, people wind up spending a lot of time buying the option in a way that doesn't really work at the cost of feature velocity, because you wind up giving up a lot of the built-in primitives and advanced services that you can get from any of these providers as long as you focus on that.
That's the theme of the talk and it's a lot more nuanced than that but, once people hear that, they start to either nod and say, "Yeah, I see what you're going for," or they start coming at me with the knives of, "Well, actually." "Well, what if you're PagerDuty, where you need to be more available than any given provider?" Okay, that's valid. "What if people's lives depend on this?" Well, yeah, building in multi-provider redundancy is awesome, but if you see all of GCP, Azure, AWS, DigitalOcean or Ted's Taxidermy going down–except for that last one–that's the day the internet is on fire. In many use cases, that's not the end of the world for most companies.
Jay: It really depends, though, because you have to look at it this way: the viability of my business may be completely based on the fact that a service I provide is online and, any time I have any real major outage, I not only actively lose money but lose velocity of my product itself and the understanding people have of it. I like to think of it like this: when we built out network infrastructure for servers, systems and data centers in, say, the late '90s or early 2000s–and, even now, we do this–we always looked to ensure that all critical paths had some sort of redundancy when it comes to going in and out of the datacenter network, correct?
Jay: I've kind of taken some of this thought process and, while I do believe you get the more flexibility in the cloud world to be able to say, "You know what? It's time to pivot fast," there's still work involved, but I still believe that, no matter what, there should be a secondary option when a critical path could come to a breaking point for several reasons. One of them, obviously, is network outage, systems outage or something. We saw AWS accidentally fat-finger some DNS a few years ago, and look what that did to us all. Do you know what I mean?
Corey: Absolutely. To stretch your analogy possibly to the breaking point, in a large environment I was exposed to for a while, they ran studies where they wound up having redundant routers. Invariably, they saw far more failures from that redundancy based on heartbeat failures where they wound up with a split-brain scenario and had to effectively take the entire cluster down more than they saw router failures themselves. The same model, to some extent, starts to apply when you look at this from a multi-provider perspective in that it is far likelier that you're going to wind up causing an outage in the handoff between one provider to another than you're going to see a multi-region single provider failure that disrupts your site to the point of it going down. I'm not saying it's impossible–and there are edge cases around this–but I do think that, for these small startups that are just dipping their toes into the water and experimenting with cloud architecture, one of the things that they optimize for should not be running on any provider under the sun at the push of a button.
Jay: Sure. I think the one thing, though, that we saw happen in the last maybe three years is the explosion of cloud-agnostic tooling. At first, we saw Kubernetes become this thing that we knew you could run on Google's cloud, but then you still saw more and more availability on different kinds of cloud networks, to the point where all three major clouds have some sort of Kubernetes engine implementation, because they know that, without providing an existing Kubernetes cluster to spin up on a whim, there is a larger barrier to getting those workloads onto those clouds.
I've kind of looked at it like this, is that the more we've unified and created tools that do the same thing regardless of provider, I think the more we're going to be able to see that there's more agnosticism that comes from implementers. When I say "implementers", building, say, a Kubernetes solution that is going to easily be put on any cloud based on what your business is doing at that day. If Amazon has major issues and all you really need to do is modify DNS to get things going in a new direction, is that really a terrible strategy to take?
Corey: This is where it becomes incredibly workload-dependent. In some cases, you wind up in a scenario where data gravity dictates your choices. "Oh, we're going to go ahead and have all of our containers spin up in GCP instead of AWS. We'll save 20¢ an hour per container," but they're also going to wind up spending 20 grand, moving the data they need to process over into that environment.
Jay: That's really a big, big point, is that, no matter what, bandwidth still isn't free.
Corey: Exactly. It's free only on ingress.
Jay: But moving your data out of one network into another tends not to be free, and we have to kind of go through that. I've kind of seen this before because I've worked with people on inbound migrations et cetera like, at MongoDB, we have our own cloud service for databases called Atlas, and one of the big things is getting, say, self-hosted databases onto the service via migration process. One of the reasons why I've seen some people do this is they need to go from one particular provider to another, and we give them easier tools to do it than doing it manually.
We have a live migration tool. You just pop in a host name and it'll slurp the data from wherever you're hosting it already to where you want it to be. That could easily mean I've got it running on a standalone machine, say, on AWS and I want to move it to Google Cloud. We've created tooling around that, and I think that it gives people options. This is another one of these things I've heard around multi-cloud–and you can tell me if you think this is a real valid reason–is the fact that certain services are just available on one particular cloud just based on licensing.
TensorFlow, I think, is the easiest way to think about it, is that, if you do go to a multi-cloud strategy, that you could easily push data from one particular cloud to another and be able to utilize a lot of these different services so that you can say, "You know what? If I need to do a big ML model on something and I want to do it with tools that Google has provided, I can easily have my data within there and not have to go through the whole rigmarole of migration."
Corey: Absolutely but, as a counterpoint, you're also getting into one of the early called-out edge cases of this where the idea of having multiple cloud providers for a business is not necessarily a terrible one. My campaigning against being cloud-agnostic generally is restricted to the workload-level. If you have a particular application or a particular workload that you're trying to get to work across multiple providers, that's often painful, but if you're talking about having the web services live in AWS and the machine-learning piece that chews on that data living in GCP, that's a very viable model that I wouldn't argue with you on.
Jay: One of the things that I really like the idea about is, one, I like big, major services being the way you handle maybe more persistent parts of your architecture. I like the idea of using a database service like what Amazon provides or what we provide with Atlas, the fact that you can just spin up databases and use native tooling around them for these providers, because they've all got some sort of way to move data in and out, whether it's using the AWS CLI or some sort of native tooling. There are always ways that you can go and grab this data and do something with it and easily move it to other providers, because I like, on the fly, being able to say, "I want to just connect my frontend application to this database and I don't want to have to worry about reconfiguring the database."
Corey: Right, and that's, really, one of the challenges in the market right now, with the way that AI machine-learning have evolved. If your data fits on a drive, you're probably not going to have success. The single biggest predictor right now of success in any form of machine-learning is whether you have a large-enough dataset to operate on. When you're into the multi-petabytes, now you're starting to have some serious opportunities but–you're right–there, it becomes very difficult to relocate that data or at least any reasonable percentage of it.
Jay: Yeah, because you're still dealing with the same things that we've always dealt with in the past, and that's data transfer: you're limited by whatever your throughput is on your lines and, more than anything, your costs. I know we talked about it just a minute ago, but I still believe that one of the big reasons why people probably haven't selected multi-cloud is the upfront thought process around getting data over and what it would cost them to run two infrastructures simultaneously. I think that probably is a leading factor in why companies are choosing to stick with one cloud.
Corey: To an extent, yes, but, in other cases, when we're talking large enterprise scale where they're doing deals in the eight to nine-figure annual ranges with these large cloud providers, it becomes very difficult, first off, to wind up making a compelling case for putting all of that in a single provider and not looking either irresponsible or like you're being paid off underneath the table so that becomes one challenge of it. The other is that there's an idea that you can then wind up having some negotiating room by transitioning workloads as leverage during contract negotiations.
That is often a bit of a red herring. If you take a look globally at any company's use of cloud providers, one thing you almost never see is the number getting smaller. Invariably, people tend to expand their footprint; they don't tend to reduce it very often. Heck, I spent most of my time working on optimization of AWS bills and, a year later, I find that most of my clients are spending more than they were when I started; they're just doing it more efficiently.
Things continue to grow. That is what companies aspire to do. This is not a bad thing. Instead, it turns very much into an arena of focusing on what it is exactly the company needs to do and cost, surprisingly, is not much of a driver behind corporate decision-making as people tend to believe it is. Companies are willing to spend money in order to expand into new markets, to advance new features, to grow revenue and it just comes down to a unit economics discussion.
Jay: Here's a question for you, then: how much do you think, say, RTO and RPO play into people's decisions to maybe consider multi-cloud, where, for a failure on something that needs to be restored–maybe because of the network–you can't restore it to that local cloud but you can onto another one? I'm curious if you think about that specific case, not necessarily full disaster recovery but maybe just a portion of a disaster recovery plan. I'm trying to think whether people really look to replace what would be just, "Go back, get the backups, restore them," with, "Let's just spin up a new environment on a different cloud, work through that, and restore it based on what we already have."
Corey: Let's define terms for those who didn't grow up building DR plans for fun. RPO is, at the time an incident occurs, the maximum exposure of lost data. In other words, if you're restoring from backup, how long has it been since your last backup? RTO is, from the time the site goes down or is impacted, how long it will be until you're back up and running and able to serve customers at some baseline level. It's a great question. The right answer around a lot of this is going to be extremely workload-dependent.
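The two recovery objectives Corey defines here reduce to simple arithmetic over timestamps. A minimal sketch of that arithmetic (the backup interval and restore times below are hypothetical, not from the episode):

```python
from datetime import datetime, timedelta

def rpo_exposure(incident_time: datetime, last_backup_time: datetime) -> timedelta:
    """Maximum data-loss window: everything since the last backup is at risk."""
    return incident_time - last_backup_time

def rto_elapsed(incident_time: datetime, service_restored_time: datetime) -> timedelta:
    """Downtime: how long until customers are served again at a baseline level."""
    return service_restored_time - incident_time

# Hypothetical incident: hourly backups, site restored 45 minutes after failure.
incident = datetime(2019, 6, 1, 12, 30)
backup   = datetime(2019, 6, 1, 12, 0)   # last successful backup
restored = datetime(2019, 6, 1, 13, 15)

print(rpo_exposure(incident, backup))    # 30 minutes of data at risk
print(rto_elapsed(incident, restored))   # 45 minutes of downtime
```

Whether 30 minutes of lost data or 45 minutes of downtime is acceptable is exactly the workload-dependent judgment the conversation turns on next.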
For high-availability services where latency is critical–I'll go back to PagerDuty as a good example of that–the tolerance for failure is extremely low. You're not going to be able to talk around that with, "We're just going to get a six-hour outage," and it's fine. An example of this in the other direction is, a few years back, I was trying to buy something–I think it was a package of socks–on Amazon because I lead an exciting life and that's what I buy, and it threw a 500 error, which was bizarre. It was one of those things you don't see very often.
I tried it again and it threw the error and, in a lot of DR plans, there would be an assumption built into this: that, therefore, because I could not buy those socks at that moment, I either never bought those socks again and I'm walking around barefoot one day out of seven or I, instead, went to another provider and went through all the rigmarole of setting up an account and ordering the socks then. In practice, I waited an hour, the problem went away and I bought my pair of socks and whatnot. I went on with my life.
Now, the fact that that worked is based on two specific facts about that use case: one, that it's a purchase that I'm making intentionally. If this were a company that were serving ads, I'm not going to come back and view those ads later in time; that opportunity is lost. And, two, this doesn't happen every third time I try to buy something on Amazon. If it did, I'd probably be spending a lot more money at Target instead, because there's the reputational damage of being the site that's always down, where people no longer want to trust you with their purchases.
Corey: I'm not sure that answers your question about how this applies to DR concepts, but it does tend to lead to the problem of, when one of these large providers is having problems, there are a lot of problems that are second and third-order effects. For example, […] one goes down because that's what it does; it breaks. There have been a number of failure cases over the years where, suddenly, other regions become overloaded, provisioning calls take longer and longer because everyone is failing over. They're saturating links, they're hammering APIs that don't generally see that level of traffic within a couple of orders of magnitude and, to that end, people have to start planning for things like that when they're building out a DR plan.
In many cases, a lot of the automated tools–when pointed at an AWS account–will say, "Ah, here's a bunch of idle instances. Turn them off." That's a DR site, and we kind of want to be able to fail over there within seconds. We're not going to have time to wait for a laden provisioning backplane; we want them up and running now. It's the same type of principle of what are the disaster types you're planning for and how you intend to handle the failover process, not to mention failback, which is a whole separate category of issues. Rest assured, regardless of what you plan for, you're missing something. The only way to find all of the corner cases and edge cases is to live forever.
Jay: Yeah, and even when you get there, you'll probably miss out on a few of them.
Jay: One of the things that I also wanted to kind of talk about is you had mentioned to me that you really haven't spent a lot of time around databases, and I was curious if there were any kind of questions around the world of databases, especially distributed databases, that you'd like to hear a little bit more about.
Corey: Absolutely. I wound up playing around with databases a bit in my youth. I could set up MySQL. I can set up replication. I can pass it over to a professional when I see something I don't recognize–which is pretty much everything past that point–and then I wash my hands of it and move on with life. The challenge, of course, from my perspective is, when I'm building something new, what database or what data store do I wind up selecting for weird, arbitrary use cases? There are times where, in some cases, I'm dealing with small enough data volumes that a flat file in CSV format living in S3 is more than sufficient for what I do.
There's also the argument to completely over-engineer something using a bunch of very bleeding-edge systems that are effectively ACID-compliant, global, world-spanning databases. Google's Cloud Spanner and, I believe, Cosmos DB from Azure tend to qualify, as well as the announced multi-master, multi-region Aurora. The consensus that emerges, though, consistently is that, whatever I'm deciding to use, I'm wrong for using it, and it's always challenging for me to figure out what is the right answer.
One thing that has been earning me global condemnation is using Amazon's Secrets Manager, where the idea is that it holds secure data, encrypted using KMS, and provides it to your applications for 40¢ per month per secret. That just sounds like an expensive database, so couldn't I theoretically just store all of my transactions in that and call it good? And people look at me with a look of horror and start backing away slowly.
Jay: You still have to look at databases as a transactional thing. I guess the best way to look at databases is as active or operational pieces of infrastructure, and I guess people are still concerned about whether that encryption level, or being able to store your secrets at that level, is good enough. I don't know if "good enough" is the real right term, but there have always been secondary use cases for these stores. I think about Chef and its encrypted data bags, or I think about other tools.
Corey: That sounds like a spectacular insult to throw at someone.
Jay: Yes, or encrypted YAML so that you can keep the secrets for your YAML-based products. Now we get more modern with KMS and Vault. They're all out there. All these products exist, and I guess you wouldn't be that far off by calling them databases, but one of the things they tend not to do is have distributed data available or ways to run queries around it. You're basically just providing one particular use case, and that is, "I need information. Provide me that information. Decrypt the information that I want."
In databases, you can do that as well. You can make a query on some encrypted data to return it back to you unencrypted–that's possible–but I think the one thing that's the differentiator between something that's a key store (or even, for that matter, S3) and something that's a database is the ability to run very complex queries against it. With MongoDB, you can run complex aggregations against a lot of that data, and you don't have to do that from, say, the client level; you can do it all on the server side, because we have an aggregation framework that allows you to ask really complex questions without having to put the load on the client side if you're presenting that data in some sort of application that eventually gets shown to the user.
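To sketch the server-side versus client-side distinction Jay describes, here is what a MongoDB-style aggregation pipeline looks like, next to the equivalent work done by hand in plain Python (the collection and field names are hypothetical; with a real server, you would pass the pipeline to `collection.aggregate()` and the database would do the grouping for you):

```python
from collections import defaultdict

# A MongoDB-style pipeline as you might pass to collection.aggregate();
# the "orders" fields are hypothetical.
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$region", "total": {"$sum": "$amount"}}},
]

# The same logic done client-side in plain Python, to show the work the
# aggregation framework would otherwise push to the database server.
orders = [
    {"status": "shipped", "region": "us-east", "amount": 10},
    {"status": "shipped", "region": "us-west", "amount": 5},
    {"status": "pending", "region": "us-east", "amount": 99},
    {"status": "shipped", "region": "us-east", "amount": 7},
]

totals = defaultdict(int)
for doc in orders:
    if doc["status"] == "shipped":              # the $match stage
        totals[doc["region"]] += doc["amount"]  # the $group / $sum stage

print(dict(totals))  # {'us-east': 17, 'us-west': 5}
```

The point of the server-side version is that only the small grouped result crosses the wire, not every matching document.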
I think that, ultimately, I see those services as simple databases. They don't really have data structure methods; they have file systems, at least as far as S3 is concerned. I can't really speak to what particular data structures KMS or Vault have, but the biggest thing I can say that differentiates the two is the level of security and encryption that a secret store or S3 private key store provides; obviously, data stores are more focused on performance and returning information to you.
Corey: Absolutely. The idea of using Secrets Manager as a MongoDB replacement is completely ludicrous. The reason I tend to go in that direction is because it exposes a half dozen different misconceptions that many people, including me, tend to have around data stores in their entirety, and being able to address those in a somewhat reasoned way starts to shed light on how some of these tools can or should be used. I wound up building a bunch of the pipeline process that builds my newsletter every week using DynamoDB because it was there.
I didn't have to manage any infrastructure myself. It's just effectively given to me under the permanent free tier and, last time I checked, I was storing 150K in that at any given point in time. Realistically, I'm at a point where a flat file would probably have been sufficient were it not for a couple of edge-case S3 race conditions. This wound up more or less being my first outing into the world of non-relational data stores, and my brain is full and I wish to be excused. It winds up being something that isn't exactly intuitive to the way that I see the world.
Jay: The one thing that's great about MongoDB is, one, you can run it basically anywhere and it's one of the big core tenets of the product itself, is the fact that we say that if you have a CPU, you could probably run MongoDB on that particular system. It's there and while it's not in, say, RDS or one of those services, because of our licensing method, we had to go out and say, "Let's go ahead and create our own cloud and let's put it on all three of the major clouds so that you have those options to go wherever it is you wanted and let's make it easy to get started."
Just like AWS and Dynamo, we put together a free tier for people, also. What we're also trying to do is give people great use cases and reasons. We have a developer advocacy team that I'm on, and we show people that databases make sense and MongoDB makes sense for applications because of the way data is stored. We look back at how databases are traditionally thought about, and they're thought of as tabular. When I say "tabular", I think of it like a bunch of tabs, or just parts of an Excel spreadsheet.
As data becomes more complex, using that Excel spreadsheet ends up becoming kind of ridiculous; data became far more complex to manage, and we saw a revolution around JSON. I don't want to call it a revolution, but we saw people finding JSON to be a much more reasonable way to manage data, because it reads like a menu instead of going through pages and pages of typewriter text, ordering them, flattening them, taking that data and processing it, and then building SQL migrations to get data from one place to another. It just became very difficult.
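A small sketch of the tabular-versus-document contrast Jay is drawing (all names here are hypothetical): the same customer as normalized rows that need a join to reassemble, versus a single nested document that reads top to bottom:

```python
# Tabular: the customer spread across normalized rows, linked by keys.
customers = [{"id": 1, "name": "Ada"}]
addresses = [{"customer_id": 1, "city": "NYC"},
             {"customer_id": 1, "city": "SF"}]

# Document: the whole customer in one nested structure, like a menu.
customer_doc = {
    "name": "Ada",
    "addresses": [{"city": "NYC"}, {"city": "SF"}],
}

# Reassembling the tabular form requires join logic the document form skips.
def join(customers, addresses):
    return [
        {**c, "addresses": [{"city": a["city"]}
                            for a in addresses
                            if a["customer_id"] == c["id"]]}
        for c in customers
    ]

rebuilt = join(customers, addresses)
print(rebuilt[0]["addresses"] == customer_doc["addresses"])  # True
```

Both shapes hold the same facts; the difference is where the reassembly work lives, which is the "reads like a menu" point above.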
Corey: I'm still going to expose some of my ignorance here. Just from an old-school hands-on hardware configuration type, I still find yaml more understandable and readable than I do JSON. There's probably something profoundly wrong with me.
Corey: Absolutely. I think the one thing that we can all agree on is that XML is terrible.
Jay: Yes, and that's the big thing that Mongo provides: you don't have to sit and write XML for, say, an ODM, or spend a whole bunch of time writing ODMs and managing objects with your database, because everything in MongoDB is considered an object, so you can query everything. The one thing that's nice is that you don't really have to spend time in XML at all.
That, to me, is definitely an uptick of why people would want to select one over the other as far as, say, relational databases compared to, say, something like MongoDB. I'm not here to sell people MongoDB but I think it's a good option for people that want to build applications quicker, faster and they don't necessarily want to do a lot of the infrastructural work. There's Atlas and it does a lot of those things that people really trust in the cloud lately with services.
Corey: Okay. This has been helpful and I definitely appreciate your taking the time to go through this with me, but I have one question that is probably going to ruin our friendship before we call it an episode. That is quite simply: Every time I refer to something, be it Mongo, be it almost anything else as a database, I'm reminded of the tagline of that service, which is, "It's not a database; it's a fill-in-the-blank document store, data store, et cetera, et cetera." Why? What's behind that nomenclature war?
Jay: I think a lot of it has to do with the fact that there are several companies that started calling themselves NoSQL, and they all did it at the same time, but they presented data in different ways, whether it's wide-column or JSON documents. It just became one of those things where people really wanted to call these things different names, and I think it was because they were trying to differentiate themselves from just being referred to as a NoSQL database.
At MongoDB, we consider ourselves a document database, and that's because our primary goal is to store JSON-based documents in a database and have them easily retrieved. It doesn't mean you can't store binary data, but that binary data will be stored within JSON documents in an encoded method. Why there are these wars over what things are called is really difficult to tell, because I think it's just about differentiating yourself from the rest of the crowd. That's really the only thing I could think of.
Corey: Good to know that there's not some key distinction there that I've just been asleep at the wheel for 15 years and missed.
Jay: There are key-value stores and there are document stores that are all, in the end, doing something not far off, which is providing you an answer to a key and a value; it's how big that key and value can be within the total document that really differentiates something in Dynamo from something in MongoDB.
Corey: That makes an awful lot of sense.
Jay: Yeah, it's the richness of data, if you will.
Corey: Thank you so much for taking time out of your day to speak with me.
Jay: No problem. I always enjoy speaking with you, Corey. It's one of those great, great luxuries that people that work in technology get, is, once in a while, we'll find you and we'll get to talk to you.
Corey: When they're really unlucky, I start talking at them and all heck breaks loose.
Jay: I don't know if it's all heck but it's certainly entertaining, nonetheless.
Corey: Thank you. My name is Corey Quinn. This has been Jay Gordon from MongoDB and this is Screaming in the Cloud.