Do you have to deal with data protection? Do you usually mess it up? Some people think data protection architecture is broken and requires too many dependencies. By the time a business needs to back up a lot of data, it's a complex problem, and retrofitting a backup solution onto an existing infrastructure is painful.
Fortunately, Rubrik found a way to streamline data protection components. Today, we’re talking to Chris Wahl and Ken Hui of Rubrik.
Some of the highlights of the show include:
Transform backup and recovery to send data to a public Cloud and convert it to native format
Add value and expand what can be done with data - rather than let it sit idle
Easy way for customers to start putting data into the Cloud is to replace their tape environment; people hate tape infrastructure more than their backups
Necessity to back up virtual machines (VMs) probably won’t go away because of challenges; Clouds and computers break
Customers leaving the data center and exploring the Cloud to improve operations, utilize automation
Business requirements for data to have a level of durability and availability
People vs. Technology: Which is the bottleneck when it comes to backups?
Words of Wisdom: Establish an end goal and workflow/pathway to get there
Full Episode Transcript:
Hello and welcome to Screaming In The Cloud with your host, cloud economist, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming In The Cloud.
Corey: This week's episode of Screaming In The Cloud is generously sponsored by DigitalOcean. I would argue that every cloud platform out there biases for different things, some bias for having every feature you could possibly want offered as a managed service at varying degrees of maturity. Others bias for, "Hey, we heard there's some money to be made in the cloud space. Can you give us some of it?"
DigitalOcean biases for neither. To me, they optimize for simplicity. I called some friends of mine who are avid DigitalOcean supporters about why they're using it for various things and they all said more or less the same thing. Other offerings have a bunch of shenanigans, root access, and IP addresses but DigitalOcean makes it all simple. In 60 seconds you have root access to a Linux box with an IP, that's a direct quote albeit with profanity about other providers taken out.
DigitalOcean also offers fixed-price offerings. You always know what you're going to wind up paying this month, so you don't wind up having a minor heart issue when the bill comes in. Their services are also understandable without spending three months going to cloud school. You don't have to worry about going very deep to understand what you're doing, it's click button or make an API call and you receive a cloud resource. They also include very understandable monitoring and alerting.
Lastly, they're not exactly what I would call small-time. Over 150,000 businesses are using them today. Go ahead and give them a try or visit do.co/screaming and they'll give you a free $100 credit to try it out—that's do.co/screaming. Thanks again to DigitalOcean for their support of Screaming In The Cloud.
Welcome to Screaming In The Cloud, I'm Corey Quinn. This week we're doing something a little bit different. Historically, a lot of guests that I've had on this show come from companies that were "born in the cloud" and that's great. But there's an entire world out there of companies that didn't start in the last 10 years—companies that have built out infrastructure over decades. In many cases, we've been shortchanging them, so today I'd like to start remedying that slightly.
What I'm doing today is I have two guests from a company called Rubrik. We have Ken Hui, who's a Cloud Solutions Architect, and Chris Wahl, who's the Director of Technical Marketing. Welcome to the show, folks.
Chris: Hey, good to be here.
Ken: Good to be here too, Corey.
Corey: You historically have been working at a company that provides data protection. What does that look like? Let's start at the beginning here and unfold what it is that Rubrik does. Take it away, it's your pitch.
Chris: Hi, I'm Chris. We had our first product come out in 2015, which is also when I joined the company. One of the major reasons I joined: data protection was something I had to deal with a lot as an architect and a consultant, and it's not fun. I didn't really enjoy it that much, and I often kind of messed it up. In my opinion, that's because the architecture is kind of broken and there are a lot of dependencies there. If you look at what it takes to back something up, there are 11 different components sourced from different vendors that you kind of have to cobble together to do that.
There's certainly a lot of opportunity to streamline that, and Rubrik came to me and said, hey, we've kind of figured out a much better way to do this. We're using these crazy things that the infrastructure and operations teams don't normally use, like an API and declarative policies and things of that nature, to make it so that it's really just a matter of defining, at a business level, what your RPO—your recovery point objective—and things like that are in a policy. You just apply it to objects, and objects are things like virtual machines, which were our first use case, all the way through databases and file sets and things like that.
The system has the intelligence to go out and figure out the best way to provide that protection—take those backups, store them in its system, replicate them somewhere, and archive them somewhere, and things of that nature. "Are you interested?" I said, "Absolutely." These are the folks that founded the company and had done some really interesting things at Google and Facebook and the normal things that you see in a Silicon Valley startup.
That's why I joined, and the original focus of the product, the version 1.0, was specifically what you said, Corey. A lot of long-tail companies have built out a pretty impressive presence on-prem with all sorts of gizmos, virtualization, storage, and whatnot. You could literally deploy this very small, friendly-looking white appliance into the data center, using a lot of the techniques from the big boys out there—the FAANGs of the world—to build out a distributed file set and a series of nodes instead of dual controllers and things like that. You could provide data protection, but with all of the updates that have happened over the last 20 years that I think a lot of backup companies just haven't had a chance to take advantage of.
Our first use case was literally backing up VMware virtual machines into the appliance, de-duping it and doing all that kind of jazz, and then shooting it up to Amazon S3 for long-term retention. Obviously, we built out our repertoire from there. We now protect most of what you'd find in the data center, and we support most of the public cloud and infrastructure for long-term retention and archive, and even protecting those areas. That's kind of the where, what, and why of Rubrik from a 101 level.
Corey: Which makes an awful lot of sense. One of the challenges that I would imagine you'd experience in that market is that by the time a company needs to back up an awful lot of data—by that point, it's a hard and complex problem. It's very difficult to sell a product that's built around, "Oh, just go back in time and talk to me four years ago when you were setting all of this stuff up." Retrofitting a backup solution to an existing infrastructure is always painful, always difficult, unless you know something I don't, which you might.
Chris: It's true that it is. It's a challenge no matter what you select and we're obviously not targeting Greenfield opportunities because that would be crazy. Data has only accumulated over time, especially if you believe in data gravity, which I do. That's why, for a lot of folks, because we started with VMware virtualization, it wasn't a huge lift to say, "You know what, deploy it for that use case," because right now, what you're dealing with doesn't meet your service level agreements, if you have them. It doesn't really meet the complexity needs that you have.
In a lot of cases, you're spending a minimum of 20%, if not more of your time just babysitting the system. The opportunity cost is fairly good from an ROI perspective to introduce a technology that can alleviate a lot of the operational headache and potentially be cheaper from a CapEx perspective while unlocking a lot of new use cases.
I think that's the ultimate point where people get really interested, because we're transforming backup. I'm just going to call a spade a spade: backup and recovery is this thing that everyone needs. But we're transforming it from this insurance policy—where it's literally just, take this data and hold it, good luck, it's there just in case, and hopefully it actually works when you pull a tape to recover it—to unlocking use cases. Like making sure that you're able to send that data to a public cloud and convert it to the native format of that cloud, and using Live Mount, which is our technology that lets you kind of zero-provision a workload based on a snapshot from a previous backup and start putting that into your pipeline for introducing a new version, or software regression testing, or something like that.
We're just teasing apart all these things we would like to do with data, versus just having it sit there idle. The Maytag man sitting on top of the washing machine, hoping that it breaks, doesn't add a lot of value. I think that's been something that's been very tangible for customers. People like use cases, and they like to expand what they can do with their data.
Corey: Backups are very easy; it's just the restores that are difficult. I look back at my own history, and I've done backup projects in data centers. I've done backup projects in the cloud. But I can't recall ever having done a hybrid-style project. What changes as these companies start moving certain workloads out of the data center and into the cloud? What are you seeing as that starts to manifest?
Ken: This is Ken, I'll take a shot at that. I think what we find is that a lot of customers are still trying to dip their toes in, and they're trying to figure out an easy way to start putting some of their data into the cloud. An easy way to do that is to replace your tape environment, because no one likes tape. If there's something that people hate more than their backups, it's probably the tape infrastructure.
Corey: I'm right there with you. I started my career many moons ago selling tape drives for the AS/400. For those who've never dealt with IBM mainframes, I envy you. It was a difficult thing to do because, first, everyone hates tape. Secondly, we were competing against IBM on price when it comes to backups. "Why is our data unrecoverable?" "We saved $2,000 on a tape drive." It turns out that's not really a defensible position, and that company isn't in business anymore. I'm right there with you, please continue.
Ken: We find a lot of our customers say, "Let's try to get rid of our tape. Let's use something like S3 or Azure Blob Storage as a replacement for tape." It's actually a good learning ground for them to start finding out what they need to do to set up connectivity to the cloud, or to find out whether cloud storage is as durable as we think.
I still talk to a lot of customers who are worried that when they put their data in S3, it's actually going to be less protected than if they just kept it on tape on-site. There's a lot of education that has to be done, and that's a proving ground for them to be able to say, "I can actually trust the cloud for storing my data and know that I can recover it when I need it."
That's kind of the first initial use case. Once people start being comfortable moving some of their data to the cloud, they start thinking about, "Well, how can I actually get my data—my workload—to the cloud, so I can take a Linux server that I'm running in my data center and stand that up in AWS or in Azure?" Again, that's where we try to help customers. We can say, "Now that you've got data up there, let's help you figure out how to convert that into an EC2 instance or an Azure VM, so it's pretty easy for you to automatically spin up these instances and create a replica of your environment in the cloud. You can start playing around with that and start educating yourself on how to actually use a lot of these cloud services."
Corey: One of the challenges that I've seen in this is that people talk a great game about building out things that emphasize infrastructure as code. They build cattle instead of pets, and whenever you talk about things like backing up VMs, people look at you and say, "Oh, you shouldn't be doing that. You should have everything built programmatically, and you just need the data itself." That's great in theory, but the real world's messy. The beautiful architecture diagrams we see in white papers don't exist here in the real world. Embracing that messy reality, in theory, is something that should never have to happen. In practice, there are billions of dollars going into individual companies every year to solve these specific problems. I don't see that going away anytime soon.
To some extent, looking at a company like Rubrik, it's easy to almost discount you folks as irrelevant. No one really cares about backups in this sense; they shouldn't need to worry about that. By that same logic, you shouldn't need to hire any operations people whatsoever, because code should be perfect the first time, and you just run it and it's there until the hardware gets replaced. It's a fiction.
I don't see that you're ever going to get away from having to worry about these things. That's what clouds do, they break. So do computers, for that matter, just in more understandable ways.
Chris: I don't disagree. As you're saying that, I'm thinking about the other challenges that people have. We certainly do love open source, infrastructure as code, making declarative pipelines, and things like that. I think we have over 50 different open source projects from Rubrik alone on GitHub. But there are also other parts of the challenge. For example, we have a product that interrogates every snapshot taken from your infrastructure, and it could be a SQL database, an Oracle database, a virtual machine, or whatever. What we're looking for is things like ransomware and encryption worms and other security threats that are entering the workspace, not through a technical means but often through a people means, and then kind of hiding out somewhere.
We're actually able to look at the deltas between a known good state and that bad state that gets introduced. We have things like our software-as-a-service offering that's looking at all these different instances of the Rubrik software running in your enterprise. We can start to do those deltas, actually dive in and crack open a snapshot from a metadata perspective, and say, "Okay, I think there's something nasty going on here: either a malicious actor, ransomware, or some type of operational oops. I guess someone's still trying to move fast and break things and accidentally deleted a whole bunch of stuff."
We're able to alert a security team, an ops team, or someone like that and say, "This doesn't look right. We feel there's something wrong here." In fact, when we look deeper, we might find there's some variant of WannaCry, or whatever the ransomware threat of the day is, and then we're able to coordinate with all those running instances of the Rubrik software to roll back to that known good state, versus paying the ransom or hiring some forensics team to come in and try to fix this stuff for an exorbitant amount of money.
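To make that delta-based detection concrete, here's a toy sketch; the function, threshold, and file-digest representation are all invented for illustration, and this is not Rubrik's actual algorithm:

```python
def looks_suspicious(known_good: dict, candidate: dict,
                     change_threshold: float = 0.5) -> bool:
    """Flag a snapshot whose delta from the last known-good snapshot is
    abnormally large, e.g. most files rewritten at once, the way
    ransomware encryption tends to rewrite them.

    Each dict maps a file path to a content digest."""
    changed = sum(1 for path, digest in candidate.items()
                  if known_good.get(path) != digest)
    return changed / max(len(candidate), 1) > change_threshold
```

An ordinary incremental backup changes a small fraction of files between snapshots, so a sudden jump in the delta is a cheap, metadata-only signal that something has rewritten the whole dataset.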
I'm just pointing out that there's certainly more to it than VMs are bad, containers are good, and serverless is better, and things like that. There are also the security threats, and things like GDPR to consider, that make the human element start to be introduced into the pipelines that you're trying to build more from a technological perspective.
Corey: As you take a look across your customer base and see people starting the process of doing a migration, what lessons do you see people learning the hard way over and over again? What advice would you give someone who's decided it's time to leave the data center and start at least exploring the cloud to make their data challenges a lot more manageable?
Ken: From my perspective, I think probably the number one thing that I see is that people often still think of the cloud as someone else's data center. We hear that all the time, right? They don't really put the work into thinking about the changes they have to make operationally when there are APIs to consume and things they're going to automate. That's very different than getting onto a vSphere console and provisioning machines by hand and updating them by hand.
A lot of times, customers don't think about that kind of operational day-two stuff. They just get excited about moving things to the cloud, and then when they get up there, they wonder how come things don't work quite the same way they expect, or why they don't get all the value, like the agility that people are always talking about.
When I talk to customers, I always tell them they need to think through how they will operationally change the way they manage instances, or how they want to manage databases, in AWS, Azure, or Google, as opposed to what they're doing on premises.
Chris: I would add to that too. Sometimes it's just level-101 stuff, if you will. I've had folks come to me and say, "How do I replicate S3, or back up S3, just in case there's a problem there?" You kind of tease into that: what kind of problem are you looking for? "What if one of Amazon's availability zones goes down?" I'm like, "Well, you can control that more at the storage layer, and you can set up the durability and the reliability at the bucket level," and things like that.
When you take folks like myself, who worked on-prem forever, and you take the constructs of an on-prem world and try to cram those into the public cloud, kind of like a cookie cutter through the dough, it's pretty messy. I think a lot of it is just basic knowledge of how these different technologies work, what causes them to break, and what causes them to lose performance or have some type of data-degradation issue. That's just standard if you're going to be an engineer in any environment: you have to learn what makes things blow up. That's still a new thing for a lot of folks.
Ken: Yeah, and I think a clear example of that, Corey, is this: I'm sure when you talk to your customers, you advise them to use different availability zones; that is just a standard best practice. But when I talk with customers, I'm amazed at how many are only using one availability zone for most of their applications. Typically, those are the customers who ran their database on premises, moved it to the cloud, and didn't even think about, "Well, how do I make it highly available by spreading things across availability zones?" They just keep that one machine running in one availability zone, and they wonder how come, when there's a problem, they don't get the high availability that everyone talks about in the cloud.
Corey: I recently appeared on Real World DevOps and talked a little bit about this. We talked about the S3 apocalypse a year or two ago, where S3 went down for something on the order of eight hours in US East, and there was a knee-jerk reaction afterwards where a lot of operations people suddenly started doing cross-region replication and getting copies of all their data scattered throughout the world. That doubles or triples your cost, plus the overhead to manage all of it. Is there a business requirement for your data to have that level of durability and availability? If not, why invite that level of overhead just to avoid a black swan style of event?
Not all businesses are created equal, and not all companies have the same specific constraints around uptime requirements. If S3 goes down across a region, or even to some extent if a single AZ drops out for a while, that's a day when the internet is going to not be working well for almost everyone. To some extent, you can sort of fly below the radar. There are exceptions to this: if you have something such as an ad network, where people aren't going to come back and view those ads again later, you may want to architect for that. But mapping backup and DR considerations back to business requirements just seems like a no-brainer to me.
Chris: I remember that too. If I remember correctly, some of the icons that even Amazon was using for the S3 status page were hosted in S3. There are layers of levels there, which are always fun. But that also goes back to some of the things that we've designed for and offer: the correct layers of configuration for customers. Going back to very early in the show, I talked about using policy-based administration for the system. Ultimately, you just feed in what your business requirements are, and the system will tackle that. Certainly, there are applications where an eight-hour unavailability window for data just wouldn't be acceptable for some reason. But for most of them, it's probably not the end of the world.
A typical policy that you can configure would state, "Keep anywhere from 15 to 30 days of the data that I back up on-prem in the appliance that's taking the backups, and then send everything beyond that, for the next seven years, into an S3 bucket," as an example. If it's even more critical than that, you can say, "Mirror the data. I want it local, and I want it all stored in the public cloud."
There are certainly ways that you can assign these policies, tier them, and prioritize them to make sure it all makes sense. Ultimately, the nice thing is you don't really have to do a lot beyond the policy creation. I think that's what people get excited about. As Ken was alluding to, cloud tends to be hard. We've all read about your S3 Bucket Negligence Awards, because even security can be tough. We handle most of that and abstract away most of the nuance, so those mistakes aren't repeated when you're setting up the product, which I think is kind of nice. All the controls to make sure that the data is where it needs to be, and that it's available, are being monitored and handled by Rubrik, to make sure that when you need it, it's available. Because like you said, backup is easy; it's the restore part that's hard.
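As a rough sketch of what that policy-driven shape looks like, here's a minimal illustration in Python; the field names, the policy structure, and the placement function are all invented for this example and are not Rubrik's actual SLA API:

```python
from datetime import timedelta

# Hypothetical declarative policy, modeled loosely on the example above:
# keep ~30 days of backups local, archive the rest for seven years.
policy = {
    "name": "gold",
    "local_retention": timedelta(days=30),
    "archive_retention": timedelta(days=7 * 365),
    "archive_target": "s3://example-backup-bucket",  # placeholder name
}

def placement(snapshot_age: timedelta, policy: dict) -> str:
    """Where a snapshot of a given age should live under the policy."""
    if snapshot_age <= policy["local_retention"]:
        return "local"
    if snapshot_age <= policy["archive_retention"]:
        return policy["archive_target"]
    return "expired"
```

The point of the declarative style is that you state retention once, and the system derives every action (keep local, archive, expire) from snapshot age, instead of you hand-building jobs for each move.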
Corey: I'll also challenge you on that slightly, where I've noticed a lot of these discussions are not technical debates, they're people problems. Early in my career, back when I was purely technical, I was building out an email system. I was talking to the business and asked them, "Okay, how much downtime a year is acceptable?" Their answer was, "No downtime, the system must be always up." My response to that, because I didn't really understand diplomacy back then, not that I do now, was, "Great. That'll be $20 billion to start. I'll come back when that runs out, and we're still not going to get there, but we'll get close." That led to a series of back-and-forth discussions, and it turned out that "no downtime acceptable" meant 40 hours a week when people are in the office, mostly, for the email system.
It became a much easier target to hit as a result. But having that conversation was never something that technology, at the time at least, was going to be able to drive, facilitate, or solve for. Because if you give people a question of, "What are the data durability and availability requirements?" The answer you'll get is, "100% across the board," until you start tying that to cost.
Chris: To me, that's one of the interesting points, because I totally agree with you. The first answer is always zero: nothing can go down, always available, infinite retention, all these unobtainable goals where people who aren't technical don't realize what the cost and the impact are going to be. Also, traditionally, when you deal with a backup system, there's no question of what your availability is or what your RPO is. It's all about configuring backup jobs and stating, I want these 10 servers protected first, then these next ones, send them here. It's all about tactics, but not a lot of strategy, when you're working with these products.
It's kind of a forcing function when you're dealing with Rubrik. It's not about when you want to take it, every third Thursday or every night at two in the morning. It's more around the specific business objectives. When you edit a policy in the product, it's literally asking you for things like RPO and RTO, whether you want it to replicate, and what the availability requirements are.
I think because we're going down that path, it forces you to think in alignment with the folks who are business users, and it also reveals what that's going to look like when you set it all up and apply the policy. You can change the policies later if you want to be more or less aggressive. I feel like that's a departure from the imperative, pull-the-lever-get-the-banana type of job architecture, where it's really tough to answer a question like, "What's my overarching RPO for these workloads? Oh well, let me go check all these jobs, I'll run a report, I'll get back to you." With Rubrik, it's really, what's the availability for these workloads and this data? You can just go, "It's four hours."
But when is it going to back up? I don't know; whenever it's necessary to meet a four-hour RPO. That's all you actually care about. It really wipes the slate clean of all that technical debt, if you will, that I think we've been paying down without ever actually reaching zero when it comes to data protection. That's one of the things I really like about it.
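A declarative scheduler like that works backward from the objective. As a minimal sketch (the function name and signature are invented for illustration), the start time falls out of the RPO and the expected job duration, rather than out of a calendar:

```python
from datetime import datetime, timedelta

def next_backup_start(last_snapshot: datetime, rpo: timedelta,
                      expected_duration: timedelta) -> datetime:
    """Latest moment the next backup can start so that, once it
    finishes, no recovery point is older than the RPO."""
    if expected_duration >= rpo:
        # Jobs would have to overlap to keep up, so the objective is
        # unmeetable; surface that instead of silently slipping.
        raise ValueError("backup duration exceeds RPO; objective unmeetable")
    return last_snapshot + rpo - expected_duration
```

With a four-hour RPO and a 30-minute job, a snapshot taken at midnight means the next run must start by 3:30 a.m.; the schedule is derived, never configured.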
Corey: I remember some of the olden days, we're talking three years ago, where I would have a backup job that ran, and it took a while to run, and it started to run into problems because the next backup job, to meet our RPO, had to start before the old one completed. It led to a variety of terrifying hacks to make that work.
Looking back, it would have been so much easier just to renegotiate the requirements around what it is we were trying to achieve or change the architecture but that's asking for a magic wand.
Chris: I had to deal with that for the better part of a decade. I hated waking up in the morning. I really hated looking at my phone, because it was like, what broke this morning, or I'm waking up because my phone is ringing because something's broken. It ended up giving me horrible panic attacks at night, where I didn't want to go to bed, because I knew something was going to break, or I knew the systems I set up were going to not work, because people are involved, and they're going to change things and cause an impact to the environment. My rules are static. It's not a dynamic system dealing with this; it's a static, imperatively defined system. It doesn't adjust while I'm busy sleeping or going on vacation, versus an intelligent system that is declaratively managed and is actually looking at the RPOs and the business needs and doing what's necessary to achieve those.
Instead of just telling you, "You're crazy. This is not going to work. Please adjust," the Rubrik software is looking at how it's configured, as well as how the target environment is configured, and then determining how best to provide that data protection.
I remember talking to Seattle Genetics; they're one of our public reference customers that deals with a lot of patient data and really needs restores to be lightning quick. Especially when you're a biotech, "I backed it up, but I can't restore it" doesn't cut it, because that could be some massive, important IP that you're dealing with. They were saying restores are 90% faster for all that critical patient data that they need, and it's always protected, so they don't have to worry about it. They can focus on the actual science behind what they're trying to do, which is their technical differentiation and how they compete in the market, instead of on backing things up.
Ken: In that way, we're following the trends that I think we're seeing across the industry, where even legacy or older companies are signing on and realizing that things are getting so much more complex that we want to remove human decisions from things as much as possible and let code and automation take over. We're trying to do that at Rubrik, where it's less about the customer being the bottleneck, having to make a decision about when to back something up and when to send it to the cloud.
We're saying, "You tell us what you actually want things to look like at the end, and we'll figure out how to do all that for you." We've got other customers like the University of California San Diego, who presented with me at re:Invent this year, talking about how they went from all these tools, and all this time they had to spend just managing backups, to using Rubrik and AWS. Now they're spending minutes a day instead of hours just making sure that the backups are going the way they want them to.
Corey: I think that there's a definite challenge for a lot of companies as they start to move in this direction, to embrace a new way of thinking about it. One of the more discouraging parts about a lot of it, to my mind, is that when you look up modern best practices, it assumes that either you're starting greenfield or you've been working on this for a decade already. That can be incredibly discouraging. What words of wisdom do you folks have for people who are in that position?
Ken: The two things I would say are: at least have a general idea of what the end goal is. You need to have a goal that you're shooting for; otherwise, you won't know when you've arrived. Sometimes, I think customers with new technologies like cloud and containers just do it because they were told by management that they should, but they don't actually know what the end goal or end state would look like.
I tell customers, "Have an idea of what the end state looks like, and then figure out the pathway to get there. Recognize you're not going to get there in a month." If you've never used the cloud before, you're not going to become a cloud expert and have everything running in the cloud in six months to a year. It's probably going to take a longer span of time. Figure out the end goal, and then figure out what the interim states are.
That could mean, I'll start off by just doing some lift and shift of my workloads into the cloud. Then, maybe over time, I'm going to start using some of the cloud services. Further down the road, you may even start thinking about re-architecting your applications to be cloud native. Don't try to bite that all off at once; just figure out how you can do it in incremental steps.
Chris: Yeah, I don't disagree at all. I remember SugarCreek was a customer I was working with. Smallish team, three people, and just spending way too much time on the day-to-day, keeping-the-lights-on stuff. I'll relay some of their story that I think is good advice. They were basically saying, "We want to automate the heck out of everything. It's necessary for the cloud. It's something we have to do on-prem, but it's not even a question whether we're going to do it when we start dealing with these public-cloud environments." They were really looking to automate those workflows.
The kind of advice that I have there is: capture what that workflow is going to look like. Even if it's manual today, what's the process to go from A, to B, to C, to ultimately reach the goals that Ken was talking about? Figure out where you can insert this into a pipeline of some sort. Automate the pieces that you need to automate, hopefully as much as humanly possible.
At Rubrik, we obviously are very big fans of our API and things like that. But I would say, I wouldn't really introduce anything into my data center today that doesn't have a solid track record when it comes to publishing its API, making sure it's available and documented, with no license fee or anything goofy like that.
Because how the heck are you going to build out something in the public cloud if you're not even practicing those models of pipeline-driven, declaratively driven, policy-driven architecture on-prem? The two go really well together. If you start on-prem and, "Oops, I made a mistake, the automation isn't quite the way I wanted it," it's not like you're going to pay a bajillion pennies in transaction fees or egress/ingress fees or something like that. It's kind of a good test bed to figure that out. Then you can hopefully apply a lot of that learning to a public cloud environment as you build out, I don't know what we call it, hybrid cloud or whatever, but public plus on-prem together can actually be quite fortuitous for a customer.
Corey: Absolutely. Ken, anything you'd like to add?
Ken: In addition to what Chris said: choosing the right tools. Sometimes I hear customers who want to evaluate 10 different tools that do the exact same thing, and then they get stuck in that cycle of just picking the right tool. It's the same thing with the cloud; sometimes customers want to evaluate every single cloud service until they find the perfect cloud for what they want.
What I tell customers is, "At the end of the day, narrow down your choices and just pick one to get started, because if all you do is spend your time evaluating all these things, you'll never actually get going." It doesn't matter if it's AWS or Azure; pick one and go with it. Whether you want to use CloudFormation, Ansible, or Terraform, just pick one and get really good at it. Once you've built up some good practices and good knowledge, then you can maybe start thinking about trying something else, but get started.
Don't clutter yourself with 20 different tools. Try to hone in on a few and get really good at those and learn those well.
Corey: Great advice—analysis paralysis absolutely becomes a real thing.
If listeners want to hear more sage words of wisdom from you folks, where can they find you?
Chris: The best place probably to find me is on Twitter. I'm @ChrisWahl on the Twitterverse or LinkedIn if you want to network or whatnot. Those are pretty much the two places I tend to hide.
Ken: I'm on Twitter @KenHuiNY. I tend to tweet about technology and lots of snark about AWS. I also blog on Medium, also as @KenHuiNY, and about 90% of my blog posts are about the cloud and AWS.
Corey: Thank you, both, so much for taking the time to speak with me today.
Chris: No problem. Thanks for inviting us.
Ken: Thank you, Corey.
Corey: I'm Corey Quinn. This is Screaming In The Cloud.
This has been this week's episode of Screaming In The Cloud. You can also find more of Corey at screaminginthecloud.com or wherever fine snark is sold.