When companies migrate to the Cloud, they are changing how they do everything in their IT department. If lots of customers rely exclusively on a single region, like us-east-1, then they are all directly impacted by its outages. But there is safety in the herd and in numbers, because everybody is sitting there, down and out, together.
Today, we’re talking to Chris Short from the Cloud and DevOps space. Recently, he was recognized for his DevOps’ish newsletter and won the Opensource.com People’s Choice Award for his DevOps writing. He’s been blogging for years, writing about the things he does every day: tutorials, code, and methods. Now Chris, along with Jason Hibbets, runs the DevOps team for Opensource.com.
Some of the highlights of the show include:
Chris’ writing makes difficult topics understandable. He is frank and covers a broad range of subjects.
SJ Technologies aims to help companies embrace a DevOps philosophy, while adapting their operations to a Cloud-native world.
Many companies consider a Cloud migration because they’ve got data centers across the globe.
Some companies do a Cloud migration to refactor and save money. A Cloud migration can also mean discovering you’d have to shove your SAN into us-east-1; workloads like that often stay behind, and the environment becomes a hybrid one.
Lift and shift is often considered the first legitimate step toward moving to the Cloud. However, know as much as you can about your applications and their RAM and CPU allocations.
Know how your applications work and work together.
Some still resist the Cloud due to a lack of understanding of business practices and how they apply. But most are no longer skeptical about moving to the Cloud.
Don’t jump without looking. Planning phases are important, but there will be unknowns that you will have to face.
Downtime does cost money. Customers will go to other sites. They can find what they want and need somewhere else. There’s no longer a sole source of anything.
The DevOps journey is never finished, and you’re never done migrating. Embrace changes yourself to help organizations change.
Chris Short on Twitter
Corey: Welcome to Screaming in the Cloud. I am Corey Quinn and I am joined today by Chris Short, who has been doing a number of interesting things in the Cloud and DevOps space, but most recently has been recognized for his newsletter, DevOps’ish. He also won the 2018 Opensource.com People’s Choice Award for his writing in the DevOps space.
Chris, thanks for joining me today. Can you tell us a little bit about your writing and what it is that brought you to the people’s attention?
Chris: Thanks for having me, Corey. The writing, I’ve been kind of blogging on and off for years, and it hit me maybe a year or two ago like, “I should really write about the stuff that I do every day.” I started writing on my website, just simple stuff here and there, little tutorials, code, methods, things like that. Then I realized I’d like a bigger audience.
I knew of opensource.com through a friend. I decided to team up with them. Now Jason Hibbets and I run the DevOps team for opensource.com, and don’t read too much into the DevOps team moniker. It’s actually a team of writers specifically geared towards DevOps topics. We’re actually looking to do a couple of talks going forward for DevOps type things, which is interesting because Jason is more of a community manager and writer, and I am more of a hands-on technical person. We have an interesting dynamic as far as lessons learned and how they apply not just to technical fields but also other ones.
Corey: Right. I’ve been a fan of your newsletter for a little while, almost since before I launched Last Week in AWS. I tended to be very focused and very snarky on one area. You tended to be a lot broader and, frankly, a lot kinder to the things that you write about as a general rule.
I appreciate your ability to speak intelligently about such a wide variety of different topics and also admit when you’re unclear on something, when you’re not entirely sure how something fits in. I’ve seen you call that out on a number of occasions. To my mind, that’s always been the mark of mastery, where you can take a look at something and understand that, “Ah, there’s something I’m not seeing, so I’m going to call that out rather than hand-waving my way past it and faking it.” I’ve just always been a fan of that type of approach.
Chris: I appreciate that, and don’t get me wrong, Last Week in AWS gives me as many laughs as it does knowledge about what to do and what not to do in AWS. I appreciate that very much.
Corey: Most recently, you’ve been working at SJ Technologies with John Willis. You’ve been focusing on helping companies embrace DevOps philosophy while adapting their operations to a cloud native world as you put it. What does that look like on a, I guess, day-to-day basis for you?
Chris: Let’s look at it as two things. The Cloud Native world, Cloud Native means a lot of things to a lot of people. My friends Justin Garrison and Kris Nova wrote a great book called Cloud Native Infrastructure. I highly recommend it, go read it.
A Cloud Native world does not necessarily mean that these organizations are like moving to AWS or Google Cloud or Azure. It just means that they want to take advantage of some of the philosophies and tooling around being Cloud Native. Let’s not say that they’re going whole hog Cloud Native or whole hog cloud for that matter but they wanna utilize some things.
One of the clients I’m working with is actually using an Oracle tool and OpenShift on bare metal in their own data center, but they want to really lean forward and embrace moving faster. One of the key takeaways for them is they want to be able to develop their mobile app and make it better, faster. It all comes back to consumer outcome focused type things. Just like the DevOps work we do, where we worry about what the outcome is for various steps in the software development life cycle, and coming back to culture, as well as some tooling here and there. We’re not so much focused on tooling but more so on process.
If you wanna use Puppet for configuration management, Ansible for some kind of deployment pipeline, and Kubernetes for something else, all within your one little world, fine. We’re more concerned about the process itself. Creating that left to right flow, shifting the security things closer to the left so you detect them earlier, as well as adopting some of the value stream mapping type processes that you see in Lean and other disciplines in the DevOps world.
Corey: I hear echoes of Simon Wardley’s mapping starting to creep into the conversation there. I’m detecting a recurring theme in conversations I have around this sort of thing. But rather than going into those particular weeds today, something that you’ve talked about on and off for a while is the concept of doing migrations. When you think about doing a migration, a Cloud migration specifically, in your mind is that generally coming from on-prem? Is it coming from a different cloud provider into something else? Or are you viewing it as something else altogether?
Chris: I’ve seen it kind of go two ways. You’ve got all these data centers across the globe where you’re doing colo or you have your own data center, and your own facilities, and you’re moving to the cloud, and that’s great. The one thing I’ve noticed the most is people just say, “I have this cage in locations A and B. I wanna put all that in AWS.” Most of the time, they’re like, “Yeah, it’s active-passive backup.” And, really, it’s like they have two data centers. They treat them very differently.
They can never switch from one to the other very easily, but they want to be able to do that in the cloud, and you end up biting off a lot more than you can chew, is what I’ve seen for the most part, regardless of which way you’re coming from and going to. But I’ve seen very few AWS to Google type migrations between cloud providers. Aside from that, going from Rackspace to AWS, those kinds of things are pretty common but also a lot easier than you would think.
Corey: What I’ve seen emerge as a recurring theme with cloud migrations is they’ll start the process with the idea of we’re gonna take everything from our on-prem data center and shove it into, let’s say, AWS, and it goes well until they encounter workloads that weren’t in the initial pilot, that it turns out are really hard to move. Your Amazon account rep gets very difficult to work with when one of your requirements is to shove your SAN into us-east-1.
I’m not saying they won’t let you do it, but it costs more than you’re probably willing to pay. At that point, it’s, “Ooh, that one workload is either going to need to be significantly refactored.” But what often happens is the customer says, “Oh! We’re not gonna move that one workload. We’re gonna leave that one in our data center. Now we’re going to call our environment hybrid, plant a flag, and declare victory.” That’s more or less where it dies in some cases. Is that a pattern that you’ve seen? Or are companies starting to get over that hump and start moving their mainframes into the cloud?
Chris: I think you’re right. For the time being, you’re gonna see that kind of hybrid workflow of, “We have this thing in data center X and it’s not moving anywhere until we greatly refactor it.” I’ve seen that, and I’ve heard of that happening at some of the potential clients we’re just talking to, as they are in those scenarios. What I’ve also seen is some people have made some assumptions of, “The cloud is definitely gonna save us money because we don’t have to run our own infrastructure, so we’ll just lift and shift everything and call it a day,” without any kind of real plan for refactoring before they decide to make that actual leap and start moving workloads.
The fact that this even occurs is fascinating to me. I’m sure we’re gonna dive more into that, but the sense of hybrid, and victory being claimed, is very much a real world thing. I think that’s symptomatic of how fragmented the IT organizations are in some of these companies.
Corey: One term that keeps coming up in these conversations, and that I think means something slightly different to the various people who hear it, is lift and shift. What does that term mean to you? Because I can think of at least two ways that plays out, but that’s just because I haven’t talked about it with too many people yet.
Chris: People make this assumption that lift and shift is a legitimate first step towards moving to the cloud. AWS will tell you, “Yes. Go ahead, lift and shift everything, take server A in your data center and make it server A in AWS, and you’ll be fine.” But what they don’t tell you is that in VMware you can allocate 16 gigs of RAM and 48 CPUs. Well, now what is that gonna be in AWS?
There’s not a great correlation because VMware’s very flexible. AWS has these instances that are all various shapes and sizes, for different workloads. You have to know a lot about what your applications are before you even get started.
A lot of people, when it comes to lift and shift, they don’t care. They’re just like, “Try to make it close and go.” Right. You don’t make the assessment of how do I optimize for reserved instances or anything else. You just take all your servers and VMs and everything else and you just say, “Create them in AWS. Go.” They gladly slurp in all your VMware instances, you can create a mapping of this sized thing to that sized thing, and off you go. But it’s a good strategy to just get there.
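To make that “mapping of this sized thing to that sized thing” concrete, here is a minimal sketch in Python with boto3 that matches made-up on-prem VM allocations against current-generation EC2 instance types. The inventory, region, and the smallest-fit heuristic are assumptions for illustration; a real assessment would also weigh instance family, price, and how the application actually uses its RAM and CPU.

```python
import boto3

# Hypothetical on-prem inventory pulled from VMware; numbers are made up.
onprem_vms = [
    {"name": "app-01", "vcpus": 4, "ram_gib": 16},
    {"name": "db-01", "vcpus": 8, "ram_gib": 64},
]

ec2 = boto3.client("ec2", region_name="us-east-2")

# Collect current-generation instance types with their vCPU and memory specs.
candidates = []
for page in ec2.get_paginator("describe_instance_types").paginate(
    Filters=[{"Name": "current-generation", "Values": ["true"]}]
):
    for it in page["InstanceTypes"]:
        candidates.append({
            "type": it["InstanceType"],
            "vcpus": it["VCpuInfo"]["DefaultVCpus"],
            "ram_gib": it["MemoryInfo"]["SizeInMiB"] / 1024,
        })

for vm in onprem_vms:
    # Smallest instance that meets or exceeds the VM's current allocation.
    fits = [c for c in candidates
            if c["vcpus"] >= vm["vcpus"] and c["ram_gib"] >= vm["ram_gib"]]
    if fits:
        best = min(fits, key=lambda c: (c["vcpus"], c["ram_gib"]))
        print(f"{vm['name']}: consider {best['type']}")
```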
One of my previous employers, we had a contract renewal with the data center coming up and we didn’t wanna renew. We could go month to month at a very exorbitant cost, but we really, really pushed very hard, very quickly to move a decade’s worth of presence in a colo into AWS. It got very expensive very quickly, because we weren’t sitting there doing optimization of resources as we migrated. We weren’t sitting there looking at cost. We were more concerned about meeting the deadline and less concerned about money.
At the rate we were going, it was gonna be like twice as much to move to AWS, unless we optimized, as opposed to staying in the actual data center. You need to look at density in your cloud environment just like you would in a data center, which a lot of people don’t realize when they’re lifting and shifting.
Corey: I understand where you’re coming from. My day job is consulting, where the one problem I fix is optimizing AWS bills for the business side of organizations. What I often see is what you described. Everyone’s upset that the migration is costing more than they thought it would, but the other side of that coin is if you go ahead and refactor your applications during the migration to take advantage of cloud primitives, to make them ‘cloud native’, to embrace autoscaling groups, to be able to scale out as workload conditions change.
What you’ll very often see is indeterminate errors. It’s not clear initially whether it’s with the platform or with the application. You wind up with a bunch of finger pointing. The successful path that I have seen play out multiple times has been to do a lift and shift first, where you take everything exactly as it is and shove it into the cloud provider, and yes, it costs more money in that context, but that’s okay.
The second phase then becomes refactoring things. Start addressing, first off, the idea of scaling things in when they’re not in use, or going down the reserved instance path, or even refactoring applications to take advantage of, for example, serverless primitives, things that you don’t generally have in an on-prem environment. That, historically, in my experience, has been an approach that works reasonably well. Is that something that you’ve seen as well? Or are you more of an advocate for doing the transformation in flight?
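One concrete form of “scaling things in when they’re not in use” is a pair of scheduled actions on an Auto Scaling group. This is only a sketch under assumptions: the group name, sizes, and schedule are hypothetical, and the cron times are in UTC.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
asg_name = "web-tier-asg"  # hypothetical Auto Scaling group

# Shrink the group overnight when traffic is low...
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=asg_name,
    ScheduledActionName="scale-in-after-hours",
    Recurrence="0 23 * * 1-5",  # 23:00 UTC, Monday through Friday
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,
)

# ...and grow it again before the workday starts.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName=asg_name,
    ScheduledActionName="scale-out-morning",
    Recurrence="0 11 * * 1-5",  # 11:00 UTC, Monday through Friday
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=6,
)
```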
Chris: I don’t recommend doing the actual refactoring as you’re migrating. What ends up usually happening is people end up running into these deadlines. They can’t move to AWS without refactoring and they can’t refactor without moving to AWS. It becomes this circular kind of finger pointing like you mentioned. What I recommend is build your application on AWS or lift and shift it to AWS and see how that goes.
Have your prototype kind of thing going. If you can easily move it without refactoring it, and if you know your applications well enough, you know what size instances to use, you know what monitoring you’re gonna need to have in place, all those basic things where, if I see these log entries, something’s gone wrong, and I act accordingly.
Chances are, if you know all these things, your migration’s gonna be very simple. But if you have very little knowledge of how your applications work, especially how they work together, you’re gonna have a really hard time with things. Making the move is one thing, doing the actual refactoring becomes another thing altogether.
You have to choose one or the other, if you like. Unless you feel like you can say, “Okay, we’re gonna lift and shift very slowly. We’re gonna refactor in flight, we’re gonna build a whole new stack on one side and have our other stack running, then spin up all of our application in AWS, keep the data center as a backup for a little while, and shift traffic over.” That works, but that’s as much lift and shift as it is refactoring, in my mind.
The premise I think people need to realize is that you’re literally changing how you do everything in your IT department when you go to the cloud. You’re not dealing with your CapEx anymore. You’re dealing with somebody else’s CapEx, and this becomes your OpEx. You have to optimize for that. You definitely need to make sure you understand how your applications work as you shift them over.
Corey: It seems that a lot of that is sometimes bounded by a lack of alignment on the part of the company that’s doing the migration. Many moons ago, I had a client that was adamant about moving from AWS into a physical data center. They were planning it, and there were regulatory reasons for it at the time that no longer hold true. I’m not sure I would suggest that they do this today, but this was years ago.
One of the big obstacles they had is they were sitting there trying to figure out the best way to replicate SQS, the Simple Queue Service, in their data center. That was a big problem. They had engineers looking at it up one side and down the other.
The insight that I had at that time was, “Well, let’s check the bill. Okay, you’re spending $60 a month on SQS and no regulated data’s passing through that thing. Why don’t we just ignore that for right now, and down the road, once you have everything else around it migrated over, then you can come back and take a look at this.” That seemed to be a better approach given their constraints at that time. But they were too far down the rabbit hole of, “We have a mandate to move everything out of AWS,” and that’s it, full stop, without really understanding the business drivers behind it. It’s a communications problem and a problem of alignment in many cases. How do you tackle that?
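The “let’s check the bill” step can be automated. Here is a minimal sketch using the Cost Explorer API to see what SQS actually cost over a month; the dates are placeholders and the account is assumed to have Cost Explorer enabled.

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2018-01-01", "End": "2018-02-01"},  # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Dimensions": {
        "Key": "SERVICE",
        "Values": ["Amazon Simple Queue Service"],
    }},
)

for period in response["ResultsByTime"]:
    cost = float(period["Total"]["UnblendedCost"]["Amount"])
    print(f"{period['TimePeriod']['Start']}: SQS cost ${cost:.2f}")
```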
Chris: At a previous organization, we had a lot of on-prem data center presence. We had very good cost controls over that. We were paying very little for bandwidth, we were paying very little for hardware. We had really optimized the on-prem scenario, but we had workloads that were not quite serverless-ready, because they were too long running, but were very similar in nature: we just need some CPU, we need to run some Java, bounce some messages between SQS queues, and off you go, clean it up, drop it into S3 buckets. Aside from the long running part of it, it is perfect for serverless, and we would totally use AWS for that stuff. We did use AWS for storage in a lot of cases because it made sense.
The premise of a lack of understanding of business practices and how they apply is something I see often. Legal departments say, “Oh my gosh, there’s this new regulatory thing, now we gotta do this. You just gotta get out of the cloud. That’s the only way to be safe.” Legal always tries to play it as safe as humanly possible. They want black or white, they don’t want grey.
To address that, you have to do a lot of what you said: you have these regulatory needs, and we completely understand those, as long as you can say these are the buckets of regulated data, or services, or whatever.
Being able to control that is very easy. Otherwise, you kind of have to go back to the stakeholders and say, “Listen, you can’t just sit down and say we’re moving from AWS to on-prem, or on-prem to AWS, flip a big switch, and you’re just gonna move everything over.” You kind of have to understand what the impact of all that is. You have to plan accordingly. That’s the biggest thing in all these migrations: people just jump without looking, actually, a lot.
The planning phases are super important. You have to understand that there are gonna be those Don Rumsfeld unknown unknowns out there for sure. You have to be ready for those and sit there and say, “What is this gonna impact by moving this workload or that workload?” You have to know where your boundaries are as far as legal is concerned, as far as contractual obligations are concerned, way before you get started. You have to realize that what you say is gonna take six months will probably take 18. It depends on how long and how old some of your processes are.
Corey: Right. And then you wind up with Mythical Man-Month territory things which is a great book. If you haven’t read it, let me know and I’ll send you three copies so you can read it faster.
Chris: Yes, because I am a multi-threaded book reader. I have two eyes, why can’t I read two copies of the same book at once?
Corey: Exactly. It’s all a question of perspective. Increasingly, we’re seeing a lot of focus on migrating from on-prem to cloud, we’re seeing focus on migrating between different providers. But at this point it feels like even in larger, shall we say, more traditional blue chip enterprises, there’s no longer the sense of skepticism and humor around the idea of moving to a third party cloud provider.
Instead of ‘why cloud,’ it becomes ‘why not cloud.’ As the cloud’s ability to sustain regulated workloads continues to grow, and as you see various conversations with different stakeholders who bring up points that are increasingly being knocked down by various feature enhancements and improvements from these providers, it becomes a very real thing. There are benefits directly back to the business that are very clear, and even if, for example, companies still want to treat everything as capital expenditure rather than OpEx, there are ways to do that.
If your accountants and auditors sign off on it, you can classify portions of your Cloud spend as CapEx. That is something that’s not commonly done, but it can be. You also start to see a smoothing of various spend points. For example, with storage, as you store more data, the increase in what it costs to do that remains linear. You don’t have this almost step function type of graph where you add one more terabyte and now it’s time to run out and buy a new shelf for the NetApp. It just continues to grow in a very reasonably well understood way based upon what you’re storing. In time, you start aging data out or transitioning it to different storage tiers.
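“Aging data out or transitioning it to different storage tiers” is typically expressed as an S3 lifecycle rule. A minimal sketch follows, assuming a hypothetical bucket and prefix; the day counts and tiers are illustrative, not a recommendation.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-out-old-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access
                {"Days": 90, "StorageClass": "GLACIER"},      # archive
            ],
            "Expiration": {"Days": 365},  # delete after a year
        }]
    },
)
```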
The economics generally continue to make sense until you’re into the ludicrous scale of data storage, past a certain point where your budget for storage is running into the hundreds of millions. Yes, there are a number of options that unfold for you where running some form of storage operation on-prem may make sense economically, but as a counterpoint, it’s also very difficult to guarantee the same level of service, the same level of reliability, the same level of regulatory compliance that you get ‘for free’ with the large scale cloud providers, who have to do this for everyone.
Where do you stand on whether a company should undertake a migration project as they’re looking around their decaying physical data center: “Was that a rat we just saw? Good Lord, these fans are loud. It’s always the same fluorescent lights, and I think I’m slowly going deaf.” How do you wind up approaching the should-we-migrate conversation?
Chris: I like to use an example. The one thing that I remember from my first job after getting out of the Air Force was there was always this metric, like a minute of downtime was worth like 50 grand or something like that. When you take that metric and you say okay, fine, and then you’re on call and all of a sudden your data center gets struck by lightning, and while it has the facilities to handle that, something inevitably goes wrong and that data center is now 120 degrees, because some piece of equipment didn’t reset accordingly when the generator flipped over, because that generator got struck by lightning. Then all of a sudden, after about five minutes, your stuff just stops responding because it’s just too hot and starts shutting itself off. What then?
You have to go in and literally turn off everything and turn it all back on after the disaster recovery at the data center is completed. How many hours were lost because of one millisecond-long event? It took three hours for us to get everything back to normal. At 50 grand a minute, you do the math, that’s a whole lot of money.
I look at it as you’re paying for an extra nine. When you do these things, you don’t have to worry about sending somebody to the data center to start swapping out disks in your NetApp. You don’t have to worry about, “Hey, what server chassis should we put this workload in? Where do we have the most capacity?” You don’t have to think about those things. You’re kind of buying yourself an extra nine, because inevitably, wherever you are, there will be downtime in that facility for one reason or another.
Just look at BGP. How many times have BGP mistakes occurred on the global internet and caused some kind of weird outage in some location? Do you really wanna be a part of that? Or would you rather design your workloads to be more effective and resilient? With some of the largest consumers of network stacks on the internet, the idea of colocating your things with big, big, big companies that control wide swaths of the internet is highly, highly effective at keeping resilience in your systems.
Corey: One thing you mentioned that I wanted to call a little bit of attention to, and I’m not talking specifically about the Air Force, please don’t feel the need to respond on their behalf. I prefer not to see you renditioned, although if you were, it would be an extraordinary rendition, I’m sure. But I’m curious about the metric of a minute of downtime costing us $50,000. That’s an argument that’s easy to make depending upon what your company does.
But to give you an example, a few years ago I tried to buy something on Amazon. I think it was probably a pair of socks, because that’s the level of excitement that drives my life. It threw a 500 error, and that was amazing. I’d never seen this from Amazon before. I smiled, I laughed, I tried it again, same thing.
Of course I checked Twitter, or however long ago this was, maybe Fark, whatever social media looked like back then, then I shrugged and went and did something else. An hour later I went and bought my socks.
The idea of did they lose any money from that outage, in my case the answer was no, because the decision point on my side was not, “Well, I guess I’m never going to buy socks again,” the end, and now I’ve been down one pair ever since. Instead, it was, “I’ll just do this later.”
There is the counterargument that if one time out of three that I tried to make any given purchase on Amazon it didn’t work, I’d probably be doing something really sad like buying from Target instead. But when it’s a one-off, and it’s rare, and it hasn’t eroded customer confidence, there may not be the same level of economic impact that people think there is.
As a counterpoint, if you’re an ad network, every second you’re down you’re not displaying something; no one is gonna go back and read the news article a second time so they can get that display ad presented to them. In that case it’s true, but I guess to that point, there is the question of what downtime really costs. Do you have anything to say on that?
Chris: Yes. Let’s take your point in time of trying to buy those socks. If Fark was the social network of choice, chances are those socks were only on Amazon. When you look at the landscape now and how it’s changed, you don’t get [inaudible 00:34:13] on Twitter anymore for a reason. You can buy the same pair of socks on Amazon that you can buy on Target, and vice versa, and guess what, they price match each other to an extent.
If Amazon can’t sell you that pair of socks, you’re going to go to a different site and buy them there because those socks are gonna be in more than one place if they’re on Amazon. With that being said, you bring up a good point, the company I was with where a minute of downtime cost $50 grand was very much an ad driven business. If you were losing clicks, you were losing money. It was very, very easy to make that justification.
You can slice it up: at night you’re making less than $50,000 a minute, or whatever, but whether you’re buying something, or you’re consuming content and ad revenue is being generated that way, people will find another place on the internet nowadays to go get what they were looking for. That’s just the nature of things. There’s no sole source of anything anymore. You can’t compete thinking like that nowadays. 10 years ago, you most definitely could. You were only gonna get that thing from Amazon because there was no way you were gonna go get it locally, let alone from some other website.
Corey: That’s very fair. There’s also an argument to be made in favor of cloud migration from my perspective. If you go back to a year or so ago, when Amazon had their first notable S3 outage in the entire lifetime of the service and it was unavailable for at least six hours or something like that, there was a knee-jerk reaction in the SRE and DevOps space of, “Now we’re gonna replicate to another provider, and we’re gonna go ahead and have multiple buckets in multiple regions.” These things spiked the cost rather significantly to avoid a once-every-seven-years style outage of a few hours.
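For reference, that knee-jerk fix usually takes the form of a cross-region replication rule on the bucket. This is only a sketch: the bucket names, regions, and IAM role are hypothetical, both buckets need versioning enabled, and, as noted above, you pay for the duplicate storage and transfer.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="example-assets-use1",  # hypothetical source bucket in us-east-1
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-s3-replication",
        "Rules": [{
            "ID": "copy-everything-west",
            "Status": "Enabled",
            "Prefix": "",  # replicate the whole bucket
            "Destination": {"Bucket": "arn:aws:s3:::example-assets-usw2"},
        }],
    },
)
```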
When you look at how that outage was reported, notice I’m talking to you about the Amazon S3 outage; I’m not saying, “Oh, the American Airlines outage, or the Instagram outage, or Twitter for Pets was down during this time,” because it became “today the internet is broken.” Individual companies that were impacted by this weren’t held to account in the same way as if it had been just that one company with their own internal outage.
Frankly, I struggle to accept that you’re going to be able to deliver a level of uptime and service availability that exceeds that of AWS. They have an incredibly large army of incredibly smart people working on these specific problems all day every day. I feel that there’s also some safety in being part of the herd, if you’ll pardon the term, when us-east-1 has a bad day, we all have a bad day. I feel there’s a safety in numbers. Is that valid?
Chris: That’s really valid. It’s amazing to me to think about how the world has changed because of services like AWS and large scale applications on the web like Twitter, and Facebook, and Google. People have a greater understanding of backing services that drive these things. It’s also surprising to me how many people are relying on us-east-1 exclusively.
That outage that you were speaking of, I was directly impacted by it. I don’t know anybody who wasn’t, but we had a significant amount of data in us-east-1 and we decided, “You know what, us-east-1 is kind of a dumpster fire from what we’re hearing, let’s move it to us-east-2.” Literally, it was just a [inaudible 00:37:57]. You just do a search, find and replace 1 with 2, move everything, done, off you go, cool.
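The “replace 1 with 2 and move everything” step might look roughly like this sketch, which copies every object from a bucket in us-east-1 to one in us-east-2. The bucket names are hypothetical, and a large migration would parallelize this rather than copying objects one at a time.

```python
import boto3

src_bucket = "example-data-use1"  # hypothetical bucket in us-east-1
dst_bucket = "example-data-use2"  # hypothetical bucket in us-east-2

s3 = boto3.client("s3", region_name="us-east-2")

# Walk the source bucket and server-side copy each object to the destination.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=src_bucket):
    for obj in page.get("Contents", []):
        s3.copy_object(
            Bucket=dst_bucket,
            Key=obj["Key"],
            CopySource={"Bucket": src_bucket, "Key": obj["Key"]},
        )
        print(f"copied {obj['Key']}")
```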
But the idea of, yes, there is safety in the herd, right, because everybody could sit there and be like, “Whoa, listen. We bought this thing and it’s down, we’re sorry.” I feel like that’s kind of a BS excuse. You didn’t engineer your application to have fewer single points of failure. If us-east-1 goes down and you’re calling something that uses us-east-1 as its sole backing service for something, that’s really a bad idea in my opinion.
A lot of the things that were down, look at Atlassian, not to pick on any one company, but a lot of their stuff was down. I remember that very distinctly because I couldn’t get to my Jira instance, I couldn’t get to my documentation, I couldn’t get to a lot of things because they’ve used somebody that utilized us-east-1 rather heavily for things, and they got bit by that. That I think is completely unacceptable from a business perspective.
You have to know where your single points of failure are and at least be aware of them. You don’t necessarily have to address them, because I do agree with you, good luck getting better reliability than AWS has had over the past five years.
I do remember a time when S3 was not as good as it is now; just like any brand new Amazon service, it’s never gonna be the best it will ever be at release time. The idea of people saying, “Hey, we can just blame AWS for everything,” I think is a very, very, very tenuous situation to put yourself in, because if your customers are going to sit there and rely on you for something, they don’t care.
If your SLA says it’s 99.999% uptime and you don’t deliver on that, that’s your penalty. You can’t pass it on to AWS. You think they’re gonna foot the bill for that? No. It’s like the cable company at that point: “Oh, we’re sorry you don’t have service. We’re not gonna give you a credit.”
Corey: Or if there is an SLA credit, it’s minor compared to the impact it potentially had on your business as well. But then again, this is not to bang on Amazon specifically; the fact that we can talk about this single issue and everyone knows what I’m talking about is a testament to how rock-solid at least some of their services have become.
Chris: I remember reading the Buzzfeed newsletter the day after that outage, and it talked very intimately about us-east-1. This is Buzzfeed here. These are the people that made listicles a thing, but they know what us-east-1 is. It’s amazing to me.
Corey: Yes. Service number seven will blow your feet off. Thank you very much for joining me today, Chris. Are there any parting comments, observations, or things you’d like to share before we call it an episode?
Chris: I think the two biggest things for me are: a DevOps journey is never finished. You’re never done migrating. You’re never done doing DevOps. You’re always doing DevOps, that’s thing one. Thing two is something I realized maybe this year: as technologists, like you, myself, and probably a lot of your listeners, we have to be more embracing of changes in our own work, our lives, everything, so that we can help our organizations change.
When we make change simple for ourselves, for everything, I’m talking about the way you drive to work, the way you log in to your systems, the shell you’re using for that matter, when you can make a change seamless and not painful for yourself, it exudes a sense of confidence when you’re trying to make larger changes throughout the organization. We have to get better as technologists at making changes and helping people embrace change.
Corey: Very well put. Thank you once again for joining me here on Screaming in the Cloud. This has been Chris Short of DevOps’ish and I’m Corey Quinn. We’ll talk to you next week.