DevOps as a service describes what ReactiveOps is trying to do, who it’s trying to help, and what problems it’s trying to solve. Its passion for service, where human beings help other human beings, is delivered through a group of engineers who are extremely good at solving problems.
Sarah Zelechoski is the vice president of engineering at ReactiveOps, which defines the world’s problems and solves them by pouring Kubernetes on top of them. The team focuses on providing expert-level guidance and a curated framework using Kubernetes and other open source tools. Sarah's greatest passion is helping others, which encompasses advocating for engineers and rekindling interest in the lost art of service in the tech space.
Some of the highlights of the show include:
Kubernetes is changing the way people work; it offers a way to release a product, a way to provide access to it, and defined behaviors for deploying it
Any person/business can use Kubernetes to mold their workflow
Kubernetes is complex and has sharp edges; it has only recently become production-ready because its community finds and reports issues
Business value of deploying Kubernetes to a new environment: flexibility, a uniform system of management, and a context shift
Implementation challenges with workshops/tutorials: a valuable entry-level strategy for people learning Kubernetes, but the translation to production is not easy
About 85% of the work ReactiveOps does helping its customers get onto Kubernetes is spent on application architecture
If thinking about moving to Kubernetes, how well will your current applications translate? Do you want to start over from scratch?
Value in paying someone to do something for you
Using Defaults: Try initially until you realize what you need; Kubernetes gives you options, but it’s a challenging path to go from defaults to advanced
Deploying a workload between all major Cloud providers is possible, but there are challenges in managing multiple regions or locations
Cluster Ops: Managed Kubernetes clusters that ReactiveOps stands up, watches, and puts on its pager rotation, so you can continue your work without having to worry
Full Episode Transcript:
Hello and welcome to Screaming In The Cloud with your host, cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming In The Cloud.
This week’s episode of Screaming In The Cloud is generously sponsored by DigitalOcean. I’m going to argue that every cloud platform out there biases for different things. Some bias for having every feature you could possibly want offered as an added service at varying degrees of maturity. Others bias for, “Hey, we heard there’s some money to be made in the cloud space. Can you give us some of it?”
DigitalOcean biases for neither. To me, they optimize for simplicity. I polled some friends of mine who are avid DigitalOcean supporters about why they’re using it for various things, and they all said more or less the same thing. Other offerings have a bunch of shenanigans around root access and IP addresses. DigitalOcean makes it all simple: “In 60 seconds, you have root access to a Linux box with an IP.” That’s a direct quote, albeit with profanity about other providers taken out.
DigitalOcean also offers fixed-price offerings. You always know what you’re going to wind up paying this month, so you don’t wind up having a minor heart issue when the bill comes in. Their services are also understandable, without spending three months going to cloud school. You don’t have to worry about going very deep to understand what you’re doing. It’s click a button or make an API call, and you receive a cloud resource. They also include very understandable monitoring and alerting.
Lastly, they’re not exactly what I would call small-time. Over 150,000 businesses are using them today. Go ahead and give them a try. Visit do.co/screaming and they’ll give you a free $100 credit to try it out. That’s do.co/screaming. Thanks again to DigitalOcean for their support of Screaming In The Cloud.
Corey: Hello and welcome to Screaming In The Cloud. This week, I’m joined by VP of Engineering, Sarah Zelechoski at ReactiveOps—a company that winds up defining the world’s problems and solving them by pouring Kubernetes on top of them. Welcome to the show, Sarah.
Sarah: Thank you. Thank you for having me.
Corey: A pleasure. You’ve been working at ReactiveOps for a few years now, to my understanding. I have my own definition of what the company does when I get up on stage and say embarrassing things that the company then repudiates later. But there’s probably a more formalized definition of what the company does than my half-baked but very loudly articulated understanding. What does ReactiveOps do?
Sarah: Yeah. I can absolutely talk about that. I’ve actually been at ReactiveOps since the very beginning; I was employee number one. I’ve been with it for several years now and have seen it evolve. At the core of what we do is what we call DevOps as a service.
I think there are a couple of important parts in that phrase. One is DevOps; that’s the space we’re in. There’s a lot involved in that term and people will argue about it till the end of the earth, but I think it just describes what we’re trying to do, who we’re trying to help, what problems we’re trying to solve. The second part of that phrase, DevOps as a service, is the key word: service.
One thing that I’m super passionate about is service. To me, service is people helping other people. It’s not software as a service, infrastructure as a service, platform as a service, where you are interfacing with a piece of software that performs a function for you. There’s no interaction with a human being involved in those types of services. But in this particular company, what’s really important to us is human beings helping other human beings.
The form that that takes is we have a fantastic group of engineers who are extremely good at solving problems. What we do is we interface with customers and we take a look at their problems: what does your business do, what is stopping you from being productive, what is causing you to wake up at 2:00 in the morning, and how can we solve those problems? A lot of that has to do with the workflows that you use or the types of behaviors that you use with your developers and your operations people. Then some of that has to do with tooling; that’s a little part of it but not necessarily a large part of it. And we do pour Kubernetes on everything now, but that wasn’t necessarily always the case. Immutable infrastructure, infrastructure as code, those types of philosophies, I guess you could say, and those practices are what really can help you solve those problems. That’s what ReactiveOps does.
We just do it with Kubernetes right now. We help a ton of fantastic customers, a group of amazing people really—help solve their problems and then let them focus on their business.
Corey: It’s nice to hear stories like that. We have a ton of fantastic customers, yes, and three terrible ones. But it’s always fun to wind up seeing how this stuff winds up shaping out. The problem right now is that Kubernetes is one of those words that you see everywhere. It’s difficult for people to wrap their head around what it means—personally, I believe it’s named after the god of spending money on cloud services from ancient Greek Mythology—the problem is it’s not necessarily accurate.
Right now, my question is this: how bubbly or how sustainable is Kubernetes today? By which I mean, it’s an incredibly complex orchestration system. People who know how it works are very expensive and hard to find, and there are many sharp edges which will cut you to pieces as you wind up rolling this out, learning new and exciting things. How well does this map to, I guess, a future sustainable world in which Kubernetes remains relevant and is not discarded or simplified rapidly?
Sarah: Yeah. For sure. I had a really interesting conversation with somebody once when they told me that what’s great about Kubernetes is that it’s changing the way that people work. It is giving them a new framework in which to envision their workloads. It is putting a frame around what people are doing to give them context. You have certain elements of Kubernetes: the way that you release a product, the way that you provide access to it, the behaviors when you deploy it. It is generic enough and robust enough, with plenty of options and flexibility, to allow people to reframe what they’re trying to do with their infrastructure, with their deployment strategies, etcetera.
What’s really great about it is that any particular person or any particular business could potentially pick out Kubernetes and mold their workflow kind of rethink what they’re doing and express that in Kubernetes. I think that it is sustainable in the long-term because it will be ever evolving to fit people’s workloads.
One of the things that you said is that there are many, many sharp edges, and that’s very true. I think Kubernetes has been around for about four years, but personally I think only about a year and a half of that has been productive, in the sense that the systems are robust enough that people can be in production with them. The community is a huge part of what has brought Kubernetes up to spec. You say that it has many sharp edges; it’s because people are finding them, reporting them, and fixing them that it’s becoming a better product, so it will be sustainable in the long term because there’s constant improvement.
As far as people who know it being expensive, I think that is somewhat true. I think the pool of people who know it is very small right now. There have been people who have been using it and hacking on it since the beginning and they are, I guess, “experts”; there are also people who use it in their day-to-day work who are very expert in specific aspects of it. But how do you get experts without giving them time to learn it? That’s the expensive thing: you have to take a risk, put somebody in front of it, allow them to spend time on it, and let them become an expert.
If you are looking to grow Kubernetes experts from within, yes, it’s going to be expensive. You have to give them chance to get used to it, to modify workloads to work on it, to really hone their skills, and that can be expensive.
If you’re looking to hire somebody from the outside, the pool is very small. That person you’re looking to hire has spent that same amount of time, and their expert advice is worth a lot of money. I think that really, sustainability is about putting in those hours. It’s about allowing your people to get comfortable with it and to make it productive for you. It’s never going to be quick and easy. Nothing that shifts your perspective this far is going to be easy and cheap. But in the end, like I said, since it’s so flexible and it really changes the way that people work, I think it is going to be worth it and that the quality of work in the end is going to be great. I guess that’s what I have to say about that.
Corey: Which is very fair. The challenge that I see is that every time there’s a talk about Kubernetes it’s, “Oh, here is the solution to your container scheduling problem,” but you don’t have one of those; you have a culture problem, and Kubernetes will help address this. I guess, to that end, what is the business value of deploying Kubernetes to a new environment that hasn’t had something like this before? Is it effectively people focusing on their own resumes as far as what they want to do next? Is it effectively trying to keep up with the hype cycle?
Gartner and other analysts have been talking about this for a while now, and you see Fortune 500 companies diving in face first, but I’m not yet seeing an articulation of what it unlocks for those companies from a strategic or business capability standpoint. What am I missing?
Sarah: I think the first thing that Kubernetes provides is a context shift, I guess is what I should say. What you said earlier is that there is a culture problem, and you also have workload problems.
Kubernetes can tackle both of those things because it is a tool that encourages all users of a platform, regardless of their function, to be involved in the process. As far as being able to solve not only technical problems but also cultural problems, I think Kubernetes does allow for that: developers can get involved very early in the process, packaging their code, controlling its releases, and having an impact on how that service is presented. I think it helps subtly solve cultural problems by exposing its features to a larger set of engineers, I guess I would say.
As far as its business value for bigger Fortune 500 companies, I think the answer is flexibility and a uniform system of management. If you are a company that is multi-regional, one of the main problems is going to be either building out data centers or building out cloud locations. Kubernetes is going to give you a uniform system of management across all of those avenues.
You’re going to be able to interface with your systems and services identically regardless of location. I think that filters down to all different levels. If you can internally start exposing your services in Kubernetes, that will all translate out into your data center and your cloud infrastructure.
I think it’s important in these bigger companies that you have everyone on the same page. I’ve interfaced with a couple of enterprises in my career and the thing that I’ve noticed the most is that there is a huge disconnect between all the different internal teams. In an enterprise, what happens is you have the Denver team and the Philly team, or the front-end team and the back-end team; it doesn’t really matter what the teams are. What matters is that each of those teams is managed individually: they have a different set of engineers, they may have a different set of goals, they have a different experience day-to-day, and classically, they’re on different pages. If you look at older virtualization technologies, let’s say VMware, or even baking AMIs for cloud services…
Corey: Thank you for pronouncing that properly.
Sarah: My pleasure. I think that the problem back then was that even with infrastructure as code (CloudFormation, Terraform, things like that), you get configuration drift. You are creating resources differently across different teams. In the VMware world, you’re building gold-standard VM images by hand or from Bash scripts. The way that you develop them has much to do with the opinion of the person on the team who is creating the automation.
If you look at Kubernetes, the step forward is that Kubernetes’ abstractions are standard. You can give them to separate teams—the frontend team and the backend team—and the way that they have to package their software, the way that they have to distribute their software, expose their services, is all going to be the same. You’re getting a uniform service, a uniform way to manage things.
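To make that uniformity concrete, here is a minimal sketch in Python (all names, images, and replica counts are hypothetical, not from the episode) of how a frontend team and a backend team end up producing the exact same Deployment shape, differing only in parameters:

```python
# Hypothetical sketch: two different teams' services expressed through the
# same Kubernetes Deployment abstraction. Only the parameters differ; the
# structure each team must produce is identical.
import json

def deployment(name, image, replicas):
    """Return a minimal Kubernetes Deployment manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

frontend = deployment("frontend", "registry.example.com/frontend:1.4.2", 3)
backend = deployment("backend", "registry.example.com/backend:2.0.1", 5)

# Both teams' workloads have exactly the same top-level structure.
assert frontend.keys() == backend.keys()
print(json.dumps(frontend, indent=2))
```

However a team builds its software, the artifact it hands to the cluster has this one shape, which is what makes cross-team management uniform.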
I think that there’s a lot of value in that because it brings cohesion to your very large company. It brings the ability to have different departments affect each other more. If you’re a Fortune 500, it’s very important that a lot of departments are aligned, that they’re working toward the same goals, and that they can adequately judge how long it’s going to take to do something.
I think Kubernetes will be a vehicle for that, because departments will be able to understand each other when they cross-talk; they’ll be able to more adequately estimate how long it’s going to take to do something, even down to whether your services can interact properly. I think that’s huge.
Corey: I think you’re right. I think that this is an area where it’s a capability story, it’s a culture reformation story, and you’re in a position where you start to drive improvement to the way that software is built and delivered in many organizations by approaching it from what seems to be a technical perspective but really is driving modern best practices in the space. That’s a really neat and powerful thing. Let’s get into the weeds a little bit as far as implementation challenges.
Corey: A lot of the workshops or the tutorials all start with, “Okay, let’s take an application,” and we’re going to use, let’s say, WordPress, because that always seems to be the terrible application that people use to demo these things, and that’s great: “Yay, I can take an application that didn’t exist in my environment until I started this tutorial,” and, “Yay, it’s up and running inside of Kubernetes, that’s great.” Now, if anything goes wrong, I’m lost and screaming for help. But assuming all goes well, at the end of it, “Yay, I have this new thing up and running.” How well does that map to a 20-year-old legacy PHP or Ruby application that has been in the environment and is dealing with paying production customers right now, and, “Oh, we’re going to go ahead and shove that into Kubernetes, good luck”? How well does it map to that use case?
Sarah: It doesn’t map well, to be honest. There is a place for getting-started tutorials. They are the place where your engineers will start to learn it, and they may be a place where your executives start to understand it. But they do not translate at all to production operations, or the map is extremely complex, some “step one, question mark, profit” type of thing.
I don’t want to discourage people from doing up-and-running tutorials or Kubernetes-in-10-minutes-or-less tutorials, because like I said, I think that is a valuable entry-level strategy for people learning Kubernetes. But I think what companies and what executives need to understand is that when you are jumping in with both feet and you want to go into production with your 20-year-old legacy apps (and not only apps, platforms; you wouldn’t believe the complex microservice architectures that we’ve seen), people just want to translate them directly into the container world, into the Kubernetes world. That translation is not easy; it’s going to take hours.
We help people at ReactiveOps take that journey. I would say a good 85% of the work that we do in helping our customers get onto Kubernetes is spent around application architecture. It is not normally containerization; writing Dockerfiles can be challenging but normally isn’t. Writing Kubernetes resource definitions, again, can be complex but is for the most part straightforward. The hard part is getting your application architected properly to work in a cloud-native way: making sure that you have the proper architecture to allow services to talk to each other, and making sure that you have all of your resources taken into account, both dependencies outside of Kubernetes and resources in terms of what your application uses, memory-wise, CPU-wise. Have you ever had to think about that, or did you just throw your application on a giant EC2 instance and then kind of just pay for your sins with money?
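As a rough illustration of the memory and CPU questions Kubernetes forces you to answer (the workload name and the values here are hypothetical, not recommendations), here is the resources stanza of a container spec, built as a plain Python dict:

```python
# Hypothetical sketch: the part of a Kubernetes container spec where you are
# forced to answer "what does my application actually use, memory-wise,
# CPU-wise?" instead of over-provisioning a giant EC2 instance.
container = {
    "name": "legacy-app",  # hypothetical workload name
    "image": "registry.example.com/legacy-app:1.0",
    "resources": {
        # requests: what the scheduler reserves for the pod
        "requests": {"memory": "256Mi", "cpu": "250m"},
        # limits: the ceiling the container is allowed to reach
        "limits": {"memory": "512Mi", "cpu": "500m"},
    },
}

def parse_mebibytes(value):
    """Parse an 'NNNMi' Kubernetes memory quantity into an integer MiB."""
    assert value.endswith("Mi")
    return int(value[: -len("Mi")])

# Sanity check: the limit should never be below the request.
req = parse_mebibytes(container["resources"]["requests"]["memory"])
lim = parse_mebibytes(container["resources"]["limits"]["memory"])
assert lim >= req
```

Filling in those two small stanzas honestly usually means measuring the application, which is exactly the architecture work described above.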
When companies start to think about moving to Kubernetes and how well their current applications are going to translate, I think you need to think long and hard about whether or not you want to start over from scratch, to be honest. What we’ll find is that for those customers where we do translate applications directly, they immediately feel the pains and they want to re-architect in a new way that is a better fit for Kubernetes. There is value, I think, also in understanding the workflow and the framework that Kubernetes provides, so that you can see whether or not your application will do well inside it before you jump in that direction.
Corey: One of those measure-twice, cut-once type of stories.
Corey: Right now, we’ve seen all of the major cloud providers come out with a managed Kubernetes offering. You have GKE from GCP, you have AKS from Azure, and EKS from AWS. Can you compare and contrast these at all, or speak to the maturity or lack of same across these? How decent is this compared to running something yourself, bespoke, as opposed to just throwing this over the wall for a cloud vendor to handle?
Sarah: Right. I can certainly speak to that. One of the first things that I want to say is that I am absolutely a fan of hosted services. I think there’s a lot of value in paying someone to do something for you if they know exactly what they’re doing. Because if you’re questioning it and you don’t know if you can afford it as far as the people that you’re going to have to put into it, the hours that you’re going to have to put into it, what pains it’s going to cost you, it may be a good idea to trust somebody who knows what they’re good at. I would say that about any service.
To me, GKE, AKS, EKS: fantastic. They have put user-friendly abstractions and automations around standing up Kubernetes clusters that will take some of the pain away if you don’t have people who are completely dedicated to running operations and Kubernetes. To be honest, building clusters is not the interesting part. The interesting part is solving those challenging business-level decisions and the application architecture that you are going to have to put on top of Kubernetes to run your business. I think that any operations person, any engineer, would love to give that away (I guess that’s probably not true; there are plenty of people who like control), but if it’s an uninteresting problem that is solved well, giving it away to somebody else so that you can focus on harder, more interesting, more time-consuming problems is a good investment, in my opinion.
Now, as far as the current offerings go, GKE has been around the longest. It is, in my opinion, the strongest candidate. Google has a lot of experience running this tool, and they have put a lot of thought into the niceties and into the sharp edges that they can round out for you. There is a lot of magic in GKE that, if you are a controlling engineer, you may not like, because they kind of hand-wave away a lot of the complexities for you. I think that’s the idea, though; that’s a managed service. They’re starting to add more complex options, allowing you to assign certain network addresses to pods now and things like that, so you can get a little bit more advanced features there than you used to. But we have many, many happy customers on GKE. I would recommend it highly.
AKS, I haven’t had hands-on experience with. It is also a solid offering and it’s been around for a while. I think the challenge that most people have with AKS is they’re just not familiar with Azure. If you are familiar with Azure, hosting Windows containers may be an interesting option for you. I think that a lot of people are enticed by AKS and its free credits. I encourage people to try it out. I think that it’s certainly worth investigating if you’re looking at Azure.
EKS is brand new to the field. I think that AWS realized that they could provide a managed Kubernetes offering, as there were already a lot of people building Kubernetes clusters on AWS. I think right now it’s very rough, it’s very new. That doesn’t mean AWS won’t mature it and it won’t get better over time. I just think that currently there are better options. They’ve got Fargate, which a lot of people are really liking. A lot of people are on ECS. I think if you’re on ECS, you know what? It’s been around a while already; if it’s working for your use case, don’t make the switch yet.
I think if you are running clusters on your own, so if you’re using Kops, Terraform, or something to create your own clusters, you really should think about your use case. Are you doing anything interesting or are you just using the defaults? If you’re using defaults, it’s probably a good idea to let somebody else do that work for you. If you have advanced use cases (networking, hooks that you need on the nodes when they come up, anything like that), I think it may be worth looking into running clusters on your own with Kops or Terraform.
Corey: What does it look like when you start off using defaults? Generally, I view defaults as a form of best practices, where if you don’t have a compelling reason not to go with them, go ahead and run in this direction. For the first few months, it may make perfect sense to let someone else do that heavy lifting. Then you need to start differentiating, “Oh, we need these capabilities that are not in this managed offering.” How is that migration to running your own? Is that difficult, straightforward, or somewhere in between?
Sarah: I think it can be difficult. Everything seems to work well until you get your real 20-year-old legacy apps in there and you realize what you need. The transition is that you need somebody with an advanced understanding of the problem. Usually, what happens is that you have an operations person who is responsible for your cluster. They’ve been asked to onboard their workloads and they realize the problem; they’re potentially familiar with the issue and they go exploring.
What I normally see happen in the community is that people in that position will show up in the public Kubernetes Slack and start asking for help. Then what they’ll find is that they’ve opened a giant can of worms, and the problem that they’re seeing can only be solved by configuring something differently, customizing settings, or using different overlays; there are so many different options in Kubernetes. I always liken Kubernetes to ICQ, if you remember that instant messaging client.
Sarah: ICQ was fantastic except it had more options than you would ever see anywhere else. It was the instant messaging client for somebody who wanted to do something weird. AOL instant messenger was the default. It was just easy, you talk to a person and that was it.
Kubernetes is very advanced and it gives you options to tweak almost anything. Because it’s an open source project, by modifying the source code you can tweak literally everything. But I think it’s a challenging path to go from defaults to advanced, and it’s only something that you can learn by getting in there, getting your hands dirty, and figuring out what your use cases need.
Corey: Which could really be said to apply to almost anything: hiring an expert is generally the right direction to go in when you care about having something done right. I think most of us have had a home improvement project where we start off from the perspective of, “Oh, I can go ahead and do it myself. How hard could it be?” Only down the road do you realize that now you’re going to pay three times more to hire the person you should have had the good sense to hire in the first place.
One thing I want to circle back on, given that all of these major cloud providers are offering a platform-based Kubernetes experience: is there any validity to the commonly trumpeted idea of, “Oh, I have this workload that I can now seamlessly deploy between all of the different major cloud providers”? That always felt to me a bit like snake oil, but you see this more often than I do. Is that a capability people are taking advantage of, or does it come at a cost that generally isn’t worth it?
Sarah: The cost that I think a lot of people don’t see is the cost of supporting dependencies. You’re not going to be able to fit everything, or throw everything, into a Kubernetes cluster. Databases are an example I can think of; any key-value store or queueing is a little bit challenging. Not everything has a good use case for Kubernetes.
What you’re going to have to do is tie your Kubernetes deployments into external dependencies. Where do you house those? Do you need to then keep a customer database in each cloud? Do you need to have asset caching everywhere, or do you need to use a third party? Those are, I think, the challenges that people don’t think of.
Because yes, you can absolutely run Kubernetes on every cloud platform, sure, and you can have your CI/CD system deploy your services to those clusters, absolutely. But then, how do they get data in the backend? How do you address them with DNS across multiple clusters? How do you make sure that maybe three different regions are all deployed at the same time? There are a lot of challenges in managing multiple regions or multiple locations that a lot of people don’t realize until they get there. Yes, you could easily run your workloads in any cloud platform but there’s a much more complex problem when you go to present that to the world, and I think that’s the challenging part.
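A small sketch of where that multi-cloud complexity actually lives (all cluster names and endpoints here are hypothetical): the Kubernetes side stays uniform, but the dependencies that live outside the clusters have to be wired up per location:

```python
# Hypothetical sketch: the same workload deployed to three clusters, with the
# external dependencies (here, a database endpoint) that do NOT live inside
# Kubernetes injected per location. The Kubernetes part is uniform; wiring up
# the backends per cloud and per region is the hard part.
BASE_ENV = {"APP_MODE": "production"}

# Per-cluster endpoints for dependencies outside the cluster (hypothetical).
DB_ENDPOINTS = {
    "gke-us-east": "db.us-east.example.internal",
    "eks-eu-west": "db.eu-west.example.internal",
    "aks-ap-south": "db.ap-south.example.internal",
}

def env_for(cluster):
    """Merge the shared environment with the cluster's database endpoint."""
    env = dict(BASE_ENV)
    env["DATABASE_HOST"] = DB_ENDPOINTS[cluster]
    # Render in the name/value list form a container spec expects.
    return [{"name": k, "value": v} for k, v in sorted(env.items())]

# One identical workload, three location-specific environments.
manifests = {cluster: env_for(cluster) for cluster in DB_ENDPOINTS}
```

The container spec is the easy, portable part; deciding what `DATABASE_HOST` points at in each cloud, and keeping those backends in sync, is the problem that only shows up when you get there.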
Corey: It gets back to the idea of undifferentiated heavy lifting, where you wind up having to rebuild a lot of things that you get natively but that are implemented differently across all these different providers. An easy example is load balancers; they all tend to behave slightly differently, so when you go down that path, rolling your own begins to make sense. There are also the data transfer charges: “Hey, let’s save 20 cents per container,” but it cost me $2,000 to move over the data that container works on. That starts to be a bit of a challenge as well, just from an economic perspective.
Corey: Changing gears a little bit, one of the interesting things about ReactiveOps is that the company is entirely remote. There’s no office for me to come into once a week and ruin; I can’t irritate people with my loud mechanical keyboard typing except on conference calls. You’re the VP of Engineering there, so what’s it like running a completely distributed team?
Sarah: I quite like it. There are certainly challenges, although I think, because ReactiveOps has been fully remote since its inception, we’ve gotten pretty lucky about building patterns early. There are all the classic pitfalls of being fully remote: you don’t have a lot of face time with people, and you have to find effective ways to communicate. Having people on different schedules throughout the day could potentially be challenging if you’re working together on projects. But I think we combat that with cultural behaviors, I guess I want to say.
First of all, we work fully in Slack. I think that Slack provides us with a very nice asynchronous but also at the same time synchronous location to aggregate our conversations. When I say that, I mean, if you’re actively working with somebody on a problem, you can at the same time be chatting, you can both be there at the same time. But then asynchronously, if I leave you a message and I’m on the East Coast in the morning, 6:00 AM and you’re West Coast, I’m not expecting you to pick that up until you come online later on.
I think we have patterns of using Slack where everybody is expected to set their notifications well, etc., to make sure that we are both present with each other but also respectful of boundaries, and I think that works well.
Another behavior that my company takes part in that I think works well for our fully-remote team is using video chat for ad hoc meetings. What’s fantastic is that we replicate the behavior of being in an office by allowing anybody to join in. Imagine you are having a conversation at your cubicle with somebody about a technical topic. In an office, somebody could overhear you, become interested, and come join the conversation. What we do is drop a video chat link and say, “We’re going to have a conversation about CI/CD,” something like that. What’s great is that anybody can join in by just clicking on that link. We kind of replicate that idea of being able to jump into conversations ad hoc and to join with one another to brainstorm and just talk shop, which is great.
I think for me, the challenge of being VP of Engineering is that I need to connect with all the engineers on the team to make sure that they’re getting what they need to be successful. The only way I think that’s possible is by having really excellent face time via one-on-one video conferences.
I try to check in with my team every other week, just to see how they're doing, what projects are stressing them, and if there are any tools I can provide. What's really great is that every time I come out of one of those weeks where I've talked to everybody face-to-face, morale is at its highest and people really feel connected to their company and like part of the team, which I think is great.
Corey: One of the more compelling aspects of hiring remotely for a company like this is that it gets away from this terrible anti-pattern where, as a culture and as an industry, we have been so disruptive that we have taken a job that can be done from literally anywhere and created a land crunch over eight square miles located in an earthquake zone. Of course, all the best engineers are in San Francisco; just ask any engineer who lives in San Francisco.
It seems like getting away from that, being able to have engineers sitting in places that people might, I don't know, actually want to live, is definitely a compelling advantage. But it seems that companies are extremely reluctant to pursue this type of model except for one-offs where it's, "Oh, yeah. We're entirely co-located in the office, except for Todd." Todd is a bit of a special unicorn, and Todd winds up feeling like a third-class citizen, in some cases completely locked out of all decision making.
I'm curious how you see this manifest, not just at ReactiveOps: is this a model that other companies could effectively cargo cult, or is it the sort of thing where there needs to be a commitment to it from a culture perspective as you're building the company initially, and if you didn't do that, then give up?
Sarah: I think it's very challenging to create a remote-friendly culture when most of the employees are co-located. Like you said, if Todd is off by himself, he is a lesser being, right, because he's excluded from all of the conversation that happens locally, and he won't get invited to meetings, etcetera.
I think that a company would have to have a serious commitment to supporting remote engineers and integrating them into the team. There would have to be serious equity happening to make that a reality.
I think that there is so much value in allowing people to live across the country, in places where they're happy, where their family is, where their interests are. You're going to get people who are, in my opinion, more sustainable: people who are doing this job where they are happy and how they are happy. I think those employees are more dedicated, generally more flexible, and interested in making it work.
I, for example, live on a small farm in Western Massachusetts and there are not very many tech opportunities around here. The fact that ReactiveOps lets me work from a place where I’m happy and I can go pet my horses at lunch is extremely important to me and thus, I am very dedicated and loyal to ReactiveOps. I think that attitude is really important.
If I were to go to San Francisco and work at a job where there were challenges, or I wasn't necessarily happy with what I was doing, there would be a million other places for me to look for a job. I would be pulled in a lot of different directions, and I would probably be miserable with all the traffic and having to live in a place that I didn't necessarily enjoy.
I think that companies need to start really thinking about the sustainability in the long term. The tech field is not what it used to be. It used to be you worked at IBM for 30 years. Now, people jump around every two-ish years, maybe less. I think that speaks to a lack of sustainability. I think that’s something that the remote culture, a fully remote culture, or the remote-friendly culture can really do for you.
Corey: Speaking as someone who has a home office in San Francisco proper (because apparently I have zero grasp of economics), I agree with everything that you just said. It's incredible watching how some of my colleagues in the city tend to hate their commute; they're frustrated with their company, they've always got one foot out the door. Being able to work with companies that have a perspective of, "Yeah, anywhere you want to sit on the planet is generally okay, feel free to let us know how we can help," where everyone is remote, is the only way I've really ever seen a remote culture work.
Even in companies that have a few offices in several cities, there is always a hierarchy of time zones, if nothing else, where the company tends to run on headquarters' time, which invariably leaves some people feeling like they're not involved or invested in decision-making in a reasonable sense. How do you find that being fully distributed impacts hiring?
Sarah: I think the fact that we are fully remote is very enticing to people. We are not seeing a lack of candidates based on being fully remote, although I have certainly seen some resumes where people said, "Not open to remote work," which is mind-blowing to me. Hiring, I think, is a challenge because ReactiveOps is currently a bootstrapped startup. We work hard to make this company a reality, and we don't necessarily have a safety net. As such, we pay for talent as much as we can. We try to pay our engineers very well, but we cannot compete with companies located in New York City, San Francisco, or any of the tech hubs, because we just don't have that funding.
What's really frustrating to me about remote hiring is that I could potentially hire somebody who lives in San Francisco because they happen to be there, and I think their talent is fantastic and I would love to have them on my team, but unfortunately I cannot pay enough to support them. There are all kinds of companies that would tell you remote employees who live in places that don't have a high cost of living should get paid less. But in my opinion, if you are doing the same engineering work, you have the same skill, and you are doing the same job, you should be getting paid the same amount. Our remote employees get paid the same, and that makes it very challenging.
Corey: That's a completely valid thing to say. It's right up there with, "Well, you choose to live somewhere else, so we're going to pay you less," and right along there with trying to determine what someone should get compensated based upon their own lifestyle. It's, "How many dependents do you have? We'll adjust your pay accordingly." You see that with the military, and precious little else, because that's a terrible pattern. It just feels like going with the idea of, "Oh, we're going to pay you based upon where you physically sit, never mind the fact that the work you do does not change based upon your location," is a toxic attitude. I want to say, just from having seen it from the outside for about a year now, that it's one of the more compelling aspects of ReactiveOps.
Sarah: I think that you're exactly right. I'm very passionate about the fact that we are doing the same work, and if we were to over-compensate people who live in places with a high cost of living just because of where they live, we would be doing a disservice to the rest of our engineers. It's unfortunate, but personally, I think it's not a good thing to contribute to that problem, because there are people in San Francisco who can't afford to live there, who are making a good amount of money but are still scraping by just because the economic situation is so bad.
Remote hiring, I think, unfortunately right now is really hard when you're up against the places that are hiring locally. I think we're doing a really great job of encouraging the rest of the country to open itself up to remote work. I saw an article the other day that said Vermont was going to pay 100 people $10,000 to come work remotely in Vermont. I think that's fantastic, because Vermont has a tourist industry and not much else. If they can encourage people to come work from their state, they're going to be better off in the long term, and I think that's an attitude we all need to start having.
Corey: Oh, I agree. I used to live in Vermont; it was a heck of an experience. I didn't have quite the same advantages that you do living in Massachusetts, where a Boston street map looks an awful lot like a microservice diagram, so you feel right at home. But yeah, I spent most of my life growing up in Maine, where there is no economy, there is no industry. The biggest challenge, and the reason I had to leave, was that even if I managed to find a job, great, but if that dries up and blows away, I have to move.
I’m very interested in seeing companies continue to adopt these patterns where people can live some place that appeals to them and not have to be constrained based upon where their employer chooses to put an office building. That tends to be something that I think that the entire sector could benefit from. Is there anything else that you want to talk about? Where can people find you? I want to make sure that people can read more about your thoughts.
Sarah: Yeah, sure. People can certainly find me on Twitter @szelechoski. I love to have conversations about engineering value. I'm looking to talk more about that both on my Twitter and at conferences this year, and about the stuff that we talked about earlier, which is that if you really want to get into these new tech tools, this Kubernetes and cloud native and all that kind of stuff, there's a lot of investment that you have to do.
Even when we talked about remote work, the theme here is that we need to protect and advocate for our engineering resources, and that we need to start valuing that work more, so that we can not only be more successful in the implementations we go through, but also provide sustainable careers for people, so that we are not chewing people up in the tech sector and spitting them out, and so that we don't have people migrating away from tech because of burnout. If anybody wants to talk to me about those types of things, those are the types of conversations that I would love to have with people.
The only other thing that I would like to say before we go is that we talked a lot about Kubernetes here, and I think there are probably a lot of people out there who are interested in how their particular situation can benefit from Kubernetes.
Honestly, I think there are probably a lot of people listening to this podcast who have tried and who are trying to make it work. Maybe this is just the plug for ReactiveOps, but we are starting to see more and more people adopt Kubernetes and the patterns that it encourages, and those people are just looking for guidance and advice. We're starting a new engagement type at ReactiveOps, which is not, "We're going to help you from beginning to end; we're going to lift and shift you from EC2 into Kubernetes," but something more mature, for people who say, "You know what? We've got Kubernetes clusters, we've tried it, we've got a couple of services going on it, but we're scared to death of supporting it. We don't want to run pager for it; we're worried about what happens if, in the middle of the night, there's a question we can't answer."
We want to help customers and companies who want to make this work, who have engineering resources, and who have people who are interested in continuing to solve their own problems. We're starting something called ClusterOps, which is managed Kubernetes clusters: we stay on the map for you, we watch them, we put them on pager, you put your workload on top, and you and your engineers can continue that amazing work without having to worry at night that your clusters are going to fall over. I think that's an interesting story. If people are interested in talking about that, they're welcome to reach out to me.
Corey: Wonderful. Thank you so much for taking the time to speak with me today. It’s always great to have people who are doing interesting things in not just giant cloud companies but also smaller businesses that have built a workable customer base and market niche in the shadows of the giant providers that are dominating the landscape these days.
Sarah: I appreciate that, I really do. I think this is a great opportunity to talk about what little companies can do and how we can all value the engineering work that people are doing out there to make these technologies available to people.
Corey: Absolutely. Sarah Zelechoski, VP of Engineering at ReactiveOps, I’m Corey Quinn and this is Screaming In The Cloud.