How are companies evolving in a world where Cloud is on the rise? Where Cloud providers are bought out and absorbed into other companies?
Today, we’re talking to Nell Shamrell-Harrington about Cloud infrastructure. She is a senior software engineer at Chef, CTO at Operation Code, and a core maintainer of the Habitat open source project. Nell has traveled the world to talk about Chef, Ruby, Rails, Rust, DevOps, and regular expressions.
Some of the highlights of the show include:
Chef is a configuration management tool that manages instances, files, virtual machines, containers, and other items.
Immutable infrastructure has emerged as a best-practice approach.
Chef is moving into the next gen through various projects, including one called Compliance, a scanning tool.
Habitat is an open source project featuring software that allows you to use a universal packaging format.
Habitat is also a runtime: when you run a package on multiple virtual machines, they form a supervisor ring and communicate via leader/follower roles.
Deploying an application depends on several factors, including application and infrastructure needs.
Habitat allows you to lift a legacy application and put it into that modern infrastructure without needing to rewrite the application.
Habitat is Cloud-agnostic and integrates with public and private Cloud providers by exporting an application as a container.
Chef is one of just a few third-party offerings marketed directly by AWS.
From inception to deployment, there is a place for large Cloud providers to parlay Habitat packages into formats they already speak.
The technology landscape is ever changing. What skills are most marketable?
Operation Code is a learning-by-experience type of organization and usually starts people on the frontend so they can immediately see results.
Links:
Nell Shamrell-Harrington on Twitter
Nell Shamrell-Harrington on GitHub
GorillaStack (use discount code: screaming)
Full Episode Transcript
Corey: Thank you for tuning in to Screaming In The Cloud. My name is Corey Quinn. Today I’m joined by Nell Shamrell-Harrington, currently a senior software engineer at Chef and also the CTO of Operation Code. Welcome to the show, Nell.
Nell: Hello. Thank you so much for having me, it’s great to be here.
Corey: We’ll get back to Operation Code a little bit later. One of the reasons I reached out to you originally is to talk a little bit about how your core employer, Chef, is evolving in the context of a world where increasingly Cloud is on the rise. Historically, to my understanding, you came up through Cloud computing through your previous work as a developer.
Nell: That is correct. I was working on Blue Box which was then a Cloud provider.
Corey: There’s a lot of stories that end that way. They once were a Cloud provider and then etcetera, etcetera, things happen and here we are today.
Nell: It’s the nature of the industry. What happened was they were bought out and absorbed by IBM.
Corey: Which is, frankly, not a terrible way to go. It beats the alternative exit, the one where they were never heard from again as they wandered off into the wilderness.
Nell: Sometimes I look at my old conference t-shirts, because I’ve got a lot of them from the past six, seven years. With a number of them, I have to think, does this company still exist? No, I don’t think they do. I’ve found little historic relics of swag from companies past.
Corey: A number of folks also tend to have this, I guess, collection in a drawer or somewhere of stock options that are worth exactly the paper they’re printed on. They can wallpaper a room with them but they never turn into things. Frankly, I find the idea of having conference t-shirts from companies that don’t exist anymore a lot less depressing. After Blue Box, did you go directly to Chef or did you have a wilderness period?
Nell: Kind of a wilderness period. I worked for PhishMe for a year, which was an application company that ran on Blue Box. It was nice having that little bit of inside connection when I started working there. They produced software that could be used to send simulated phishing emails that employers could send to their employees; if an employee clicked on one, they would be brought back to our web application and get some nurturing lessons on how to avoid doing that in the future.
Corey: To some extent, I imagine that was overtaken in time by the Russian mafia who, instead of sending nurturing lessons, sent very expensive lessons, but fundamentally it wound up being a crowdsourced solution.
Nell: You learn either way, one was more expensive than the other.
Corey: After that you wound up at Chef.
Nell: I did.
Corey: How long ago was that?
Nell: That was three years ago. I’m actually coming up on my three-year anniversary in about a week, I think.
Corey: Probably before the show winds up going out there. Happy belated anniversary when the time comes.
Nell: Thank you very much.
Corey: I personally considered myself something of an expert in the realm of configuration management, but I’m going to caveat that with: Chef was the one tool out of more or less the entire public market of configuration management tools that I never touched directly. There was always, “I know all of these tools.” And then there was Chef, where I would smile and nod when people said things and never say a word.
To my understanding, these tools fundamentally, at least at the time I was heavily involved in this, all tended to do the same type of thing. In other words, they would manage what was on a box or an instance or a virtual machine or even inside of a container if you wanted to go that approach, files, services, packages installed, certain things in a certain state. Whenever they would run, they would detect deviations from their ideal blessed state and attempt to converge them back to the mean. Is that roughly accurate?
Nell: That’s roughly accurate, that’s the fundamentals of configuration management.
Corey: This was on the rise for a while. It seemed like this was the direction that a bunch of companies were heading in. Instead of having 300 administrators all doing the stuff by hand badly because humans make terrible computers, this was the shiny future that everyone envisioned. Somewhere along the way, it seems like the industry took a different path that not many of us saw coming.
These days, or even a few years ago now, you talk to companies about what they envision as the best-practice approach. The answer generally tends to take the form of, “You use immutable infrastructure, you don’t wind up touching anything on the box, you just blow it away and rebuild it.” We can debate whether or not that is the correct way of doing things, but that is an architectural pattern that has emerged with some vigor. In a world like that, what does Chef become?
Nell: Chef classically, classic Chef as I call it, is about configuration management, about having those fleets of servers that you want all configured the same. Then you wanna be able to make a change in the cookbook, as we call it, the template for that configuration, and roll it out to the entire fleet. I’d say configuration management is current gen; things like immutable infrastructure and containers, things like that, those are next gen.
The way Chef is moving into that next gen, while not abandoning people who are still in the current gen, is we have a couple of new projects; one is Compliance. No matter what you run your infrastructure on, whether it’s something you manage through a Chef cookbook or whether it’s an immutable piece of infrastructure, you need some way to make sure it’s secure.
We have a lot of customers who work in the defense industry and health care. They need some automated way to scan all of that infrastructure and be sure that certain ports are closed and that other things are configured correctly on them. We provide a tool called Compliance which will automatically scan infrastructure—real working infrastructure—for those security requirements.
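As a rough sketch of what one of those checks might look like, here is a minimal InSpec-style control; the control name, title, and port choices are illustrative, not taken from a real profile:

```ruby
# Illustrative InSpec control: verify port state on a scanned node.
control 'illustrative-port-check' do
  impact 1.0
  title 'Telnet is disabled and SSH is available'

  # Insecure telnet should never be listening.
  describe port(23) do
    it { should_not be_listening }
  end

  # SSH should be up for administrative access.
  describe port(22) do
    it { should be_listening }
  end
end
```

A profile of controls like this can then be run against real infrastructure, which is the kind of automated scanning described here.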
That is one major play that Chef is making to help expand us from just configuration management to being relevant to immutable infrastructure. No matter what kind of infrastructure you have, security is always gonna be relevant.
Corey: This would be InSpec.
Nell: InSpec is what we use to create those templates and Compliance is the actual scanning tool.
Corey: It ties into a larger offering around compliance. I would also point out that, as much fun as it is to be a hype evangelist and talk about how containers are the way and the light of the future, not every workload is appropriate for a container-based ecosystem.
Nell: Absolutely.
Corey: Further, a lot of companies are, I don’t wanna use the term stodgy so let’s pretend I did: these are large insurance companies, large banks that have been around for a century or so. They don’t change course just because someone gets up on stage at a conference and says, “This is the new way to do everything, go ahead and shove everything in a new direction, the end. We’ll see you when you get there.” These digital transformations are something that a number of these companies take very cautiously.
Given the consequences of getting it wrong, I can’t say that they’re necessarily wrong for doing that. There’s always going to be a bit of a long tail of folks who still today don’t necessarily trust virtualization, let alone Cloud, let alone containers, let alone the serverless future, etc.
Nell: I remember reading, when the new tax plan passed, that the software the IRS uses to generate tax forms was, I believe, written in the Kennedy era. They still use it because it still works, but now they’re facing having to change it. I think they might look at modernizing that a little bit. It’s one thing for a young startup to change from using configuration management on their infrastructure to containers.
That’s not gonna be an easy change even for them, but for big institutions that control major portions of the world’s economy, that cautiousness is necessary. We can debate whether it’s a byproduct of the cultures of those companies or if it’s their actual technical needs, but I think it’s a little bit of both.
Corey: I remember reading about that. I believe the IRS still uses one of the last mainframes in existence, with a kick start and a double clutch. That’s not generally something you see in too many other places. When you call up a cloud provider and ask about getting one of those installed, they look at you very strangely.
One other thing that Chef has been focusing on for a little while is something called Habitat, historically. I had the privilege of attending a meet up on it about a year, year and a half ago. I had to leave halfway through the presentation because my brain was full. At that point, I could not wrap my head around what it was, what it represented and the level of technical complexity that was being discussed by some of the very bleeding edge people who were working on it. First off, what is it? Secondly, is that still the case?
Nell: Habitat at its core is an open source project, and I’m one of the core maintainers of it. As for what it is and what you would use it for, I think one of the difficulties with conveying it is that it really is two things. The first is that Habitat is software that allows you to use a universal packaging format. We provide software where you just create your application code as you normally would. You don’t need to rewrite the application in any way.
We provide a way for you to take that application and put it in what we call a hart artifact, a certain package of the software. You could run that hart artifact on bare metal, a virtual machine, or a container—it doesn’t matter which one—at the moment, as long as it uses Linux/x86.
The real power is you can take that hart file, that package that you created with Habitat, and easily export it to Docker, to Cloud Foundry, to Kubernetes. More formats are being added all the time. Something I’ve seen when I’ve gone into other companies is that there’s a lot of fighting about what is the one true way of deploying applications.
Habitat kind of turns that on its head and says, “There is no one true way to deploy applications; it’s going to depend on your application’s needs, your particular infrastructure needs, the environments that you work in. Let’s give you a way to export it into whatever format you need.” You don’t have to worry about that when you’re developing the application itself. You can put it, no matter what it is, in that Habitat format and then export that to whatever you need.
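As a sketch of that export workflow using the hab CLI (the origin and package name here are hypothetical, and exporter availability varies by Habitat version):

```sh
# Build the .hart package from its plan, inside the Habitat studio.
hab pkg build .

# Export the same package to whichever format the target environment needs.
hab pkg export docker myorigin/myapp       # a runnable Docker image
hab pkg export cf myorigin/myapp           # a Cloud Foundry-compatible image
hab pkg export kubernetes myorigin/myapp   # Kubernetes-oriented output
```

The application itself is built once; only the final export step changes per target.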
The other thing is, along with the packaging format, it’s also a runtime. What I mean by a runtime is when you run a hart package, let’s say you’re running that on a virtual machine and you want to run multiple virtual machines running that same package. When you start them up, they form what we call a supervisor ring. What the supervisor ring allows them to do is communicate with each other.
There’s a lot of things that you can do with this communication; you can roll out configuration changes, etc. One of the coolest things it does is, let’s say we want a MySQL cluster, so we have three different VMs all running MySQL. When you’re running a cluster like that, it’s really common to want a leader/follower topology where the leader will receive all of the writes and the followers will receive all of the reads.
What happens is when you spin up those three VMs and install that hart package, once you start them, you don’t have to do anything. On their own, they will hold an election using a built-in algorithm and they will decide who the leader is and who the followers are. The other really cool thing is if the leader goes offline for whatever reason, they will automatically hold another election and elect another leader, and the rest will be followers.
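A sketch of how that might be set up with the hab CLI on each VM (the peer IP and service group name are illustrative):

```sh
# On each of the three VMs, start a supervisor, peering with another
# node so the three gossip and form a ring:
hab sup run --peer 10.0.0.1 &

# Load the same MySQL package under a leader topology; the supervisors
# hold an election on their own and re-elect if the leader disappears.
hab svc load core/mysql --topology leader --group production
```

No node is special at startup; the topology flag is what tells the ring to elect a leader among themselves.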
They don’t need a central orchestrator for the runtime like you need with a lot of container runtime solutions. The idea is to push as much as we can down into the application layer and then allow the machines that are running the packages themselves to decide how they should be run. Loosely, it is, one, a universal packaging format and, two, a particularly cool runtime that allows your applications to self-organize.
Corey: From the perspective of rolling this out to an environment, a lot of these systems and tools work very well in a greenfield-style deployment model. But when you take an existing application—let’s not pick on any ancient mainframes, let’s pick on one generation newer: PHP apps that were written 20 years ago—what does it take to take these old monoliths, these old systems that have these byzantine deployment models, and convert them to take advantage of something like Habitat? Is that a total rewrite?
Nell: It’s absolutely not. That is one of the most beautiful things about Habitat. If you have an old PHP app and you know how to deploy that app, all you would do is take your application and write what we call a plan. The plan is how that application is deployed. If you know how to deploy it manually—you take that PHP app, put it on a virtual machine, and know the commands you run to get it running—you capture those in the plan file.
That’s all it would take. As long as you know how to deploy it and capture that in the plan file, you would then package that into that hart artifact, that Habitat universal artifact, and you’d be able to instantly put that anywhere in the cloud, whether it’s a VM, whether it’s a container. It allows you to lift that legacy application and put it into that modern infrastructure without needing to rewrite the application itself.
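As a sketch, a plan file for a legacy PHP app might look something like this; the origin, package name, and paths are hypothetical, and a real plan would capture whatever deployment commands the app actually needs:

```sh
# habitat/plan.sh -- minimal, illustrative plan for a legacy PHP app.
pkg_origin=myorigin
pkg_name=legacy-php-app
pkg_version="1.0.0"
# Runtime dependencies are themselves Habitat packages, not host installs.
pkg_deps=(core/php core/nginx)

do_build() {
  # Nothing to compile for a plain PHP app.
  return 0
}

do_install() {
  # Copy the application source into the package prefix: the same files
  # you would have copied onto a VM by hand.
  cp -r "$PLAN_CONTEXT/../src/." "$pkg_prefix/www/"
}
```

The plan is just a codified version of the manual deployment steps, which is why no rewrite of the application is required.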
Corey: Is this one of those boil-the-ocean scenarios in which every part of an application needs to be managed by Habitat for it to make sense, or is it something that can be eased into more gradually?
Nell: You can ease your application into Habitat by taking one part of it. Let’s say you have a Rails application and you wanna start running it with Habitat. What I would probably start off doing is just taking the application server, whether that’s Passenger or Puma, or the web server like NGINX, whatever it is that you use, package that in Habitat, and then see how that goes, and then consider at that time moving the database to be managed with Habitat. Basically, it is something that can be eased into.
Corey: Because the general theme, as much as there could be a theme, of this podcast is cloud computing: how does this wind up integrating with the large public and/or private cloud providers that exist today?
Nell: If you export your application as a container, you can easily run it on AWS’s container service, you can run it on Azure’s container service. I cannot keep the acronyms straight; whatever the acronyms are for those services, pretend I said them.
Corey: I’m sorry, there have been three more launched since we began having this conversation.
Nell: The nice thing about it is it makes you cloud-agnostic. I could use the same Habitat package to run on AWS, I could run it on Azure, Google Cloud, DigitalOcean, all those different evolving cloud providers. It gives you that freedom to run your application where you want to run it, in the same way, regardless of which cloud you’re using.
With that said, if you’re using some really high-level AWS features—the higher level you get, the more tied to a certain cloud provider you get—there might need to be some adaptation there. But as for core VM or container functionality, you can run it in any one of those Clouds.
Corey: As the world moves to serverless, the story goes that any code that can’t complete in five minutes or less and isn’t written in one of a handful of blessed languages has no place in the world of the future. In reality, that generally doesn’t tend to play out the same way. With that said, something that’s been interesting about Chef historically is that it’s been one of the few third-party offerings that has been marketed directly by AWS. You had OpsWorks for Chef and now it’s OpsWorks for Chef Automate.
Nell: There’s OpsWorks, which is a service within AWS. If you want to run Chef Automate, which is a self-contained Chef server platform, there’s an AMI in the AWS Marketplace that you can use to spin one of those up.
Corey: Coming back to that point: if you’re in a position now where Habitat starts to more or less solve a number of these creation questions, getting software from inception to something that is able to be deployed, is there a place for some of these large Cloud providers to start providing Habitat-native platforms? Or is it something that at this point is so easy to parlay into something they already speak that there wouldn’t be a need for them?
Nell: At this point I would say the focus is on parlaying it into something they already speak. I could see at some point in the future having Habitat-native platforms, but the whole idea of Habitat is being able to take the same package and run it on anything.
Corey: This was always one of the questions to some extent in the early days of Docker. We had these instances and we could just run it there; then, increasingly, the Cloud providers came out with support for different orchestrators. These days it seems like Kubernetes has more or less owned that entire space.
There was a time before this happened where, to get anything to run repeatably in, for example, an AWS-style environment, most people were using Packer or similar to build an AMI, often with Chef as a handler to do a lot of those configuration pieces, and then the AMI that got created was passed out to auto scaling groups and the rest. This took a significant amount of time to launch new features; you would look at 45-minute deployment processes for a one-line fix.
There’s always been a little bit of a tug-of-war between do you go ahead and bake everything from scratch every time there’s any change, or do you get it most of the way there and then finish it off by having something like Chef in place making those changes? Increasingly, it seems that if you can reduce the cycle time, the argument of never having something you can log into—you just deploy a new version with the push of a button in seconds or less—starts to become much more viable.
Nell: One of my favorite stickers I’ve seen on someone’s laptop at a conference said, “Serverless just means you’re using someone else’s servers.” As we abstract, that’s wonderful, but there still is very much real infrastructure underneath that needs to run.
Corey: The nice part now is being able to pay someone else to handle these things with a team and a budget that generally far out […] what most of us would be able to put together. Oh, you’re a three-person startup? Go ahead and hire an entire team of 200 ops people with your next round of funding. That raises eyebrows; I’ve been in those rooms.
Nell: I imagine so.
Corey: To come back around to what we started our conversation with, something that you’ve been up to lately is you’re the CTO of something called Operation Code. Can you tell me a little bit about it?
Nell: Operation Code is an organization that’s dedicated to helping veterans who are transitioning from military to civilian life learn software engineering skills. We partner with code academies, or coding bootcamps, to try and establish scholarships for these veterans. We have lobbied Capitol Hill in Washington, DC to allow GI Bill funds to be used for coding bootcamps. Basically, we are a teaching organization helping veterans move into those high-paying software engineering jobs.
Corey: How did you get started in something like that?
Nell: I am the daughter of two military officers, both my father and my mother were in the Air Force. Going into the Air Force was not an option for me when I turned 18 due to some medical history stuff that I have. I grew up in military culture, I’ve always very much identified with it. I realized that I wanted to do something, if I couldn’t be in the military myself, I wanted to do something to help those who are making that transition out of it move into this world of technology that I’ve been in for the past several years.
Corey: One of the challenges, as I take a step back and look at the career trajectory that I’ve had: I went from doing a bunch of non-tech things to being in support, to working as a sysadmin, to becoming a systems engineer, production engineering, DevOps—if you won’t smack me over the wrist for that one—and so on and so forth. If I take a look back at the technical world that I walked, I, at this point, have lost sight of how I got here in a way that translates meaningfully to someone who is just starting out today.
If someone said, “How do I wind up becoming the level of cloud architect or senior systems engineer that I generally still think of myself as, technically?” I’m at a loss; I don’t have a great story to tell people of, well, just do this, this, and this. For better or worse, the road that I walked is closed. How do you bootstrap someone who’s starting from little more than an interest and a willingness to put in the time?
Nell: Knowing where to start is by far the hardest part. The technology landscape since I started, more than 10 years ago, has changed so rapidly. It feels like what skills are most marketable entirely turns over every couple of years or so. What we do when someone is just starting out, wanting to get a feel for something, is we have a partnership with—it used to be Lynda.com, now it’s LinkedIn Learning, something like that.
Basically, we give people the resources to try it out on their own if they want to and see, “Do I like this coding thing? What speaks to me about it?” It’s such a hard field to get into. I think some people come in because they want a high-paying job, which honestly is a perfectly fine reason to come into the field, I think. But that isn’t always enough to carry someone through not just the learning curve of a beginner but the learning hockey stick, as I think of it, from an intermediate developer into a senior developer.
Something else we do is we have open source projects; a major part of the role of a CTO is helping govern those projects. We run our frontend in React, we run our backend in Rails, and then we run our infrastructure on Kubernetes.
The reason we do that is all three of those are very modern, very in-demand technologies, so we give someone who wants to come in and get experience with them a chance to work with myself or with other mentors in the program on learning these technologies through contributing to a real open source project that they can then put on their resume or in their portfolio and show developers. We are very much a learning-by-experience kind of organization and we are constantly iterating on that and changing as the industry changes.
Corey: Someone comes in and they’re learning as they go, and you give them exposure to frontend, backend, and the infrastructure bits with Kubernetes. I believe you mentioned before we started the show that the Kubernetes cluster itself runs on top of AWS.
Nell: Yup, using kops, I think, is what we use to spin up that cluster.
Corey: The consensus on the proper way to run Kubernetes on AWS is clearly that everyone else is doing it wrong. I feel like, as many people as you talk to, there’s always someone with a divergent opinion. When someone comes in, where do you start them on that entire stack? Do you tackle the entire thing and see who can drink from that fire hose, to some extent?
Nell: We usually start people on the frontend because, in that case, you can see your work immediately and see the effects of your work very intuitively. That’s usually where I start someone off to just get a feel for coding, get a feel for development. We might teach them about APIs and move them into Rails. Usually, it’s only after that, unless someone comes in saying, “I want to be an infrastructure engineer.” Then, by all means, I’ll start them with the infrastructure.
Then we start introducing people to Kubernetes in particular because, in our current setup, you have to have your workspace configured for Kubernetes in order to get into one of the containers that’s running our web server and access the Rails console or access the Rails logs. There is currently a little bit of Kubernetes knowledge required for a lot of the maintenance tasks with our Rails application.
That’s something we’re looking at, asking, “Is there a better way to do this?” But it’s a way to get someone’s hands wet, or feet wet—I guess you don’t wanna get your hands wet—in Kubernetes and get a little bit of exposure to all these different technologies.
Corey: Here’s the $64,000 question, is Habitat used in some ways in these environments or not yet?
Nell: Not yet. It’s something I am looking at doing; I very much want to, if for nothing else than using it to create our Docker images. I haven’t introduced Habitat quite yet; there are a few things I wanna get more stable about our environment before introducing Habitat, which will require some more knowledge acquisition by our team.
Corey: One thing I’ve always noticed in the course of my career has been that if I think I know something, all I have to do to dispel that notion is teach it to people who’ve never heard of it before. Everything from there turns into a question of how I think about it, how it’s presented. Something that is still as early-days as Habitat presents that challenge sometimes; I can’t shake the feeling that doing so would wind up adding significant value just to the onboarding process.
The challenge when you’re working on an open source project and you’re deep in the weeds is that you forget at times what it’s like to come not only to the project new but to the project as it stands today, because when you started, it did far fewer things and it was much easier to wrap your head around. These days I can’t imagine what it’s like to come to AWS, build a new account, and get the giant list of fine print; that’s a service listing, not just legalese.
It’s difficult to get over that hump and it’s challenging because, a lot of times, very talented people forget what it’s like. One of the things I find so compelling about Operation Code is it’s giving back in a way that you don’t see very often.
Nell: Part of what motivated me to reach out was that a dear friend of mine, who’s a veteran, posted something on Facebook: when someone tells her “thank you for your service,” it puts her off a little bit. I didn’t fully understand it, but as we talked through it, I realized—because I always thought the moment you find out someone is a veteran, you say thank you for your service; I thought that was the polite thing to do.
The thing was, that was about me, me thanking someone for their service. I was saying, “Look, I know the proper response to that. You made that sacrifice and I didn’t. Thank you.” Veterans have told me it can feel like someone is saying, “better you than me.”
As I talked to this friend, I was so glad that she had this conversation with me. It revealed to me that the way for me to truly help is not necessarily thanking someone, though I do sometimes still thank people with their permission; it’s through giving them the resources they need to find a purpose in life after the military. That is a very hard transition for a lot of people to make, going from a very well-defined purpose, knowing exactly what you’re doing, to the kind of more nebulous civilian technology world. I knew I needed to do something that could directly help people in that transition and get them into those very well-paying, purpose-filled technology jobs.
Corey: It’s great to hear stories like this. It’s nice to see that people care. It’s a nice reminder that the general nature of humanity is to help other people out. Sometimes it’s difficult to keep sight of that.
Nell: Especially when you’re dealing with internet culture. I actually have Twitter and Facebook blocked by default in my main browser because, if I had a free moment, I would just turn them on and then instantly be flooded with all the negativity, the darker side of human nature. I can find that without effort, but it takes some extra effort, I think, to find the good or to see the good side of human nature right now.
With an organization like Operation Code, the joy is, once you join it, you’ll see that good side of human nature constantly, and you’re actively doing something to make the world a little bit better.
Corey: I’ll throw a link to Operation Code in the show notes. Where else can people find you?
Nell: You can find me on Twitter at @nellshamrell. I still check it daily even though I have it blocked by default; sometimes I unblock it. My personal website is nellshamrell.com, or on GitHub I’m nellshamrell. I’m pretty consistent with my username on most things. If you’re interested in Operation Code, or just wanna talk Cloud infrastructure or Habitat or anything, feel free to drop me a line.
Corey: Thank you so much for joining us, Nell. My name is Corey Quinn. This is Screaming In The Cloud.