Docker went from being a small startup, to an enterprise company that changed the way people think about their infrastructure, to now, where its relevance is somewhat minimal. The conversation is no longer at the container level; Docker has become commonplace.
Today, we’re talking to Jérôme Petazzoni, formerly of Docker. During the roughly eight years he was with the company, Docker experienced a real roller coaster ride.
Some of the highlights of the show include:
Amount of work conducted on the enterprise vs. community editions
Docker was so widely adopted because its core technology was open source
Challenge is to build a viable business and revenue model for the long run
Similarities between Docker and Red Hat open source platforms
Docker went from six people working in a garage to having a few hundred employees and $1.3 billion valuation
Changes happened, but they were gradual; the changes were necessary to be a profitable and sustainable company
Contingent of internal and external people believed that Docker was the answer for whatever problem surfaced; Docker would save you, but not always
Balancing Act: Pushing forward with a correct message and regulating enthusiasm
Networking and Docker for dummies; confusion and problems of things not working as expected have been resolved
Things will continue to shift; Kubernetes and the orchestration battle
What was once unthinkable can happen when companies push the envelope and make progress
Will who you have as your Cloud provider stop mattering? It depends.
All major Cloud providers plan to offer managed Kubernetes services and what Jérôme thinks of them
Jérôme’s opinion on whether Kubernetes will follow the same path as Docker
What does the road ahead look like for infrastructure automation? There is potential and lots of best practices in Cloud environments.
Full Episode Transcript:
Corey: This week’s episode of Screaming In The Cloud is generously sponsored by DigitalOcean. I’m going to argue that every cloud platform out there biases for different things. Some bias for having every feature you could possibly want offered as a managed service at varying degrees of maturity. Others bias for, “Hey, there’s some money to be made in the cloud space. Can you give us some of it?”
DigitalOcean biases for neither. To me, they optimize for simplicity. I asked some friends of mine who are avid DigitalOcean supporters about why they’re using it for various things, and they all said more or less the same thing. Other offerings have a bunch of shenanigans around root access and IP addresses. DigitalOcean makes it all simple. “In 60 seconds, you have root access to a Linux box with an IP.” That’s a direct quote, albeit with the profanity about other providers taken out.
DigitalOcean also offers fixed-price offerings. You always know what you’re going to wind up paying this month, so you don’t wind up having a minor heart issue when the bill comes in. Their services are also understandable without spending three months going to cloud school. You don’t have to worry about going very deep to understand what you’re doing. It’s click a button or make an API call, and you receive a cloud resource. They also include very understandable monitoring and alerting.
Lastly, they’re not exactly what I call small-time. Over 150,000 businesses are using them today. Go ahead and give them a try. Visit do.co/screaming and they’ll give you a free $100 credit to try that. That’s do.co/screaming. Thanks again to DigitalOcean for their support in Screaming In The Cloud.
Hello and welcome to Screaming in the Cloud. I'm your host, Corey Quinn. I'm joined today by Jérôme Petazzoni, formerly of Docker. Welcome to the show, Jérôme.
Corey: You were at Docker for eight years and just recently wound up leaving. That's long enough to be declared legally dead. That's effectively an entire career and a half if you go into a non-tech company. What's it like having left a company and having been there that long?
Jerome: Well, that was a first for me. All my previous work experiences had been stretches of one or two years at a time, and then I would switch to something else. I feel like I cheated a little bit with Docker because I made it last longer just by switching hats often enough, from the very early days where it was almost literally six guys in a garage. It was not a garage; it was some co-working space in the summer, but that's as close as it gets to a garage around here.
Then, going through its evolution of becoming one of the competitors to Heroku, managing a team of amazing SRE folks, and eventually the pivot to Docker, and then accidentally becoming Docker's evangelist and developer advocate. Eventually, after a few more years, I reduced the number of talks I was doing at conferences to focus on the workshops and tutorials and that kind of thing, and then, at some point, I was like, "Okay, I need to take a break from all this." All told, that took almost eight years.
Corey: During that time, Docker itself went through a fantastic rollercoaster ride. It went from this tiny startup, more or less in a garage, with a great software idea that wound up bringing a lot of attention and changing the way that people tended to think about their infrastructure, got very large and then, effectively, wound up without a stated revenue model that could carry them and continue to sustain that growth.
It went from tiny startup to enterprise company. Now, it feels like the relevance of Docker as a company in the marketplace, just eight years later, is somewhat minimal. The focus has moved on either to orchestration or to serverless technologies, things that are powered by Docker technology, but the conversation isn't at the container level anymore. Is that an accurate assessment?
Jerome: It's one side of things. I think that the people at Docker Inc. are at least aware that it's just not about the container anymore. Maybe it was in 2014 or 2015, but very quickly, the conversation, as you said, moved towards higher levels of the stack. “How do I orchestrate my containers? How do I manage the lifecycle of my container images? Which means, how do I build them? Where do I put these images? How can I be sure that this image doesn't contain some three-year-old vulnerability that was let through because we are building stuff from some antique package repository? How do I find out if my hosts are running that kind of image?”
I think Docker moved into that space pretty quickly, maybe faster than we predicted then, with the security scanning things, with Docker Enterprise Edition, etc. Perhaps one thing that confuses or annoys people is that there is a gap between the enterprise edition offering, which is pretty solid (I haven't used it a lot because I tend to have skin reactions when I use enterprise software, but the few times I did demos with it, I was positively impressed), and the open-source side.
It was like, "Okay, we nailed down the container engine. That's great. It's open source and everybody can use it and it's everywhere," and then all of the evolution happened more on the enterprise software side, which also maps the evolution of the company from this kind of open-source hero or champion to being more on the enterprise side and seemingly less friendly to open source. I phrase this as 'seemingly' because, of course, there used to be a time where 90% of the engineering people at Docker were working on open source, because there was only open source in Docker's products. Now, it's a very different split; maybe it's 50/50, maybe it's even more tilted to the enterprise side. I don't know, exactly.
That's a big difference, but there are still people working on open-source software at Docker, and I think there will be for a long time. Perhaps some people are disappointed, thinking, "Docker was so big that, by now, they should be worth like $11 gazillion," or whatever, but I think Docker is kind of comfortably making a spot in the enterprise software ecosystem now, and that's the best it can do right now.
Corey: It feels a bit challenging to look back to 2010, when this all started as dotCloud once upon a time, and see, even with perfect foreknowledge, how the world would evolve to have, I guess, changed Docker's growth trajectory. The reason that Docker was adopted as widely as it was is that the core technology was open source. It was given away to people at no charge. The challenge for companies has always been, and this predates Docker by a very long time, how do you build a viable and sustainable business full of very expensive, very talented people and have a revenue model that can drive that over the long term? Eventually, venture capitalists want to see some form of return on their investment.
Jerome: I think, first and foremost, I don't consider myself a good businessperson in the sense that I don't know how to make money out of things. If I did, my present would be very different, because in 2005, when I started my own company in France, we were the first folks to have a virtual machine hosting offering. We were selling VMs for hosting, and that was really how we wanted to differentiate ourselves.
That was a few years before EC2, and yet that didn't turn into a successful business. The company still exists and it still pays for a handful of very talented folks to do amazing things but, interestingly, none of us turned into Jeff Bezos. Disclaimer: I'm not truly a businessperson, so what I am going to say is probably a little bit simplistic, but I feel like this challenge that you pointed out, like, "How do we make money out of open source, especially when there are VCs asking, 'Hey, where is the money now?'" has been addressed by folks like Red Hat, even if saying so might make a few of my former co-workers cringe.
I feel like Red Hat's and Docker's business models are actually pretty close in the sense that, "Hey, there is this thing. It's free. You can get Fedora or CentOS without giving a dollar to Red Hat, and yet Red Hat gets a lot of money from RHEL and from services and from a lot of things that make open source awesome in the eye of the enterprise buyer." I'm aware that this is a very simple view of things, but that's how I understand it and, from what I could tell from my last quarters at Docker, my co-workers were on track to achieve similar things.
Corey: The last time I checked, Docker had a few hundred employees and a $1.3 billion valuation. As you mentioned, starting off as six people sitting in a garage, what was it like as the company transformed and effectively hit hyper-growth?
Jerome: Well, first of all, the early days were pretty much exactly as you can imagine: there were six of us sitting around a big table in a co-working space, six white dudes and not a single ounce of diversity on site. It took us some time to realize that, "Hey, maybe this might be a problem and we should address it," but, eventually, we did, and I'm glad that we did. The transformation happened in a really subtle way, of course, because you don't go from six to 500 or 600 overnight.
I don't really know if I could put specific points on the timeline because this is all a continuous thing. Even switching CEOs was like, sometimes, you could think, "Oh, my god. They're switching CEOs. There are going to be huge, deep changes." No, because the CEOs that we had at the time, I think, were smart enough not to say, "Alright, now we're going to change overnight because I'm the new CEO." No.
I remember when Ben Golub came onboard; he did a lot of round-table meetings with us to assess what the future looked like, where we wanted to reinforce, and what we wanted to change. When Steve Singh came around, I hear that similar things happened. I was not part of that because I was not physically in the office at that time, and my role in the company also evolved. It's really hard to put a finger on the specific point in time when things changed but, of course, at some point, we went from garage startup to enterprise software and, of course, a lot of people feel either unhappy or frustrated about it because it's a different atmosphere.
It's like, "Oh, my god. We have all these people in suits. What are they doing? I think they're sales, right? Yeah, and they're bringing that thing that we call revenue? Huh, interesting. That's new." A lot of changes happened, but they were pretty gradual, and I don't deny that some people probably woke up one day and were like, "Oh, crap. That's not the company I used to work for and love," but I think that these changes were necessary. I was talking about bringing in revenue, having sales, etc., but that's what you need to do if you want to turn into a profitable and sustainable company.
Corey: One of my historical criticisms of Docker was always that there was a contingent of people, and you were never in this group incidentally. You were always very even-handed, but there were folks, both internal and external to the community, who thought that, regardless of what your problem was, the answer was going to be Docker as a container. It turned into jokes, where someone tried to give a lightning talk once which was just five minutes of saying, "Docker, Docker, Docker," the entire time; that may or may not have been me.
The challenge there was that it felt like this panacea that could be poured onto any problem that you had; whatever it was, Docker was going to save you from yourself, your architecture, and your poor decisions. While Docker did unlock and unleash a number of different opportunities in infrastructure, it doesn't solve for everything. How did you feel going through that process as you saw some of the hype start to run away with itself?
Jerome: At first, I didn't notice it, and I feel bad about it because, as you pointed out, lots of people were extremely enthusiastic about Docker and would be like, "Yeah, it's going to be the best thing since sliced bread," some people inside the company and some people outside. At first, I think I mistook that for Californian optimism, where everything is awesome and, when something is not awesome, people think that it's horrible.
It took me a while to realize that some people were overdoing it, either deliberately or just because they were really convinced that Docker was going to save the world. At that point, it's a very delicate thing to say, "Okay, well, Docker is great for many things but not all of them, and I know that you're super excited about it, but let's see how we can tone things down a little bit, because it's not helping the cause in the long run if people try to replatform everything onto Docker as the first thing they do. It's probably not going to end well, and nobody wants that."
I think at some point, the answer I built for that is, "Alright, I'm not going to debate whether you can do everything with Docker or not, because you can probably do everything. I mean, if you're MacGyver, you can probably build a tire or whatever with just a screwdriver and a little bit of duct tape, so, sure." My point was more to say, "Okay, let's start with the easy things, so stateless applications, something new that's low-impact. Use Docker as a glorified package manager, because it's going to be easy and it's going to help you, because building packages is hard and boring. Then, little by little, let Docker work its way towards CI, maybe some CD, but for staging or QA or something like that. Little by little, you can assess what's going on and see what the problems are, but don't try to go too fast."
I think, internally, it has been a very difficult balancing act between pushing forward a correct message, because we don't want people to get burned, and also having some enthusiasm and being convincing and saying, "Yeah, this is awesome! Let's do it," just because that's the way it works. Me as a European looking at Californian startup culture, as I was joking about: everything is awesome and, if something is not awesome, it's probably because it's really horrible.
You have to be super-enthusiastic with your customers, with your users, with your investors, with everyone. If you're not, then people are going to think, "Oh, it's weird. That person didn't say awesome in the last two minutes, so something must be wrong." That was the difficult balancing act: having a positive message and also being able to tone things down when people were over-inflating the abilities of Docker, etc.
Corey: One of the early talks that I gave in my speaking career was sort of a breakout act for me, where I got up and gave a talk about things that I really didn't understand how Docker could address. What I expected was that people who heard this talk would excoriate me. They would tear me to pieces and say, "Ha, idiot! You don't understand how this works. Here's the answer." Instead, that talk was picked up by roughly a dozen conferences and people really started to care about it, and I was blown away by that.
I just assumed that I was the dumb one, that I didn't understand, "Networking in Docker isn't an issue; you just do this, this, and this," and, in time, that talk did have a shelf life where, now, almost everything I pointed out is no longer an active concern. There's been enough development in the space that, surprise, things that were problems three years ago aren't anymore. At the time, that was eye-opening for me; it was transformative to think, "Wait, if I don't know how this works and other people don't know how this works, are we all just kidding ourselves?"
Jerome: That talk, for the listeners: I think you're referring to "Heresy in the Church of Docker," and it's an amazing talk. I've seen it at least two times. There was a committee at one conference wondering if that talk would just be rambling, and ranting, and trash-talk, or inappropriate or whatever. I said, "No, I've seen that talk. It's great. You want it for that conference."
First, about that talk: as you pointed out, there were many things that you noticed, like, "This didn't work as expected, and Docker seems broken in that regard, and how do we do this?" and you were not the only one, obviously. It reminds me a little bit of that amazing talk about the security of systems, called something like "Unsafe at Any Speed," explaining that computer security is just starting to get somewhere. It makes a parallel with car safety where, in the past, car safety was just horrible and you would die in car accidents all the time because the car was unsafe at any speed. Then, we added airbags and car frames that deform better and absorb the shock, etc.
Corey: We've started catering to the weak. Yeah, that happened.
Jerome: Yeah, except for the fact that the weak is 100% of the population. I think for computer security, we're kind of getting there as well. The discourse is shifting away from, "Yeah, of course. What do you mean? You didn't have this 16-character password using uppercase, lowercase, symbols, emojis, and numbers, and you haven't changed it every month as we asked? Ha, no wonder you got hacked," towards something that is actually possible for normal human beings.
I think for Docker, it was the same thing. I'm glad that the evolution went faster because, at first, Docker was super exciting and useful but required a lot of extra little knowledge, like, "How do I get my networking to work properly? What if I need raw network performance because I'm doing video streaming, gaming, or VoIP?" Little by little, we addressed these things, and I would say it's kind of multiple-tiered.
The first tier is like, "Hey, this is a little recipe that I'm going to share with you. It's a hack. It's going to be weird and maybe you're not going to like it but, if you do the trick, it lets you ram through that roadblock and continue on your fantastic Docker journey." Then, little by little, we made these hacks less hack-ish. We cleaned these things up and made them part of the product and, eventually, the user experience and documentation followed along, and at some point you reach a point where, if you want to do that network thing, it's documented, it's there, and there are blog posts and explanations and all you need so that it works well.
Your talk pointed out a lot of these early hacks and early, I want to say misconceptions, but maybe mis-explanations would be better. That was great because, personally, when I watched that talk, it made me realize the areas where we needed to improve, because it's really hard, when you've been using Docker for half a decade, to have an objective look at it and figure out, "Okay, what are the pain points?" It's hard. Having people from outside do that and then point out the issues that they had was extremely helpful, at least for me.
Corey: It's always a challenge, too, when an emerging technology comes out and companies that can iterate rapidly, or are just starting out, are able to dive directly into whatever that technology is. That's exciting and it's fun, and you fail fast, and you learn things, and the technology progresses quickly. The other side of it is the large enterprise companies that some would call stodgy; others would say they have actual revenue and agreements to contend with. They have compliance concerns. They have legacy software that does not support being deployed in completely new ways without some serious rework.
I feel like, with any exciting technology, there's always going to be some form of long tail. Companies start progressing to the point where, for example, containerization becomes viable. In less than a decade, what fascinates me is that Docker has more or less gone from this thing that hobbyists and experimenters tend to use, to something that is relatively mainstream, to now, where it's progressed almost to the point of being part of the plumbing. It's not something that needs to be actively thought about in quite the same way.
Today, it feels like that decision point has moved up the stack to your selection of orchestration tooling. Down the road, potentially, even that's going to wind up being eaten as things slowly move up the stack, and things that used to be complex now become commonplace and just work. How do you see the orchestration battle playing out? Relatively recently, I believe Docker as a company wound up supporting Kubernetes as a first-class citizen for a lot of their orchestration needs.
Jerome: Just like you pointed out, things are going to move up the stack. First of all, the engine is not really relevant anymore. Yes, we're using containers, and you're probably running them on Docker. You can also use other container runtimes but, for now, most folks run on Docker just because it works, and it's kind of not relevant anymore. I think that, in the future, things will continue to shift the same way that things shifted, for instance, for hypervisors.
Even on the open-source side on Linux, it used to be like, "Are you running Xen or are you running KVM?" Both had their pros and cons and, nowadays, I don't even know which one my provider is running. I used to know, but I think they changed maybe multiple times, so I have no idea and, honestly, I don't care. I used to care because it was useful to know the intricacies and details and be like, "This is using Xen, therefore I know that this spinlock implementation is going to behave in really interesting ways, and I need to know that."
Eventually, that becomes irrelevant and part of the plumbing, and I think that each time we make a significant improvement in a given space, we just push the envelope one step further. I'm going to give an example that probably was the one that opened my eyes on this. Think about the highly-available distributed key-value stores that were, and still are, very popular when you need to store important stuff; I'm talking about ZooKeeper, etcd, Consul, these kinds of things.
It used to be the case that you had mostly ZooKeeper, and it worked, but it was kind of difficult to deploy, operate, and maintain. When you had ZooKeeper in the equation, it was like, "Great, now we have the JVM and we have this extra thing to maintain," but we needed it because we needed that highly-available key-value store. Then, I remember when etcd came along; that was, and here I'm only speaking for myself, quite something.
It was kind of a revolution because setting up etcd was super easy and, of course, we knew it was new so there'd be some rough edges, but in the long run, we could see that this would be amazing, because operations that used to be frightening, like decommissioning a node to put another one in its place, would be completely normal, routine operations on etcd. Then some people started thinking, "Hey, what if I just put an etcd server on every single one of my EC2 instances? That way, I can just connect to localhost everywhere and that's easier. I don't have a separate cluster to maintain."
At first, that seems like a good idea and then, very quickly, you're like, "Oh no, that doesn't work, because there is this Raft protocol underneath, and if I have thousands of writers, it doesn't exactly work," etc. What is really interesting is this idea of, "Let's put one etcd server on every machine." We would never have thought about that with ZooKeeper because it would just have been completely unfathomable.
I know, probably, a few people tried or maybe even did it but, for most of us, it was just unthinkable. etcd kind of pushed the envelope by moving us to the next stage, which is, "Okay, now we're going to run that stuff everywhere. It's going to be pervasive," and we are already thinking about new use cases for that thing. I think that this is progress in our space. We have something that used to be kind of experimental and special and then, at some point, it becomes mainstream and, when it becomes mainstream, a lot of people who would not touch the technology with a ten-foot pole suddenly can embrace it, and these people have new ideas that the earlier people didn't have. That's how we make progress.
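A quick back-of-the-envelope sketch (my illustration, not anything stated in the episode) of why a voting consensus member on every machine breaks down: in Raft, the protocol underneath etcd, a write only commits once a majority of members acknowledge it, so the number of acknowledgments the leader must collect grows with cluster size:

```python
def quorum(n: int) -> int:
    # Majority quorum for an n-member Raft cluster (e.g. etcd).
    return n // 2 + 1

def follower_acks_per_write(n: int) -> int:
    # The leader counts toward its own quorum, so it must still collect
    # quorum(n) - 1 acknowledgments from followers before committing.
    return quorum(n) - 1

# A 3- or 5-node cluster is cheap; an etcd member on each of
# 1,000 instances means every single write waits on 500 acks.
costs = {n: follower_acks_per_write(n) for n in (3, 5, 1000)}
```

In practice, this is why the "etcd on localhost everywhere" idea evolved into running a small voting cluster and pointing everything else at it as a plain client.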
I think that we are going to see similar things in the orchestration space, now that we have Kubernetes, which is a really solid, complete, awesome offering. We still have other stuff on the side when you have some specific needs. I think that, at some point, applications and use cases are going to appear just because something like Kube becomes pervasive, just because you can rely on the fact that you're going to have Kube everywhere.
Instead of being like, "Huh, maybe we could do this, but we need the customers to have Kubernetes and that's too small of a market to really think about," it can be like, "Okay, to use our stuff, people need Kubernetes, but almost everyone does, so let's do it." Docker did that in some ways. Kube is going to do that. I don't know what's going to be next, because I don't really define myself as a visionary person, believe it or not, but I think that this is what's going to happen.
Corey: As things like Docker, Kubernetes, etc. have become pervasive, it feels like we are on the cusp of being able to have an application and a configuration written in YAML or some other language.
We're approaching a point very rapidly where that's all it takes to deploy an application or a workload to any cloud provider. In near real-time, you'll be able to arbitrage between different providers for cost reasons or for different service offerings in different locations. Do you think that we're heading to a point where who your cloud provider is stops mattering?
Jerome: I think that it will stop mattering for some applications but still matter for others. What I mean by that is, if we look to the past: in theory, if you put your whole application in Puppet or Terraform, or use Ansible or config management all the way, your choice of infrastructure shouldn't matter too much, because it's abstracted by the wonderfulness of config management. The reality is different, because each infrastructure has its own little differences and, even if Docker helps to kind of level that field, there are still a few differences. For instance, maybe you're using DynamoDB, or SQS, or something like that. What's the equivalent if you move away from AWS?
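One common hedge against the provider-specific services Jérôme mentions is to program against a thin interface and keep the SQS or DynamoDB details behind it. A minimal Python sketch (the names here are illustrative, not from any real library):

```python
from abc import ABC, abstractmethod
from collections import deque
from typing import Optional

class MessageQueue(ABC):
    """Tiny portability seam: app code depends on this, not on a vendor SDK."""

    @abstractmethod
    def send(self, message: str) -> None: ...

    @abstractmethod
    def receive(self) -> Optional[str]: ...

class InMemoryQueue(MessageQueue):
    """Local/test backend; an SQS-backed class would implement the same two methods."""

    def __init__(self) -> None:
        self._items: deque = deque()

    def send(self, message: str) -> None:
        self._items.append(message)

    def receive(self) -> Optional[str]:
        return self._items.popleft() if self._items else None

def process_next(q: MessageQueue) -> Optional[str]:
    # Application logic only sees the interface, so switching providers
    # means swapping the MessageQueue implementation, not every call site.
    return q.receive()
```

Migrating off a provider then means writing one new `MessageQueue` subclass rather than rewriting the application, which is exactly the "some applications migrate easily, others don't" split he describes.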
Conversely, Google Cloud Platform came around with GKE which, at least until the end of 2017, was by far the best managed Kubernetes offering. That may change, of course, now that both AWS and Azure have their offerings and obviously try as hard as they can to catch up. I think that some applications will be really easy to migrate, easier than ever, and not just because of Docker; it's more like Docker moved the slider a little bit, so now we have more applications that are easier to migrate.
Obviously, if you have a big, complex application relying on APIs and services specific to a given provider, or the sheer size of your data means that moving around is, let's say, complicated, then Docker or not Docker, Kube or not Kube, that's not going to change anything. I'd say it will help a little bit, of course, but it won't be the magic wand that suddenly makes hybrid deployments and portability a thing overnight.
Corey: To that end, all of the major cloud providers have at least announced, if not rolled out, a managed Kubernetes service. Have you gotten the chance to explore those in any significant depth? Do you have any early opinions as far as which ones are wonderful and which ones are terrible, or are you holding out to see more than has already been delivered?
Jerome: I have stale, second-hand opinions. In December of 2017, I spent a lot of time with folks who had decided to go to production on Kubernetes. Back then, the big takeaway was that GKE was really fantastic and everything else sucked. That was before AWS and Azure rolled out their offerings, so I would expect that things are going to evolve. Honestly, I don't have any prediction or anything like that, because it's really hard to get an idea of not only the resources but the roadmaps and also the internal priorities of these different providers.
I think, in that case, choice is good. That means that a lot of people who wanted to have managed Kubernetes clusters don't necessarily have to move to GKE. They can also explore EKS and AKS and the other managed offerings. I don't have a particular preference myself, especially given that I haven't been on-call for three years now, and I don't have a vested interest either way. The only uptime I care about now is the uptime of my blog, which is basically static pages, so I don't really need a good cluster for that. Sorry, I don't have a nice quotable answer on that.
Corey: No, no trouble at all. It's one of those areas that's still very actively undergoing development, and I've seen companies coming out with these in new and interesting ways. It's interesting watching this continue to evolve. Do you think that Kubernetes is going to follow the path of Docker, in that people care tremendously for a while about what orchestration system they're using, but then get to a point where that's abstracted away to the point where, once again, it's part of the plumbing and no one explicitly cares?
Jerome: It's possible. I've already witnessed a few conversations about, "Hey, should we use Helm or something else to define our applications?" For some folks, it looks like Kube is a done deal, but then, suddenly, there are lots of new things to figure out. I think that doesn't make Kube irrelevant, far from it, because even if we look back, Docker is not truly in question anymore.
When we work on our applications locally, generally, there will be either Docker for Mac or Docker for Windows or something like that, so we still use Docker really closely on a day-to-day basis. With Kube, I feel like it's kind of the same thing. We're like, "Okay, we can agree and accept that we're going to run on a Kube cluster. Now, how are we going to define our applications and how are we going to move images around?" and, as I said a while ago, find vulnerabilities, etc. Now, we can work even harder on these problems, on these challenges.
Corey: Last question that I'll beat you up with: Play futurologist for a minute. What do you think the road ahead is going to look like as far as infrastructure automation, as far as deploying software from a developer laptop into a cloud environment that winds up being globe-spanning? Do you think that we're still going to see incremental improvements, or do you think there's another Docker-like paradigm shift waiting in the wings?
Jerome: First, as a disclaimer, usually my predictions and forecasts turn out to fail miserably. I don't know if there will be a big shift. There are many things kind of waiting in the wings, as you said, like serverless and IoT and blockchain, etc. So it's kind of interesting to ask, "Alright, what kind of potential do we have here?" To me, the thing that is really exciting on the road ahead is containers, generally speaking, and everything in that bigger ecosystem: Docker, Kube, everything.
The really exciting thing is that there are lots of best practices in cloud environments, like, "You should have golden images, and you should do blue-green deployments, and this, and that," and feature switches and whatever. I feel like containers give us an easier way to do that. For instance, immutable infrastructure used to be something that maybe Netflix was doing, and maybe a few other folks.
When you really dive into the trenches and ask people, that stuff is hard, and, often, it steps on the brakes of innovation, because now, each time you want to change a single line of code, it has to be baked into an AMI and then that server has to be replaced. That takes a while, and the tooling around it is huge and complex. With containers, that tooling becomes easier and faster to use. We can have immutable stuff with 10 seconds between when I change a line of code and when I get an image built and pushed onto my servers. That's exciting.
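The fast build loop Jérôme contrasts with AMI baking comes down to Docker's layer caching. A minimal sketch, assuming a hypothetical small Python web app (the file names here are illustrative, not from the episode):

```dockerfile
# Hypothetical minimal image for a small Python web app. A one-line
# code change rebuilds only the final layers, so build-and-push takes
# seconds instead of the full server-image bake an AMI requires.
FROM python:3-alpine
WORKDIR /app
# Dependencies change rarely, so this layer stays cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Application code changes on every commit; only this layer rebuilds.
COPY . .
CMD ["python", "app.py"]
```

Pushing the rebuilt image to a registry and swapping it into the running deployment (blue-green or rolling) then gives the immutable-infrastructure guarantee without replacing whole servers.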
There are certainly other things that are going to make this workflow easier. There are certainly new workflows that are going to appear, and new stuff that will seem even better than the feature switches and blue-green deployments we do today, but I don't know for sure what they are. Otherwise, I would probably be gathering a team to create a startup around that. I don't really have a good forecast on that, unfortunately.
Corey: Thank you very much for sharing your thoughts on infrastructure. One last thing before we call it an episode. You've been very active lately–or maybe for a while and I just started noticing it more recently–on Twitter, talking about a variety of topics that aren't directly tied to technology: English as a non-primary language, mental health, diversity. You're basically using a short-form communications medium to write very in-depth, almost essay-like pieces in a way that flows naturally. What sparked your use of Twitter as a platform for that type of conversation?
Jerome: I think the main reason is audience, because thanks to my involvement in the container story, I attracted a following on Twitter, and so that's a platform. One of the things that I decided to do, starting, I think, in 2015, was to use that platform–it sounds really cheesy–for social good, in a way, because if there are thousands and thousands of people willing to listen to me talk about container stuff, I might as well try to move the needle, even just a little bit, by talking about these other things that are not container-related but that matter to me.
I feel like we don't talk about them enough, in particular diversity and mental health. Speaking about French and English is more of a byproduct: a few times, I had thoughts and conversations with others, kind of musing about the differences between French and English and how concepts map between the languages, so I decided to throw that into the mix as well.
Another thing I noticed about Twitter a while ago is that, to me, it feels like a good way to consume that kind of short-form information: something that is longer than a tweet, but maybe not long enough for a full blog post. I'm pretty sure that I'm not the only person who sometimes scrolls aimlessly through their Twitter timeline like a zombie, and I think other people do that as well. I noticed that if there is a link to a blog post, perhaps I would read it but, most likely, I will star it and then perhaps read it later.
If it's a thread, it's short enough that I can invest a little bit of time into it, and, if I'm bored, I just scroll past it and it's quick. I don't need to leave my timeline. I don't need to open a web browser, which, on the phone, is not really a good experience, because 90% of the screen surface is ads and other stuff. I felt like Twitter was a good outlet for that. Twitter has many flaws that I don't even want to start talking about, but I feel like it was a good outlet for this kind of short, microblog-scale writing.
Corey: Thank you so much for appearing on an episode of Screaming in the Cloud, Jérôme. My name is Corey Quinn. This has been Screaming in the Cloud.
Jerome: Thank you as well for having me.