Do you understand how tabs work? How spaces work? Are you willing to defeat the JSON heretics? Most people understand the power of the serverless paradigm but need help putting it into a useful form. That's where Stackery comes in, treating YAML as an assembly language. After all, no one programs processors like they did in the '80s with raw assembly routines, and hardly anyone programs in C anymore; everyone is using a higher-level language.
Today, we're talking to Chase Douglas, co-founder and CTO of Stackery, which makes serverless acceleration software where levels of abstraction empower you to move quickly. Stackery has an intricate binding model that gives you a visual representation, at a human logical level, of the infrastructure you've defined in your application.
Some of the highlights of the show include:
Stackery builds infrastructure using AWS best practices for secure applications
What's a VPC? A way to put resources into a cloud account so they aren't accessible outside of that network, while anything inside the network can talk to each other
Lambda layers let developers create one layer (e.g., bundling Git binaries) that includes shared functionality and attach it to all their functions for consistency and easier management
Git is an open-source amalgam of different programming languages that has grown and changed over time, but it has its own build system
Stackery created a PHP runtime for Lambda; in most cases, though, you don't want to run your own runtime - leave that up to a cloud service provider for security reasons
Should you refactor existing Lambda functions to leverage layers? Not all at once; serverless shouldn't force you to rebuild everything you've already built, so adopt layers as you next touch shared code
Many companies find serverless to be useful for their types of workloads; about 95% of workloads can effectively be engineered on a serverless foundation
Trough of Disillusionment on the Gartner Hype Cycle: Stackery wants to re-engage and help people who have had challenges with serverless
Is DynamoDB considered serverless? Yes, in the managed-servers sense: no provisioning, patching, or replication to manage
Purist (being able to scale down to zero) versus practical approaches to the definition of serverless
Full Episode Transcript:
Corey: This week's episode is sponsored by Datadog. Datadog is a monitoring and analytics platform that integrates with more than 250 different technologies including AWS, Kubernetes, Lambda and Slack. They do it all, visualizations, APM and distributed tracing. Datadog unites metrics, traces and logs all into one platform so that you and your team can get full visibility into your infrastructure and applications.
With their rich dashboards, algorithmic alerts and collaboration tools, Datadog can help your team to learn to troubleshoot and optimize modern applications. If you give it a try, they'll send you a free t-shirt. I've got to say, I love mine. It's comfortable and my toddler points at it and yells, "Dog." every time that I wear it. It's endearing when she does it and I've been told I need to leave their booth at re:Invent when I do it. To get yours, go to screaminginthecloud.com/datadog. Thanks to Datadog for their support of this podcast.
Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Chase Douglas, co-founder and CTO of Stackery. Welcome to the show.
Chase: Hey, Corey. Glad to be here.
Corey: It's been a busy week so far, to put it lightly. Before we dive too far into that, let's talk a little bit about Stackery. First, what's a Stackery?
Chase: Yeah, great question. Stackery is serverless acceleration software. We come in and fill all the gaps in the software development lifecycle as you build your serverless applications. This goes beyond just the things that you need to build your first Lambda function or your first API. It's great at that, but it goes into, how do you begin to use that smorgasbord of AWS services that we're all becoming more […].
How do you piece them together into real-world applications? How do you collaborate with your team so that they all can work together with a high level of velocity? The great thing about serverless is, as a technology, you can build and ship things faster than ever before, but when you have a team that actually has the right tools to do that, it's amazing how quickly you ship the most incredible features, the most incredible products using this technology.
Corey: That's a reasonable introduction to it. That said, in the interest of full disclosure, I've been playing around a little bit with Stackery over the past few weeks and I've got to say it's interesting. It's not quite clear to me, I guess, from a perspective of looking at the ecosystem around this entire space where, not to call you a framework, but there's something over 15 different frameworks that wind up wrapping around Lambda functions and serverless applications that all purport to make it something a human being might be able to work with. What is it that I guess makes Stackery different from the direction that most of these things are going?
Chase: Stackery sits on top of the frameworks. Today, we import Serverless Application Model (SAM) applications from AWS, and we import Serverless Framework applications from Serverless. We really help with everything above that framework level. As soon as you start to use these frameworks, you might start out with an API gateway that is connected to a function to do this little "Hello, World" app.
But when you realize that, "Well, what I actually need to do is I need to have this function in my API to be able to connect to my database that resides inside of my VPC and, oh, by the way, when I'm in production, it's this VPC and this database but when I'm in a development environment, I just want to spin up new versions of those. How do I manage the passwords to access that database?"
You've heard of some of the new integrations with Secrets Manager this week. How do you parameterize things across your environments? All of a sudden, you end up with this gigantic template. You thought you were building a simple serverless application; now it's a thousand lines or more of straight YAML, and this is what leads to people tweeting about how they've become YAML engineers, usually not very happily.
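For a sense of how that template grows, here is a hypothetical SAM fragment for the scenario described above: one function with an API route, a VPC-bound database connection, and a Secrets Manager reference. Every resource name and value here is illustrative, not from an actual Stackery project:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Parameters:
  EnvironmentName:
    Type: String          # e.g. "dev" or "prod", used to parameterize per environment
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/index.handler
      Runtime: nodejs8.10            # current Node.js runtime as of this recording
      Events:
        GetItems:
          Type: Api
          Properties:
            Path: /items
            Method: GET
      VpcConfig:                     # needed so the function can reach the database
        SecurityGroupIds:
          - !Ref DatabaseSecurityGroup
        SubnetIds:
          - !Ref PrivateSubnetA
          - !Ref PrivateSubnetB
      Environment:
        Variables:
          DB_SECRET_ARN: !Ref DatabaseSecret   # looked up from Secrets Manager at runtime
```

And this is before the VPC, subnets, security group, database, and secret are themselves defined, which is how a "simple" app ends up at a thousand lines.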
Corey: Job descriptions: must understand how tabs work, must understand how spaces work, must be willing to defeat the JSON heretics.
Chase: It's this thing where I think a lot of people understand that the serverless paradigm is totally powerful, but they need something to help them corral it into a useful form. We come in and we treat that YAML as a sort of assembly language. No one programs processors today like they did back in the '80s with raw assembly routines. No one even programs in C anymore. Everyone is in a higher-level scripted or other programming language.
Corey: Meanwhile, someone writing assembly has a single tear fall down their cheek as they listen to this.
Chase: I apologize. I'm sorry. When you figure out the assembly instruction for uploading to S3, please let me know. It's all about levels of abstraction. The levels of abstraction, really, are what empower you to move quickly. What Stackery does is take your framework's template, ingest it, and understand it. It has an extremely intricate binding model to then give you a visual representation, at a human logical level, of the infrastructure that you've defined in your application.
When we were talking about that app that has a function talking to a database within a VPC, the VPC alone is at least 20 different resources under the covers. There's the VPC itself, but then there are subnets, and route tables, and gateways, and this, and that.
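To make that "at least 20 resources" point concrete, even a trimmed, illustrative CloudFormation sketch of a two-subnet VPC involves a pile of interlocking resources (names and CIDR blocks here are made up):

```yaml
Resources:
  Vpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  PrivateSubnetA:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.0.0/24
      AvailabilityZone: !Select [0, !GetAZs '']
  PrivateSubnetB:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref Vpc
      CidrBlock: 10.0.1.0/24
      AvailabilityZone: !Select [1, !GetAZs '']
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref Vpc
  SubnetARouteAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref PrivateSubnetA
      RouteTableId: !Ref PrivateRouteTable
  # ...plus an internet gateway, NAT gateway, elastic IP, routes,
  # security groups, and the same again for each additional subnet
```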
Corey: I still feel like you could wind up taking people who have been working with this stuff for 10 years, sit them down in front of a whiteboard and say, "Draw out all the moving parts in how a VPC network ties together," and still only see about 20% pass rate. Personally, I think I might be able to get a 7 out of 10 of the various moving parts, but it's not something that I would be confident about to the point where I'd stake my life on it, which means I'm, of course, vulnerable to someone who's incredibly convincing and lying.
Chase: The way we try to play it is that we help you build up your infrastructure so that it uses best practices, the things that come straight out of AWS. They've got their cookbooks on how to build with best practices for secure applications of all types. There's a proper name for it that I don't have off the top of my head but, as we went through, we're an advanced-tier AWS partner. We had to make sure that we followed every one of their guidelines, and they talk about how to do some of these things, how to provision VPCs in the right way to ensure that traffic is contained and your databases aren't accessible from the internet and so on.
We encapsulate all of that into just a simple–at a human level, people tend to understand. What's a VPC? It's a way of putting resources into my cloud account that aren't going to be accessible outside of that network, but anything I put in that network can talk to each other. That's what we at Stackery try to do; we understand, we ingest, we manipulate all of this YAML goo and turn it into an interface that humans understand using a visual diagramming tool.
One of our customers, they sent us a message one day. They said, "We realize that we needed to whiteboard something," some new feature of their product and how they were going to implement it, and then they realized that it was faster for them to drop into Stackery, drag some new resources around, wire them up than it was to actually get out the markers and start marking on the whiteboard. They looked at their existing whiteboard, which was full of other stuff, and they were like, "It's not even worth erasing this. Let's just drop into Stackery and just wire it up and then we'll be done with it." That's the speed at which we enable the customers using Stackery to build their server-less applications.
Corey: All of which makes a fair bit of sense and is definitely something that I think that, as we start talking to companies that are dealing with this at a larger, more–shall we say–process-mature level, is a definite need. That said, that's probably not the most interesting thing to talk about today. Let's talk about Lambda layers.
Chase: Probably not. Yeah, we're super excited about Lambda layers. We are a launch partner for this new expansion of Lambda. It really falls into two parts. The first part is, as a Lambda function developer, there are times when I have a bunch of common code that is used and accessed by a bunch of functions in my application. A great example is in Stackery itself: as we help you manage your templates and your applications, those are all stored in various Git repositories, whether GitHub, GitLab, or AWS CodeCommit.
What we do is, in each of our backend functions, they need to be able to run the Git commands. We compiled a little version of all the Git commands that we need for our backend, and we include that in every one of these functions. Now, with layers, we can create one Git layer that includes all this functionality and just slap that into all of our functions. This helps us in two ways. The first is it gives us consistency and a nice management mechanism for all of our functions, but it also means that we don't have to worry about how we're packaging two different types of code in our functions.
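In SAM terms, the pattern described here looks roughly like the following: define the shared Git binaries once as a layer, then attach it to each function. The names and paths are illustrative, not Stackery's actual template:

```yaml
Resources:
  GitLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      ContentUri: layers/git/        # contains bin/git compiled for Amazon Linux
      CompatibleRuntimes:
        - nodejs8.10
  BackendFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/index.handler
      Runtime: nodejs8.10
      Layers:
        - !Ref GitLayer              # layer contents are unpacked under /opt at runtime
```

Updating the shared code then means publishing a new layer version and bumping one reference, rather than repackaging every function.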
Git is open source, written in C, Perl, and more; it's actually an amalgam of different programming languages under the hood. It's kind of interesting how it works, but it's built one way, and all of our…
Corey: I assumed Git was a text-based massively multiplayer online RPG, but that might be my old bias creeping in there. The final boss is super hard.
Chase: It's amazing how much it has grown and changed over time. It's a gargantuan effort, one of the technological wonders of the world, but it's got its own build system, and I don't want to deal with that when my functions are all written in Node.js and I just want to run npm install and get them off and running. It helps with that. But the second part of the whole Lambda layers functionality, which is really interesting, is that you can create your own runtime now.
One of the things that we've done as a launch partner is we went and created the thing that people have been asking the most for out of Lambda since the beginning of time. I guarantee that this is, by far, the most needed functionality, and that is a PHP runtime. Hold your laughter.
Corey: I try very hard to steer away from language bigotry, but there are things I have to say. That comes not from being able to code my way out of a paper bag in any of them but from painful years of experience trying to run various, shall we say, interestingly constructed applications in a wide variety of languages in challenging production environments. The lesson I take from all of this is that everything is terrible.
Chase: That is true. A lot of people, they like to rag on different languages, but it's really a means to an end. When we look at the proliferation of things like WordPress and other PHP applications, one shouldn't discount the value that that's provided to our society. One of the cool things is we've been able to build a PHP runtime for Lambda using a layer. We published it publicly so anyone can go out and use it for their own applications, and it operates like a traditional PHP web server.
When a request comes in and you route it from API gateway to this Lambda function, it's going to interpret your PHP files in the same way that your PHP web server at home would, and then it sends it right back out the door. There's this possibility now, although this is a kernel, a seed, of a runtime, that all of those PHP applications that we've got on our servers as monoliths or in other forms, we can now start to think about, "Oh, maybe we can start to upgrade this, bring it into the modern serverless land, break it down," and it's exciting what's now possible with this new Lambda runtimes capability.
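Mechanically, a custom runtime is just an executable named `bootstrap` shipped in the layer, looping over Lambda's runtime HTTP API. The following is a heavily stripped-down, illustrative sketch (no error handling, and the PHP handler invocation is a hypothetical simplification of how a real PHP runtime would work):

```shell
#!/bin/sh
# bootstrap: minimal custom-runtime loop (illustrative sketch only)
set -e
while true; do
  HEADERS="$(mktemp)"
  # Long-poll the runtime API for the next invocation event
  EVENT=$(curl -sS -D "$HEADERS" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -i Lambda-Runtime-Aws-Request-Id "$HEADERS" \
    | tr -d '\r' | cut -d' ' -f2)
  # Hand the event to the handler script and capture its output
  RESPONSE=$(php "${LAMBDA_TASK_ROOT}/${_HANDLER}.php" "$EVENT")
  # Post the handler's result back to the runtime API
  curl -sS -X POST -d "$RESPONSE" \
    "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/${REQUEST_ID}/response" \
    > /dev/null
done
```

The key point is that the runtime contract is just HTTP: fetch the next event, run your interpreter, post the response.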
Corey: What I'm trying to wrap my head around, and maybe this is a naïve question, you have to excuse me; I had to leave partway through the keynote when they started talking about this stuff because my brain was full. Again, I'm not the sharpest tool in the shed some days when it comes to these things, but it seems to me that running different language runtimes in Lambda while, yes, it's valuable, "Yay, I can run my COBOL monstrosity inside of Lambda functions now and it lasts 15 minutes," or whatever the uptime limit is now.
Chase: I bet there are many banks out there who would be very happy to play with that.
Corey: Absolutely, but that feels like it's a one-and-done victory as opposed to the other components of layers, which really feel like more of an ongoing win as native serverless applications continue to grow and evolve. I'm not sitting here trying to say that there's no value in supporting additional runtimes. If there's one thing that we can always count on Amazon to do besides giving things ludicrous names, it's meeting customers where they are, and customers do have that need.
I'm curious because, from my perspective, as I start looking into this more and more, it's less about running this Ruby thing inside of Lambda and more about being able to address the ongoing workflow story of solving the problem of shared dependencies between a wide variety of Lambda functions which, until now, I have to confess, has been terrible. The amount of duplicated code across my various Lambda functions is, to be frank with you, shameful.
Chase: The runtime piece is kind of exciting and interesting, and everyone gravitates towards their specific thing that they've always wanted to do but, at the end of the day, you actually, for the most part, don't want to be running your own runtime; you want to leave that up to the cloud service provider so that they're making sure that the Node.js you're using always has the latest security patches, and the same for every other language.
That really hits at the–while the bring-your-own-runtime aspect is powerful and interesting, the real key here is much more around providing these paradigms that really enable people to build the applications with confidence, with consistency, with best practices. It's really exciting to see Amazon continue to push the envelope in all of these ways.
Corey: You are building an entire product company around the idea of addressing, more or less, and please correct me if I'm getting this wrong, enterprise workflows around the world of serverless in a way that even an individual person who writes their own applications, like I do, can also embrace and appreciate. It's not enterprise-ware that requires a team of 40 people just to roll it out. I guess, from your lofty position in this space, this is the burning question I've got.
Now, yes, I can go down the hall and talk to 50,000 people here at re:Invent who will all have a different opinion on this but, again, you're in a somewhat privileged position to answer it. You've built an entire product company around making process maturity attainable not just for enterprises but for people with relatively small-scale serverless applications like mine, to the point where it doesn't take a team of 50 to deploy; it fits into my workflow, makes sense, and makes me go faster, not slower. I have a series of serverless applications that power a lot of things, including aspects of this podcast, my entire newsletter, et cetera. So, do I refactor all of my existing Lambda functions to leverage layers, do I update them one by one as I touch them in the natural fullness of time, or do I squash all of my previous code into one giant commit, label it 'legacy code' like the old joke goes, and start fresh from here, treating layers as something I use for greenfield work?
Chase: I love the approach of squashing it all into a legacy layer and just washing your hands of it. That sounds like the dream realized. No, actually, one of the things that I think serverless has gotten right, somewhat out of necessity, is that if you had to rebuild everything you already built, re-architecting everything just to use serverless, then the technology would not take off. No one is going to sit there and say, "I want to completely rewrite everything that I already did for the past 10 years just because serverless is the hot, new technology."
Instead, serverless really enables these techniques. One of the patterns is called the strangler pattern, where you take a monolith and your goal is, over time, to strangle it down and pick pieces off of it. If it's an API monolith, you're picking off one route at a time and re-implementing it in a different paradigm. You might take your monolithic Ruby application, or Rails application, I should say, and then you start to take a couple of routes at a time and move them into a serverless function.
Now, with the news today that you've got Ruby as a full-fledged runtime, you're able to do that even more easily without having to rewrite your code. I think with layers, you'll see people do the same thing: where they have a need to share a bunch of code, the next time they need to update that across all of their functions, they're going to say, "It's time we use this new layers approach," because the alternative is, when I need to update that one line of shared code, I now need to copy it around among all of my hundreds of functions. That becomes extremely tedious. I think it's a matter of continual evolution of the way that we do software and, as new techniques become available, we adopt them as it makes sense.
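The strangler pattern mentioned here can be sketched in a few lines: a routing shim checks whether a route has already been migrated to a new (serverless) handler and falls back to the legacy monolith for everything else. All names here are illustrative stand-ins, not Stackery's or anyone's real API:

```python
# Routing shim for the strangler pattern: migrate one route at a time.
# The "handlers" stand in for Lambda functions and monolith endpoints.

def legacy_monolith(route, request):
    """Catch-all: the old application still serves unmigrated routes."""
    return f"monolith handled {route}"

def new_orders_handler(request):
    """A route that has been re-implemented as a serverless function."""
    return f"lambda handled /orders for user {request['user']}"

# Routes peeled off the monolith so far; grows one entry at a time
MIGRATED = {"/orders": new_orders_handler}

def dispatch(route, request):
    handler = MIGRATED.get(route)
    if handler is not None:
        return handler(request)             # re-implemented path
    return legacy_monolith(route, request)  # everything else stays put

print(dispatch("/orders", {"user": "corey"}))
print(dispatch("/billing", {"user": "corey"}))
```

As each route is re-implemented, it moves into the migrated table until the monolith has been strangled down to nothing.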
Corey: To that end, whenever I find myself leaving the tech coast, for lack of a better term–and I can tell I've left because I'll sit down and talk to companies about what they do, and they have these ridiculous things like business models, and a sense of profit, and trying to build something sustainable, and employees. They respond very earnestly without a condescending accent. I can tell, "Oh, I'm out of the bay."
To a number of these companies, serverless isn't really a thing yet or, if it is, it's a toy that they're playing with in some small skunkworks project, which I get. If you have a massive thing that generates billions of dollars of revenue, maybe being the first person on your block to try the new technology isn't really in your top 10. What I'm trying to figure out as I think about this is, if I put myself into the place I was a few years back, by which I mean not that long ago, and I no longer had anything running in Lambda and I'm approaching it for the first time. This is the, "Today, I learned that there's a thing called Lambda that isn't part of the Greek alphabet or the name of a fraternity or sorority somewhere." Great. Awesome. Does layers change the way that I approach Lambda from a day-one perspective?
Chase: Yeah. First, I take a little bit of exception to the premise that it's only the startups, the newer businesses, and the newer product lines that are adopting serverless. There's a lot of the industry, both small and large, that has found that serverless is key to where they're going. You've got companies like Coca-Cola, Nordstrom, and Matson, which is a gigantic shipping company (container shipping in the original sense of the word), and these are companies that people would not have thought of as your tech leaders.
They're not on the same tech pedestal as Apple, and Google, and Facebook, and yet they're the ones finding that serverless is extremely useful for their types of workloads, such as spiky traffic. They're finding it's faster to ship products of all kinds. There's really a wide spectrum of people using this stuff. Now, I do think that layers adds another important tool to this tool belt that enables people to dive in more fully.
It's kind of the recurring theme of all of serverless since it started out in 2014 here at re:Invent. Lambda was announced before the term 'serverless' was even coined and, over time, there's been this evolution of thought of, "Oh, well, what can this really be used for?" It's an expansion of, "Oh, if we hook up API gateway to it, now we've got a great way of creating scalable APIs. Oh, and if we hook up Kinesis streams to it, now we've got a great way of streaming and consuming data in a scalable fashion," and, "Oh, if we hook up DynamoDB streams to it," and, "Oh, if we hook up SQS queues," and, "Oh, if we hook up all these different things, it expands the capability of what you can do with serverless."
I would venture that, at this point, about 95% of workloads out there can effectively and positively be engineered on a serverless foundation, and that's really exciting stuff. I think that it's probably going to creep more and more towards 98% or 99% in the next year or two, and we're going to see a sea change as people move to managed services so they're no longer having to run clusters of Docker containers and clusters of databases.
We can see this now with Aurora Serverless. It's amazing technology in that it's this one thing that everyone kind of assumed, "There's no way to make that horizontally scalable." While it's not horizontally scalable in the same way that DynamoDB is, it's still providing a capability for easy scaling up and down for a type of technology that people just didn't really try to scale in that way before it was released earlier this year.
Corey: Absolutely. You started that by saying you took exception to the idea of enterprises not playing with serverless and it only being in the realm of startups. First, if you take exception to that, you'd better go catch it. Meanwhile, there's some developer commuting right now who's driving to work and laughing so hard at that lame joke, they almost rammed a bridge abutment. My apologies to other people on the road. I agree with your premise that companies are investigating this in very interesting ways.
In fact, this may be one of those weird, progressive technologies where enterprises adopt it faster than some startups do. When I talk to companies who are doing this, and this may be my own bias based upon who I'm speaking with, they tend to be replacing a lot of backend processes: things like cron jobs, things that require instantiation and then go away but, if they fail or are delayed, aren't user-facing in the traditional sense. That small, back-office market only has what percentage of the world's GDP riding on top of it?
I do absolutely agree with what you're saying, and I absolutely don't want to come across as if I'm saying that this is just something for the cool kids of tech. I agree with that wholeheartedly. What I'm trying to wrap my head around is getting away from my own historical prejudices regarding Lambda, by which I mean I still think of Lambda as being constrained to five minutes. I still think of it as, "Oh, that's right, it now supports something that isn't Node."
I still think of it in terms of a very limited subset of its current-day features just because of those constraints from my first introduction to it. Even now that those constraints have been relaxed, lifted and expanded significantly, I still find it hard to come to it from a new perspective. I wonder how much that shades my thinking process.
Chase: There's a couple of things to unpack here. The first is you talked about use cases that a lot of people have for serverless and Lambda, which is that background batch processing: offline, not-in-production-throughput traffic scenarios. That's definitely a huge win for serverless, but it's also an onboarding step for organizations where they start to play around with it and they get comfortable with this idea of, "I don't need to know about the server that this is running on. I don't even need to know what that is to just run this little batch script."
It's extremely powerful for individual developers when they know they just want to run this tiny little thing once a day, maybe, but, in the past, they've been held up in infrastructure procurement. They didn't have a server to put this on. It's simple; it's like a cron script. I can run it on my laptop, it doesn't use any resources but, if I don't have someone giving me a server to run it on, then I'm still blocked.
It starts to tease out for people, "Oh, wow, I can do these amazing things that I was never able to before just as a developer or as a DevOps practitioner." Even ops people can start to cross into the development roles a little bit, the DevOps roles. There's a meshing of these roles that serverless enables. The second thing that you were getting at is, "How do I rethink what serverless is and what it means as it's changing over time?"
This is something that we're focused on as well. There are unfortunate realities that certain people who jumped all in on serverless in the very early days may have done so without realizing all of the sharp edges, both in tooling and in capabilities, that have been smoothed out now. At this point, those same people might have an amazing experience using serverless tech, but they swore it off two years ago.
We're working to re-engage with those people to understand where they had challenges and ensure that they've got a great re-onboarding process. There's that famous Gartner, what's it called, that graph? Was it called the Trough of Disillusionment?
Corey: The Trough of Disillusionment?
Chase: Yes, the hype cycle.
Corey: Yes, I'll throw a link to that in the show notes. Yes, the Gartner hype cycle. Thank you.
Chase: Serverless certainly has had a lot of hype behind it, especially early on. Everyone could see the possibility of this, but it wasn't quite easy or possible to realize all that it was truly capable of two or three years ago. We started on this hype curve, and I think that there are some people who are already heading towards that Trough of Disillusionment. At Stackery, one of our goals is to catch people when they're starting to get disillusioned because their existing tools and processes are breaking down, and help them get across that trough into the zone where, "Actually, we're extremely productive and happy with this and it's everything that we hoped it would be for our use cases." That's what we try to do day in and day out for our customers.
Corey: One more question for you that I'm sure will absolutely get both of us thrown out of any conference for the next two years based upon the fact that someone's going to disagree with this. It sounds like a trivia question but it's not. Do you think that DynamoDB counts as serverless?
Chase: That's a really interesting question. I tend to think of serverless as meaning managed servers. I don't need to figure out how these servers are provisioned or how they are managed from a security perspective. Obviously, I have to figure out IAM credentials and permissions which, thankfully, Stackery handles for me, but I don't have to worry about operating system patches. I don't have to worry about, "Is this spread out across availability zones?"
Even with DynamoDB, you've got global replication. Is this accessible at proper latencies where I want it to be accessible around the world? That, to me, means serverless. Now, there are some people who want serverless to also mean that it's pay-per-use, not provisioned upfront, and I take or leave that based on what I fundamentally understand about how technology works. I would love unicorns and magic ponies in my backyard every day but, at the same time, that's not what's available in the real world.
As an engineer with a computer engineering background, I understand what DynamoDB is and how it works, and what Aurora Serverless is and how it works at a foundational level. I don't understand all the magic and tricks that AWS puts in place to make it work as well as it does, but I still understand how that data is stored, how it's sharded, how it's brought up, and that leads me to understand why it has to have provisioned throughput, and why its on-demand scaling has a certain amount of latency and hysteresis built in. To me, AWS has taken this technology as far as is humanly possible, and they will continue to break down the barriers, but I hate to fault them by saying, "Well, given how database technology has to work, DynamoDB is not actually serverless."
Corey: I've had a back-and-forth discussion about this with Simon Wardley in a situation where he was far too polite to tell me to go away and stop bothering him. The trick: always do this in public. The line that he took, which I can't really get out of my head because I think it was very poignant, is that it doesn't quite qualify on the simple grounds that it cannot scale down to zero. You're always paying for one write or read unit of capacity. That's a tiny cost that doesn't change the economics of anything other than the smallest toy problem, but it's still a cost.
You're always going to pay for storage, sure, but, conversely, I'm not paying for a Lambda function when it's not running. When Aurora Serverless stops, I'm not paying instance hours for it. With DynamoDB, I'm always paying something regardless of how quiet the table is or how little traffic I send to it. On the one hand, that does feel like pedantry. On the other, I can't shake the feeling that there is something poignant about that, or maybe I'm just addled from lack of sleep due to re:Invent week here, but that's where I sit.
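The floor Corey is describing comes from DynamoDB's provisioned-throughput mode, where a table's capacity cannot be set to zero. A minimal, hypothetical CloudFormation fragment illustrates it (the table name and key are invented for the example):

```yaml
# Hypothetical CloudFormation fragment: a provisioned-mode DynamoDB table.
# ReadCapacityUnits/WriteCapacityUnits have a floor of 1 each, which is the
# standing cost being discussed: it cannot scale to zero.
Resources:
  ExampleTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: pk
          AttributeType: S
      KeySchema:
        - AttributeName: pk
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1   # minimum allowed; cannot be 0
        WriteCapacityUnits: 1  # minimum allowed; cannot be 0
```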
Chase: I'm going to throw this back at you and ask: Do you think Aurora Serverless is serverless simply because it can scale to zero even though, in most real-world usages, no one's going to let it because it has too long a cold start? Is the fact that it theoretically can scale to zero the important part, or is it important that it can scale to zero and scale up immediately?
Corey: That's sort of the question I run into. It's scaling up but, also, if I'm not running it or sending traffic to it, the assumption is that I pay zero for it. I think that's a fundamental tenet, with the obvious caveat that, yes, storage will cost money. I don't expect people to store my data and not charge me for it proportionately. That's fair. I'm talking about the compute and network perspective, where I don't have to pay for something that isn't seeing active use at this instant, and I think that is part of the fundamental tenet of serverless and event-driven computing.
Chase: In my head, I disagree, and the reason is simple. Suppose there were a valid use case where I needed thousands of DynamoDB tables, and thus needed them to scale down to zero, because each table must have at least one read and one write capacity unit and the cost of thousands of them is problematic. If that were a valid use case, I would totally agree, but I tend not to think that use case makes sense.
The closest you might get is this: if I've got a thousand developers in an organization, and they've got DynamoDB with auto-scaling turned on at a minimum of one capacity unit for reads and writes, and every one of those developers has their own environment where they've provisioned this table, that gets close to this use case. Even still, if you've got a thousand developers, the overhead of every one of them having their own single-capacity-unit tables is minuscule next to their salaries and benefits. To me, it seems unnecessary to put that restriction in place.
Corey: That's fair. To be clear, DynamoDB scales down to costing, what is it, two bucks a month? It really is who-cares money unless you have 10,000 of them. To some extent, that's where it starts to concern me. It's not the one-offs; it's the idea that I can scale out something truly massive and spin up one of these for every version of a service. If that starts to incur costs across the board, it very shortly turns into something I can't treat quite the same way. Maybe that's an edge case. Maybe that is so ludicrously far down the path that it's not even worth having the conversation. Again, it's one of those things that I've heard and can't get out of my head now, for which I, once again, blame Simon Wardley.
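The arithmetic behind both sides of this exchange is easy to sketch. The rates below are illustrative us-east-1-style numbers, not authoritative pricing, and `monthly_floor_cost` is an invented helper for the example:

```python
# Back-of-the-envelope cost of DynamoDB's provisioned-capacity floor.
# Rates are illustrative, not authoritative AWS pricing.
RCU_PER_HOUR = 0.00013   # $ per read capacity unit-hour (assumed)
WCU_PER_HOUR = 0.00065   # $ per write capacity unit-hour (assumed)
HOURS_PER_MONTH = 730

def monthly_floor_cost(tables: int, rcu: int = 1, wcu: int = 1) -> float:
    """Standing monthly cost of `tables` tables provisioned at the minimum."""
    per_table = (rcu * RCU_PER_HOUR + wcu * WCU_PER_HOUR) * HOURS_PER_MONTH
    return tables * per_table

if __name__ == "__main__":
    # One table is well under a dollar a month (storage and auto-scaling
    # alarms push the real bill up toward Corey's "two bucks").
    print(f"1 table:     ${monthly_floor_cost(1):.2f}/month")
    # A thousand developer environments: real money, but small next to payroll.
    print(f"1000 tables: ${monthly_floor_cost(1000):.2f}/month")
```

At these assumed rates, one table's floor is about $0.57/month and a thousand tables come to roughly $570/month, which is the "minuscule next to salaries" point Chase makes and the "until you have 10,000 of them" point Corey makes.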
Chase: Yeah, it feels to me like there's a puritanical approach to the definition of serverless and there's a practical approach. The puritanical definition of serverless should include that aspect of being able to scale down to zero, but I don't see it as a necessary part of the practical definition. I will tell you what I do wish: if I'm going to throw out an Amazon wish list, I would love to have a serverless-ish elastic network gateway for my Lambdas when they're running in a VPC.
That, to me, and obviously everyone's got their own pet need, is the one service I would love to have, especially since they don't have tiered sizes of NAT gateways, and a lot of people need to access their existing resources. This really hits at some of the enterprise use cases people come and talk to us about: "I want to strangle this monolith. The database is in this VPC. I put my functions in there." There are a lot of people who complain, "You should never put a Lambda in a VPC. It has all kinds of overhead," which is not actually true if you manage it correctly. But there's still the cost of the NAT gateways, and that is problematic for people. When it comes down to it, I'm hoping that maybe, in the next year, by the next re:Invent, even more of these services are serverless.
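The strangle-the-monolith setup Chase describes can be sketched as a SAM fragment. Everything here is hypothetical: the function name, handler, security group, and subnets are assumed to be defined elsewhere in the template:

```yaml
# Hypothetical SAM fragment: a Lambda function attached to a VPC so it can
# reach an in-VPC database. Outbound access from those private subnets still
# requires a NAT gateway, which bills hourly whether or not traffic flows,
# which is the cost being complained about above.
Resources:
  StranglerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      VpcConfig:
        SecurityGroupIds:
          - !Ref FunctionSecurityGroup   # assumed defined elsewhere
        SubnetIds:
          - !Ref PrivateSubnetA          # assumed defined elsewhere
          - !Ref PrivateSubnetB
```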
Corey: I think you're right, and I think we're definitely seeing a trend. In other words, you don't see significant keynote time devoted to undifferentiated baseline services anymore. At most, you'll see a few things here or there, "Yes, we've added the 1,800th instance family," but I don't think that's what's interesting for the future of computing. A keynote at an event like this always has to be forward-looking and aspirational, and from that perspective, I think they nailed it.
Chase: Yeah, I agree. The breadth of services that AWS has, the amazing use cases of what you can do in the cloud: at this point, it practically is limitless. It's more a matter of what they make easier from here on out, because it's all possible, and that's what's really exciting: I can do everything in the world. Netflix is all on Amazon, and of course they've been that way for years, but if Netflix can do it, if all these other companies can do it, you can do it, too, without having your own servers sitting in your server room. It's mind-boggling, the power at your fingertips.
Corey: It really is. I think it's unlocking an entire world of possibility for companies that, until very recently, would not have had the capability of even dabbling with this. Now, I can spin things up in the course of an afternoon that boggles the imagination or–let's be honest–I could if I were better at working with computers. There's still no service for that. Like Werner always says, there's no compression algorithm for experience.
Chase: Yeah, that's so true.
Corey: Thank you so much for taking time out of a very busy week to speak with me today.
Chase: Yeah, I'm glad to do so. It's exciting. It's fun. There's nothing like being at re:Invent.
Corey: There really isn't. For those who have not had the pleasure of experiencing Las Vegas for a solid week with 50,000 of your closest friends, it's something that should be experienced once and never again. This is my second year, and I'm wondering at this point why I've made the choices that I made.
Chase: Did you bring along all the medicinal kits? My thing is, all those shots and pills they give you when you're going to third-world countries? I've asked for the same thing to go to Las Vegas this week.
Corey: That would have made an awful lot of sense. I forgot to get my malaria pills.
Chase: Yeah, exactly.
Corey: Some of those buffets are no joke. Thank you so much, once again, for taking your time to speak with me today, Chase Douglas, co-founder and CTO of Stackery. I'm Corey Quinn and this is Screaming in the Cloud.