How many of you are considered heroes — specifically, in the serverless, Twitter, and Amazon Web Services (AWS) communities? Well, Ben Kehoe is a hero.
Ben is a cloud robotics research scientist who makes serverless Roombas at iRobot. He was named an AWS Community Hero for contributions that expand the understanding, expertise, and engagement of people using AWS.
Some of the highlights of the show include:
Ben’s path to becoming a vacuum salesman
The history of the Roomba and how AWS helps deliver current features
Roombas use AWS Internet of Things (IoT) for communication between the cloud and the robot
Boston is shaping up to be the birthplace of the robot overlords of the future
AWS IoT is serverless and features a number of pieces in one service
The robot uprising will have clean floors
AWS Greengrass, which deploys runtimes and manages connections for communication, should not be ignored
Creating robots that will make money and work well
Roomba’s autonomy to serve the customer and meet expectations
Robots with Cloud and network connections
Competitive Cloud providers were available, but AWS was the clear winner
Serverless approach and advantages for the intelligent vacuum cleaner
Future use of higher-level machine learning tools
Common concern of lock-in with AWS
Changing landscape of data governance and multi-cloud
Preparing for migrations that don’t happen or change the world
Data gravity and saving vs. spending money
Ben Kehoe on YouTube
Ben Kehoe on Twitter
Full Episode Transcript
Corey: Hello and welcome to Screaming In The Cloud, I'm Corey Quinn. Joining me today is Ben Kehoe who's currently a cloud robotics research scientist at iRobot, in many circles better known as The Roomba Company. Welcome to the show, Ben.
Ben: Hi, glad to be here.
Corey: You've been involved in the AWS ecosystem for a fair bit of time. In fact, I believe, a year or two ago you were named an AWS Community Hero. What is that?
Ben: That's right. A community hero is a program that AWS has to recognize people who are contributing to the community around AWS. Expanding the understanding, the expertise, the engagement of people with AWS. I really like facilitating people's understanding of AWS, and their interactions with AWS, and amplifying their voices so that AWS hears the masses more clearly.
Corey: Did that come as an outgrowth of your work at iRobot? Did it come through your work on other projects? Your phone just ring one day and, "Hi, it's Amazon. We've got this thing we'd like to talk you into doing."
Ben: Well, it's pretty much that. I think the seed of it is the interaction with AWS at iRobot where we transitioned our robot fleet to use AWS IoT as our cloud connection mechanism, and grew out from that with my Twitter account, and my interactions with them, and my interactions with the rest of the community both other serverless users, especially, and just the broader Twitter community around AWS.
Corey: Help me through this a little bit, you're effectively at this point known as a cloud/serverless guy, which makes sense, but when I ran into you at re:Invent last year, you convinced me to go ahead and buy a Roomba. I did this as a favor to you. I figured I'd try, it wouldn't work, and I would return it quickly. Instead, it serves two purposes.
One: my floor is far cleaner than it ever was before I had this thing, so it has become indispensable. Secondly, as an added bonus, it terrorizes my awful little dog every time it starts with its little chime. She starts barking and invariably goes to the wrong part of the house because she's cute, not smart. What I'm trying to understand is how do you go from being this cloud/serverless guy to tying that back to the robot vacuum that I don't have to think about?
Ben: Well, first of all, I'm glad that you decided to purchase the Roomba. Our CEO is a roboticist and talks about having to learn, in the early 2000s, to become a vacuum salesman. I think that's true of a lot of what we do at iRobot — it integrates a lot of different disciplines. As for my path to where I am today: serverless cloud stuff is a big part of what I do, but there's also robotics in there, and there's also smart-home IoT things in there. It's kind of a mishmash of a lot of stuff.
My undergraduate degree is in physics and math, and I worked as a theatrical carpenter. I worked for a big enterprise IT contractor for a while, and then I went to grad school for robotics. I started out there doing unmanned aerial vehicles, which are a different kind of cloud robot. Then, halfway through, the funding got cut — this is a thing that happens to grad students — and I switched to thinking about how we could leverage cloud computing to enable robots to do more and better things.
Then I finished my PhD in 2014 and came to iRobot in the midst of a transition period for connected robots, and that helped me transition into the AWS and serverless realm.
Corey: Okay, which makes sense, but credit to your salesmanship: you know a sucker when you see one. You sold me one of the upper-line Roomba devices and that's great.
Ben: Roomba 980, yup.
Corey: That would be the one. However, Roomba has been around for 10 years and a lot of these AWS offerings are...
Ben: 15 actually.
Corey: My apologies, even better. Back in 2003, AWS wasn't a thing, so there's obviously been a series of, I guess, evolutionary steps as these things continue to evolve. What would they look like originally and what do they do now that they didn't once upon a time, and how has AWS helped with that?
Ben: There's a few different pieces, one is that when you look at a connected robot, you can get telemetry back from it. I don't know about you, have you ever sent in one of those registration cards where they have a little survey about how you use your robot or any product?
Corey: Yes, I have.
Ben: You really have, you are like the first person I've ever met.
Corey: At one point, they offered a raffle that I'm pretty sure didn't exist for some company, so I did that a couple of times and realized...
Ben: So you are a sucker.
Corey: Say something authoritatively and I will do exactly what you tell me to do.
Ben: Most people are not like that. For a long time, we've known we have passionate users who care about their robots, and we know that the robots last a long time, but we never knew how long. We never knew whether people were using them in the ways we expect, or whether the batteries were sized correctly for the size of people's houses. When you have a connected robot, you can start to get that information back, understand better how your users are interacting with your product, and then make it better for them, so that's one aspect.
The other aspect is that if you look at a Roomba, it doesn't have a screen on it, but you want to be able to program it to do the things you want it to do — schedule it, change settings about how it cleans, view the information it's generating — and without a screen on the robot, you can't do that. Setting a schedule on one of the non-connected robots is an exercise in pushing a lot of combinations of four different buttons. With a connected robot, you just open up your app, you're using that screen, and you can have a nice experience right there.
These are the benefits that come with a connected robot. We launched our first connected robot, the 980, in 2015, and we're now connected through the whole line. The benefit of the high-end ones is that they also perform systematic navigation: they use robotics algorithms that allow them to tell where they've been and where they're going, which helps them map out the space. Again, because you have that cloud connection, you can show that map to the user. We've just announced a beta around showing you the Wi-Fi signal strength the robot sees as it moves around your house, so you can identify dead spots. All of that is enabled by the cloud connection.
Corey: Wonderful. I'll even take it a step further — I already made sure the one in this room was set on mute — but I very rarely play with the app anymore. Ignoring the schedule, I'll sometimes just say, "Alexa, tell the Roomba to start cleaning," and that works.
Ben: A fun fact about that, this is one of my favorite pieces. We use AWS IoT to communicate from the cloud to the robot and vice versa. The connection there is low enough latency that when you say that, the Roomba will start playing its little noises before Alexa even responds.
Corey: Fascinating. I just said that. Well, granted Alexa is on mute in my office, but the Roomba is sitting right next to me and it did not start playing when I said that. That would have been hilarious if it had, so there is still that communication that has to happen or is there more to it than that that I'm not seeing?
Ben: No, I'm saying that Alexa tells us that you want your Roomba to start. We're able to deliver that command down to the robot faster than Alexa can package up its text-to-speech response and play it back to you after we return, and that's thanks to AWS IoT.
Corey: Oh, I see what you're saying. Yes, it does start playing before Alexa starts responding to me. I was very confused for a second there. I understand that these are little robots, but I have trouble imagining the use case for putting microphones in one. You'd wind up with a combination of the vacuum noise itself and, as previously mentioned, my obnoxious barking dog who follows it around barking angrily. It's great. She weighs less than the Roomba does, which really is a wonderful experience for everyone involved.
You're based in Boston, correct?
Ben: That's correct.
Corey: Wonderful. So I feel like there's an opportunity there, because I believe Boston Dynamics is also based there, given the name.
Ben: That's true.
Corey: It seems like Boston is really shaping up to be the birthplace of the robot overlords of the future.
Ben: Robotics happens primarily in two centers in the U.S., Boston and the Bay Area. Boston Dynamics is here and they make very slick robots. I don't think you'll see them taking over the world anytime soon. They're very good mechanically and they're good at various specific things, which is true of all robots. They're good at very narrow use cases and you put them outside of that and they tend to fall over.
Corey: Yes, I've had mixed results attempting to release the Roomba into the wild, experiments continue to be ongoing.
You mentioned AWS IoT a few minutes ago and I historically have a background on the web app side of things. I can talk about EC2 until I'm blue in the face for my sins, but I don't know much about what AWS' IoT offering is. In a nutshell, what is that?
Ben: AWS IoT is actually a number of different pieces in one service. It's a pub/sub system that delivers messages over MQTT and WebSockets, and it's a rules engine that can take messages that are in that pub/sub system, and deliver them out to Kinesis or Lambda or other web endpoints. There are some asynchronous communication mechanisms and storage involved in there.
It also involves authentication and authorization mechanisms for connecting devices to the internet, because unlike with the rest of AWS's services, your IoT device is probably not going to have the same kind of AWS credentials that even a Cognito user might have.
That's of course just our connectivity layer. Behind that is where we build all our application logic, and AWS IoT itself is serverless — there are no provisioning knobs to tweak; you pay for what you use. But the application that sits behind it is also completely serverless, and I think we're up to 30 AWS services in production now — that's how many AWS services we use to deliver our production applications.
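The pub/sub-plus-rules pattern Ben describes can be sketched in miniature. This is a toy in-process version, not AWS's implementation: the topic names, payload shape, and handler targets are all illustrative, and the real rules engine matches messages with a SQL-like syntax before forwarding them to Kinesis, Lambda, and other services.

```python
# Toy sketch of the pub/sub-plus-rules pattern behind AWS IoT.
# Topic names, payload shape, and handlers are illustrative, not iRobot's.

def topic_matches(filter_: str, topic: str) -> bool:
    """MQTT-style matching: '+' matches one level, '#' matches the rest."""
    f_parts, t_parts = filter_.split("/"), topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True
        if i >= len(t_parts) or (f != "+" and f != t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

class RulesEngine:
    def __init__(self):
        self.rules = []  # list of (topic filter, action callable)

    def add_rule(self, filter_, action):
        self.rules.append((filter_, action))

    def publish(self, topic, payload):
        # Fan each message out to every rule whose filter matches the topic,
        # the way the IoT rules engine forwards to Kinesis, Lambda, etc.
        for filter_, action in self.rules:
            if topic_matches(filter_, topic):
                action(topic, payload)

delivered = []
engine = RulesEngine()
engine.add_rule("robots/+/mission", lambda t, p: delivered.append(("kinesis", t, p)))
engine.add_rule("robots/#", lambda t, p: delivered.append(("audit", t, p)))
engine.publish("robots/r123/mission", {"status": "done"})
```

The same message reaches both rules here, which mirrors why the rules engine counts as "a number of different pieces in one service": routing, filtering, and fan-out live in the broker rather than in device code.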
Corey: That's over a third of them and given how far out into the woods some of those services are, that's almost impossible to wrap my head around.
Ben: It keeps life interesting.
Corey: It's one of those things as well where, if I take a step back and imagine what I thought the future would look like when I was a kid, I might have predicted a robot uprising, potentially. But I would never have guessed, for example, that the robot uprising would have very clean floors.
It was one of those things where a robot cleaning the floor never made sense to me until I owned one of these. It sounded like a ridiculous thing for people with too much money. But it works. It's one of the more astonishing and, I'd say, pleasant consumer experiences I've had — it over-delivered on my expectations in virtually every regard.
Ben: Well, thank you.
Corey: That's no small feat. I tend to be relatively demanding and cynical as a personal failure mode.
Ben: We have some Easter eggs in our Alexa skill where you can ask Roomba to give the cat a ride, I'm forgetting the other ones, there's a number of them. I wanted one of them to be, "Alexa, ask Roomba to take over the world," and for the response to be, "I'm sorry, I can't do that yet." But, unfortunately, that didn't make the final cut.
Corey: Generally, PR and legal tend to want to weigh in on things like that. One question I have for you concerns a service that I think the world is taking relatively lightly, or at least ignoring for the most part, and that is AWS Greengrass. For those listeners who aren't aware, it fundamentally deploys Lambda runtimes to edge devices — and by edge I mean out in the world, not CloudFront PoPs — where you effectively get a full Lambda runtime environment that can execute Lambda functions in response to certain triggers on embedded devices.
Ben: That's actually only half of the story, because the other half of Greengrass is a local MQTT broker that handles the communication between those Lambdas on the device, between devices, and up to the cloud. If you're using Greengrass with AWS IoT, Greengrass can manage your connection to AWS IoT transparently to all of the code running on your device, and having that local pub/sub system also enables all the Lambda code you write to run locally to be event-based.
Corey: Is that something today that the Roombas are using themselves? For example, I have three Roombas in my house, do they start communicating between each other? Is that powered by Greengrass?
Ben: We don't use Greengrass. I think the real power of Greengrass is that it enables a company that has a lot of experience in cloud development to move on to devices without a lot of the pain of learning how to deal with devices, and iRobot doesn't have that problem, we know how to do devices. It enables some firmware update use cases that are interesting. It helps you package up your code and send it down. It gives you that familiar code environment, that execution environment that you have cloud side, and you can reflect that on devices.
Then it helps especially in cases where you have a gateway. There's a notion of a Greengrass core — that's the MQTT broker that provides the communication with the cloud — and then you can have multiple Greengrass devices talking with this core and sharing its communication mechanism. That core device is sort of the master of the broker, which is helpful when you have that star topology among multiple devices.
If you look at something like multiple robots in a home, you need each robot to be fully autonomous and operating in a federation rather than as a star topology. There's not one leader or gateway. Greengrass doesn't support that use case today as robustly as it does in some of these other use cases.
Corey: Got you. To that end, as mentioned, you're a 15-year-old company. I'm assuming that back then you didn't have access...
Ben: We are a 25-year-old company.
Corey: My apologies, it took 10 years to get the robots shipping.
Ben: Well, it took 10 years to figure out which robots were going to make money. For the first 10 years of its existence, iRobot produced a number of different robots, from space exploration to underwater to oil-well robots — going down oil wells to determine what's wrong with things. There were also some really creepy dolls that got made. Then in 2001, our defense business produced the PackBot, which was our first broadly successful robot, used for bomb disposal, so that made us some money.
Then the Roomba came out in 2002, and that was also successful. Between them, those two products powered the company for a long time as we explored other businesses. In the past couple of years, we've spun out the defense business and become exclusively focused on consumer. So just like my own long and winding journey, iRobot has taken a long and winding journey to the point we're at today.
Corey: Okay, back 15 years ago when the vacuum robots started showing up, AWS wasn't a thing. I'm assuming that there was another cloud provider, there was a data center build-out or were the robots back then entirely self-contained?
Ben: Roombas were not connected until 2015, and Roombas have always been entirely self-contained. Even today, with the cloud connection, if the cloud goes down, the robot is going to run on the schedule that you set. If it's in the middle of a firmware update and you press the clean button, it ditches the update and goes and cleans for you, so it's always going to have that autonomy to serve the customer in the way that customers expect.
In the time that iRobot has been around, there have been networked robots — we even had telepresence robots that used a cloud connection for their telepresence and remote-driving capabilities. There's a lot of learning we accumulated around what happens when you give a robot a network connection. Then, in the lead-up to the launch in 2015, we started developing that capability and bringing that learning to our Roomba products.
Corey: In 2015, the landscape was slightly different than it is today, but I know that in 2008...
Ben: Correct, they were brand new.
Corey: Oh, absolutely.
Ben: Which was one of the things...
Corey: No, but here in 2018 if you're selecting a cloud provider, AWS is not necessarily a slam dunk anymore. There are a number of very competitive offerings from the other major players in this space. Did you folks look seriously at other providers or was it always AWS was the clear winner?
Ben: This story is also slightly complicated. At the launch of our first connected Roomba, we had a full-solution IoT cloud provider — sort of a turnkey solution that managed the communication, the firmware updates, all of the pieces for us — but it wasn't going to scale. We found that out. It wasn't going to scale to the volumes that we needed, because we sell a lot of robots, and it didn't have the extensibility; doing an Alexa integration with it, for example, would have been very difficult.
In 2015, we determined that we were going to move off of it and went through a selection process for that connectivity layer, and we landed on AWS IoT. We also knew that we wanted to start to own the application so that we would own its extensibility. We knew we wanted to build that on AWS, because in 2015 — and even today — the range of AWS offerings gives them an advantage, whether that advantage is bigger or smaller now than when we were looking.
In 2015, Lambda was very new; serverless itself was brand new. The Serverless Framework was still called JAWS. But we decided, in building this, that it wasn't in our interest to have to build, learn to build, maintain, own, and deploy server-based infrastructure for an elastic cloud IoT application supporting the volumes of robots that we sell.
Therefore, we decided to go all-in on serverless and say: we're going to build this around AWS IoT and AWS Lambda, pull in services, and figure out how to make that work for us. That's been enormously successful — the size of our teams, the costs, the development time, all of that has really benefited from the decision to go serverless.
Corey: Right, and that's what I find interesting about iRobot's position on a lot of these things. I can envision the use case for IoT pretty easily and conversely on the serverless side of the spectrum, I can see using Lambda functions to do some processing. My podcast and my newsletter both are powered by an obnoxious array of Lambda functions now.
I can see using it to inject static headers into CloudFront — because there is in fact no God, and we have to do that dynamically instead of statically like a reasonable CDN — but I digress. I still have a hard time wrapping my head around the use case of a serverless approach to what is effectively an incredibly smart vacuum cleaner. Can you distill that down a little bit?
Ben: Sure. When a cleaning mission finishes, the robot sends up a little report through AWS IoT, and that goes into, say, a Kinesis stream with a Lambda reading from it. When the Lambda gets the report, it can store it in a place the app can find and also dispatch a push notification — if you've opted into those — to tell you, "Hey, look, your cleaning is done." Doing it that way means we're not running Kafka to process those messages, we're not even using RDS to store those reports, and we definitely don't need some auto-scaling EC2 application to read off the stream and glue that logic together. It's a Lambda that contains basically just AWS SDK calls dictating our business logic for what we want to do with this piece of information.
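A handler of the kind Ben describes might look like the sketch below. The record shape follows the standard Kinesis-to-Lambda event format, but the field names inside the report and the store/notify targets are hypothetical stand-ins for the AWS SDK calls (e.g. DynamoDB, a push service) — this is not iRobot's actual code.

```python
import base64
import json

# Sketch of the Lambda Ben describes: read mission reports from a Kinesis
# event, store each one, and dispatch a push notification. The store/notify
# callables stand in for AWS SDK calls; the report fields are illustrative.

def handle_mission_reports(event, store, notify):
    """Process a Kinesis event as delivered to a Lambda function."""
    processed = 0
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded inside the Lambda event.
        report = json.loads(base64.b64decode(record["kinesis"]["data"]))
        store(report)                  # e.g. a DynamoDB put_item call
        if report.get("notify_user"):  # user opted in to notifications
            notify(report["robot_id"], "Your cleaning is done")
        processed += 1
    return processed

# Local exercise with a fake event -- no AWS needed.
stored, pushed = [], []
payload = {"robot_id": "r42", "status": "complete", "notify_user": True}
event = {"Records": [{"kinesis": {"data": base64.b64encode(
    json.dumps(payload).encode()).decode()}}]}
count = handle_mission_reports(event, stored.append,
                               lambda rid, msg: pushed.append((rid, msg)))
```

The whole pipeline is "just glue": decode, store, notify. There is no server, queue consumer loop, or scaling logic for the team to operate, which is the point Ben makes next about operations burden.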
Corey: Okay, by having that on demand there's an obvious economic win by not having a bunch of idle servers sitting around. Is there any other advantage that a serverless platform brings to the table?
Ben: Oh, yeah. I mean, there's the direct cost of your AWS bill, and that can go either way: with servers you have idle capacity, but at the same time there are use cases where you can be highly optimized and use EC2 in a way that makes your bill lower than Lambda's would be. But the hidden cost is your operations burden — how many people do you need to deploy and run this?
For us, with the millions of robots we sell a year, we only need single-digit FTEs in operations to manage the entire application that handles all of the data and functionality of our connected robots, which would not be possible with a server-based architecture. We would need a lot more people to make sure that everything was going smoothly and that all of our servers were patched, etc.
Then on top of that, the development time becomes very low once you get good at it, because all the code you're writing is just the bits that glue your infrastructure together — code that says, "I want this to move here and this other thing to move there." You're writing so little code, and it's directly feature-based rather than about infrastructural notions that don't relate to what you're doing as a business. It means you can churn out features very quickly.
Corey: Which makes an awful lot of sense for a number of use cases. It's fascinating to me — not just the variety of use cases that Lambda and its brethren get put to, but how these use cases cross into so many different areas of technology and types of platforms that just would not have occurred to me until someone mentions, "Hey, this is this thing that we're doing." Looking a little bit to the future, with that list of 38 services, are you starting to play with any of the higher-level machine learning tools?
Ben: Whenever we look at an AWS service, really the question is, "Is it useful to us? Can we use it?" When you look at a service like SageMaker for developing machine learning models, it certainly is attractive in reducing the overhead and the amount of infrastructure you need to own.
I'm actually even more interested in this: SageMaker includes a lot of functionality for training machine learning algorithms, but you can also bring your own algorithm, where you provide a Docker container and it will run that on all of the data you input — which makes it a great general-purpose processing tool. You don't actually need to be doing any machine learning to use SageMaker as a bulk processing tool, especially when you look at its hyperparameter optimization.
So if you need to run something combinatoric, you can use that to help farm out all of the different pieces of work. In addition to the on-label uses of a given AWS service, we're always asking, "Is there something else? Could we use it for another gap where we have a pain point — could we bend this service to our will to perform some other task?"
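The bring-your-own-container path Ben mentions amounts to building a training-job request that points at your own Docker image and an S3 prefix of input data. The sketch below assembles such a request with field names following the SageMaker `CreateTrainingJob` API as I recall it (verify against the current docs); the image URI, bucket, and role ARN are placeholders, not real resources.

```python
# Sketch of using SageMaker's bring-your-own-container path as a bulk
# processing tool: point a training job at your own Docker image and an S3
# prefix of input data. Field names follow the SageMaker CreateTrainingJob
# API; the image, bucket, and role ARN here are placeholders.

def build_training_job(name, image_uri, input_s3, output_s3, role_arn,
                       instance_type="ml.m5.xlarge", instance_count=1):
    return {
        "TrainingJobName": name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,   # your own container
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": input_s3,        # all objects under this prefix
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": output_s3},
        "ResourceConfig": {
            "InstanceType": instance_type,
            "InstanceCount": instance_count,  # fan work across instances
            "VolumeSizeInGB": 50,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

job = build_training_job(
    "bulk-processing-sketch",
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-processor:latest",
    "s3://my-bucket/input/", "s3://my-bucket/output/",
    "arn:aws:iam::123456789012:role/sagemaker-role")
# In real use: boto3.client("sagemaker").create_training_job(**job)
```

Nothing in the container has to be machine learning — SageMaker provisions the instances, mounts the input data, runs the image, and collects the output, which is what makes it usable as an off-label batch-processing tool.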
Corey: Speaking of, a common concern that is raised by companies that are doing interesting things in the entire cloud space is often the idea of lock-in gets raised. With your level of AWS services, I get the sense that it almost doesn't matter what other cloud providers do or even what AWS does. It feels to me, based from the story that you've told, that you're locked in to AWS come hell or high water, is that accurate, and if so, is that a concern?
Ben: I think if you're looking at something like machine learning, the primary lock-in that you get with any cloud provider is data gravity, and so if you consider running a given service on one cloud provider, and hooking it to a different service on another cloud provider, you're paying for the bandwidth cost to send the data between them, and the latency. I think that alone is a big obstacle to multi-cloud architectures being economically and functionally viable.
I think it's more about making sure that if you're evaluating — say, SageMaker versus some of the machine learning services that Google has, its suite of deep learning training — you can evaluate, in isolation, what's going to perform best: what's going to get me the best model, what's the easiest thing. Then you look at what it's going to cost to hook this all together, and you have to weigh those two things.
I don't think anybody should get locked in on the basis of "we should use it just because it's here," as opposed to asking what the total cost of ownership is of using something outside your primary cloud vendor. At the same time, I don't think lock-in is so bad. The way cloud pricing works between the big cloud providers, it's much more public, and so it's subject to market pressures in a way that enterprise software agreements of the past haven't been. Your ability to get your data in and out is up to you: you can store it in whatever format you want, and you can make it portable.
I think the CloudEvents specification coming out of the Cloud Native Computing Foundation is going to help with the interchange of information between cloud providers, which ameliorates the primary concern of cloud lock-in: that these services only work with all of these other services, so if I'm using Kinesis, I can only use Lambda to process it.
With the CloudEvents spec, in theory you'll be able to ship those events off and run Azure Functions based on your Kinesis stream. In reality, I don't think anyone is actually going to do that, but it will make people sleep better at night. Because I believe that while lock-in is not that big of a deal, the fact that people worry about lock-in is itself a big deal and needs to be addressed — does that make sense?
Corey: Very much so. I've been accused at various times — in some cases by the same people — of being an AWS partisan to the point of being a fanboy, whereas I've also been nominated for the position of AWS community villain, so there's a spectrum there. My approach has always been that once you pick a vendor, it's somewhat alarmist and unnecessary to avoid tying into the higher-level functions. As long as you have a theorized exit strategy, you're mostly fine.
Now, that does mean that, for example, if you're building your entire application architecture around something like GCP's Spanner — a world-spanning, ACID-compliant database which, as far as I can tell, works on magic — that is a form of lock-in, in the sense that if you have to leave GCP for some reason, there is no clear-cut strategy to get out of that environment.
Ben: Yeah, though at the same time the changing landscape around data governance means that the utility of these world-spanning databases is somewhat limited. But I completely agree, and then you look at the alternative, which is building an abstraction layer over it so that you could move to Microsoft's Cosmos DB, a similar global database.
The problem with doing that is that you lose the particular aspects of the individual services that make them special and powerful. I could make an abstraction for NoSQL databases that would let you use DynamoDB or Google's cloud NoSQL offering or Azure's DocumentDB, but you wouldn't get to use global secondary indexes, which are a really powerful feature of DynamoDB.
You're limited to the least common denominator, which is not very good, and you're hamstringing yourself for a contingency that nobody ever faces. You never see people out there saying, "We went multi-cloud and it saved our butts," or people saying, "We didn't go multi-cloud, it really bit us, and we learned our lesson."
You hear people talking about needing to go multi-cloud and how to go multi-cloud, but actual stories of the concerns bearing out — I just don't see them out there.
Corey: I tend to have a somewhat different perspective, in that I'm a consultant who winds up in a lot of different shops doing things very differently. I do see multi-cloud in a couple of scenarios. One: where a decision was made at the outset to keep everything at the lowest common denominator, if you will. As a result, all of the higher-level services are more or less closed off to those shops. They tend to run on the things that are available everywhere — instances, load balancers, object storage, and a few other bits and bobs that generally map one-to-one across providers.
The other scenario I see is where there was a migration at one point — say, from AWS to GCP or vice versa — and the original plan was to move everything, but it turns out a couple of things are really hard to move for not a lot of benefit. So they plant a flag, declare a multi-cloud victory halfway through, and move on to things that actually move the needle on their business.
I see more effort being placed into preparing for a theoretical migration and maintaining an agnostic layer than I've ever seen put into an actual migration, because they generally don't happen, and when they do, they're generally not world-changing.
Ben: That's exactly what I'm saying.
Corey: Yeah, I think we agreed on that.
Ben: Yeah, it's kind of like the argument for why cow tipping is not possible: people talk about teenage pranksters tipping cows, but there are no videos of it on YouTube, and therefore it's not possible. I find that very compelling. Nobody is showing it, so it's probably not happening. There may be some companies out there that succeeded and just don't want to talk about it, but on the other hand, people who do successful migrations, like Spotify, get trumpeted.
You get Google out there saying, "Look, Spotify moved from AWS to GCP and it was absolutely great." So in cases where somebody actually succeeded at this, or needed to do it, I think you would be hearing about it.
Corey: It's a story that I think people want to exist, but I think you're right. The analogy I've always liked is that it's astonishing how UFO sightings plunged right around the time everyone started carrying a high-definition camera in their pocket.
Ben: Same argument, right?
Corey: It's a strange and different world out there, and I think companies are still trying to find their way. Historically, in data centers, it was a lot easier to be agnostic, because you were buying utilities that had become commoditized. I don't care who my power vendor is; I don't care who my bandwidth provider is; if one of them displeases me, migrating is not that difficult. At the higher levels of the stack, it's worlds apart.
Ben: Yeah. If you're on Kubernetes, you can run Kubernetes in a lot of different places, and that makes you very portable. But the question is always, "Is that portability actually worth it?" versus going further down the serverless spectrum, where you're using higher-level services and doing less undifferentiated heavy lifting.
Corey: Which also gets back to your earlier point about data gravity: yes, you can save 20 cents an hour on a workload by running it on a different provider, but that workload has to siphon in three terabytes of data from the other provider. So you save 20 cents and spend dozens of dollars moving the data where it needs to be. That tends to be an economic non-starter in many cases as well.
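Corey's arithmetic can be made concrete with a back-of-envelope calculation. The prices below are illustrative assumptions for the sketch, not any provider's actual rates (egress pricing in this era was commonly quoted around $0.09/GB).

```python
# Back-of-envelope version of the data-gravity point: a cheaper per-hour
# rate can be swamped by the egress fees to move the input data.
# Prices here are illustrative assumptions, not real published rates.

def cross_cloud_delta(hours, hourly_saving, data_gb, egress_per_gb):
    """Net saving (positive) or loss (negative) of moving the workload."""
    return hours * hourly_saving - data_gb * egress_per_gb

# Save $0.20/hour on a 10-hour job, but pull 3 TB across at ~$0.09/GB:
net = cross_cloud_delta(hours=10, hourly_saving=0.20,
                        data_gb=3 * 1024, egress_per_gb=0.09)
# net is about -274: roughly $276 in egress to save $2 in compute.
```

Under these assumed numbers the move loses money by two orders of magnitude, which is exactly the "economic non-starter" Corey describes.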
Corey: Once again, you are Ben Kehoe, I'll put your Twitter handle into the show notes, where else can people go to learn more about you?
Ben: About me? Well, most of the talks I give are posted on YouTube — you can search my name there and find me. I've also got some posts on Medium, under my Twitter handle, where I write about where I think we're at in serverless, where I think we're going, and — one of the big things I like to talk about — what I think we don't have yet.
Corey: Oh, being a futurist is a terrific business. If you're right you're hailed as a visionary, if you're wrong no one ever calls you on it.
Ben: It's true. I complain less. I'm more, "This is what we don't have today," and so however it turns out later, as long as it solves the problem, I'm happy.
Corey: Well, thank you very much. Last question, what did you name your Roomba?
Ben: What did I name my Roomba? Well, I have a very small apartment and I don't actually run a Roomba in it, but I can share that the most popular name is Rosie.
Corey: Oh, the Jetsons' reference.
Corey: Wonderful. Well, thank you so much for your time. This has been Screaming In The Cloud. This has been Ben Kehoe, and I'm your host, Corey Quinn.
Ben: Thank you so much.