What is serverless? What do people want it to be? Serverless is when you write your software, deploy it to a Cloud vendor that will scale and run it, and you receive a pay-for-use bill. It’s not necessarily Functions as a Service, but a concept.
Today, we’re talking to Nitzan Shapira, co-founder and CEO of Epsagon, which brings observability to serverless Cloud applications by using distributed tracing and artificial intelligence (AI) technologies. He is a software engineer with experience in software development, cyber security, reverse engineering, and machine learning.
Some of the highlights of the show include:
Modern renaissance of “functions as a service” compared to what came before; it’s as abstracted as it can be, which means almost no constraints
If you write your own software, ship it, and deploy it - it counts as serverless
Some treat serverless as event-driven architecture where code swings into action
Going serverless as a strategy for efficiency means planning ahead and developing an application with specific, often complicated, logic
Epsagon is a global observer for what the industry is doing and how it is implementing serverless as it evolves
Trends and use cases include focusing on serverless first instead of the Cloud
Economic Argument: Less expensive than running things all the time and offers ability to trace capital flow; but be cautious about unpredictable cost
Use the bill to determine how much time and money each function and business flow consumes
Companies seem to be trying to support every vendor’s serverless offering; when it comes to serverless, AWS Lambda appears to be used most often
Not easy to move from one provider to another; on-premise misses the point
People starting with AWS Lambda need familiarity with other services, which can be a reasonable but difficult barrier that’s worth the effort
Managing serverless applications may have to be done through a third party
Systemic view of how applications work focuses on overall health of a system, not individual function
Epsagon is headquartered in Israel, along with other emerging serverless startups; Israeli culture fuels innovation
Full Episode Transcript:
Corey: This week’s episode of Screaming In The Cloud is generously sponsored by DigitalOcean. I’m going to argue that every cloud platform out there biases for different things. Some bias for having every feature you could possibly want offered as an added service at varying degrees of maturity. Others bias for, “Hey, we heard there’s some money to be made in the cloud space. Can you give us some of it?”
DigitalOcean biases for neither. To me, they optimize for simplicity. I polled some friends of mine who are avid DigitalOcean supporters about why they’re using it for various things, and they all said more or less the same thing. Other offerings have a bunch of shenanigans, root access, and IP addresses. DigitalOcean makes it all simple, “In 60 seconds, you have root access to a Linux box with an IP,” that’s a direct quote albeit with profanity about other providers taken out.
DigitalOcean also offers fixed-price offerings. You always know what you’re going to wind up paying this month, so you don’t wind up having a minor heart issue when the bill comes in. Their services are also understandable, without spending three months going to cloud school. You don’t have to worry about going very deep to understand what you’re doing. It’s click a button or make an API call, and you receive a cloud resource. They also include very understandable monitoring and alerting.
Lastly, they’re not exactly what I would call small-time. Over 150,000 businesses are using them today. Go ahead and give them a try. Visit do.co/screaming and they’ll give you a free $100 credit to try that. That’s do.co/screaming. Thanks again to DigitalOcean for their support to Screaming In The Cloud.
Corey: Hello and welcome to Screaming In The Cloud. I’m joined this week by Nitzan Shapira, the co-founder and CEO of Epsagon. Before starting Epsagon, he was a software developer at Intel, and before that, an Engineering Manager in the IDF Intelligence Corps. Welcome to the show.
Nitzan: Thanks for having me.
Corey: Thanks for taking the time to speak with me. It’s always nice to get a chance to talk with someone doing interesting things in this space. Let’s start at the very beginning. What is Serverless?
Nitzan: That’s a good question. I don’t think it matters a lot what the definition of serverless is; it’s more about what people want it to be. Basically, people want to write software without managing infrastructure, because that is not something they want to do. They want to focus on their business logic, so serverless means you can just write your software, deploy it, probably to a cloud vendor, and the cloud vendor will run it for you, will take care of the scale, you will be billed pay-per-use, and so on.
This is how I, and also our colleagues, see serverless, and not necessarily as Function as a Service. It’s more of a concept. You just write your code and someone else will take care of running it for you.
Corey: There have been a lot of talks about what that potentially could be. Originally, we started seeing things like this in the form of Heroku, or Google App Engine, or if you’re particularly unfortunate, AWS’ Elastic Beanstalk. How does this modern renaissance of Functions as a Service tend to differ from what came before?
Nitzan: In my opinion, the main difference is that Function as a Service is as abstracted as it can be, which means that you have almost no constraints. You can write any code that you like, and it will run as it would run on your own computer. Basically, you have an operating system, you have a container, you have Linux, your code is just running there, and it can do basically anything. You’re not inside any platform like Heroku that forces you to do things in a certain way. You can do it however you want as long as you live within the limitations of the environment, mostly around the running time, the memory, and the CPU. This is what Function as a Service is for me. Basically, a way to run compute freely with certain limitations, being billed pay-per-use.
But also, in my opinion, serverless can be a container. You can ship a container to something like ECS Fargate and, again, you don’t have to manage the infrastructure. It’s still not pay-per-use, but you get the same experience. As long as you can write your own stuff in your own way and then ship it and deploy it, for me this counts as serverless.
Corey: That tends to go a little bit down into the weeds with Functions as a Service recently. I agreed with everything you just said, until I started using it. For me, I guess the eye-opening revelation that really made it all start to click was the idea of treating this as event-driven architecture. An event happens. Maybe it’s the passing of time, maybe a file gets uploaded, maybe another function triggers something, maybe some web service hits a webhook. Then, all of the code in question swings into action.
That, for me, to some extent was a more, I guess, revelatory moment than, “Oh, I just write my code and hand it to them, and they run it for me.” Is that abnormal or is that more or less one of the things that other people just took for granted and I wound up not seeing because I’m—let’s be honest here—an absolutely terrible developer?
Nitzan: No. I think when all sorts of people get into Function as a Service or, let’s say, Lambda, for example, they just try to write one Lambda, they run it, they get the input, they get the output, everyone is happy, it works, it’s fine, and then they write another Lambda, and so on.
This usually helps people start with serverless, or what they think serverless is, which is something like AWS Lambda. But at some point, when a company or an individual makes a decision to go serverless, because the company doesn’t want to manage servers anymore as a strategy, it wants to be more efficient, then you need to think ahead and start planning. Now you want to develop this application with certain logic. It’s not just one function. It’s a whole bunch of functions. And it’s not just functions. It’s all the services in between.
You have the message queues, the databases, the storage, all the external APIs that you use, whether it’s any REST API, or Twilio, or authentication. All of it has to be combined together into, like you said, an event-driven architecture, which will usually end up being very distributed.
The effect is that once you pass a certain point, the application you’re going to end up with is quite complicated or, in many cases, very complicated. This is, in my opinion, the real essence of serverless applications: the highly distributed manner, the event-driven nature, and all of those elements. Not just the functions but also everything else around them, which can actually have a great impact on the performance, on the costs, on everything that’s going on, and it’s all together. Your code and the many services. This is how I see it, anyway.
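To make the event-driven shape concrete, here is a minimal sketch of a Lambda-style handler in Python. The bucket and key names are made up, and the event dict is a trimmed-down version of the shape AWS sends for S3 uploads; this is an illustration, not Epsagon's code or a complete AWS integration.

```python
# Sketch of an event-driven serverless handler (hypothetical names).
# An S3 upload event arrives; the function extracts the object keys and
# would normally kick off the next step in the chain.

def handle_upload(event, context=None):
    """Entry point in the AWS Lambda style: (event, context)."""
    keys = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        keys.append(f"{bucket}/{key}")
        # In a real app you might enqueue follow-up work here (SQS,
        # another Lambda, a database write) to continue the chain.
    return {"processed": keys}

# A trimmed-down example of the S3 notification event shape.
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"},
                "object": {"key": "report.csv"}}}
    ]
}

print(handle_upload(sample_event))
```

The point is that the code itself is trivial; the architecture lives in which events trigger which functions, and in the services wired in between.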
Corey: Epsagon provides observability into the serverless space. That’s the entire premise that your company is founded on. You’re in a somewhat privileged position almost by default, where it’s not just about the serverless applications that you write and that the people you interact with daily are writing. You’re also in the position of being a global observer for what the industry is doing, seeing how people implement this as they continue to evolve their own understanding of serverless. Is it too soon to draw conclusions based upon the trends you’re seeing? Are you starting to see the industry moving toward certain, I guess, shared understandings or shared best practices? What trends are on the horizon?
Nitzan: That’s a great question. Everyone keeps asking me, and not just me. All the leaders in the industry, all the companies playing in this space, are being asked the same thing: “What is serverless suitable for? Is it just for small things? Is it just glue? Is it cron jobs? Can I really build complicated production applications on top of these things?”
Traditionally, this is what people were doing with Lambda. We started about a year ago and actually got into this space with no particular experience in serverless. My partner Ran [...] did some Alexa skill development for Amazon. He even got some rewards from Amazon for his top-performing skills. So he knew what Lambda is, but we came from a cyber security background, reverse engineering, infrastructure technology, these kinds of things, and then we just came into serverless and asked, “Okay. What does it look like?”
Obviously, people were doing different things but even a year ago, we could see very interesting use cases, even presented in conferences and blog posts. Once we started speaking with companies using serverless, we found out that many of them are actually running very important production applications on top of these things. They don’t do it just because it’s fun and easy. They do it because they want their organization to be fully serverless in five years, for example. This was one of the things we’ve heard. Or every new application is going to be serverless from now on.
People used to say they were cloud first. Now they are serverless first. We are seeing this trend, at least in the mindset. People really like the idea of not managing infrastructure. It makes a lot of sense for the organization and for the developers: developer velocity is increased, the go-to-market is shorter. Everything just makes a whole lot of sense. This is how organizations think about it, some of them anyway.
Now, if you talk about the practice that we see, we see all different things. We see the big companies that are taking more time, and they have an application here and there, or different projects, and you see a ton of startups that are fully serverless from day one, like we are, and a lot of others. Then you see these companies in between that are still quite large but also have an almost 100% serverless backend. Some of them were actually speaking at Serverlessconf in San Francisco two weeks ago.
You can definitely see that the applications vary a lot, but you can definitely say that there are interesting use cases today already. If I had to guess, I think that two years from today, serverless is going to be the default way to develop cloud applications. This means for every new application, people will think first, “Can I do it serverless?” and only if not, then they will go to other options.
Today, obviously, it’s very mixed, but it’s very fun to be at the state of the art and speak with customers. They just tried a new service from AWS, or they are using Step Functions in a very massive way. You don’t even see these examples online, and suddenly you get to see something that is large in production and very high scale. I think it’s a very interesting time to be in this technology space. I really do think that it’s the future. We are serverless, and it’s not because we have to be. It really does make sense for us as well. Of course, there are also downsides, but this is what AWS, Azure, Google, and all the others are working on, to make the platforms much, much better. So, I’m not worried about that.
Corey: I think you’re right. Right now, one of the big arguments people are making about serverless distills down to the economic argument, where it’s less expensive than running things all the time. That’s part of the value I see from the economic perspective, but I also see what Simon Wardley was talking about as far as being able to trace the capital flow through an application: when certain functions take longer than others, there’s a defined cost affiliated with that. How are you seeing the economic story of serverless start to play out?
Nitzan: I think that’s actually one of the more interesting things around serverless. People think pay-per-use is a good thing. Why wouldn’t it be? You just pay for what you use. But sometimes, you don’t know how much you’re going to use. Think about a company that has a set budget. How can they know that the budget is going to be enough? How can a startup know that it’s not going to get a $100,000 bill at the end of the month because the Lambda functions were running too much?
The concept is very nice, and if you do the math, yes: above roughly 60% or 70% utilization of a server, it’s cheaper to run the server yourself, and below that, it’s the other way around. But that’s not the point. The cost is something that you just have to be very, very cautious about, and you have to be on top of it all the time. As long as you do that, you can enjoy the major benefits of serverless, which are the lack of managing infrastructure, scaling, and so on.
To me the cost is more of a disadvantage because suddenly it becomes very, very unpredictable. We spoke with companies that paid $50,000 because of a bug in the code. We even had a case ourselves: using our own system, we found out that one of our functions was going to cost $20,000 by the end of the month, but we knew about it in advance, so we were able to stop it. Generally, though, these things are very hard to predict.
It’s not just your own code. You have your own system and your own code, your own functions, and everything is okay. But suddenly, you are using an API to some external service. Maybe this API has some issues, some problems in its own backend, and suddenly instead of taking 50 milliseconds, it’s taking 50 seconds every time. So think about an application that is running one billion times every month; this can actually be a very, very expensive problem, and you will have a very difficult time finding out about it because it’s not even in your own code.
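The arithmetic behind that scenario can be sketched in a few lines. This is a back-of-the-envelope model, not a billing tool: the per-GB-second and per-request rates below are approximate Lambda list prices at the time and should be checked against current AWS pricing, and real Lambda billing rounds durations to increments.

```python
# Back-of-the-envelope Lambda cost model (illustrative rates; check
# current AWS pricing). Shows how a slow external API inflates the bill.

PRICE_PER_GB_SECOND = 0.0000166667  # approximate Lambda compute rate
PRICE_PER_REQUEST = 0.0000002       # approximate per-invocation rate

def monthly_cost(invocations, duration_s, memory_gb):
    """Rough monthly cost for one function: compute + request charges."""
    gb_seconds = invocations * duration_s * memory_gb
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# One billion invocations per month, a 512 MB function.
healthy = monthly_cost(1_000_000_000, 0.050, 0.5)   # API answers in 50 ms
degraded = monthly_cost(1_000_000_000, 50.0, 0.5)   # API answers in 50 s

print(f"healthy:  ${healthy:,.0f}/month")
print(f"degraded: ${degraded:,.0f}/month")
```

The healthy case lands in the hundreds of dollars; the degraded one in the hundreds of thousands. Nothing in the function's own code changed, which is exactly why the problem is hard to spot before the bill arrives.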
Corey: How realistic is that type of failure mode? Is this something that you’ve seen in the real world or this is more of a theoretical concern for most folks right now?
Nitzan: No. This is definitely something we see all the time. We have an application, we use a bunch of APIs. Some of them take five milliseconds, some of them take 500 milliseconds. Obviously, there you go, you have a very big difference. Just think about any application that uses a lot of APIs. You have all the APIs of AWS, for example, whether it’s Kinesis, or DynamoDB, or S3 and so on, and then you have Auth0, Twilio, and all those other APIs in the cloud.
The application is using, I don’t know, 30 different APIs. What are the chances that none of them is ever going to have any issues? Probably pretty low. At some point, you may hit this, and you’re constantly at risk of paying more because of these services. The thing is that you never know. Sometimes, AWS is having a bad day in one of the regions, and sometimes you misconfigured a service and that’s why it’s very slow. It’s kind of hard to notice until you get the bill and find out that you paid 80% of the cost waiting for a service over time.
Corey: You’re absolutely right, even before the advent of serverless. Every time I want to understand what’s going on in a new client’s AWS account, I mostly ignore everything they say and everything that the console shows me. I start with the bill, because that shows what’s actually being charged and what’s going on as far as resource usage goes. I think serverless surfaces this at a much better level than historically just, “Oh, no. You’ve got a bunch of instances running.” But you are absolutely right. A lot of, I guess, day-after, Monday-morning-quarterbacking style of observability has historically all come down to what the bill says.
Nitzan: Yeah. We also think that this is a great way to look at an application. One of the nice things is that you can actually take the bill, and you have a simple equation that tells you exactly how much time is being spent on what, and vice versa, which doesn’t exist in EC2 because of its pricing model. This way, we can take an application with, I don’t know, 100 Lambda functions and all those services in between, all the AWS services, all the APIs, and actually show a picture of everything, how it’s connected, and put dollars on every point. You know how much money a Lambda function cost you over the month, and how much money you paid waiting for this API in the last month, and so on. Then you have a much better understanding of your spend.
Another thing that is really cool related to this is starting to look at flows. You have business flows in the application. A user can register in your system, a user can pay in your system, and so on, and now you can see how much money every flow costs you over the month. Then you can say, “I’m spending 80% of my money on this flow, which is a daily backup, but it doesn’t generate any revenue for me, and the rest of the money I’m spending is on the much more important flows. I’m paying so much money for my cloud bill, so maybe something is wrong. Maybe I shouldn’t be paying so much money for a business process that doesn’t generate much revenue.” These things are much more difficult to comprehend in a serverless, highly-distributed architecture. So I think it’s very interesting to look at and visualize these things.
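The flow-level view described above can be sketched in a few lines of Python. The function names, flow definitions, and dollar figures are all made up for illustration; a real system would derive the per-function costs from billed duration and invocation counts, as in the bill discussion earlier.

```python
# Sketch: put dollars on business flows by summing the per-function
# costs along each flow (all names and numbers are hypothetical).

# Monthly cost per function, e.g. derived from billed duration * rate.
function_cost = {
    "auth-login": 120.0,
    "auth-verify": 80.0,
    "backup-scan": 900.0,
    "backup-upload": 2_400.0,
    "checkout-charge": 310.0,
}

# Which functions participate in each business flow.
flows = {
    "user-login": ["auth-login", "auth-verify"],
    "daily-backup": ["backup-scan", "backup-upload"],
    "checkout": ["checkout-charge", "auth-verify"],
}

flow_cost = {name: sum(function_cost[f] for f in fns)
             for name, fns in flows.items()}

total = sum(function_cost.values())
for name, cost in sorted(flow_cost.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ${cost:,.0f} ({cost / total:.0%} of function spend)")
```

Note that a function shared between flows (like `auth-verify` here) is counted in every flow it serves, so the percentages can overlap. Even this toy version makes the point: the expensive flow (the backup) may not be the one generating revenue.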
Corey: Right now, it feels a little bit like companies in the vendor space are falling all over themselves to support every serverless vendor’s offering. In other words, “Oh, because you’re going to want to run your application on top of AWS, and GCP, and Azure, and Oracle (they have such a thing), and we’re gonna run OpenFaaS and Kubernetes, and, and, and.” Is that something anyone, other than the vendors, is actively caring about today? As I look around, and I understand that I’m biased in this, it looks like it’s AWS Lambda plus a scattering of also-rans in the serverless space right now. Is that just a byproduct of my living in San Francisco inside of a bubble? Or is that what’s going on in a broader context, industry-wide?
Nitzan: Well, we live in Israel, and we are speaking with companies from all over the world. We definitely see AWS Lambda being more than 90% of what we encounter, and although research says the numbers are different, from what we see, AWS Lambda is very, very significant in serverless compared to the others, even though I know that Azure and GCP are very dominant in cloud in general, if I take serverless out of the equation.
I do believe that their function services will evolve very quickly, and we could see a much more reasonable ratio between the vendors in the next year or two, but today, definitely, when you say serverless, people think about AWS Lambda, even though I do not think it’s all about that. But yeah, it’s definitely the most used one by far.
Corey: It feels to me, at least, when I was starting down this path, the serverless provider I went with was already integrated with whatever else I had going on. Talking to my clients, they see the same thing. They start with a task that was either run by a cron job or had to be built into a monolith that they could break out relatively easily, and “Oh, every time we upload this file to S3, let’s go ahead and have this action,” or, “Every time an instance is tagged, why don’t we go propagate that tag to secondary or tertiary resources?”
It was very natural to wind up using the provider that already supported that event model. Having to bring in something else such as OpenWhisk to do something like that never occurred to most of us. I think that’s maybe part of the driver behind it. There’s also something to be said for not just being the first mover in this space but also the maturity of the offering.
Nitzan: Yeah, definitely. Serverless is not just the functions. It’s also the triggers, the events. If you really want to design a proper serverless architecture, you need events of different kinds. You need triggers of different kinds. If all you have is an HTTP event, you didn’t really solve the problem. You really need a way to have queues, storage, database streams, and so on.
AWS has a very rich ecosystem, and I also think it’s just much more stable nowadays than the others. I’m sure it’s going to change. I’m sure all the vendors will be at a very high quality very, very soon, because everyone is working on it. Regarding things like the open-source platforms that you can install, obviously the main advantage is that you avoid cloud vendor lock-in, so you can technically, I would say, move from one cloud vendor to another without much trouble. Although in reality, it’s not so easy, since it’s not just the code. It’s all the other services that trigger the code as well.
I’ve got to say, I’m not a big believer in serverless on-premise at all, because it misses the point, but I do understand why many organizations are going to go down this path. For me, serverless is cloud, cloud first. If you have to commit to a cloud vendor, you might as well do that. People are usually committed anyway. It’s not like they’re jumping between cloud vendors, but every organization has its own strategy. We are a startup, so we are happy running on AWS. It also helps us understand our customers much better, because we’re using the same tools and the same services that they’re using, and so on.
Corey: This isn’t to say everything is rosy or happy in AWS’ serverless environment. I still think they’re suffering, increasingly, from a problem that’s only going to get worse, where, in order to effectively and intelligently get started from day one with AWS Lambda, here are the 12 to 15 other services you need to have some passing familiarity with in order to do anything halfway intelligent. That feels like a terrific problem for people who have not been immersed in this cloud world for decades. I don’t see that getting much better right now.
Conversely, you’ll see other vendors who have a very streamlined onboarding experience to this where you can know nothing much about computers, let alone their other service offerings. And there’s a direct on-ramp to getting started with this. Is that something that you’ve noticed? Is that something that I’m alone in seeing as a potential weakness?
Nitzan: Well, I do think that it can take some experience and some learning to get started, but I don’t think that things should be super easy. It’s okay to do some learning, some onboarding, some experimentation until you feel comfortable with everything. Software programming is still something that you need to know. It’s okay that things can take a bit more time, and I’ve got to say there are different tools today that really can help you with deploying applications into production. Obviously, everyone knows the Serverless Framework and other tools. The code management is a bit easier. Deployment of the services around the Lambda functions, or around your own code, is easier. There’s also stuff like Terraform and so on.
I don’t see it as a huge barrier, especially when you start with a small application. You experiment a little. If you can do it very easily, maybe you are using a lot of templates, but when you start to do something more complicated, then it’s going to break. It’s always good to know the basics when you’re going to a new platform, and the basics are writing a piece of code, having it triggered, seeing how it works together, and then combining more of these elements to create an actual application.
I think it’s reasonable. I wouldn’t say it’s easy. It’s definitely not easy. All of our team, all of our engineers, came from a cyber security background. We didn’t know serverless from day one, but we learned, and now we are developing serverless and deploying multiple times a day. We are very comfortable with the AWS Console. It’s not the worst thing, let’s put it that way, and once you get comfortable with it, it is very powerful. I think in general it’s worth the effort.
Corey: I would agree with you wholeheartedly. Again, what I’m about to say is in no way, shape, or form your position, either personally or as a company. This is coming purely from me. But Epsagon’s entire product positioning in the market is based around serverless observability, understanding what your functions are doing. It’s my opinion that this entire space wouldn’t exist other than for the fact that CloudWatch is freaking terrible. Whether we’re talking about CloudWatch Logs, the metrics coming out of it, or digging into X-Ray to figure out how code is working through our serverless applications, all of that is terrible to the point where it can be equated to straining raw sewage through our teeth.
The only way I’ve ever found to manage my serverless applications intelligently has been to use a third-party service. From the perspective of, I guess, looking into the future, if one day someone on the CloudWatch team wakes up from a 30-year nap and realizes, “Oh, wow. There’s a revolution going on here,” and the product starts to get better, what then is the value proposition of something like Epsagon, or one of its competitors in the market space, when it comes to understanding what’s going on in our applications?
Nitzan: That’s a great question. We hear it a lot. First of all, our value proposition is not how your functions are doing but rather how your system is doing. We’re actually going to have a new launch soon, a new website, and also new messaging, but generally what I say is serverless is more than just functions. Serverless is functions, it’s APIs, it’s services; it’s a highly-distributed system, which means that you need more than just functions to really know what’s going on. This is our core value proposition. It was built from the ground up as a distributed tracing solution and technology.
On top of that, there are different things like AI and predictions, helping you troubleshoot, and so on. But generally, if you compare it to stuff like CloudWatch or X-Ray, I don’t think we exist because CloudWatch is not perfect or X-Ray is not enough. It’s because third-party products and solutions are always going to be ahead of what Amazon usually offers, because Amazon is an amazing company, but it’s an infrastructure company at its basis. There are several hundred people working on Lambda today. What they are working on is making Lambda faster, making the performance better, reducing the cold starts. All of these things are what they are taking care of, and they will take care of them.
The same goes for Microsoft and Google. This is their job. This is the thing that they care about the most: to enable Lambda for more and more people, for more and more use cases. Today, it’s not suitable for every use case. This is why we don’t care about monitoring the infrastructure, the CPU, the memory, all of these things, the cold starts. This is Amazon’s job, and I’m sure they will do a very good job.
CloudWatch is essential. They have to give you something to understand what’s going on. So they give you the basics. X-Ray is very, very cool. It actually lets you understand what’s going on inside your code, not just the logs.
Then when you talk about really complicated things, like really distributed applications, event-driven, very high-scale, hundreds or thousands of functions. These things are quite complicated, and the technology needed for them, in my opinion, is not something AWS would like to spend their resources on.
The great thing about Amazon is that they don’t try to hide anything, in the sense that they say, “Okay, we want to make our customers happy. We want to provide them with the best solutions for whatever they need. Sometimes we develop something ourselves. Sometimes we can offer them some other product to help them. But we want them to be happy.” So right now, their customers are worried about cold starts, about performance. That’s a much more basic problem than distributed tracing across applications, which is something very related to the business logic of the specific application, for the user, and so on.
In my opinion, Amazon will continue to improve services like CloudWatch, X-Ray, CloudTrail, everything around that, but I don’t think it would make sense for them to put in as many resources as we put in, as a company that is solely focused on distributed tracing, on visualization, on AI. Everyone has, let’s say, their best use case, and I feel that Amazon will continue to be a very good provider of infrastructure and of all the technologies that enable people to actually run software.
On top of that, there will be different solutions for code management, for monitoring, for security. I don’t think all these companies are going to disappear because Amazon releases a new service one day. There is always this interesting connection between Amazon and other companies. I think it’s a good dynamic, everything is evolving, and we are actually using some of the data from CloudWatch to enrich our platform. In a way it all makes sense, in my opinion.
Corey: Absolutely, and I think that your systemic view of how these applications work is one of the key differentiating points from most other serverless observability players. It’s not about individual functions. It’s about the overall health of the system, not just the pieces that I control or have visibility into. As you’ve mentioned, third-party APIs are probably one of my single biggest pain points right now.
Nitzan: Definitely. Some people, like Charity Majors and others, say that, “First of all, if you can use something that’s already out there, use it, and don’t write things that you don’t have to write.” So using APIs just makes a whole lot of sense. When you’re in serverless, your functions are very, very small, so you actually cannot do many things yourself. If you want to do machine learning, maybe you would use an API for that, and for almost any kind of thing, there is an API for it.
I think that as time goes on, the number of APIs is going to be more and more substantial. Also, the risk of these APIs working poorly or malfunctioning is going to be bigger, not to mention the distributed manner in which one service triggers another and so on. When you talk about a chain of events, this can be very, very complicated to troubleshoot and understand. I do think that this is the core essence of what a serverless architecture is.
Corey: Right now, there’s an awful lot of buzz in the serverless space and it all seems to lead back to Tel Aviv, in one way or another. There are a number of companies that have been launching out of that region. It’s been something of a renaissance for an area that’s always been very technically involved.
There have been a tremendous number of startups, and there’s a book called Startup Nation, written about eight or nine years ago, that came out talking about Israeli culture fueling innovation. But for some reason, it seems that serverless is one area that the entire Israeli tech industry is jumping into, far beyond anywhere else, possibly even including San Francisco. Why is that?
Nitzan: That’s actually a funny thing. Traditionally, Israeli companies have been very good at deep technology infrastructure. I think it’s because of the military. So many people in Israel got a lot of experience in cyber security and in really core infrastructure stuff from the army. This is our natural thing to do, to focus on these things. Serverless infrastructure, cyber security, it’s all related. It’s not like developing a mobile game. We are not very good at those things, obviously. This is one thing.
Another thing is that it’s a very small place. If you think about Tel Aviv, it’s a super small city and everyone kind of knows everyone. When somebody says, “Oh, serverless. I heard it’s very good,” then, “Oh, I know these investors. They think it’s good.” Everyone talks to each other, and suddenly there’s this belief that serverless is a good thing to do.
I guess at some point, probably about a year or a year and a half ago, multiple teams started to work on serverless stuff. One, because it’s deep technology, and second, because everyone talks about it. Then, when you see how things ended up today, you have five, six, or more funded companies in the serverless space from Israel. I think it’s a bit coincidental but it’s quite funny. At Serverlessconf San Francisco, there were more Israeli companies than any others. I find it more amusing than anything I can draw a conclusion from, but it is a fun space. Hopefully we have some advantage in these kinds of things. I don’t know, maybe.
Corey: I’m one of the program committee members for DevOpsDays Tel Aviv, and every year when I go out there, it’s just a different calibre of conversation that I have with technologists working on interesting things. I attend a lot of DevOpsDays events, but there’s always been something unique about, I guess, the perspective that I get whenever I’m out there visiting. I don’t even know what that ties back to. It just turns into a series of conversations I don’t really get to have anywhere else.
Nitzan: Actually, I haven’t been to that conference yet. I’m definitely going to be there this year.
Corey: We look forward to receiving your submission for a talk.
Nitzan: I think we already submitted a few talks, so you will get them soon. Israel is a mix of people who just really like technology. There’s a very good, very small community of investors, including American investors; the top investors have offices in Israel. I have good friends of mine working at [...] and at startups, and everyone knows each other. So, I think, any revolution comes in very quickly. Maybe even at the same time as it comes to San Francisco, it comes to Tel Aviv as well. This is probably why you see things change so quickly here. I think it’s a cool place to be.
Corey: Absolutely. If people want to learn more about Epsagon, where can they go?
Nitzan: First of all, anyone can definitely reach out to me. My email is email@example.com and I will respond, I promise. Our blog has some pretty cool blog posts. We did some really neat stuff, like reverse engineering the Lambda environment, and other things that we believe are more interesting to the community. It’s not just writing general stuff; we try to do things that nobody did before. Our blog is a good place to go. We have some open source projects that you can take a look at on GitHub. Of course, we’re at all the conferences like Serverlessconf, ServerlessDays, and probably some more to come. We’re pretty easy to reach. Just talk to us.
Corey: Absolutely. Thank you so much for taking the time to speak with me. This has been Nitzan Shapira. I’m Corey Quinn and this is Screaming In The Cloud.