Another week, another high-profile data breach. Well, that’s what it seems like anyway. As Director of Cyber Risk Research at UpGuard, Chris Vickery knows a thing or two about why these breaches are occurring—and what organizations can do to minimize the likelihood they occur. Join Corey and Chris as they talk about why so many companies leave S3 buckets publicly exposed, raising the bar of low-hanging fruit for data security, why organizations can’t blame third parties for breaches, why AWS isn’t liable for everything that goes wrong in the cloud, the recent Capital One breach, and more.
About Chris Vickery
Chris Vickery is Director of Cyber Risk Research at UpGuard. His research has protected over two and a half billion private consumer and account records which would have otherwise remained at risk of malicious exploitation. He has been cited as a cyber security expert by The New York Times, Forbes, Reuters, BBC, LA Times, Washington Post, and many other publications. Some examples of his high-profile data discoveries involve entities such as Verizon, Facebook, Viacom, Donald Trump’s campaign website, branches of the US Department of Defense, Tesla Motors, and many more.
Voice: Hello and welcome to Screaming In The Cloud, with your host, cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud. Thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming In The Cloud.
Corey: This week's episode of Screaming In The Cloud is sponsored by LightStep. What is LightStep? Picture monitoring like it was back in 2005, and then run away screaming. We're not using Nagios at scale anymore, because monitoring looks like something very different in a modern architecture, where you have ephemeral containers spinning up and down, for example. How do you know how up your application is in an environment like that? At scale, it's never a question of whether your site is up, but rather a question of how down is it.
Corey: LightStep lets you answer that question effectively. Discover what other companies including Lyft, Twilio, Box, and GitHub have already learned. Visit lightstep.com to learn more. My thanks to them for sponsoring this episode of Screaming In The Cloud. Welcome to Screaming In The Cloud. I'm Corey Quinn. I'm joined this week by Chris Vickery of UpGuard. Chris, welcome to the show.
Chris: Thank you for having me. It's an honor to be here.
Corey: Oh, likewise. I've been a follower of yours for a long time. Trying to keep abreast of the interesting stuff that you do, but we'll get there. First, what do you do at UpGuard?
Chris: Well, my title is Director of Cyber Risk Research. I do a lot of things. Probably the thing that I'm most well known for is leading our BreachSight team, which is a platform where we give the white glove approach to enterprise-level clients that have large network footprints, and want somebody to shepherd over them, and make sure there aren't any obvious glaring problems that can be potentially taken advantage of by actual bad guys.
Corey: I first became aware of you folks because it turns out there's a number of security companies out there. But I became familiar with you when I kept seeing the same type of breach announcements coming out. Well, breach is sort of a lofty term, but I'm sure we'll unpack that at some point. Where companies had not properly secured S3 buckets, and they had exposed varying amounts of customer data. These are effectively brand name companies in many cases, not random hole-in-the-wall taxidermists. Every time I kept seeing "discovered by UpGuard," "discovered by UpGuard," I was waiting inevitably to finally snap and say, "All right. What's UpGuard?" And the response to be, "Not much, what's up with you?" But it turns out it's not a pun. You're actually a real company.
Chris: Yes, UpGuard is a real company. It was started about six years ago, I believe. I've been with UpGuard since 2017, and it was started by a couple of Australians. Our CEO is Mike Baukes. He moved the company over to the US after getting it started in Australia, and we're incorporated in Delaware, all nice and legal, and headquartered in Mountain View. Things are picking up quite well.
Corey: So I've become familiar with you folks as the research company that finds publicly exposed S3 buckets and then writes about them. But I'm going to guess that when you're more than a two person company, that probably isn't where you folks start and stop.
Chris: UpGuard has about three, well, right now three platforms that we offer. One we call Core, and it's the internal, kind of on-premises one: watch your configurations and make sure everything's hunky dory within your environment. It can discover all the random stuff that you have plugged in, that you maybe don't remember is plugged in, and keep everything going well. Then we have the CyberRisk platform, which watches your vendor risk ratings, aggregates the whole total for your company, and scores your vendors on a scale from zero to 950. We say anything under a 600 is pretty bad, and I could probably find some exposed data for them, if I were to have enough time in the world to look at everybody that intensely. Then we have BreachSight, which is the thing that I am in charge of the team for, and that's where we give the white glove approach to the network footprint of large enterprise customers, and make sure they aren't accidentally exposing anything that bad guys can take advantage of.
Corey: I'm assuming that most of the public ones, where you're cited in the newspaper articles, is stuff that you've discovered as you walk through the Internet. It's not one of those stories where, "Oh yeah, we just do this for our customers and then we write news articles about it that basically publicly shamed them." Is that accurate?
Chris: Well, the take that I have on it is this: we do have automated systems these days, always looking, finding this stuff, alerting us to do more manual scans and looking at things more intently. But I don't think of it as so much shaming as it is raising public awareness of the problem of exposed data. Whether it's involving Amazon S3, or just open rsyncs, or anonymous FTPs, or Azure, or Google Cloud, or any number of other hosts out there, people are going to misconfigure their stuff. It's just a statistical probability and human nature. So we would like the public to take that into consideration a little bit more when they're trusting companies with their data, as well as for companies when they're hiring people to work with these platforms, and their customers' data.
Corey: This stuff is complex. I don't think it's unfair to say that no one wakes up knowing this stuff, and it's easy to understand how a lot of these mistakes got made, at least in the realm of S3 buckets a couple of years back. There was a default setting in the web console from AWS where any authenticated user could read the data. Sure, that makes sense. This is just company confidential. What people didn't realize was that that meant any authenticated user globally. They since fixed that, and then in turn made it increasingly difficult to accidentally do this with "are you sure?" dialogs, and scary labels in the console, and series of emails that go out, and entire services that are designed to stop this.
So my perception, at least from the outside public world, as I've found some of these myself over the years, has been that it feels like this is tapering off. You're not seeing open S3 buckets in the same volume as you used to, and I mentioned that on Twitter, and then you commented, which is what started this entire podcast recording, and said, "Well, that's not what we see." You are way better positioned to see how this industry is doing across the whole. What am I not seeing?
Chris: Well, for starters, the global authenticated users setting is still an issue. I don't know what you're referring to with "they've fixed that," but we notified a fairly large entity just last week of a bucket that had been open for quite a while, with that exact setting being the problem.
Corey: Yes, for clarity, when I said they fixed that, I meant that in the console it's no longer a checkbox just sitting there, waiting as a trap for the unwary. It's no longer there in the console. You can still set it, but you have to do it explicitly via an API call.
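[Editor's note: for listeners who want to check their own buckets for the grant Corey and Chris are discussing, here is a minimal sketch. The two grantee URIs are the real AWS "AllUsers" and "AuthenticatedUsers" group identifiers; the bucket name passed to `audit_bucket` is hypothetical, and the sketch assumes boto3 and local AWS credentials.]

```python
# Sketch: flag the dangerous global grantees in an S3 bucket ACL.
# The grantee URIs below are AWS's real "AllUsers" and
# "AuthenticatedUsers" groups; any bucket name used is hypothetical.

GLOBAL_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers": "everyone on the internet",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers": "any AWS account holder",
}

def risky_grants(grants):
    """Return (audience, permission) pairs for world-open grants in an ACL."""
    findings = []
    for grant in grants:
        uri = grant.get("Grantee", {}).get("URI")
        if uri in GLOBAL_GRANTEES:
            findings.append((GLOBAL_GRANTEES[uri], grant["Permission"]))
    return findings

def audit_bucket(bucket_name):
    """Fetch a bucket's ACL with boto3 and report any global grants."""
    import boto3  # assumes AWS credentials are configured locally
    acl = boto3.client("s3").get_bucket_acl(Bucket=bucket_name)
    return risky_grants(acl["Grants"])
```

Calling something like `audit_bucket("example-bucket")` on a bucket you own would return, say, `[("any AWS account holder", "READ")]` if the exact setting Chris mentions is present.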
Chris: Okay. That makes more sense to me. Yeah, that's still an issue. But we're seeing plenty of buckets exposed. There's not as much low-hanging fruit hanging as low as it used to, perhaps, because I'd like to think efforts such as our own have made people and systems administrators more aware of the dangers of leaving data just exposed, or using publicly accessible buckets. But there are still quite a number of them out there. My team at UpGuard has been focusing a lot more recently on supporting our clients, and taking care of basic responsibilities that we have there, as well as the advanced new stuff that we're always finding. So we haven't been writing as many reports, but there are still quite a few out there that could fuel a lot of coverage of the issue still.
Corey: Gotcha. For a while here in my office I really only had two pieces of art, because I have a very, well, let's say crappy aesthetic sense. But one of them is a map on the wall of all the announced and active AWS regions and CloudFront edge locations, mostly because I want to keep the small map pin industry in business. The other was a monitor, just having a consistent ongoing scroll from the certificate transparency logs of S3 buckets that had been opened, insofar as there'd be an announcement of a new bucket. Great. Okay. Now there's an automated system that checks and sees if a quick ListBucket call succeeds against it. And this is not a tool I built. This is something that was available on the larger internet, and it would continue to scan this, and if it was available, it would flag it.
It also did a similar check for the authenticated user approach as well. But it seemed over time, from my perspective, that that became almost entirely noise, as opposed to anything that was substantive as far as being clever and discovering these things. It almost seems like there ... And again, this is also tied into the further problem where, for many use cases, having an open S3 bucket is a desirable trait. That's something that people want to do, and there are financial reasons why they should continue going down that path. The problem is that that's not the bucket you want to store your database backups in, or your user database, or credentials to access things that are expensive and important to the company.
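[Editor's note: this is not the tool Corey ran, but the idea behind such scanners can be sketched in a few lines: attempt an unauthenticated ListObjects request against a bucket's URL and classify the response. Real scanners feed in candidate names from sources like certificate transparency logs; any bucket name you pass in here is your own choice.]

```python
# Sketch of an anonymous bucket-listability probe: no AWS credentials,
# just an unauthenticated HTTP request, classified by status code.
import urllib.error
import urllib.request

def classify(status):
    """Map the HTTP status of an anonymous list request to a verdict."""
    if status == 200:
        return "listable"        # anyone can enumerate the bucket's contents
    if status == 403:
        return "private"         # bucket exists, but anonymous listing is denied
    if status == 404:
        return "no-such-bucket"
    return "unknown"

def check_bucket(name, timeout=5):
    """Anonymously probe one bucket name for public listability."""
    url = f"https://{name}.s3.amazonaws.com/?list-type=2"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return classify(resp.status)
    except urllib.error.HTTPError as err:
        return classify(err.code)
```

As Corey notes, a 200 here is sometimes intentional (public assets), which is exactly why the results of tools like this are mostly noise without human judgment about what the bucket holds.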
Chris: And I completely agree, there are plenty of good use cases. If you just have an assets bucket that's just graphics, creatives, or whatever the heck, and you don't really want to mess around with authentication too much, to have random web browsing behavior be able to pull them, there's no huge problem there. You can do that. You can even make them listable, who cares if people know they're there. They're just little images or whatever the heck. Your transparency log anecdote is along the lines of what I was getting at when I said the low-hanging fruit isn't hanging as low anymore. As in, I don't know if Amazon changed some way that bucket name registering occurs, but I agree there's not as much of a stream coming from easy feeds like that.
I am familiar with the scripts and the tools you're talking about there, that have kind of made the rounds. But there are still plenty of them out there, as well as plenty that were discovered years ago that are still exposed, mostly in other countries that don't speak the same language that I do, or anybody that I know does. So I still see it as a big problem, but you know, perhaps the 13 year old sitting in his parents' basement, whatever, wouldn't be able to find them quite as easily.
Corey: Absolutely. I think you're right when it's about raising the bar of low-hanging fruit. There is an argument that at some point, if a state-level actor is working against you, you're probably going to lose, for most values of you. Some people in my experience have taken that as, "Oh, so why even try? Security's impossible." Well, not really. It's a spectrum. Most of us are not getting breached by the Mossad. We're getting breached by some random person running a script they found on the internet, because you forgot to change the default password. The idea is to raise the bar at least enough so that you're not one of those low-hanging-fruit companies where it's just an easy mistake to make.
Chris: Yeah, that's the whole idea behind security in my mind. It's about resiliency and making yourself not an easy target. Raising it to the point that they're going to go after the next guy. Maybe Mossad is targeting you, but there's other targets that are equally juicy that are less secure. So they'll go after them first, and maybe you'll get ahead of the game. That's the whole security thing. It's not about being 100% impenetrable. Anything that uses electricity, just that blanket statement there, anything that uses electricity can be manipulated in ways that you and I would not anticipate, I'm certain.
Corey: One thing that I've seen with a number of these breaches that have come to public awareness has been that the company will admit the breach, as they're legally required to do. Sometimes they dragged their feet, sometimes not. But they're always very quick to say that it was, oh, a third party contractor that did it. And I understand why they want to emphasize that, but on the other side of the coin, they picked that contractor. I do business with a company; I don't vet who they have business relationships with. Given that you do this for a living, more than as someone who just sits there on the sidelines like I do and angrily observes things, where do you stand on that responsibility breakdown?
Chris: I don't think that you can contract away the liability. I don't like that argument that companies try to toss out there to obfuscate things, where they say, "Oh, look at this clause here, it says we're indemnified against mistakes that our subcontractor makes," or whatever the heck. You can write anything you want in a contract, but if you're doing business with a third party to handle your cloud stuff, and the third party screws up, you're not absolved of any responsibility there. It's a natural human reaction to try to put the blame on the other guy. But I'm not a fan of that argument, and I don't think there's much legal precedent to hold it up either.
Corey: No, and that becomes a somewhat serious and questionable concern, as far as companies thinking, oh well, that's okay, I'm just going to punt the responsibility to someone else. You can't. I don't think you can. It feels to me, soup to nuts, that you can outsource work but never the responsibility.
Chris: Yeah, I completely agree with that. I've advocated for a long time about creating ... You can write anything you want in the contract still. But if you were to specify in contracts with third parties where the work is going to take place, and make it a neutral zone where, let's say, the names of the buckets that will be used are known and written in the contract, and those are the only ones that'll be used, then you, being the first party, can check and see if they're open and exposed to the public anytime you want. It's verifiable. It's what Reagan said about the Soviets. Trust, but verify. If everybody would start doing that sort of approach, where anybody can check it, it would possibly keep some of these problems from happening, I think.
Corey: I strongly suspect you're right. It is possible to get this done properly. You remember that article in the Wall Street Journal about how the Pokemon company winds up inspecting the security practices of its business partners? The reason that jumped out to me was that it called out in the article that a vendor they were debating doing business with had improper security controls around an S3 bucket, and their response was, "Cool, we'll use another vendor." That is, I think, the only public example of something like that coming to the forefront. I sent a polished bucket engraved with "S3 Bucket Responsibility Award" to their office, and to my understanding, it's still on display there. They have the good sense not to let me in the building, but that's neither here nor there. But it is possible to make smart decisions. It just requires not assuming you'll double back and fix things later.
Chris: Yes, it is certainly possible to do. There's a certain level of human competency, and human nature, that goes into the equation, and that really shines a light on the importance of hiring the right people. Making sure that people you have get the right training, and not just going with something because a sales or marketing demonstration looked fancy and cool to you. You really have got to have the right people that understand, and can integrate with, this great new technology. Otherwise you're potentially in for some surprises.
Corey: I think one of the hardest things to get across to folks who are new to the world of cloud, at least the world of AWS, has been their vaunted shared responsibility model. Which is an incredibly boring and droll way of saying that, there are some things that AWS is responsible for, and there's other things that customers are responsible for. An easy example would be ensuring that the application doesn't have a bunch of bugs in it. That's the customer responsibility. Ensuring someone doesn't drive a truck into a data center, grab a bunch of drives and take off. That's AWS' responsibility. I'm curious where in that divide you find S3 bucket permissions.
Chris: My stance on that for a while has been that I agree with the premise that there are certain responsibilities that you just can't strap on being Amazon's liability to worry about. Things that you customize and upload to their cloud space, that you've rented from them. They have no control over what you're putting up there. So bugs in your application. Yeah, they have no control over that. Where there's a fuzzy line is in the concept of, okay, did they develop this platform in a way that is proper for the way that it's been marketed?
If they're making it sound really, really super easy, anybody with a credit card can sign up and upload data, and bam, and go, but it really does take a little bit more knowledge than that to do it right, and not risk clicking on the wrong box or whatever the heck, then there's an argument to be made that maybe that could be better architected. But that's a continuing goal of any business, to improve their product, and make it more user friendly, with fewer mistakes made. So that's where I see the line getting fuzzier.
Corey: I would absolutely agree with that. Interestingly, at the time we're recording this, a couple of days ago there was a very public breach on the part of Capital One. It initially looked to some people a lot like an S3 bucket permissions problem. A little more digging turned out that it wasn't. This has been all over the news now, and I imagine most listeners have heard about it, but at a very high level, can you give a quick summary of what happened?
Chris: Well, the details are still a little fuzzy when you get down to the nitty gritty. But in essence, there was an individual named Paige Thompson, I believe, who through some digital trickery was able to enumerate certain information about a lot of cloud accounts. But Capital One is the one that is in the news right now, and she was able to list buckets and access the data within them. At least to my knowledge, not because they were improperly made public or anything, but because there were some side doors, and little channels that you can gain information from, and use that information to get a little bit more information. Then if somebody misconfigured part of the chain, you may be able to gain some privileged information that allows you to get through the authentication wall.
It was not a simple thing that anybody on the street could probably do. It was something that required a little bit more advanced knowledge. Interestingly enough, not that this has been reported as a cause of the situation, but this person, Paige Thompson, was at one point an Amazon Web Services employee, a little while ago. So I've brought up the concept of, we need to ask the question: did this person already know how to do the types of things that were used to get to the data in this situation? Or did this person have experience from being an employee at AWS, and then corrupted that knowledge into using it in this way, or what? It's not answered right now, and the affidavit that the DOJ filed with the charges doesn't do much to further illuminate that question.
Corey: I would agree. Everything that I've read so far to my mind is the sort of thing that I would do if I dropped my sense of ethics and decided, you know what, let's see how much damage I could possibly do. These are all things that don't require any insider access, and I would be in fact very surprised if it came out that there was any insider access that even remotely came into play here. But you raised the excellent question of, how much of this came from a baseline level of exposure, and experience from working there? And that's one of the fun questions that I think that a lot of companies haven't really asked, is who are the people building services at large cloud providers? Far and away almost all of them are decent, ethical, intelligent people. But as we see, it generally only takes one person going in a strange direction, to start raising uncomfortable questions like this one. The real answer, at least in the world of AWS is, we don't even know publicly how many employees of Amazon work in AWS, let alone the rest.
Chris: Yes, that is absolutely true. As I brought up before we started recording here, when you throw in the contractors and subcontractors as well, it just throws a bunch of wrenches in the machine, and people are not taking this into account when they decide to move their data center into the cloud. There's no way that Capital One has done background checks on all the AWS employees that have access to administrative level things that could be abused. Not that that was the case in this situation. But it's just a good example of, there should be a concern there that I don't think is being addressed very well.
Corey: Yes. To my understanding, we've never yet had a public case of an insider at a cloud provider causing problems like this. I still would argue that we haven't in this case; as you mentioned, she left a couple of years before this wound up happening, and since then we saw the giant S3 outage in 2017, after which AWS very publicly rebuilt massive swaths of S3 in a customer-transparent way. So even then, some of the knowledge around how the system functions internally is going to be out of date. It just comes down to the question now of, even though we haven't seen this in years past, is this a vector for the future?
And AWS is very front and center about how most of their employees, in fact in some cases any of their employees, won't have access to customer data, period, provided that the customer configures all of the various security apparatus correctly. That's kind of what leads us to where we are now. It's very hard, even for a company as incentivized to do that as a bank, to wind up getting all of the edge cases nailed down.
Chris: Yes, that's very true. It illustrates the kind of balance here, the seesaw of: the cloud services provider is going to hire reputable, well-meaning people that aren't going to do anything to cause problems. But there's also a responsibility on the client side, the Capital One side in this situation, to configure everything correctly, and have the employees on their side that are knowledgeable enough to configure things correctly, and not expose data or vectors or side doors into data storage areas. So it's going to be a constant back and forth on where the responsibility and the liability lie in these situations. Like you said, I don't think there's a lot of, if any, public cases or situations where that sort of thing has been really sussed out to this point.
Corey: Absolutely. The fun thing, though, from everything that's been read and reported so far, has been that the attack vector was more or less someone tricking an edge device of some sort into making a web request against its own local metadata endpoint. Which, if you know where to look, will spit out a set of temporary credentials that are bounded at six hours of validity, and then you can grab those credentials and start exploring what else those things have access to. In this case it turned out there was an overly broad role. Okay, great.
Now it will list 700 buckets, and the contents of them, and transfer them out. There's a lot of things that have to happen first to be able to pull off something like that. But also, in order to permit that level of oversight: why does a role assigned to a firewall have access to talk to 700 S3 buckets? That's sort of the big one that I don't think anyone has a good answer for.
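[Editor's note: the metadata-endpoint mechanics Corey describes can be sketched concretely. The 169.254.169.254 address and the security-credentials paths below are the real EC2 instance metadata (v1) endpoints; the role name is hypothetical. The point of the SSRF trick is that the attacker never talks to these URLs directly, they convince the instance to fetch them on their behalf.]

```python
# Sketch: the well-known EC2 instance metadata (IMDSv1) paths that an
# SSRF bug can be pointed at. Only an on-instance caller (or something
# proxying for one) can reach this link-local address.
IMDS_BASE = "http://169.254.169.254/latest/meta-data"

def role_name_url():
    """URL whose response body is the name of the IAM role attached
    to the instance, the first thing an attacker enumerates."""
    return f"{IMDS_BASE}/iam/security-credentials/"

def credentials_url(role_name):
    """URL whose response body is JSON containing AccessKeyId,
    SecretAccessKey, Token, and an Expiration a few hours out."""
    return f"{IMDS_BASE}/iam/security-credentials/{role_name}"
```

With that JSON in hand, the stolen keys work from anywhere the role's policy allows, which is why the breadth of the role's S3 permissions, not the metadata service itself, determined the blast radius here.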
Chris: Yeah. The initial genesis of the techniques that were used here, that initial "how was that request made that went to the internal-facing area," that's still up in the air. Like you said, it's some sort of trick, we're assuming. But until it's known more widely and concretely how that was done, it's hard to say where the initial blame lies. If it was just a completely misconfigured open something-or-other that Capital One had either run incorrectly or configured incorrectly, then the blame would lie more on that side. If this was something that was very cryptic, and hard to catch, and maybe affects a lot more clients of AWS, then that may be something that Amazon wants to take a look at. Maybe putting up a little sign, a little flashing sign saying, "Do not click this button," or something, unless you want to expose things. But we just need to know more details at this point.
Corey: Oh absolutely. I guarantee you that there are other companies that are vulnerable to this, because let's not kid ourselves. As easy as it is to make cheap shot jokes at a company post-breach, Capital One has an awful lot of very intelligent technologists working there. They don't show up in the morning assuming they're going to do a crappy job today. If they can get this wrong, I assure you there are way more companies out there who have gotten it far more wrong.
Chris: Yeah, that's probably very true. I wouldn't have a job if these sorts of things were not widespread.
Corey: It's weird, I'm in the same boat. I fixed AWS bills. You handle cloud security. In an ideal world, neither one of us would have any sort of job that remotely resembles what we do. We'd have to go build things rather than fixing things other people have built. For better or worse here in reality, that's not the way the world works. It's one of those in theory versus in practice stories. In theory, there's no difference between theory and practice, and in practice there is.
Chris: Yeah, and if either one of us were very, very good and godlike at our jobs, we would put ourselves out of business. But it's human nature, always fighting back against that.
Corey: Oh, it's generally a requirement that this type of function be reactive. I think that cloud economics and cloud security are both in the same boat, as far as it's a number one priority for a company immediately after it really could have benefited from being a number one priority. It always feels like a trailing, reactive function, just because it's super hard to invest in this upfront. An argument I made on Twitter a couple of days ago was that someone could have charged Capital One a million dollars to go in and just fix all the scoping on their IAM roles, and they would've been laughed out of the room if they'd proposed it. But now, because they didn't do that, according to their own statement, they're assuming this year's charge for this will be between 100 and 150 million dollars. The ROI would've been instant and immediate. But you need to have the pain first before justifying anywhere near that spend on a project like that.
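[Editor's note: for illustration, the kind of tight scoping Corey is pricing out might look like the following IAM policy: the role can list and read exactly one bucket, rather than hundreds. The bucket name and statement ID are invented for this sketch; the policy grammar is standard AWS IAM JSON.]

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadExactlyOneBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::example-app-data",
        "arn:aws:s3:::example-app-data/*"
      ]
    }
  ]
}
```

`s3:ListBucket` applies to the bucket ARN and `s3:GetObject` to the object ARNs beneath it; everything not listed, including the other 699 buckets, stays denied by default, so even stolen temporary credentials for this role would be of limited use.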
Chris: Yeah. A couple of years ago I read an article that claimed ... Some reputable polling company had talked to a bunch of chief technology officers, and the consensus was that the CTOs would pay out of the company pocket, I don't know, $160,000 or something, just to not deal with the smallest of breaches. They just, without even thinking twice, would toss that kind of money just to not deal with any type of breach whatsoever, no matter how small. So yeah, I agree that if somebody had proposed a million dollars to go in and fix all this stuff, most executives would have laughed them out of the room. But time has told the truth, that they would've been better off doing something like that.
Corey: Oh, absolutely. I will say that it's easy to be angry and blame Capital One for this. They're a bank. They need to take responsibility and handle these things. But looking at everything we know so far, I'm not seeing this as someone who just sort of phoned it in one day while going about their job. This is a sophisticated attack that understood deeply how all of these systems work together. This is not, generally speaking, someone random off the internet who is bored in their dorm room somewhere. This is someone who has expertise in this area, and a deep knowledge of how these parts all interplay together. This is the sort of thing I might come up with, but I'm almost 20 years into my career at this point, and I've been staring at this exact problem space for an awfully long time. It's not in the same realm to me as someone just inadvertently leaving all of their user data sitting around in an open S3 bucket, despite the increasingly frantic warnings from AWS over the last year or so.
Chris: Yeah. This was a bit more complicated, but it raises the question of how honest the marketing and salespeople are being when they go and demo how great AWS, or any other cloud provider, is, and they say, "This is totally secure, as long as you don't misconfigure it, etc. etc. There's no way anybody can break in." The executives or the people on the other end of that presentation may take it hook, line and sinker, without any grains of salt, and believe that.
But if there is something that a sophisticated attacker can chain together if they're dedicated enough, that needs to be at least part of the fine print. Part of the, "We do as best as we can, but nothing's foolproof. You're taking a risk here, blah, blah, blah." But I get the feeling that's not being represented as realistically as it should be.
Corey: I absolutely agree with you. It's one of those things where, "Oh, don't worry about it, and move it into the cloud. It'll be better." But it does raise the question: if a company hadn't been in the cloud, would this exposure have been worse? Would they have had a better security posture if they'd never gone into the cloud in the first place? And sure, for this particular use case, probably. But what would they have exposed instead, by not having effectively some of the best technologists in the world at a public cloud provider building these things out?
Chris: And how much profit would they have lost for not being as nimble as they can be by using cloud services? It's kind of an apples to oranges comparison. People ask me that all the time, would they have been better off hosting it in their own data center? But it brings up a host of other problems that you've got to deal with then, and un-optimized issues. So it really is, whether you prefer the taste of apples or you prefer the taste of oranges here. They're hard to compare, but you have a preference for one or the other, and they each have their goods and their bads.
Corey: I think that you've absolutely nailed the salient point on that. Chris, thank you so much for taking the time to speak with me today. I appreciate it.
Chris: Thank you for having me.
Corey: If people want to hear more of your sage thoughts on these and other matters, where can they find you?
Chris: Well, you can always check out the latest blog postings at upguard.com. Or you can go to my Twitter handle, that's Vickerysec, V-I-C-K-E-R-Y-S-E-C on Twitter, and read my various musings there.
Corey: Thank you so much. Chris Vickery, UpGuard. I'm Corey Quinn. This is Screaming in the Cloud.
Speaker: This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com, or wherever fine snark is sold.