Companies can find working in the Cloud quite complicated, especially when trying to comply with regulations. However, it’s a lot easier than it used to be, because Cloud providers have evolved and now offer more out-of-the-box services focused on regulatory requirements and compliance.
Today, we’re talking to Elliot Murphy. He’s the founder of Kindly Ops, which provides consulting advice to companies dealing with regulated workloads in the Cloud.
Some of the highlights of the show include:
Technical controls are easier, but requirements are stricter
Risk Analysis: Moving from putting locks on things to thinking about risks to customers
Building governance and controls; making data available and removable
Secondary Losses: Scrubbing data so the scope and magnitude of loss is smaller
Computing became ubiquitous and affordable; people started collecting data to utilize later - nobody gets rid of anything
General Data Protection Regulation (GDPR): a set of regulations now applies to marketing technology stacks
Empathy building exercise and security culture diagnostic help companies understand compliance obligations
Security Culture: Beliefs and assumptions that drive decisions and actions
Evolution of understanding with public Cloud’s security and availability
Raise the bar and shift mindset from pure prevention to early detection/mitigation; follow FAIR (Factor Analysis of Information Risk)
Full Episode Transcript
Hello and welcome to Screaming In The Cloud with your host, Cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of Cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming In The Cloud.
Corey: This week’s episode of Screaming In The Cloud is generously sponsored by Digital Ocean. From where I sit, every cloud platform out there biases for something. Some bias for offering a managed service around every possible need a customer could have. Others bias for, “Hey, we heard there’s money to be made in the cloud. Maybe give some of that to us.”
Digital Ocean, from where I sit, biases for simplicity. I’ve spoken to a number of Digital Ocean customers and they all say the same thing which distills down to, they can get up and running in less than a minute and not have to spend weeks going to cloud school first. Making things simple and accessible has tremendous value in speeding up your time to market.
There’s also value in Digital Ocean offering things for a fixed price. You know what this month’s bill is going to be, you’re not going to have a minor heart issue when the bill comes due, and that winds up carrying forward a number of different ways. Their services are understandable without having to spend three months of study first. You don’t really have to go stupendously deep just to understand what you’re getting into. It’s click a button or make an API call and receive a cloud resource.
They also offer very understandable monitoring and alerting. They have a managed database offering, they have an object store, and as of late last year, they offer a managed Kubernetes offering that doesn’t require a deep understanding of Greek mythology for you to wrap your head around it. For those wondering what I’m talking about, Kubernetes is of course named after the Greek god of spending money on cloud services.
Lastly, Digital Ocean isn’t what I would call small-time. There are over 150,000 businesses using them today. Go ahead and give them a try: visit do.co/screaming and we’ll give you a free $100 credit to try it out. That’s do.co/screaming. Thanks again to Digital Ocean for their support of Screaming In The Cloud.
Welcome to Screaming In The Cloud, I'm Corey Quinn. I'm joined this week by Elliot Murphy, the founder of Kindly Ops. Welcome to the show, Elliot.
Elliot: Hi, Corey. Thanks for having me.
Corey: Thanks for being had. There are a few interesting bits of overlap throughout our history that I figure are probably as good a starting point as any. For example, we both at one point in our lives called Maine home. I escaped; you didn't. You still live there. If we can really stretch the definition of the word living to cover Maine.
Elliot: There is a lot of ice right now.
Corey: Exactly. At the time of this recording, there's apparently some sort of giant snowstorm heading your way and it's chilly here in San Francisco as well. We're just under 60 degrees.
Corey: Exactly. I had to put on a jacket this morning. It was awful. What I guess is more interesting than, “Hey, we used to live in the same geographic area,” is that your company focuses on providing consulting advice to companies that are dealing with regulated workloads in the cloud, primarily AWS, but also a bit of Azure and GCP scattered in there as well, correct?
Elliot: Absolutely. You might think FinTech, Biotech and all that kind of stuff.
Corey: When I started my consulting company, I went through a couple of rapid iterations. I went from, “I'm a DevOps consultant,” which, great, swing a dead cat. You’ll hit 20 of those people that all look alike and it becomes a race to the bottom. The second iteration pretty rapidly was helping with compliant workloads in AWS, specifically PCI. One of the things that got me out of that was the fact the job from my perspective was kind of miserable. An ancillary fact was that I had a conversation with you and I realized, “Wow, this is what it looks like to do that right.” I'm making this look like amateur hour. So, when you find out you're not doing something super well and you get out of it, that often feels like it's the best path forward.
Elliot: It is pretty complicated at times. AWS has made it a lot easier.
Corey: Absolutely. Back in the days when I was working with a variety of different regulated industries as part of a full-time job, keeping up with compliance, agreements, and the rest with your cloud provider was a massive undertaking. Increasingly, it feels like that's changing a bit: as more and more services are compliant out of the box, the burden is lessening. That is really easy to overlook, but it's a tremendous amount of work on the part of the cloud providers to be able to get there.
Elliot: Yeah. Things have evolved quite a lot. Only a couple of years ago, it was pretty common to need to provision EC2 instances with specific types of encryption setups so that you could run PostgreSQL on top of them in a way that met your encryption at rest requirements. Now of course, you can use all different flavors of RDS with encryption in flight and at rest just out of the box. Particularly around the technical controls, so much has gotten easier.
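As a concrete illustration of how thin that setup has become, here is a minimal Python sketch of the request parameters for an encrypted RDS PostgreSQL instance. The parameter names follow boto3's `create_db_instance` API; the identifier, sizing values, and helper function are invented for illustration, and the dict is only built and inspected, never sent to AWS.

```python
# Hypothetical parameters for an encrypted RDS PostgreSQL instance.
# With boto3 you would pass these to client.create_db_instance();
# here we only build the request, to show how little is needed
# compared to hand-rolling encrypted volumes on EC2.

def encrypted_postgres_params(identifier, kms_key_id=None):
    """Build create_db_instance kwargs with encryption at rest enabled."""
    params = {
        "DBInstanceIdentifier": identifier,
        "Engine": "postgres",
        "DBInstanceClass": "db.t3.medium",   # illustrative instance size
        "AllocatedStorage": 100,             # illustrative size, in GiB
        "StorageEncrypted": True,            # encryption at rest, one flag
    }
    if kms_key_id:                           # optional customer-managed key
        params["KmsKeyId"] = kms_key_id
    return params

params = encrypted_postgres_params("compliance-db")
```

Encryption in flight is then a matter of requiring TLS on the client side; the at-rest half really is a single flag.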
We're also kind of seeing the requirements get a little stricter. For example, the NIST Cybersecurity Framework was revised last year. One of the things that calls out is that you should be doing risk analysis. I think we're just seeing a natural maturing of practice where instead of everybody trying to figure out how to put locks on things, now that that is pretty easy to do, we're trying to level up and have people think in a mature way about the actual risks that they're facing and that they're facing on behalf of their customers, and try and make good decisions about how best to manage those.
Corey: It feels like compliance has always been a big, complicated area. When I look back at the times that I worked for regulated employers, the technical stuff was by no means trivial to wind up handling. What I recall far and away beyond that was less to do with being able to check the boxes of, “Yes, this is encrypted at rest,” et cetera. Good for you. You have now solved for the problem of people breaking into multiple datacenters, stealing a bunch of drives, and somehow combining them to get the data that you care about out of them, which is not really a threat in today’s world when we're talking about large cloud providers. It was much more around the idea of building governance and controls into your business as a whole.
Elliot: Yeah. For example, something that might be even more important than how locked up the data is, how safe the data is, is how available the data is. Also, can you get rid of stuff? What are your retention policies? If you look at a possible bad thing happening and look at the magnitude of the loss, it's going to cost you money in a handful of different ways. Some of those ways might be around fines and judgments. We typically call those secondary losses.
You could spend a bunch of money trying to make sure that data never gets breached, which we know is not realistic. Or you could look at, “Well, we don't really need any of this debugging data for longer than two weeks, so we're going to automatically scrub that stuff away so that if a breach does happen on these log servers, the scope and magnitude of the loss is much smaller, and any fines and judgments that we got around that data would also be much smaller.”
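A minimal sketch of that scrubbing idea, assuming log records are simple (timestamp, payload) pairs. The two-week window comes straight from Elliot's example; the record shape and function name are invented:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=14)  # "we don't need debugging data longer than two weeks"

def scrub_old_records(records, now=None):
    """Keep only records inside the retention window.

    Each record is a (timestamp, payload) pair; anything older is
    scrubbed, so a breach of the log store exposes at most two weeks
    of data, shrinking the scope and magnitude of any loss.
    """
    now = now or datetime.now(timezone.utc)
    return [(ts, payload) for ts, payload in records if now - ts <= RETENTION]

now = datetime(2024, 1, 31, tzinfo=timezone.utc)
logs = [
    (datetime(2024, 1, 30, tzinfo=timezone.utc), "recent debug line"),
    (datetime(2024, 1, 1, tzinfo=timezone.utc), "stale debug line"),
]
kept = scrub_old_records(logs, now=now)  # only the recent record survives
```

In a real deployment this would be a lifecycle policy on the log store rather than application code, but the decision being encoded is the same.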
Elliot: Yeah, for a long time. As computing became so ubiquitous and affordable, lots and lots of people started collecting data because we can. We might think of good things to ask this data later. The big shift with GDPR was that suddenly, a set of regulations applied to marketing technology stacks. Whereas previously, regulations had applied to your transaction processing stack, your financial processing stack, your healthcare data processing stack, but certainly not to your marketing stack. There's a whole new set of people and a whole new set of companies having to confront these issues around how grown up we are being with how we're managing these systems. The fact is, everyone was not doing great about it, and the regulation sort of forced a little bit of a wake up.
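To make the "can you get rid of stuff?" question concrete, here is a toy erasure pass, with each data store modeled as a plain dict keyed by user id. The store names and records are hypothetical; the point is that a GDPR-style erasure request has to be answerable for every store that holds personal data, including the marketing stack:

```python
# A toy sketch of a GDPR-style erasure pass across several data stores.
# Each store is modeled as a dict keyed by user id; a real system would
# fan this out to databases, analytics, and marketing tools.

def erase_subject(user_id, stores):
    """Remove user_id from every store; return which stores held data."""
    touched = []
    for name, store in stores.items():
        if user_id in store:
            del store[user_id]
            touched.append(name)
    return touched

stores = {
    "crm": {"u1": {"email": "a@example.com"}},
    "marketing": {"u1": {"clicks": 42}, "u2": {"clicks": 7}},
    "billing": {"u2": {"plan": "pro"}},
}
touched = erase_subject("u1", stores)  # "u1" is gone from crm and marketing
```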
Corey: One of the more surprising elements that I see when talking to companies who have compliance obligations is, I guess, their willingness to retreat into answering everything with compliance, as if it were a magic word that justified or excused all kinds of different behavior patterns. That tends to be a very strange conversation, where you get the sense that the people wielding compliance as a bat don't really grasp what their obligations are and how they have been interpreted.
Elliot: Absolutely. Dealing with a big set of rules no matter who made the rules and how much you like the rules is frustrating to begin with and bureaucratic. Then it gets even worse when people are trying to use the rules to force you to do something that you don't think makes sense. One of the things that we've been doing is insisting on an empathy-building exercise whenever we are trying to help a company transition into leveling up on compliance.
We've been using this security culture diagnostic from the book People-Centric Security by Dr. Lance Hayden. He's actually released that survey or diagnostic under Creative Commons. It's a fantastic tool that people can download from his site. It's just a Word doc that you can use. I hope that we can link that up in the show notes. He outlines four different security cultures. It's just amazing to see going through a 30-minute exercise with folks, helping them understand which ones seem to be prominent in their environment, which ones would they like to be prominent in their environment, and understand the values of each.
Their behavior towards each other totally changes and they start behaving with empathy and understanding. I think the key to that is that this helps you to perceive culture not as some sort of true self that you carry around, or some very singular, important cultural core that exists in your company, but really as beliefs and assumptions that drive decisions and actions. That is a mental model. Suddenly, when people realize that their default or preferred way of acting and deciding is just a mental model they're using, they can learn about other mental models and understand when those other models have value, and they're able to be so much more helpful to each other.
I'd like to real quick just outline a couple of those cultures, for example. One that you and I with small business owners have is autonomy culture. There's loose controls and you are very externally focused. You're very interested in people outside the company because there's not too many people inside the company. That is super common in early-stage startups and small consulting firms. That is a very useful way of working. You earn whatever business you go out and win for yourself. You don't get anybody else helping you and supporting you.
A totally different culture would be a compliance culture which you typically would see at a large healthcare organization. They have very tight controls and they care very much about what's outside the organization, or externally focused in terms of living up to other people's rules, so caring a lot about becoming compliant, checking off the box on these regulations.
Then a totally other perspective would be a government organization. They are also very strict, like a healthcare organization, but totally internally focused. A government agency or organization doesn't look to the outside world for what's right and wrong. They don’t really care about that. They decide internally what's right or wrong, good and bad, and expect everyone in their organization to follow their own rules that are internally developed.
As you can see just from those three different cultures, you can probably spot it a mile away. This company is behaving like this, but they'd like to behave like that. We need to understand the different mental models that people are using.
Corey: Or companies say that they behave like one of those and in practice behave very differently.
Elliot: Exactly. That's like that aspirational thing of where the company wants to go versus if they're looking back at like, “What have we actually done in practice over the last year?”
Corey: What I always found fascinating was the evolution of understanding as you wind up embracing different aspects of technology, as things tend to evolve. I know I told this story during a conference talk once, but I don’t think I ever told it in the podcast. I was, earlier in my career, doing a project where everything lived in AWS. This was my first outing to addressing compliance in this environment.
A financial company sent one of those painful questionnaires of, “We're debating doing business with your company. Fill out the following 80-page survey.” What I filled out addressed AWS as if it were coming from the perspective of a datacenter. No big deal. You can probably figure out how that tended to work out. I get a message a few weeks later, “Great. Here are the following dates that we’d like to go and send our security people to tour the datacenter you have in Ashburn, Virginia, or Herndon,” or whatever it was at the time that they were publicly admitting to. The response to that was, “Oh dear.” It turns out that no one gets to tour the AWS data centers. By treating it that way and telling people that at the end, it didn't go well at all. “We're not allowed to tour the datacenter? We're the third largest bank in Omaha. Who do they think they are? Stupid online bookstore.” There wasn’t an understanding yet.
What made that work more effectively in subsequent outings was talking directly to the account managers at AWS who answered these questionnaires a hundred times already. “Here, give them the following list of paperwork. It fits in a truck. When they're done with all of that and they want more, have them talk to us.” Magically, those doors started opening. Partially because AWS got better at answering those questions and partially because the understanding of these finance companies improved as they started realizing that no matter how big they are, they're not going to get to tour an AWS datacenter. It wound up getting smoother as people on both sides of that conversation learned to communicate with each other on the same wavelength.
Elliot: Yeah, absolutely. There's been a complete reversal in how it's perceived. I would be nervous these days if someone was trying to run their own datacenter for some critically sensitive workload, rather than using one of the big cloud providers, just because of economies of scale. The number of security engineers working at AWS defending that infrastructure 24/7 is so much bigger than even a big finance company is able to do for something that they're running on-premises.
Corey: The piece that I always found fascinating was that in having these conversations with folks, the story of why public cloud was not acceptable began to hold less and less water. It went from, “It’s new, and scary, and we don't trust it,” to, “Our data is important, and we don't want that living in the public cloud.” Really? Because your bank is in the cloud. Your compliance body that is going to be auditing you is in the cloud. Your tax authority is in the cloud. What makes your data more important than any of those other three bodies who are very happy right now in the same availability zones and regions that you are currently pooh-poohing?
Elliot: Yeah, and your military is there too.
Corey: Oh yeah, that’s right. I'd forgotten that piece. It comes down to the story of, “Yeah, you're right. It's much safer if I have a bunch of half-awake people running our own few racks down the street at the colo.” It just doesn't work out that way.
Elliot: Yeah. It does not at all. It's really a great time to be working on some of these things because for a long time, I was a little sad thinking that all the cool tech was being applied to absolutely trivial things. But it feels like over the last year, it really kind of hit this tipping point where a lot of the cool technology is now able to be applied to the most sensitive workloads. We can do really interesting things with medical records, with financial transactions, and bring benefits to people who need cool features around those absolutely life-critical transactions.
Corey: What's interesting to me as well is that people still tend to approach this stuff as a binary rather than a spectrum. It's fascinating that someone will naively say that a payment transaction company needs to have the same level of security controls, best practices, and security policies as Twitter for Pets, more or less. It feels like that is fundamentally untrue.
Elliot: It totally is. I think a couple of things are happening. One is that we're trying to raise the minimum bar for everyone. Things like GDPR cast a very wide net and they sort of insist on, ‘if you're processing data about customers, you need to level up.’ What was okay five years ago is not okay today. Beyond that, I think there is a real spectrum. One of the things that I'm really hoping we in the tech industry learn how to do, some skills we need to acquire, are skills that the insurance industry has had for a hundred years. That's understanding how to think about risk like you said, as a spectrum, as a range of probabilities, with a range of possible losses, then choose the things that we're doing to try and protect, or minimize the amount of loss based on what really makes sense. If you have $100 at risk, it doesn't make sense to spend $10,000 to protect it. Maybe it makes more sense not to do that business, or maybe it makes more sense to buy some insurance, or maybe it makes sense to have another control that is totally much less obnoxious to the people in your organization.
An example I love to use is, “We're worried about these engineers out there in that cloud, turning stuff on and spending money,” and then like, “What if it's not working? What if they run up a big bill?” That's a legitimate concern. You absolutely want to have spending controls inside your company. But think about how it feels to have a budget alarm versus a very restrictive policy about who can create new resources.
You have a totally different amount of innovation inside the team and a totally different track record of retaining people with one versus the other. The budget alarm is going to cost you way less and tell you way sooner when something does go wrong and you're spending money that you don’t want to be spending.
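The budget-alarm approach can be sketched in a few lines. The thresholds and dollar figures here are invented; in practice, services like AWS Budgets or billing alarms would play this role:

```python
# A minimal sketch of the budget-alarm idea: instead of blocking engineers
# from creating resources, watch spend and alert early at a few thresholds.

def budget_alerts(monthly_budget, spend_so_far, thresholds=(0.5, 0.8, 1.0)):
    """Return the fraction-of-budget thresholds that spend has crossed."""
    used = spend_so_far / monthly_budget
    return [t for t in thresholds if used >= t]

# Spending $850 of a $1,000 budget crosses the 50% and 80% alarms,
# giving the team a warning well before anything has to be shut off.
alerts = budget_alerts(monthly_budget=1000.0, spend_so_far=850.0)
```

The control is cheap, it fires early, and nobody needs six weeks of approvals to spin up an instance.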
Corey: It also leads to the rise of Shadow IT, people working around policy when it gets in the way of doing their job. People get understandably upset when they're making six figures but aren't allowed to spin up a $50 a month instance without six weeks of approvals. It becomes working against the better interest of the company, where people have to subvert process in order to effectively do their jobs. That is never something anyone wants to see happen.
I guess the way I tend to approach security is from the perspective of: if someone wants what you have badly enough, they will get it. Assume that the battle is already lost and think of the headline risk of when it happens. Do you want to be in the headlines for getting breached after they wound up kidnapping three members of your staff and putting into place an incredibly advanced system that would eventually subvert your folks over time, the likes of which the world had never seen before, or do you want it to be because you didn't use the proper permissions policy on your S3 buckets and someone found it by accident?
It comes down to raising the bar of what it takes to subvert you. At some point, I'm sorry, your startup, no matter how effective it is, is not going to be able to withstand a coordinated assault by a nation-state. It just doesn't work that way.
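The S3 misstep Corey mentions usually amounts to an over-broad principal in a bucket policy. Here is a small, hypothetical check; the dict shape mirrors AWS's policy JSON, and real accounts should lean on features like S3 Block Public Access rather than a hand-rolled scan:

```python
# Detect the classic open-bucket misconfiguration: an Allow statement
# whose Principal is "*", granting access to everyone on the internet.

def allows_public_access(policy):
    """True if any Allow statement grants access to Principal '*'."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        if principal == "*" or (isinstance(principal, dict) and principal.get("AWS") == "*"):
            return True
    return False

open_policy = {"Statement": [
    {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject"},
]}
locked_policy = {"Statement": [
    {"Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::123456789012:root"},  # example account
     "Action": "s3:GetObject"},
]}
```

The whole check is a dozen lines, which is exactly the point: this class of breach is about skipping the basics, not about advanced adversaries.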
Elliot: Yes, absolutely. I think there's real value to shifting thinking from pure prevention, which seems to be the default approach for a lot of technical folks, a total focus on prevention at all costs, to early detection and mitigation. That can lead to dramatically better customer experiences and dramatically better employee experiences.
I remember the third project I worked on where I was integrating a payment gateway, I got to use Stripe. That was amazing. I found out later on, watching a meet-up talk, that Stripe had optimized for approving accounts quickly and disabling them quickly if they detected fraud. Whereas all of the other payment gateways I had worked with were very onerous in the sign-up process because they were trying to stop any fraudulent sign-ups, Stripe was optimizing for lots of sign-ups and immediately turning off any fraudulent accounts. As a legitimate user, I had the best experience I've ever had with a payment processor because of that mindset they had toward security.
Corey: Right. When I was selling sponsorships early on in the history of the newsletter, I was using Stripe to do it. I needed to be able to drop an invoice and someone could pay with a credit card in front of them in about 20 minutes. Stripe had it done in three. It was incredible. Now if it had been fraudulent, I suspect they would have hit me with a belt metaphorically or perhaps literally speaking, given that I know enough people over there. The fact that it got out of my way was incredibly valuable.
I'm sure they’ve run the numbers and I'm sure that there are barriers around that. If I spin something up quickly, it's not going to instantaneously let me accept a $4 million payment and transfer that into my account. There are going to be controls and oversights to makes sense. Depending on how they structure it, if the total risk is in the order of, “I'm not going to be able to process more than $5,000 worth of transactions, or whatever it is, until a human has reviewed it,” that is a lot more manageable than, “I'm trying to sell this thing for $20, and I need to wait four weeks to do that.” By that point, the buyer has long since lost interest and gone away.
Corey: What it cost them in terms of fraudulent use has got to be orders of magnitude lower than what it would cost them to go the other direction in terms of dissuaded customers.
Elliot: Yeah. That's where a much more mature way of thinking about risk modeling and understanding what is that actual amount of risk that you're trying to protect against. It can totally transform the feeling people have working with those products and services in those regulated environments.
Corey: One of the more counter-intuitive aspects of this entire world, and this applies to people who are the developers, who are the administrators, who are the rest: 90% of your security posture will come down to some very basic things. Use a unique password for every site, use a password manager, and enable two-factor auth wherever you can. If you do nothing other than those three things, you are going to be so much better off than with any of the ridiculous five-steps-down-the-ladder optimizations. You can run incredibly complex security software that does amazing things, but if you aren't controlling the basic stuff, the permissions, the access control, then there's really no point to it. That's like building an incredibly thick, amazing wall and forgetting to lock the door.
Elliot: Yeah, absolutely. One of the most fascinating things for me, as I've gotten deeper and deeper into risk management over the last couple of years, was realizing what happens as you start to do this proper analysis. There's a standard model called FAIR, Factor Analysis of Information Risk, that’s super helpful. As you start to actually calculate out in dollars, “What are our dollars at risk here?” more often than not, it shows you have things that are over-controlled, that you're spending too much on protecting, rather than what I would have expected, which is that it always shows where you need to add more controls. Just as many times it shows, “You're way over-controlling this stuff. It’s not actually reducing your exposure any. Let's simplify things.”
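A toy version of that FAIR-style calculation: sample annualized loss as event frequency times loss magnitude, then compare the expected loss to the cost of a control. The uniform distributions and all parameters are invented for illustration; real FAIR analyses calibrate ranges from data and expert estimates:

```python
import random

def annualized_loss_samples(freq_min, freq_max, loss_min, loss_max,
                            n=10_000, seed=42):
    """Sample annual loss = (events per year) x (dollars per event)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        frequency = rng.uniform(freq_min, freq_max)   # loss events per year
        magnitude = rng.uniform(loss_min, loss_max)   # dollars per event
        samples.append(frequency * magnitude)
    return samples

# Hypothetical scenario: a breach happens 0.1-0.5 times a year and
# costs $10k-$50k per event.
samples = annualized_loss_samples(freq_min=0.1, freq_max=0.5,
                                  loss_min=10_000, loss_max=50_000)
expected = sum(samples) / len(samples)
# If a control costs far more per year than `expected`, the numbers say
# you are over-controlling, which is exactly the surprise Elliot describes.
```

Working in a distribution of outcomes rather than a single number is what lets you say "this $40k/year control protects roughly $9k of annual risk" at a board meeting.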
Corey: That becomes anathema. The problem is that effective security personnel and compliance personnel understand that there's a limit to what they can ask for and what they can't. The naive approach is, “Lock down everything.” Captain Edge Case, security, and the rest will only ever communicate via Signal. They run Linux only on hardware that they control. They don't use anything that they haven't been using for the last five years because they want to lock it all down themselves. Everything's encrypted. And you talk to them, “Cool, how do I email you?” The answer is, “Well first, you have to install GPG,” and it goes down this entire list, making them almost irrelevant to any conversation.
I think everyone who's worked in this space long enough knows at least five people from their personal histories that that could be referring to. I get it. I love the idea that you can go that deep, but I'm not working for the NSA. I send out a snarky, sarcastic newsletter, and for my personal use case, the dangerous access that I have, which is gated by all the stuff you would expect, starts and stops with access to my clients’ AWS bills. I keep the minimum of those that I need to do my job and then, novel idea, I get rid of them when I'm done.
The window for exploit is relatively small for what I do, and it doesn't get you much. That does not mean that I could pass any of these compliance regimes today, but I don't need to. If I were to go down that path of building out everything that I do for my entire business across everything is in a compliant way, I would pay for that with an awful lot of velocity. For what I do, the risk does not justify taking that level of care and diligence. There very well may come a day when that changes, but today, I do what makes sense for the risk profile that I live within. The danger comes in if that risk profile changes and I don't notice or take appropriate steps when that happens.
Elliot: Yeah. There's a built-in sort of tension around preventing data breaches. Another responsibility that folks in these environments have is data availability: that you still have the data. There's sort of a funny failure mode in encrypted backups and all this encryption everywhere, which is, if you don't have the keys, you can't get the data. It’s gone forever.
You also have the risk of availability loss which can lead to fines and judgments, lost business, and all of that stuff. You're absolutely right. All of it needs to be balanced. There is a spectrum. A range of choices. Some of those choices are going to be unique to your business, but then some of them, you were talking about that bar, are just available to you in the cloud. That’s really cool.
Corey: Surfacing a lot of these decisions up to the appropriate level is also something that tends to be overlooked at times, where it becomes very easy for an individual contributor who's configuring something to make one of these decisions on the fly. That works in small environments that are not particularly regulated. That goes away really quickly once you start having to be responsible for that for other folks. The last thing in the world that a CSO wants is being told of a security posture problem that someone randomly decided on in the dark of night three years ago and no one ever revisited. It comes down to also understanding the organizational requirements.
Elliot: Yeah. This is starting to bubble up even on the agenda for board members who are responsible at the highest levels for oversight and governance of an organization. They're certainly not making decisions about how to protect things, but they want to know. They really want to know from the CSO and from the rest of the staff, “How are we on cyber security? How are we compared to where we were six months ago? How much should we be spending on it?” Referencing that insurance stuff again, it's really important that folks working around this in the tech industry learn the techniques for quantitative risk analysis.
There's a nonprofit trade organization called SIRA, the Society of Information Risk Analysts; I volunteer and help them. Other industries have been dealing with operational risk for decades and much longer. There are techniques that are well understood, that we can directly apply to cyber security risk to express the issues and tradeoffs we're facing in terms of a business case, in terms of dollars at risk, in terms of how much it would cost to reduce a certain amount of risk. That is something that everybody at the board meeting can understand.
Corey: When I was going through my own business insurance process on my side, I was asked what my DR strategy was, and the honest answer was: cool, if the internet or power goes out at my house, I'm going to go work from a coffee shop. This led to a back and forth where they wanted to know, “Okay, what if your datacenter goes offline?” “Well, I keep everything inside of AWS for what I'm working on, so that isn't really a concern.” “Okay, what if they lose an entire region?” “Well, permanently? Then I'm really not worried very much, because first, most of what I'm building is replicable, and secondly, I'll be too busy printing money from people who did not plan for this and have serious business concerns there. Suddenly, I'm charging 10 times what I used to, to help get sites back online.” In the event of a world-shaking event that is almost cataclysmic in nature, past a certain point, yeah, my DR plan doesn't matter anymore. “Well, what happens if something happens to you?” “I'm an independent business owner; my business closes. The end. That is the nature of what I do. I'm not necessarily building something here to outlive me.” So sorry folks, the podcast and the newsletter, and even the cost consultancy go away if I get hit by a truck. My apologies in advance.
Elliot: Yeah and that's just a level of risk that's appropriate for small business. We’re not going to defend against those, you just accept them.
Corey: What's also strange is when you hear people talking about this from a business continuity perspective. The question is always, “What if you get hit by a truck?” instead of the much more likely scenario of, “What happens when you walk in and give your two weeks notice because you're changing jobs?” We understand that you're looking at an 18-36 month average tenure for most people in the tech space, but we still talk as if, magically, you're going to stay at the company that you're at now until you retire with a gold pocket watch in 25 years. That doesn't happen anymore.
Instead, we imagine you either turn into a lottery winner with this great, amazing thing happening, or something horrifying happens and you get hit by the bus, as opposed to you leaving and going on to your next job, as is natural in the cycle of things. It winds up being an edge case of disaster recovery and business continuity planning that I always found to be farcical. I had the same problem when I was asked, “Okay, so we have a site that's an hour away. What's your plan to get there in the event that the city is in chaos?” and the answer was a very honest, “I'm going to be taking care of my family.” Then they go down the list of, “Okay, your family's okay, but now you want to do work, and the internet is broken.” Cool, and you somehow think that in that scenario, San Francisco is going to be intact, and/or I'm still going to be working here, rather than printing money from everyone else who's willing to pay me multiples of my salary. Suddenly, I wasn't invited to those meetings anymore.
Elliot: Yes. It's one of those cases, like we were saying earlier, where you just really have to think about what the actual cost of not showing up to work for a week is, and then what it would cost to make it so that we could show up to work for that week. As soon as you see that it's going way out of whack, that the costs far exceed the value you're actually trying to preserve, why keep laboring the point? You should just cut that conversation short.
Corey: Yeah, and it just winds up being something that sort of only exists in a very niche scenario. The disaster you planned for, by the way, is never the disaster that hits. It's always going to be something new and exciting and complicated.
Corey: Thank you so much for taking the time to speak with me. If people want to learn more about the nonsense that you and I once upon a time did for a living, where can they wind up learning more?
Elliot: Check out our website kindlyops.com. We have a knowledge base there, which has some free words and some free software around risk analysis, and how to think about these things. Maybe how to get people off your back at work a little bit if you're having to deal with it. Yeah, so kindlyops.com.
Corey: I would absolutely endorse the stuff you folks do. In the past, when I've had weird compliance questions, you're generally my first stop. That is not a paid endorsement, that is simply the reality that you are better at this than I am, and I don't want to do it.
Elliot: Well, thank you very much.
Corey: Thank you so much once again for your time. I appreciate it. Elliot Murphy, Kindly Ops. I'm Corey Quinn and this is Screaming In The Cloud.