Defensive Security Podcast Episode 272


Links:

https://www.darkreading.com/cybersecurity-operations/a-cisos-guide-to-avoiding-jail-after-a-breach

https://www.csoonline.com/article/2512955/us-supreme-court-ruling-will-likely-cause-cyber-regulation-chaos.html/

https://sansec.io/research/polyfill-supply-chain-attack

https://www.securityweek.com/over-380k-hosts-still-referencing-malicious-polyfill-domain-censys/

https://www.tenable.com/blog/how-the-regresshion-vulnerability-could-impact-your-cloud-environment

Transcript
===

[00:00:00]

jerry: All right. Here we go. Today is Sunday, July 7th, 2024, and this is episode 272 of the defensive security podcast. My name is Jerry Bell and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. This is a newly reestablished record twice in a week or

jerry: twice in a week. I can’t believe it.

Andrew: I know. Awesome. Yeah. You just had to quit that crappy job of yours that provided income for your family and pets and, you know, everything else. But now that you’re an unemployed bum.

jerry: Yeah, I can podcast all I want, 24/7. I think I’m gonna become an influencer. I’m gonna just be live all the time now.

Andrew: You could. I really look forward to you asking me to subscribe and hit that notify button.

jerry: That’s right. Hit that subscribe button

Andrew: Like leave a rating and a comment

jerry: Like and subscribe. All [00:01:00] right, getting on with the program, we’re getting back into our normal rhythm. As per normal, we’ve got a couple of stories to talk about. The first one comes from Dark Reading and the title is, A CISO’s Guide to Avoiding Jail After a Breach.

Andrew: Before we get there.

Andrew: I want to throw out the disclaimer that thoughts and opinions do not reflect any of our employers, past, present, or future.

jerry: That’s a great point. Or, my cats.

Andrew: Unlike you, I have to worry about getting fired.

jerry: I still have a boss. She can fire me.

Andrew: That’s called divorce, sir. But true.

jerry: Yeah.

Andrew: Anyway, back to your story.

jerry: Anyway, yeah. CISO’s Guide to Avoiding Jail After a Breach. So this is following on an upcoming talk at, I think it’s Black Hat, talking about how CISOs can try to insulate themselves from the [00:02:00] potential legal harms or legal perils that can arise as a result of their jobs. It’ll be interesting to see what’s actually in that talk, because the article itself, in my estimation, despite what the title says, doesn’t actually give you a lot of actionable information on how to avoid jail. They do quote Mr. Sullivan, who was the CISO for Uber.

jerry: And they give a little bit of background on how it’s interesting that he is now a convicted felon, although I think that’s still working its way through the appeals process. Though he previously was appointed to a cybersecurity board by President Obama.

jerry: And before that he was a federal prosecutor. And in fact, as the article points out, he was the prosecutor who prosecuted the first DMCA case, which I thought was quite interesting. I didn’t know that about him. But what’s interesting is this article at least is based a lot on [00:03:00] interviews with him, including recommendations on things like communicating with your board and your executive leadership team. But I’m assuming that he had done that at Uber.

Andrew: Yeah, this is such a tough one for me, and I think it makes a lot of good people, as they reference in the article, want to shy away from being a CISO if there’s this sort of potential personal liability. There’s a lot of factors that come into play about why a company might be breached that aren’t always within the control of the CISO, whether it be budget, whether it be focus, whether it be company priorities, and you have an active adversary who is looking for any possible way to get into your environment.

Andrew: So what becomes the benchmark of what constitutes negligence in a breach, up to the point of going to jail? That’s the one that [00:04:00] I’ve struggled with so much, and I think those who haven’t really worked in the field much can very easily just point to mistakes that were made, but they don’t necessarily understand the complexity of what goes into that chain of events and chain of decisions that led to that situation.

Andrew: Every job I’ve been in where we were making serious decisions about cybersecurity, it was a budgetary trade-off and a priority trade-off, with an existential threat to the company if we don’t do X, Y, and Z coming from five or six different organizations at the same time, all going up to that CFO or CEO, and they have to make hard calls about where those resources go and where those priorities go to keep people employed. And you pair that with a very hostile third party intentionally trying to breach you, and it’s a tough situation, and I don’t think any of us knows what the rules look like at this point to keep yourself out of [00:05:00] trouble. You’ve been in this position, not the going-to-jail part, but this threat was much more meaningful to you in your last role than it is to me.

jerry: It is very uncomfortable. I’ll tell you, when the Uber CISO got charged and the CISO of SolarWinds got charged, that’s an uncomfortable feeling, an exposed feeling. In criminal law, there’s this concept of strict liability.

jerry: And strict liability basically means the thing happened. And because the thing happened and you are responsible for the thing, it doesn’t matter; there are no mitigating factors. Your state of mind, your motivations, none of that matters in a strict liability case.

jerry: And to some extent, it feels like that in this instance. I don’t think it really is, although when you’re a CISO, sometimes that thought can cross your mind. Now in the article, they actually point out that though the CISO is the [00:06:00] lightning rod when things go wrong, it is not just the CISO that is responsible for what went wrong.

jerry: As they describe it, it takes a community, and the results of that community are, as we’ve now seen, or as is alleged, being pinned on a particular individual. And I know from having read the Uber case, I’m not so familiar with the SolarWinds case, although I’m obviously familiar with what happened in the SolarWinds case, but with Uber, it was a situation where they had, basically, a data breach, and the allegation was that the adversary was trying to hold it for ransom, and they successfully negotiated, at least this is my understanding of how the case went, they negotiated a payment through [00:07:00] the bug bounty program to the adversaries, and perhaps adversaries isn’t the right word, who allegedly deleted the data, and because of that, they didn’t report the breach.

jerry: And so it was really the failure to report that breach which the government was coming after him for, basically being deceptive to investors. And it’s not necessarily that he was malicious or what have you, but basically my layman’s read is he was defrauding

jerry: investors by withholding information about a breach that he was obligated to report. So that’s a tough situation. And what concerns me is that this is somebody who was a federal prosecutor. In my own role, I had plenty of competent legal counsel surrounding me.

jerry: And that was a good thing. It felt good. And I’m quite certain he did too; further, he himself [00:08:00] was a prosecutor. And so I have a hard time accepting, and maybe it’s just very naive of me, I’d have a hard time accepting that he was actually trying to misrepresent things or hide things.

jerry: I guess that’s where I’m at on this one. It feels bad and the article points out that, because of this, one of the, one of the whispers as they describe it in the industry is that it’s forcing people who are qualified for the role and understand the perils that they face to shy away from taking that role.

jerry: And that then leads to people who are maybe not as qualified taking the role and then obviously not doing as good of a job. And therefore actually, the net effect is a weaker security posture.

Andrew: Yeah. I think if we try to get some advice out of this, or try to give some advice out of this, the one thing they mentioned is, for lack of a better [00:09:00] term, to tie some other people in the organization to the same decision, right?

Andrew: Make sure that your board is aware and your executives are aware, and that you’re not the only one holding the risk bag at the end of the day. If you have to own the risk yourself, then you need to have formal control. Now, in this case, in theory, he got in trouble because he didn’t notify the SEC, and it was a public company and a material breach.

Andrew: And so stockholders weren’t informed, more so than that he was negligent in his cybersecurity duties in terms of technical controls and audits and that sort of thing. However, that feels like the way things are going. We hear more and more calls to hold companies accountable directly and legally, with risk of jail, for breaches.

Andrew: And there’s a lot of nuance here, and that’s not exactly what happened here. But I find that very troubling, and [00:10:00] obviously I have a bias because I’m in the industry and I would be at risk of that potentially. But I just don’t think it’s that simple. There’s no CISO that has that much control over an environment that they should be solely responsible for taking the fall if a breach were to happen. Although that does happen all the time, it’s one thing to lose your job; it’s another thing to go to jail.

jerry: Yeah. And I think that the author here points out that, at least as Mr. Sullivan describes it, he feels like he was put forward by Uber as a sacrificial lamb. I guess what I don’t really understand is how much better would it have been for him if he had done a more effective job at creating what I’ll loosely call co-conspirators within the company.

jerry: I think what they’re trying to say is that you as a CISO should go to the board, to your CEO, to whoever, and articulate the risk, [00:11:00] not with the intention of them, again, becoming co-conspirators, but of them saying, gosh, now I know about it and I don’t want to go to jail. I’m going to reallocate the money or do whatever is required in order to address the particular risk. Now, I think in this instance, it wasn’t like a, we have to go spend more money on security. It was more, hey, we had this issue. Do we disclose it or not?

jerry: And I think, that’s a slight, maybe a slightly different take, I would assume by the way, just again, having played in this pool he didn’t make that decision alone.

Andrew: Sure. Part of me, and this maybe is not exactly apples to apples, but I think about a lawyer advising an executive on the legality of something. That executive can take that advice or reject that advice. A CISO advising a company on the legality or outcome or [00:12:00] risk of a decision, they don’t always make that decision. They’re somewhat beholden to their leadership on which way the company wants to go.

jerry: There was an unwritten aspect to this that I wanted to discuss a bit. And that is, the subtext of all of this, I think, is going to create an adversarial relationship between the CISO and the CISO’s employer, because it feels to me like what the government would have preferred is for the CISO to run to the government and say, hey, my employer isn’t acting ethically.

jerry: I’m not necessarily saying that’s what happened in Uber’s case or any of these cases, but I think that’s what the government is trying to push. Now, granted, there’s a not-so-gray line beyond which you have an ethical duty to rat on your employer.

jerry: You can imagine all sorts of situations, not [00:13:00] even in the realm of security, where you would be obligated to go and report them. But it feels to me they’re trying to lower that bar.

Andrew: Yeah, I can see that. Unfortunately this is probably going to be messy to get sorted out. It’s going to take a lot of case law and it’s going to take a lot of precedent. That makes me nervous. If I were offered a CISO opportunity at a public company, I’d probably think real long and hard about it, about passing on it, or about trying to assure some level of security to avoid this problem.

jerry: Our next story throws some sand in the gears there. This one comes from CSO Online, and the title here is US Supreme Court ruling will likely cause cyber regulation chaos.

jerry: And so unless you’ve been living under a rock, or perhaps just not in the US, you’re probably aware that the Supreme Court, I guess it was last [00:14:00] week, overturned what has been called or referred to as the Chevron deference doctrine. The name comes from the oil company Chevron, and it stems back from a 1984, so 40-year-old, ruling by the Supreme Court that, I’ll summarize it to say, ambiguous laws passed by Congress can be interpreted by regulators like the FCC, the FDA, the SEC and so on. In the US, at least, a lot of regulations are very high level. To pick a stupid example, a law will say, use strong authentication. And then it’ll be up to a regulator to say strong authentication means that you use multi-factor authentication

jerry: That isn’t SMS based.

jerry: That initial ruling was intended to establish that courts aren’t experts [00:15:00] in all of these matters.

jerry: And by default, courts should be deferring to these regulators. And that has stood the test of time for quite a long time. And now it was overturned in this session of the Supreme Court, whatever you want to think about the sensibility of it.

jerry: Given the challenge that we now have, I have made the joke on social media that right now the most promising career opportunity has got to be trial lawyer, because there’s going to be all manner of court cases challenging different regulations which, in the past, were pretty well established as following rules set by the executive branch in the US. But now, as this article points out, that includes things ranging from the SEC’s requirements around data breach [00:16:00] notifications to the Gramm-Leach-Bliley Act of 1999.

jerry: There’s a broad range of regulations in the security space which are likely to be challenged in court, because the prescription behind those laws basically doesn’t cover the way they’re currently being enforced. And so we should assume that these will be challenged in court, and given the Supreme Court’s ruling, the established prescription coming out of the executive branch is no longer to be deferred to.

jerry: And it’s unclear at this point, by the way, how courts are going to pick up their new mantle of responsibility in interpreting these things, because judges aren’t experts in security. So I think that’s why they’re calling it chaos right now, because we don’t really know what’s going to happen. For the long term, I think things will normalize.

Andrew: Yeah. Businesses hate uncertainty.

jerry: [00:17:00] Yes.

Andrew: And for good or ill, businesses can have a huge impact on government legislation. So I think this will get sorted out eventually, but I think you’re right. What we counted on, or at least tried to work out with these regulatory agencies, and how we understood these rules, has now all changed, and I think you’re right.

Andrew: There’s going to be probably a ton of these rules that have the force of law being challenged now in court. And I think ultimately Congress probably has the reins to fix this if they want, but I think that’s another interesting problem. If SCOTUS is saying, look, you regulatory agencies are taking the power of law into your own hands and we don’t like that.

Andrew: So the power of law comes from Congress and elected officials in Congress. Then Congress, you need to do a better job of defining these rules specifically. That presents its [00:18:00] own set of interesting challenges because how well will they do that? And we’ve seen a lot of well intentioned laws, especially in very complex areas, have their own set of problems because of all of the trade offs and problems that go into legislative work in Congress causing issues.

Andrew: So it will be very interesting. This could have a lot of wide ranging impacts. And again, to your point, I’m not getting anywhere near whether they should or shouldn’t have done this, but I think the intent was you unelected regulators shouldn’t make law, Congress should make law. Okay. But that’s easier said than done.

jerry: Yeah. I think it’s that, plus the Constitution itself very directly says that it is up to the judicial branch to interpret laws passed by Congress, and not the executive branch. And that’s [00:19:00] where, I think, if you read the majority opinion, to sum up, that’s basically what they’re saying.

jerry: I think the challenge is that when the Constitution was written, it was a much, much simpler time.

Andrew: There’s a lot of interesting arguments about that that you see out there, and there’s a lot of very passionate opinions on this. So I’m trying very hard to stay away from the political rhetoric around it, and I concur that this throws a lot of accepted precedent around our industry into question.

jerry: But going back to the previous story, I don’t know, again, I’m not an attorney. However, if I were Joe Sullivan, I would feel like I have a new avenue of appeal.

Andrew: Sure. Yeah. The SEC made this law, in essence, would be his argument, and based on this particular ruling by SCOTUS, [00:20:00] that was an inappropriate ruling and, or, an inappropriate law.

Andrew: And therefore, obviously I’m not a lawyer, because I’m not articulating this like a lawyer, but he could say, that’s why I shouldn’t have been tried and convicted, and please politely pound sand.

jerry: I do think the opinion did say something along the lines of, it doesn’t overturn previously decided court cases, and people are due their day in court.

jerry: So if he has an avenue for appeal, that’s how the justice system works. This is hot off the presses. I think the echoes are still circling the earth. We’ll be seeing the outcome of this for a while, and I don’t think we exactly know what’s going to happen next. Stay tuned, and we’ll check in on this periodically.

jerry: Okay. The next one comes from Sansec, and there’s actually two stories, one from Sansec and one from SecurityWeek. And this is [00:21:00] regarding the polyfill.io issue. I’m hesitant to call it a supply chain attack, but I guess that’s what everybody’s calling it.

Andrew: Come on, get on the bandwagon.

jerry: I know, I know.

Andrew: If you want to be an influencer, man, you got to use the influencer language.

jerry: It makes me feel dirty to call it a supply chain attack. So why, what makes you so uncomfortable calling it a supply chain attack? I don’t know. That’s a good question. And the answer is I don’t really know.

jerry: It just feels wrong.

Andrew: Did your mother talk to you a lot about supply chain attacks?

jerry: See that’s, maybe that’s the problem.

Andrew: Okay. Imagine you’re walking in a desert and you come across a supply chain attack upside down stuck on its back. Do you help it? But you’re not turning it over. Why aren’t you turning it over, Jerry?

jerry: I don’t even know where this is going.

Andrew: I had to lighten it up after the last two stories, man. You were being a downer.

jerry: Polyfill is [00:22:00] a JavaScript library that many organizations included in their own websites. Oversimplifying it, it enables some types of more advanced or newer functions of modern web browsers to work in older versions of web browsers. And I don’t fully understand the sanity behind this, and maybe this will start to cause some rethink on how this works, but this JavaScript library is called by reference: rather than it being served up by your web server, you are referring to it as a remote document hosted on, in this instance, polyfill.io.

Andrew: So instead of the static code living in your HTML code, you’re saying, go get the code snippet from this host and serve it up.

jerry: Correct. It’s telling the web browser to go get the code directly. Yeah. What happened [00:23:00] back in February, and I don’t fully understand what precipitated this, was that the polyfill.js library and the polyfill.io domain were sold to a Chinese company. And that company then altered the JavaScript library to, depending on where you’re located and other factors, either serve you malware or serve you spam ads and so on.

Andrew: So you’re saying there are not hot singles in my area ready to meet me?

jerry: It’s surprising, but there probably are actually.

Andrew: carry on.

jerry: They can’t all be using polyfill. Anyhow, there were, depending on who you believe, somewhere ranging from 100,000 [00:24:00] websites that were including this polyfill.io code, to tens of millions as purported by Cloudflare. So at this point, by the way, the issue is somewhat mitigated.

jerry: I’ll come back to why I say somewhat mitigated. The polyfill.io domain, which was hosting the malicious code, has been taken down. Most of the big CDN providers are redirecting to their own local known good copies, but again, they haven’t solved the underlying issue that it’s still pointing to JavaScript code that’s hosted by somebody else. Although, presumably, companies like Cloudflare and Akamai and Fastly are probably more trustworthy than Funnull in China.

Andrew: Yeah yeah, because they actually came out and denied any malicious intent and cried foul on this whole thing too, which was interesting.

jerry: Yes. [00:25:00] But people have done a pretty good job. And in fact, the Sansec report gives a pretty thorough examination of what was being served up. And you can very clearly see it’s serving up some domain lookalikes, like, I find it hilarious, googie-anaiytics.com, which is supposed to look like googleanalytics.com. And I suppose if it were in all caps, it would probably look a lot more like that. But the other interesting thing is that these researchers noticed that the same company also owns several other domains, some of which have also been serving up malware.

jerry: And those have also been taken down, but there are also others that aren’t serving, or haven’t been seen serving, malware yet and are still active. And so it’s probably worth having your threat intel teams take a look at this, because my guess would be that at some point in the future the [00:26:00] other domains that this organization owns will probably likewise be used to serve up malware.
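For anyone who wants to act on that advice, here is a minimal sketch of the kind of check a threat intel team (or a lone defender) could run against their own pages: fetch each page and flag any script tag that still pulls code from polyfill.io or another domain you are watching. The watchlist entries and page URLs below are placeholders, not indicators from the Sansec report; substitute the real indicators from their write-up and your own site inventory.

```python
# Minimal sketch: flag pages whose script tags still reference watchlisted domains.
# WATCHLIST and PAGES are placeholders -- substitute real indicators and your own URLs.
import re
import urllib.request

WATCHLIST = [
    "polyfill.io",              # the domain at the center of this incident
    "suspect-cdn.example.com",  # placeholder for other domains you are tracking
]

PAGES = ["https://www.example.com/"]  # placeholder: your own site's pages

SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def check_page(url: str) -> list[str]:
    """Return script src values on the page that reference a watchlisted domain."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return [src for src in SCRIPT_SRC.findall(html)
            if any(domain in src for domain in WATCHLIST)]

if __name__ == "__main__":
    for page in PAGES:
        for hit in check_page(page):
            print(f"{page} still references: {hit}")
```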

Andrew: Bold of you to assume that all of us have threat Intel teams.

jerry: Fair enough. You do. It just may be you.

Andrew: Correct. Me and Google.

jerry: Yes.

Andrew: And my RSS feed of handy blogs, but yes,

jerry: that’s right,

Andrew: but yeah, they seem to have, oh, a wee bit of a history of being up to no good.

Andrew: This particular Chinese developer.

jerry: Yes, defending against this, I think, is pretty tough beyond what I said on the supply side. I think it’s a bad idea. Maybe I’m a purist, maybe I’m old school and should be put out to pasture, but I think it’s risky, as we’ve seen many times now.

jerry: This is by far not the first time this has happened with including by reference things [00:27:00] hosted as part of some kind of open source program. Not necessarily picking on open source there; I think it happens less often with commercial software. But we’ve seen it now happen quite a few times with these open source programs, including things like browser extensions and whatnot.

jerry: Now, having said that, you can imagine a universe where this existed simply and solely as a GitHub repo, and companies, instead of referring to polyfill.io, were downloading the polyfill code to their own web servers. And most likely you would have between a hundred thousand and 10 million websites serving locally modified code, but then again, nobody updates

Andrew: right? It would be impacted, but we’re running 28 year old versions.

jerry: So maybe not.
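That "download it to your own web server" alternative is simple enough to sketch. This assumes you have a copy of the library that you have reviewed and trust; the source URL, destination path, and hash below are placeholders, and pinning a copy means you take on exactly the update burden jerry describes.

```python
# Sketch of the self-hosting alternative: fetch a reviewed copy once, verify its
# hash, and serve it from your own origin instead of referencing a third-party domain.
# SOURCE_URL, DEST, and EXPECTED_SHA256 are placeholders for your own values.
import hashlib
import urllib.request
from pathlib import Path

SOURCE_URL = "https://mirror.example.com/polyfill.min.js"  # placeholder mirror URL
DEST = Path("static/vendor/polyfill.min.js")               # served from your own site
EXPECTED_SHA256 = "sha256-of-the-copy-you-reviewed"        # placeholder pin

def vendor_script() -> None:
    data = urllib.request.urlopen(SOURCE_URL, timeout=10).read()
    digest = hashlib.sha256(data).hexdigest()
    if digest != EXPECTED_SHA256:
        # Either the upstream changed or the pin was never filled in; stop the build.
        raise RuntimeError(f"Hash mismatch: expected {EXPECTED_SHA256}, got {digest}")
    DEST.parent.mkdir(parents=True, exist_ok=True)
    DEST.write_bytes(data)
    print(f"Vendored {len(data)} bytes to {DEST}")

if __name__ == "__main__":
    vendor_script()
```

The trade-off is the one raised above: a pinned local copy cannot be silently swapped out from under you, but nobody updates it either, unless you make that part of your process.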

Andrew: Yeah, but boy, to your point, it gives me a little bit of the [00:28:00] heebie-jeebies to say that the website that you’re responsible for is dynamically loading content and serving it that you don’t have control over, but that’s perhaps very naive of me.

Andrew: I don’t do much website development. I don’t know if that’s common, but as a security guy, that makes me go, Ooh, that’s risky. So we don’t control that at all. Some third party does. And we’re serving that to our customers or visitors to come to our website and we just have to trust it. Okay. But that probably exists in many other aspects of a modern supply chain or a modern development environment where you just have to trust it and hope that.

Andrew: People are picking up any sort of malicious behavior and reporting it, as they did in this case, which is helpful. But then it causes everybody to scramble to find where they’re using this, which then goes to, hey, how good is your software bill of materials or software asset management program, and how quickly can you identify where you’re using this?

Andrew: And then there was a lot of confusion when this first came out, because there are different sorts of [00:29:00] styles or instances of polyfill, and some were impacted, some were not, so how much of this was truly at risk? And the upside is that the domain was black-holed pretty quick. Anyway, it seems so fragile, right? You’ve got this third party code that you don’t control. You don’t know who’s at the other end. You’ve probably ignored that it’s even out there and forgotten about it, especially since this is defunct code. And that’s a whole other area that drives me a little crazy at night: how do you know when an open source project is no longer being maintained and has silently or quietly gone end of life and you should be replacing it? I’ve contemplated things like, hey, if there hasn’t been an update within one year, do we call that no longer maintained?

Andrew: I don’t know. I don’t have a good answer. I play around with that idea with my developers, because we want to make sure that code is well maintained and third party code that we’re using is kept up to date. We don’t want end-of-life code in general, but I don’t know what [00:30:00] constitutes end of life in open source anymore.

jerry: I think we will eventually see some sort of health rating for open source projects. And that health rating will be based on things like, where are the developers located in the world? How long on average does it take for reported vulnerabilities to get fixed? How frequently are commits and releases of code being made? And other things like that. But that doesn’t necessarily mean a whole lot. Look at what happened with, what was it, XZ.

Andrew: Yeah. Yeah.

jerry: That was, arguably, I won’t call it healthy, right?

jerry: But it was an active project that had a malicious contributor who found ways of contributing malicious code in ways that were difficult to discern. And then you look at what happened with OpenSSL and then OpenSSH, and [00:31:00] it’s not a guarantee, but I think

jerry: it would be good to know that, hey, you have code in your environment that is included by reference and it was just bought by a company who’s known to be a malicious adversary. And we don’t have that. We don’t have any way of doing that today.
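There is no such signal today, as jerry says, but one crude piece of it, Andrew's "no update in a year" test, can be approximated from data GitHub already exposes. This is a sketch, not a real health score: it only looks at the last push date via the public GitHub REST API, and the repository list and one-year threshold are placeholders.

```python
# Rough sketch of the "has this project gone quiet?" check: look up the last push
# date via GitHub's public REST API and flag anything older than a threshold.
# A real health score would also weigh vuln fix times, release cadence, and
# ownership changes; REPOS and STALE_AFTER_DAYS are placeholders.
import json
import urllib.request
from datetime import datetime, timezone

STALE_AFTER_DAYS = 365               # the "one year without an update" heuristic
REPOS = ["example-org/example-lib"]  # placeholder: build this list from your SBOM

def last_push(owner_repo: str) -> datetime:
    url = f"https://api.github.com/repos/{owner_repo}"
    req = urllib.request.Request(url, headers={"Accept": "application/vnd.github+json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        data = json.load(resp)
    # 'pushed_at' is an ISO-8601 timestamp such as "2024-07-01T12:34:56Z"
    return datetime.fromisoformat(data["pushed_at"].replace("Z", "+00:00"))

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    for repo in REPOS:
        age = (now - last_push(repo)).days
        status = "possibly unmaintained" if age > STALE_AFTER_DAYS else "active"
        print(f"{repo}: last push {age} days ago ({status})")
```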

Andrew: So you want like a restaurant health inspector to just show up and be like, all right, show me your cleanliness.

jerry: So I think that we will get there.

Andrew: You want a sign in the window: this restaurant slash GitHub repository earned a B minus, but has great brisket.

jerry: Sometimes you just have to risk it. Good brisket is good brisket. So I think that’s going to happen, but what that doesn’t solve is the demand side. That’s, I think, part of the supply side; you still have to know to go look for the health score.

Andrew: Or have some sort of tooling or third party tool, [00:32:00] some sort of software security suite, that scans your code and alerts you on these things in some way, in theory. And I’m sure, by the way, that there are probably vendors out there that think they do this today and would be happy to pimp us on their solution.

jerry: Oh, I feel quite certain that my LinkedIn DMs will be lit up with people wanting to come on the show to talk about their fancy AI-enabled source code analyzer.

Andrew: But it’s just one more thing devs now have to worry about, and security teams have to worry about. And this is in competition with developing new features and new functionality and fixing bugs. This is now just one more input to worry about, which competes for priorities, which is why it’s not that simple.

jerry: It’s very true. Way back when I was a CISO.

Andrew: You mean two weeks ago?

jerry: Way back. The way I had always characterized it is, using open source software is like adopting a puppy. You can’t ignore it. It needs to be cared for. You have to feed it and clean up after [00:33:00] it and walk it and whatnot. I don’t think that is a common approach. I think we typically consume it as a matter of convenience and assume that it will be good forever. I think we’re starting to get better about developing an inventory of what you have through SBOMs. And that of course will lead to better intelligence on what needs to be updated when it has a vulnerability, and that’s certainly goodness, but I think that the end-to-end process in many organizations needs a lot of work.

Andrew: Yeah. I also think that this is never going to go away for companies. I think, rightly or wrongly, we’ll always be reliant on third party open source software now, and so we’ve got to find a way to manage it. And this is also a relatively rare event, considering the hundreds or maybe thousands of open source projects that people use regularly.

Andrew: This doesn’t happen very [00:34:00] often.

jerry: It’s the shark attack syndrome: you hear about it every time it happens, and so it seems like it happens often, but when it does happen, it can be spectacular.

Andrew: It’s interesting because when these things hit a certain level of press awareness, it also drives third party risk management engagement, vendor to vendor. Inevitably, at least in my experience, when we see something like this hit, you will see, if you are a vendor to other businesses, their third party risk management teams spinning up questionnaires to their suppliers: hey, are you impacted by this, and what’s your plan?

Andrew: Which then drives another sense of urgency and a sense of reaction. That may be false urgency that’s taking your resources away from something that’s more important. But you can’t really ignore it. The urgency goes up when customers are demanding a reaction in this way, whether or not it’s truly your most important risk that you’re working, it doesn’t matter.

jerry: Having come from a service provider, I [00:35:00] lived that pain. And I’m sure you do too. You have to deal with it both ways. You have your own customers who want you to answer their questions, but then you have your own suppliers, and if for no other reason than to be able to answer your customers’ questions with a straight face, you’ve got to go and ask them. I think one of the challenges with that is, where does it end? I’m a supplier to some other company, and I have suppliers, and they have suppliers, and they have suppliers; it’s turtles all the way down. If you assume everybody acted responsibly and they all got their vendor questionnaires out right away, how long would it take to actually be able to authoritatively answer those questions?

jerry: I don’t know. I think there’s a lot of kabuki dance, I don’t know if that’s an appropriate term there.

Andrew: It’s executives saying, we have to do something, go do something. [00:36:00]

jerry: That’s true.

Andrew: And so then the risk management folks or third party risk manager or whoever do something and then they could point, Hey, look, we did something.

Andrew: We’re waiting for responses back from Bob’s budget cloud provider.

jerry: There’s a lot of hand-wringing that goes on. I will also say, having worked in certain contexts, you end up having small suppliers. You may end up with small suppliers who may not know they have to go do something.

jerry: And so your questionnaire may in fact be the thing that prompts them to go take action because their job is to deliver parts. They’re not a traditional service provider. They have some other business focus.

jerry: In those instances, it could very well be, because like you said not everybody has a threat intel team, that you are in fact telling them that they have to worry about something. It doesn’t make it any less annoying though, especially if you have a more robust security program in place. Because, I don’t [00:37:00] know, in my experience I’m not sure anything genuinely beneficial has come from those vendor questionnaires, other than potentially, like I said, the occasional case where you’re telling a supplier who was otherwise unaware.

Andrew: I think it breeds a false sense of security that you’ve got a well managed supply chain and a well managed third party risk management.

Andrew: I question the effectiveness.

jerry: Yeah I can agree with that.

Andrew: So not to be too cynical about it, but then I always wonder, what are you going to do? Okay, let’s say, how soon could you shift to another provider? Let’s play this out. Let’s say you ask me, and I’m running Bob’s budget cloud provider.

Andrew: Do I have polyfill? And I say, I don’t know. What are you going to do? You’re going to cancel your contract. Maybe you’re going to choose to go someplace else. Maybe it’s going to take time. Yeah, it could influence your decision to renew or continue new [00:38:00] business or whatnot. But

jerry: I think what you’re trying to say, and I agree, is it doesn’t change the facts for that particular situation.

Andrew: Right yeah. And do you want me to spend time answering your questions or go fixing the problem?

jerry: I want you to do both, dammit. That’s their view. What do I pay you for?

Andrew: I don’t know. It’s a tough spot. I don’t have a really warm fuzzy feeling about these sorts of fire drills that get spun up around big media infosec events.

Andrew: I think it’s the shark attack thing, and it’s, do you have sharks in your lagoon? Maybe.

jerry: I feel like this whole area is very immature. It’s a veneer that, in most instances, I think is worse than useless because it does create a false sense of security.

Andrew: Yeah, I agree. And how do you know I’m not lying to you when I fill out your little form?

jerry: That’s the concern. If they were lying and there was a breach, you as the [00:39:00] customer would crucify them in the media, or in a lawsuit.

Andrew: Yeah, at the end of the day, it either becomes a breach of contract or, I don’t know, I’m not a lawyer, and I haven’t fully articulated my thoughts on this yet. But there’s something I’ve just never really felt was very effective or useful about these sorts of questionnaires that go out around these well-publicized security events.

jerry: Yeah, I agree. I agree. I think there is likely something sensible as a consumer.

jerry: Yeah. It is helpful to know the situation with your suppliers and how exposed you are, because your management wants to know, hey, what’s my level of exposure to this thing? And you don’t want to turn your pockets inside out and say, I don’t know. But at the same time, I’m not sure that the way that we’re doing it today is really establishing that level of reliable intelligence. The last story comes from Tenable. The title is How the regreSSHion Vulnerability Could Impact Your Cloud Environment. So [00:40:00] regreSSHion is cutely spelled with the SSH capitalized. This regreSSHion vulnerability was a recently discovered slash disclosed vulnerability in OpenSSH.

jerry: I think it affects versions released between 2021 and as recently as a couple of weeks or months ago, and it can, under certain circumstances, allow for remote code execution. So, kind of bad.

Andrew: Yeah, unauthenticated remote code execution against OpenSSH that’s open to the world.

Andrew: Correct, but it’s not that easy to pull off.

jerry: Correct. There are a lot of caveats, and it’s not necessarily the easiest thing to exploit. I think they say it takes about 10,000 authentication attempts. And even with that, you have to understand the exact version of OpenSSH and information about the platform it’s running on, like whether [00:41:00] it’s 32-bit, 64-bit, et cetera.

Andrew: Yeah. And I think that those tests were against 32-bit, and it’s much tougher against 64-bit because you’ve got to basically get the right address collision in memory, is my understanding. Take that with a little grain of salt, but that was my understanding.
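For triage across your own hosts, the quickest first pass is just reading the version banner OpenSSH presents before authentication. The sketch below assumes the affected ranges described in the public advisories at the time (roughly 8.5p1 up to, but not including, 9.8p1, plus very old pre-4.4p1 builds); verify those against your vendor's guidance, since distributions often backport fixes without changing the banner, and the host list is a placeholder.

```python
# Quick triage sketch for CVE-2024-6387 (regreSSHion): grab the SSH version banner
# and flag versions inside the ranges the public advisories described. Distros often
# backport fixes without bumping the banner, so treat a hit as "go verify the patch
# level", not "confirmed vulnerable". HOSTS is a placeholder for your own inventory.
import re
import socket

HOSTS = ["203.0.113.10"]  # placeholder addresses

def ssh_banner(host: str, port: int = 22) -> str:
    with socket.create_connection((host, port), timeout=5) as sock:
        return sock.recv(256).decode("utf-8", errors="replace").strip()

def in_affected_range(banner: str) -> bool:
    m = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
    if not m:
        return False
    ver = (int(m.group(1)), int(m.group(2)))
    # Ranges as described in the advisories: below 4.4, and 8.5 up to (not including) 9.8.
    return ver < (4, 4) or ((8, 5) <= ver < (9, 8))

if __name__ == "__main__":
    for host in HOSTS:
        banner = ssh_banner(host)
        verdict = "check patch level" if in_affected_range(banner) else "outside affected range"
        print(f"{host}: {banner} -> {verdict}")
```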

jerry: But not impossible. And so the point of this post is, OpenSSH is exposed everywhere.

jerry: Like, it’s everywhere. And they point back to cloud, and I think they point to cloud for two reasons. Reason number one is, I think cloud incentivizes, or makes it really easy and in some instances preferable, to expose SSH as a way of managing your cloud systems. And in those instances, it’s almost always going to be OpenSSH. Unless it’s RDP, then it’s all good.

Andrew: It’s much preferred.

jerry: RDP is way better.

Andrew: There’s a GUI. There’s pictures.

jerry: There’s pictures. That’s right.

Andrew: A mouse works.

jerry: How [00:42:00] much better could it get? And then the other reason they are picking on cloud providers is that, as a consumer, with most cloud providers you’re provisioning your servers using images provided by the cloud provider. And those images may not be updated as frequently as maybe they should be. And so, therefore, when you provision a system, it is quite likely to come vulnerable right out of the gate, and you’ve got to get in there and patch it right away.

jerry: You’ve got to know that’s your responsibility and it’s not actually protected by the magic cloud security dust.

Andrew: At least, not your cloud. Maybe Bob’s budget secure cloud is. I don’t know, that joke didn’t work out, but you make an interesting point. I was talking to somebody about this, and I was trying to make the point that when we started doing this stuff pre-cloud, because we’re old, [00:43:00] the concept of something being exposed to the internet was a big deal. Everything was in a data center behind a firewall, typically. And typically, if you wanted to expose something to the internet, like an SSH port or an HTTP port or an HTTPS port, that usually had a lot of steps to go through, and most companies would also make sure that you were hardening it and making sure that it really needed to be exposed.

Andrew: But with cloud, and I think you referenced this, it’s exposed by default. Most of the time there’s not this concept of a thick firewall where only the most important, well vetted and well secured things would be exposed to the internet. There is no more quote-unquote perimeter. Everything’s just open to the internet. And that’s the way the paradigm is taught now with a lot of cloud providers: there isn’t necessarily this concept of private stuff in the cloud versus public stuff. It’s just stuff. And yeah, they talk about limited ACLs and only opening the ports you have to and that sort of thing.

Andrew: But I think it’s super easy and super simple for people to just build something and say, I’ve got to [00:44:00] get to it, so open up SSH, or whatever, or literally RDP, and do what they’ve got to do. And to your point, yeah, most of these images are not hand-rolled images. It’s some sort of image that you grab off of some catalog and spin up, and it probably has a bunch of vulnerable stuff in it.

Andrew: But SSH we think of as safe-ish, and even security folks are like, only have SSH open. But this to me speaks more and more to the fact that it still matters what your attack surface is, and you still shouldn’t be exposing stuff that doesn’t need to be exposed to the internet, because you never know when something like this is going to come along, even on your quote-unquote safe protocols that are open to the internet.

Andrew: So the less you have exposed, the less you have to worry about this. Now, I’m not saying that the only thing that gets attacked is the stuff that’s open to the internet, we know that’s not true, but it’s one more hurdle that the bad guy has to get through. And again, it buys you more time to manage stuff if it’s not directly exposed as an attack surface to a random guy coming from [00:45:00] China.

jerry: So the recommendations coming out of this are a couple. First is making sure that you update, obviously, for this vulnerability; patch the vulnerability. Second is that when you are using cloud services and you’re provisioning systems with a cloud-provided image, make sure that you are keeping them patched; even newly provisioned systems are probably missing patches, and they need to be patched post-haste. Then, limiting access: they talk about least privilege, and they talk about that on two axes.

jerry: The first axis is with regard to network access to SSH: not everything should have access to SSH. It is not a bad practice to go back to the bastion host approach, a relatively untrusted system that you then use as a jumping-off point to get deeper into the network, so that you don’t have every one of your systems’ SSH exposed to the internet. It gives you [00:46:00] one place to patch. It gives you a lot more ability to focus your monitoring and whatnot. Now, the other axis they point out is that, in the context of cloud providers, you can assign access privileges to systems. And so if your system is compromised, the adversary is going to inherit all the access that you’ve given to it through your cloud provider. And that could be access to S3 storage buckets or other cloud resources that may not be directly on the system that was compromised, but because that system was delegated access to other resources, it provides basically seamless access for an adversary to get to them. And that’s another, in my view, benefit of that relatively untrusted bastion host concept that doesn’t have any of those privileges associated with it.
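One concrete way that network axis shows up in practice: rather than an SSH rule open to 0.0.0.0/0, reference the bastion's security group so only it can reach port 22 on the application tier. The sketch below assumes AWS and boto3 purely as an example, and the group IDs are placeholders; other providers have equivalent constructs.

```python
# Sketch (AWS and boto3 assumed only as an example): allow SSH into the application
# tier solely from the bastion host's security group, never from the open internet.
# The security group IDs are placeholders for your own environment.
import boto3

APP_SG_ID = "sg-0123456789abcdef0"      # security group on the application hosts
BASTION_SG_ID = "sg-0fedcba9876543210"  # security group on the bastion host

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId=APP_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        # Reference the bastion's group instead of a CIDR range, so only instances
        # in that group can reach SSH on the app tier.
        "UserIdGroupPairs": [{"GroupId": BASTION_SG_ID}],
    }],
)
```

Pairing that with a bastion that carries no delegated cloud privileges of its own keeps a compromise of the jump host from inheriting access to buckets and other resources, which is the second axis described above.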

Andrew: Yeah, it’s a tough sell. I don’t think most cloud [00:47:00] architects think about it that way at all.

jerry: You are absolutely right. They don’t think about that until they’ve been breached. And then they do. Yeah. And I can authoritatively say that given where I came from.

Andrew: That’s fair. And part of the goal of this show is to try to take lessons. So you don’t have to learn the hard way.

jerry: There is a better way. And no, it’s not as convenient. But not everything that we used to do back in the old days, when we rode around on dinosaurs, was a bad idea. There are certain things that probably are still apt even in today’s cloud-based world.

jerry: I think one of the challenges I’ve seen is, how best to describe it, the bastardized embracing of zero trust. In concept, it’s a great idea, but it’s like the whole NIST password guidance that came out a couple of years ago, where people looked at it and said, oh, NIST says I don’t need to change my [00:48:00] passwords anymore. It does actually say that, but it’s in the context of several other things that need to be in place. In the context of zero trust, that also portends certain other things. I think where zero trust starts to break down is when you have vulnerabilities that allow the bypassing of those trust enforcement points.

Andrew: Yeah. If you can’t trust the actual authentication and authorization technology involved, zero trust is dependent upon that. I think the takeaway for me is, you can never get to zero risk, but you never know when you might have to rapidly patch something really critical.

Andrew: And are you built to respond quickly? Can you identify quickly? Can you find it quickly? And can you patch it quickly? That’s the question.

jerry: And you can make it harder or easier on yourself. Design choices you make can make that harder or easier.

Andrew: Yeah. As well as how you run your teams. One thing that I’ve often tried to instill in [00:49:00] the teams that I work with is I can’t tell you what vulnerabilities are going to show up in the next quarter, but I know something’s going to show up. So you should plan for 10 to 20 percent of your cycles to be unplanned, interrupt driven work driven by security.

Andrew: And if you don’t, if you’re committing all of your time to things that aren’t security, when I show up, it’s a fire drill. But I know I’m going to show up and I know I’m going to have asks, so plan for them. Even if I can’t tell you what they are, a smart team will reserve that time as an insurance policy, but that’s a tough sell.

Andrew: It’s a tough sell. Yeah. They don’t always buy into it, but that’s my theory. I try at least to explain it and try to get them to buy in. And sometimes it works, sometimes it doesn’t.

jerry: All right. I think I think with that, we’ll call it a show.

Andrew: Given the weather gods are fighting us today.

jerry: Yeah. I see [00:50:00] that it’s starting to move into my area, so it’ll probably be here as well. So thank you to everybody for joining us again. Hopefully you found this interesting and helpful. If you did tell a friend and subscribe.

Andrew: And buy something from our sponsor today, sponsored by Jerry’s llamas,

jerry: The best llamas there are. All right.

Andrew: I feel like all the podcasts need a, use our code, Jerrysbigllamabox dot com.

Andrew: I’m just going to stop before this goes completely off the rails.

jerry: That happened about 45 minutes ago.

jerry: So just a reminder, you can follow the podcast on our website at defensivesecurity.org. You can follow Lerg at

Andrew: Lerg, L-E-R-G, on both X slash Twitter and infosec.exchange slash Mastodon.

jerry: And you can follow me on infosec.exchange at jerry. And [00:51:00] with that, we will talk again next week. Thank you.

Andrew: Have a great week, everybody.

Andrew: Bye bye.

  continue reading

267 episoder

Artwork
iconDela
 
Manage episode 428285719 series 1344233
Innehåll tillhandahållet av Jerry Bell and Andrew Kalat, Jerry Bell, and Andrew Kalat. Allt poddinnehåll inklusive avsnitt, grafik och podcastbeskrivningar laddas upp och tillhandahålls direkt av Jerry Bell and Andrew Kalat, Jerry Bell, and Andrew Kalat eller deras podcastplattformspartner. Om du tror att någon använder ditt upphovsrättsskyddade verk utan din tillåtelse kan du följa processen som beskrivs här https://sv.player.fm/legal.

Links:

https://www.darkreading.com/cybersecurity-operations/a-cisos-guide-to-avoiding-jail-after-a-breach

https://www.csoonline.com/article/2512955/us-supreme-court-ruling-will-likely-cause-cyber-regulation-chaos.html/

https://sansec.io/research/polyfill-supply-chain-attack

https://www.securityweek.com/over-380k-hosts-still-referencing-malicious-polyfill-domain-censys/

https://www.tenable.com/blog/how-the-regresshion-vulnerability-could-impact-your-cloud-environment

Transcript
===

[00:00:00]

jerry: All right. Here we go. Today is Sunday, July 7th, 2024, and this is episode 272 of the defensive security podcast. My name is Jerry Bell and joining me tonight as always is Mr. Andrew Kalat.

Andrew: Good evening, Jerry. This is a newly reestablished record twice in a week or

jerry: twice in a week. I can’t believe it.

Andrew: I know. Awesome. Yeah. You just had to, quit that crappy job of yours that provided income for your family and pets and you know everything else but now that you’re unemployed house But now that you’re an unemployed bum.

jerry: Yeah, I can podcast all I want 24 7 I think i’m gonna become an influencer like i’m gonna just be live all the time now

Andrew: you could I really I look forward to you asking me to subscribe and hit that notify button.

jerry: That’s right. Hit that subscribe button

Andrew: Like leave a rating and a comment

jerry: like and subscribe All [00:01:00] right getting with the program we’re we’re getting back into our normal rhythm. As per normal, we’ve got a couple of stories to talk about. The first one comes from Dark Rating and the title is, A CISO’s Guide to Avoiding Jail After a Breach.

Andrew: Before we get there.

Andrew: I want to throw out the disclaimer that thoughts and opinions do not reflect any of our employers, past, present, or future.

jerry: That’s a great point. Or, my cats.

Andrew: Unlike you, I have to worry about getting fired.

jerry: I still have a boss. She can fire me.

Andrew: That’s called divorce, sir. But true.

jerry: Yeah.

Andrew: Anyway, back to your story.

jerry: Anyway, yeah. CISO’s Guide to Avoiding Jail After a Breach. So this is this is following on a upcoming talk at, I think it’s Black Hat talking about how CISOs can try to insulate themselves from the [00:02:00] potential legal harms or legal perils that can arise as a result of their jobs. It’ll be interesting to see what’s actually in that talk, because the article itself, in my estimation, despite what the title says, doesn’t actually give you a lot of actionable information on, How to avoid jail. They do they do a quote Mr. Sullivan, who was the CISO for Uber.

jerry: And they give a little bit of background and how it’s interesting that he he is, now a convicted felon. Although I think that’s still working its way through the the appeals process. Though he previously was appointed to a cybersecurity board by president Obama.

jerry: And before that he was a federal prosecutor. And in fact, as the article points out, he was one of the process, he was the prosecutor who prosecuted the first DMCA case, which I thought was quite interesting. You didn’t know that about him, but what’s interesting is this article at least is based a lot on [00:03:00] interviews with him and including recommendations on things like communicating with your your board and your executive leadership team. But I’m assuming that He had done that at Uber.

Andrew: Yeah, this is such a tough one for me, and it makes, I think a lot of good people make references in the article. I want to shy away from being a CISO if there’s this sort of potential personal liability. When, there’s a lot of factors that come into play about why a company might be breached that aren’t always within the control of the CISO, whether it be budget, whether it be focus, whether it be company priorities, and you have an active adversary who is looking for any possible way to get into your environment.

Andrew: So what becomes the benchmark of what constitutes a breach? Negligence up to the point of going to jail is the one that [00:04:00] I’ve struggled with so much and I think those who haven’t really worked in the field much can very easily just point to mistakes that are made, but they don’t necessarily understand the complexity of what goes in to that chain of events and chain of decisions that led to that situation.

Andrew: Every job I’ve been in where we were making serious decisions about cybersecurity was a budgetary trade off and a priority trade off and a existential threat to the company if we don’t do X, Y, and Z. Coming from five or six different organizations at the same time coming up to that CFO or the CEO and they have to make hard calls about where that those resources go and those priorities go to keep people employed. And you pair that with a very hostile, third party intentionally trying to breach you it’s a tough situation and I don’t think any of us knows what the rules look like. At this point to keep yourself out of [00:05:00] trouble. You’ve been in this position, not in the, going to jail part, but that this threat was much more meaningful to you in your last role than it is to me.

jerry: It is very uncomfortable. I’ll tell you when when the Uber CISO got got charged and the CISO of SolarWinds got charged, that’s It’s an uncomfortable feeling an exposed feeling. In criminal law, there’s this concept of strict liability.

jerry: And strict liability basically means, it means the thing happened. And because the thing happened and you are responsible for the thing, it doesn’t matter that, there, there’s no mitigating factors. Your your state of mind, your motivations, , none of that matters in a strict liability case.

jerry: And to some extent, it feels like that in this instance, I don’t think it really is, although, when you’re a CISO sometimes that thought can cross your mind. Now in the article, they actually point out that, though the CISO is the [00:06:00] lightning rod when things go wrong. It is not just the CISO that is responsible for, what went wrong.

jerry: As they describe it, it takes a community and the results of that community are, as we’ve now seen or is alleged is, being pinned on a particular individual. And I, I think and I know from having read the Uber case I’ve not. I’m not so familiar with the SolarWinds case although I’m obviously familiar with what happened in SolarWinds case, with Uber, it was a situation where they they had a a, basically a data breach and the allegation was that the ad, the adversary was trying to hold it for ransom and they They successfully negotiated having that, at least this is my understanding of how the case went they negotiated a payment through [00:07:00] the bug bounty program to the adversaries, perhaps, maybe adversaries isn’t the right word allegedly deleted the data and because of that, they didn’t report the breach.

jerry: And so it was really, the failure to report that breach which the government was coming after him for, basically being deceptive to investors. And it’s not necessarily that he was malicious or what have you, but no, basically my layman’s rate is he was defrauding

jerry: investors by withholding information about a breach that he was obligated to report. So that’s a tough situation. And what concerns me is that this is somebody who was a federal prosecutor so I had I had plenty of competent legal counsel surrounding me.

jerry: And that was a good thing. It felt good. And I’m quite certain he did too, further he himself [00:08:00] was a prosecutor. And so I have a hard time accepting, and maybe it’s just very naive of me. I’d have a hard time accepting that, He was actually trying to misrepresent things or hide things.

jerry: I guess that’s where I’m at on this one. It feels bad and the article points out that, because of this, one of the, one of the whispers as they describe it in the industry is that it’s forcing people who are qualified for the role and understand the perils that they face to shy away from taking that role.

jerry: And that then leads to people who are maybe not as qualified taking the role and then obviously not doing as good of a job. And therefore actually, the net effect is a weaker security posture.

Andrew: Yeah. If we try to take some advice out of this, or give some advice, the one thing they mentioned is, for lack of a better [00:09:00] term, to tie some other people in the organization to the same decision, right?

Andrew: Make sure that your board is aware and your executives are aware, and that you're not the only one holding the risk bag at the end of the day. If you have to own the risk yourself, then you need to have formal control. Now, in this case, in theory, he got in trouble because he didn't notify the SEC; it was a public company and a material breach.

Andrew: And so the issue was more that stockholders weren't informed than that he was negligent in his cybersecurity duties in terms of technical controls and audits and that sort of thing. However, that feels like the way things are going. We hear more and more calls to hold companies accountable directly and legally, with risk of jail for breaches.

Andrew: There's a lot of nuance here, and that's not exactly what happened in this case. But I find that very troubling, and [00:10:00] obviously I have a bias because I'm in the industry and I would potentially be at risk of that. But I just don't think it's that simple. No CISO has so much control over an environment that they should be solely responsible for taking the fall if a breach were to happen. Although that does happen all the time, it's one thing to lose your job; it's another thing to go to jail.

jerry: Yeah. And I think the author here points out that, at least as Mr. Sullivan describes it, he feels like he was put forward by Uber as a sacrificial lamb. I guess what I don't really understand is how much better it would have been for him if he had done a more effective job at creating what I'll loosely call co-conspirators within the company.

jerry: I think what they're trying to say is that you as a CISO should go to the board, to your CEO, to whoever, and articulate the risk, [00:11:00] not with the intention of them becoming co-conspirators, but of them saying, gosh, now I know about it and I don't want to go to jail, so I'm going to reallocate the money or do whatever is required to address the particular risk. Now, in this instance, it wasn't a case of we have to go spend more money on security. It was more, hey, we had this issue, do we disclose it or not?

jerry: And I think that's maybe a slightly different take. I would assume, by the way, just again having played in this pool, that he didn't make that decision alone.

Andrew: Sure. Part of me, and this maybe is not exactly apples to apples, thinks about a lawyer advising an executive on the legality of something: that executive can take that advice or reject it. A CISO advising a company on the legality or outcome or [00:12:00] risk of a decision is in a similar spot. They don't always make that decision. They're somewhat beholden to their leadership on which way the company wants to go.

jerry: There was an unwritten aspect to this that I wanted to discuss a bit. The subtext of all of this, I think, is going to create an adversarial relationship between the CISO and the CISO's employer, because it feels to me like what the government would have preferred is for the CISO to run to the government and say, hey, my employer isn't acting ethically.

jerry: I'm not necessarily saying that's what happened in Uber's case or any of these cases, but I think that's what the government is trying to push. Now, granted, there's a not-so-gray line beyond which you have an ethical duty to rat on your employer.

jerry: You can imagine all sorts of situations, not [00:13:00] even in the realm of security, where you would be obligated to go and report them. But it feels to me like they're trying to lower that bar.

Andrew: Yeah, I can see that. Unfortunately, this is probably going to be messy to get sorted out. It's going to take a lot of case law and a lot of precedent, and that makes me nervous. If I were offered a CISO opportunity at a public company, I'd probably think real long and hard about passing on it, or about trying to secure some level of protection for myself to avoid this problem.

jerry: Our next story throws some sand in the gears there. This one comes from CSO Online, and the title is US Supreme Court ruling will likely cause cyber regulation chaos.

jerry: And so unless you've been living under a rock, or perhaps just not in the US, you're probably aware that the Supreme Court, I guess it was last [00:14:00] week, overturned what has been referred to as the Chevron deference doctrine. The name comes from the oil company Chevron, and it stems from a 1984, so 40-year-old, ruling by the Supreme Court that, to summarize, said ambiguous laws passed by Congress can be interpreted by regulators like the FCC, the FDA, the SEC and so on. In the US, at least, a lot of laws are very high level. To pick a stupid example, a law will say, use strong authentication. And then it'll be up to a regulator to say strong authentication means that you use multi factor authentication.

jerry: That isn’t SMS based.

jerry: That initial ruling was intended to establish that courts aren't experts [00:15:00] in all of these subject areas.

jerry: And by default, courts should be deferring to these regulators. That has stood the test of time for quite a long time, and now it was overturned in this session of the Supreme Court, whatever you want to think about the sensibility of it.

jerry: I think the challenge that we now have, and I have made the joke on social media, is that right now the most promising career opportunity has got to be trial lawyer, because there are going to be all manner of court cases challenging different regulations which, in the past, were pretty well established as following rules set by the executive branch in the U.S. But now, as this article points out, that covers things ranging from the SEC's requirements around data breach [00:16:00] notifications to the Gramm-Leach-Bliley Act of 1999.

jerry: There's a broad range of regulations in the security space which are likely to be challenged in court, because the laws behind them basically don't prescribe the way they're currently being enforced. So we should assume these will be challenged in court, and given the Supreme Court's ruling, the established interpretation coming out of the executive branch is no longer to be deferred to.

jerry: And it's unclear at this point, by the way, how courts are going to pick up their new mantle of responsibility in interpreting these things, because judges aren't experts in security. So I think that's why they're calling it chaos right now, because we don't really know what's going to happen. Over the long term, I think things will normalize.

Andrew: Yeah. Businesses hate uncertainty.

jerry: [00:17:00] Yes.

Andrew: And for good or ill, businesses can have a huge impact on government legislation. So I think this will get sorted out eventually, but I think you're right. The rules we counted on from these regulatory agencies, or at least tried to work within and understand, have now all changed.

Andrew: There's probably going to be a ton of these rules that have the force of law being challenged now in court. And I think ultimately Congress probably has the reins to fix this if they want, but that's another interesting problem. SCOTUS is saying, look, you regulatory agencies are taking the power of law into your own hands, and we don't like that.

Andrew: The power of law comes from Congress and the elected officials in Congress. So, Congress, you need to do a better job of defining these rules specifically. That presents its [00:18:00] own set of interesting challenges, because how well will they do that? We've seen a lot of well intentioned laws, especially in very complex areas, have their own set of problems because of all the trade offs and compromises that go into legislative work in Congress.

Andrew: So it will be very interesting. This could have a lot of wide ranging impacts. And again, to your point, I'm not getting anywhere near whether they should or shouldn't have done this, but I think the intent was that unelected regulators shouldn't make law, Congress should make law. But that's easier said than done.

jerry: Yeah. I think it's that, plus the constitution itself very directly says that it is up to the judicial branch, and not the executive branch, to interpret laws passed by Congress. And that's [00:19:00] basically what the majority opinion says, if you read it, to sum it up.

jerry: I think the challenge is that when the constitution was written, it was a much, much simpler time.

Andrew: There are a lot of interesting arguments out there and a lot of very passionate opinions on this, so I'm trying very hard to stay away from the political rhetoric around it. I concur that this throws a lot of accepted precedent around our industry into question.

jerry: But going back to the previous story, and again, I'm not an attorney, if I were Joe Sullivan, I would feel like I have a new avenue of appeal.

Andrew: Sure. Yeah. His argument, in essence, would be that the SEC made this law, and based on this particular ruling by SCOTUS, [00:20:00] that was an inappropriate ruling and, or, an inappropriate law.

Andrew: And therefore, and obviously I'm not a lawyer, because I'm not articulating this like a lawyer, he could say that's why I shouldn't have been tried and convicted, and please politely pound sand.

jerry: I do think the opinion said something along the lines of, it doesn't overturn previously decided court cases, but people are due their day in court.

jerry: So if he has an avenue for appeal, that's how the justice system works. This is hot off the presses; I think the echoes are still circling the earth. We'll be seeing the outcome of this for a while, and I don't think we exactly know what's going to happen next. Stay tuned and we'll check in on this periodically.

jerry: Okay. The next one comes from Sansec, and there are actually two stories, one from Sansec and one from Security Week. This is [00:21:00] regarding the polyfill.io issue. I'm hesitant to call it a supply chain attack, but I guess that's what everybody's calling it.

Andrew: Come on, get on the bandwagon.

jerry: I know, I know.

Andrew: If you want to be an influencer, man, you got to use the influencer language.

jerry: It makes me feel dirty to call it a supply chain attack. Why am I so uncomfortable calling it a supply chain attack? I don't know. That's a good question, and the answer is I don't really know.

jerry: It just feels wrong.

Andrew: Did your mother talk to you a lot about supply chain attacks?

jerry: See that’s, maybe that’s the problem.

Andrew: Okay. Imagine you’re walking in a desert and you come across a supply chain attack upside down stuck on its back. Do you help it? But you’re not turning it over. Why aren’t you turning it over, Jerry?

jerry: I don’t even know where this is going.

Andrew: I had to lighten it up after the last two stories, man. You were being a downer.

jerry: Polyfill is [00:22:00] a JavaScript library that many organizations included in their own websites. Oversimplifying it, it enables some of the more advanced or newer functions of modern web browsers to work in older versions of web browsers. And, I don't fully understand the sanity behind this, and maybe this will start to cause some rethink of how it works, but this JavaScript library is called by reference. Rather than it being served up by your web server, you are referring to it as a remote document hosted, in this instance, on polyfill.io.

Andrew: So instead of the static code living in your HTML, you're saying go get the code snippet from this host and serve it up.

jerry: Correct. It's telling the web browser to go get the code directly. What happened [00:23:00] back in February, and I don't fully understand what precipitated this, is that the polyfill.js library and the polyfill.io domain were sold to a Chinese company. And that company then altered the JavaScript library to, depending on where you're located and other factors, either serve you malware or serve you spam ads and so on.

Andrew: So you’re saying there are not hot singles in my area ready to meet me?

jerry: It’s surprising, but there probably are actually.

Andrew: carry on.

jerry: They can't all be using Polyfill. Anyhow, there were, depending on who you believe, somewhere from 100,000 [00:24:00] websites that were including this polyfill.io code, up to tens of millions as purported by Cloudflare. At this point, by the way, the issue is somewhat mitigated.

jerry: I'll come back to why I say somewhat mitigated. The polyfill.io domain, which was hosting the malicious code, has been taken down, and most of the big CDN providers are redirecting to their own local, known-good copies. But again, that hasn't solved the underlying issue: sites are still pointing to JavaScript code that's hosted by somebody else. Although, presumably, companies like Cloudflare and Akamai and Fastly are probably more trustworthy than Funnull in China.

Andrew: Yeah yeah, because they actually came out and denied any malicious intent and cried foul on this whole thing too, which was interesting.

jerry: Yes. [00:25:00] But people have done a pretty good job; in fact, the Sansec report gives a pretty thorough examination of what was being served up, and you can very clearly see it's serving up some domain lookalikes. I find it hilarious: googie-anaiytics.com, which is supposed to look like google-analytics.com, and I suppose if it were in all caps, it would probably look a lot more like it. The other interesting thing is that these researchers noticed that the same company also owns several other domains, some of which have also been serving up malware.

jerry: And those have also been taken down, but there are others that haven't been seen serving malware yet and are still active. So it's probably worth having your threat intel teams take a look at this, because my guess would be that at some point in the future, the [00:26:00] other domains that this organization owns will probably likewise be used to serve up malware.
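
For anyone who wants to do that hunt themselves, here is a minimal sketch of the idea in Python: grep your locally stored HTML or template files for script tags that pull JavaScript from domains you no longer trust. The directory path and the domain list are placeholders, not the full indicator list; substitute the actual domains published in the Sansec and Censys write-ups linked above.

```python
# Sketch: find script tags in local HTML/templates that reference suspect domains.
# Paths and the domain list below are illustrative placeholders only.
import pathlib
import re

SUSPECT_DOMAINS = ["polyfill.io"]  # extend with the related domains from the reports
SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']([^"\']+)["\']', re.IGNORECASE)

def scan(root: str = "./site") -> None:
    # Walk the tree and print every external script reference that matches.
    for path in pathlib.Path(root).rglob("*.html"):
        text = path.read_text(errors="replace")
        for src in SCRIPT_SRC.findall(text):
            if any(domain in src for domain in SUSPECT_DOMAINS):
                print(f"{path}: external script reference -> {src}")

if __name__ == "__main__":
    scan()
```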

Andrew: Bold of you to assume that all of us have threat Intel teams.

jerry: Fair enough. You do have one, it just may be you.

Andrew: Correct. Me and Google.

jerry: Yes.

Andrew: And my RSS feed of handy blogs, but yes,

jerry: that’s right,

Andrew: but yeah, they seem to have, oh, a wee bit of a history of being up to no good.

Andrew: This particular Chinese developer.

jerry: Yes. Defending against this, I think, is pretty tough. Beyond what I said on the supply side, I think it's a bad idea. Maybe I'm a purist, maybe I'm old school and should be put out to pasture, but I think it's risky, as we've seen many times now.

jerry: This is by far not the first time this has happened with including things by reference [00:27:00] hosted as part of some kind of open source program. Not necessarily picking on open source there; I think it happens less often with commercial software. But we've now seen it happen quite a few times with these open source programs, including things like browser extensions and whatnot.

jerry: Now, having said that, you can imagine a universe where this existed simply and solely as a GitHub repo, and companies, instead of referring to polyfill.io, were downloading the polyfill code to their own web servers. Most likely you would have between a hundred thousand and ten million websites serving locally hosted copies, but then again, nobody updates.

Andrew: Right? They wouldn't be impacted by this, but they'd be running 28 year old versions.

jerry: So maybe not.

Andrew: Yeah, but boy, to your point, it gives me a little bit of the [00:28:00] heebie jeebies to say that the website you're responsible for is dynamically loading content you don't have control over and serving it, but that's perhaps very naive of me.

Andrew: I don't do much website development, so I don't know if that's common, but as a security guy, that makes me go, ooh, that's risky. We don't control that at all, some third party does, and we're serving it to our customers or visitors who come to our website, and we just have to trust it. That probably exists in many other aspects of a modern supply chain or a modern development environment, where you just have to trust it and hope that

Andrew: people are picking up any sort of malicious behavior and reporting it, as they did in this case, which is helpful. But then it causes everybody to scramble to find where they're using this, which goes to, hey, how good is your software bill of materials or software asset management program, and how quickly can you identify where you're using it?

Andrew: And then there was a lot of confusion when this first came out, because there are different sorts of [00:29:00] instances of polyfill, and some were impacted and some were not, so what was truly at risk? The upside is that the domain was blackholed pretty quickly. Anyway, it seems so fragile, right? You've got this third party code that you don't control, you don't know who's at the other end, and you've probably ignored that it's even out there and forgotten about it, especially since this is defunct code. And that's a whole other area that drives me a little crazy at night: how do you know when an open source project is no longer being maintained and has silently or quietly gone end of life, and you should be replacing it? I've contemplated things like, hey, if there hasn't been an update within one year, do we call that no longer maintained?

Andrew: I don't know. I don't have a good answer. I play around with that idea with my developers, because we want to make sure that code is well maintained and that the third party code we're using is up to date. We don't want end of life code in general, but I don't know what [00:30:00] constitutes end of life in open source anymore.

jerry: I think we will eventually see some sort of health rating for open source projects. And that health rating will be based on things like: where are the developers located in the world? How long, on average, does it take for reported vulnerabilities to get fixed? How frequently are commits and releases being made? And other things like that. But that doesn't necessarily mean a whole lot. Look at what happened with, what was it, xz.
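
As a rough illustration of what one slice of such a health rating could automate, here is a sketch that checks how long it has been since a project's last commit, using the public GitHub API. The repository names and the one-year staleness threshold are just examples, real tooling would also need authentication, rate-limit handling, and the other signals mentioned above, and, as the xz incident shows, a project can look healthy by these measures and still ship malicious code.

```python
# Sketch: flag dependencies whose upstream GitHub repo has not seen a commit
# in over a year. Repo names and the threshold are hypothetical examples.
# Unauthenticated GitHub API calls are rate limited; auth is omitted for brevity.
import datetime
import json
import urllib.request

STALE_DAYS = 365  # the "no update in a year" rule of thumb discussed on the show

def last_commit_date(owner: str, repo: str) -> datetime.datetime:
    url = f"https://api.github.com/repos/{owner}/{repo}/commits?per_page=1"
    with urllib.request.urlopen(url) as resp:
        commits = json.load(resp)
    # GitHub returns ISO 8601 timestamps like "2024-07-01T12:34:56Z"
    ts = commits[0]["commit"]["committer"]["date"]
    return datetime.datetime.fromisoformat(ts.replace("Z", "+00:00"))

def looks_unmaintained(owner: str, repo: str) -> bool:
    age = datetime.datetime.now(datetime.timezone.utc) - last_commit_date(owner, repo)
    return age.days > STALE_DAYS

if __name__ == "__main__":
    for owner, repo in [("example-org", "example-lib")]:  # hypothetical dependency list
        status = "possibly unmaintained" if looks_unmaintained(owner, repo) else "recently updated"
        print(f"{owner}/{repo}: {status}")
```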

Andrew: Yeah. Yeah.

jerry: That was, arguably, well, I won't call it healthy, right?

jerry: But it was an active project that had a malicious contributor who found ways of contributing malicious code that were difficult to discern. And then you look at what happened with OpenSSL and then OpenSSH, and [00:31:00] it's not a guarantee, but I think

jerry: it would be good to know that, hey, you have code in your environment that is included by reference and it was just bought by a company who’s known to be a malicious adversary. And we don’t have that. We don’t have any way of doing that today.

Andrew: So you want like a restaurant health inspector to just show up and be like, all right, show me your cleanliness.

jerry: Yeah, so I think that we will get there.

Andrew: You want a sign in the window: this restaurant slash GitHub repository earned a B minus, but has great brisket.

jerry: Sometimes you just have to risk it. Good brisket is good brisket. So I think that's going to happen, but what that doesn't solve is the demand side. The health score is the supply side; you still have to know to go look for it.

Andrew: Or have some sort of tooling or third party tool, [00:32:00] some sort of software security suite, that scans your code and alerts you on these things in some way, in theory. And I'm sure, by the way, that there are probably vendors out there that think they do this today and would be happy to pitch us on their solution.

jerry: Oh, I feel quite certain that my LinkedIn DMs will be lit up with people wanting to come on the show to talk about their fancy AI enabled source code analyzer.

Andrew: But it's just one more thing devs now have to worry about, and security teams have to worry about. It's in competition with developing new features and new functionality and fixing bugs; it's one more input that competes for priorities, which is why it's not that simple.

jerry: It’s very true. Way back when I was a CISO.

Andrew: You mean two weeks ago?

jerry: Way back. The way I always characterized it is that using open source software is like adopting a puppy. You can't ignore it. It needs to be cared for. You have to feed it and clean up after [00:33:00] it and walk it and whatnot. I don't think that is a common approach. I think we typically consume it as a matter of convenience and assume it will be good forever. I think we're starting to get better about developing an inventory of what you have through SBOMs, and that of course will lead to better intelligence on what needs to be updated when it has a vulnerability, and that's certainly goodness, but I think the end to end process in many organizations needs a lot of work.
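
To make the SBOM point concrete, here is a minimal sketch of the kind of lookup an inventory enables: given a CycloneDX-style JSON SBOM, check whether a component you are worried about shows up. The file name and the component list are hypothetical examples.

```python
# Sketch: search a CycloneDX JSON SBOM for components of interest.
# The SBOM path and the component names are placeholders.
import json

SUSPECT_COMPONENTS = {"polyfill"}  # names you want to hunt for, lowercase

def find_components(sbom_path: str):
    with open(sbom_path) as f:
        sbom = json.load(f)
    hits = []
    # CycloneDX stores dependencies in a top-level "components" array
    for comp in sbom.get("components", []):
        if comp.get("name", "").lower() in SUSPECT_COMPONENTS:
            hits.append((comp.get("name"), comp.get("version"), comp.get("purl")))
    return hits

if __name__ == "__main__":
    for name, version, purl in find_components("sbom.cyclonedx.json"):
        print(f"Found {name} {version} ({purl}) - review whether it is still trustworthy")
```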

Andrew: Yeah. I also think this is never going to go away; rightly or wrongly, companies will always be reliant on third party open source software now, so we've got to find a way to live with it. And this is also a relatively rare event: of the hundreds or maybe thousands of open source projects that people use regularly,

Andrew: This doesn’t happen very [00:34:00] often.

jerry: It's the shark attack syndrome: you hear about it every time it happens, so it seems like it happens often, but when it does happen, it can be spectacular.

Andrew: It's interesting, because when these things hit a certain level of press awareness, it also drives third party risk management engagement between vendors. Inevitably, at least in my experience, when something like this hits, if you are a vendor to other businesses, you will see their third party risk management teams spinning up questionnaires to their suppliers: hey, are you impacted by this and what's your plan?

Andrew: Which then drives another sense of urgency and reaction. That may be false urgency that takes your resources away from something more important, but you can't really ignore it. The urgency goes up when customers are demanding a reaction, whether or not it's truly the most important risk you're working; it doesn't matter.

jerry: Having come from a service provider, I [00:35:00] lived that pain, and I'm sure you do too. You have to deal with it both ways. You have your own customers who want you to answer their questions, but then you have your own suppliers, and if for no other reason than to be able to answer your customers' questions with a straight face, you've got to go and ask them. I think one of the challenges with that is, where does it end? I'm a supplier to some other company, and I have suppliers, and they have suppliers, and they have suppliers; it's turtles all the way down. Even assuming everybody acted responsibly and got their vendor questionnaires out right away, how long would it take to actually be able to authoritatively answer those questions?

jerry: I don't know. I think there's a lot of kabuki dance, if that's an appropriate term there.

Andrew: It’s executives saying, we have to do something, go do something. [00:36:00]

jerry: That’s true.

Andrew: And so then the risk management folks, or the third party risk manager, or whoever, do something, and then they can point and say, hey, look, we did something.

Andrew: We’re waiting for responses back from Bob’s budget cloud provider.

jerry: There's a lot of hand wringing that goes on. I will also say, having worked in certain contexts, you may end up with small suppliers who don't know they have to go do something.

jerry: And so your questionnaire may in fact be the thing that prompts them to go take action because their job is to deliver parts. They’re not a traditional service provider. They have some other business focus.

jerry: In those instances, because like you said not everybody has a threat intel team, your questionnaire may in fact be what tells them they have something to worry about. It doesn't [00:37:00] make it any less annoying, though, especially if you have a more robust security program in place. Because, in my experience, I'm not sure anything genuinely beneficial has come from those vendor questionnaires, other than potentially, like I said, the occasional case where you're telling a supplier who was otherwise unaware.

Andrew: I think it breeds a false sense of security that you've got a well managed supply chain and a well managed third party risk program.

Andrew: I question the effectiveness.

jerry: Yeah I can agree with that.

Andrew: So, not to be too cynical about it, but I always wonder: what are you going to do? How soon could you shift to another provider? Let's play this out. Let's say you ask me, and I'm running Bob's budget cloud provider.

Andrew: Do I have polyfill? And I say, I don't know. What are you going to do? Are you going to cancel your contract? Maybe you're going to choose to go someplace else, and maybe that's going to take time. Yeah, it could influence your decision to renew or continue [00:38:00] business or whatnot. But

jerry: I think what you're trying to say, and I agree, is that it doesn't change the facts of that particular situation.

Andrew: Right yeah. And do you want me to spend time answering your questions or go fixing the problem?

jerry: I want you to do both, dammit. That's their view. What do I pay you for?

Andrew: I don't know. It's a tough spot. I don't have a really warm fuzzy feeling about these sorts of fire drills that get spun up around big media InfoSec events.

Andrew: I think it's the shark attack again: do you have sharks in your lagoon? Maybe.

jerry: I feel like this whole area is very immature. It’s a veneer that, in most instances, I think is worse than useless because it does create a false sense of security.

Andrew: Yeah, I agree. And how do you know I’m not lying to you when I fill out your little form?

jerry: That's the concern. If they were lying and there was a breach, you as the [00:39:00] customer would crucify them in the media, or in a lawsuit.

Andrew: Yeah, at the end of the day, it either becomes a breach of contract or, I don't know, I'm not a lawyer, and I haven't fully articulated my thoughts on this yet. But there's something I've just never really felt was very effective or useful about these sorts of questionnaires that go out around well publicized security events.

jerry: Yeah, I agree. I agree. I think there is likely something sensible as a consumer.

jerry: Yeah. It is helpful to know the situation with your suppliers and how exposed you are, because your management wants to know, hey, what's my level of exposure to this thing? And you don't want to turn your pockets inside out and say, I don't know. But at the same time, I'm not sure that the way we're doing it today is really establishing that level of reliable intelligence. The last story comes from Tenable, and the title is How the regreSSHion vulnerability could impact your cloud environment. So [00:40:00] regreSSHion is cutely spelled with the SSH capitalized. This regreSSHion vulnerability is a recently discovered slash disclosed vulnerability in OpenSSH.

jerry: I think it affects versions released between 2021 and as recently as a couple of weeks or months ago, and it can, under certain circumstances, allow for remote code execution. So, kind of bad.

Andrew: Yeah, unauthenticated remote code execution against OpenSSH that's open to the world.

Andrew: Correct, but it's not that easy to pull off.

jerry: Correct. There are a lot of caveats, and it's not necessarily the easiest thing to exploit. I think they say it takes about 10,000 authentication attempts, and even with that, you have to understand the exact version of OpenSSH and information about the platform it's running on, like [00:41:00] whether it's 32 bit or 64 bit, et cetera.

Andrew: Yeah. And I think those tests were against 32 bit. It's much tougher against 64 bit, because you've basically got to get the right address collision in memory, is my understanding. Take that with a little grain of salt, but that was my understanding.

jerry: But not impossible. And so the point of this post is, OpenSSH is exposed everywhere.

jerry: Like, it's everywhere. And they point back to cloud, and I think they point to cloud for two reasons. Reason number one is that cloud incentivizes, or makes it really easy and in some instances preferable, to expose SSH as a way of managing your cloud systems. And in those instances, it's almost always going to be OpenSSH. Unless it's RDP, then it's all good.

Andrew: It’s much preferred.

jerry: RDP is way better.

Andrew: There’s a GUI. There’s pictures.

jerry: There’s pictures. That’s right.

Andrew: A mouse works.

jerry: How [00:42:00] much better could it get? And then the other reason they are picking on cloud providers is that, as a consumer, you are provisioning based on images. With most cloud providers, you're provisioning your servers using images provided by the cloud provider, and those images may not be updated as frequently as maybe they should be. So when you provision a system, it is quite likely to come vulnerable right out of the gate, and you've got to get in there and patch it right away.

jerry: You’ve got to know that’s your responsibility and it’s not actually protected by the magic cloud security dust.
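
One way to check that, rather than assume it, is to look at what a freshly provisioned host is actually running. OpenSSH announces its version in the identification banner it sends as soon as you connect (per RFC 4253), so you can take a quick inventory without logging in. In this sketch the host list is made up, and whether a given banner falls in the affected range still has to be checked against the advisory.

```python
# Sketch: grab SSH version banners from newly provisioned hosts so you can see
# which OpenSSH builds your images actually shipped with.
# Host addresses below are hypothetical examples.
import socket

HOSTS = ["198.51.100.10", "198.51.100.11"]

def ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    # SSH servers send an identification string such as "SSH-2.0-OpenSSH_9.3"
    # immediately after the TCP connection is established.
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(256).decode(errors="replace").strip()

if __name__ == "__main__":
    for host in HOSTS:
        try:
            print(host, ssh_banner(host))
        except OSError as exc:
            print(host, f"unreachable: {exc}")
```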

Andrew: At least, not your cloud. Maybe Bob's budget secure cloud is, I don't know, that joke didn't work out. But you make an interesting point. I was talking to somebody about this, and I was trying to make the point that when we started doing this stuff pre-cloud, because we're old, [00:43:00] the concept of something being exposed to the internet was a big deal. Everything was in a data center behind a firewall, typically. And if you wanted to expose something to the internet, like an SSH port or an HTTP or HTTPS port, there were usually a lot of steps to go through, and most companies would also make sure you were hardening it and that it really needed to be exposed.

Andrew: But with cloud, and I think you referenced this, it's exposed by default. Most of the time there's not this concept of a thick firewall where only the most important, well vetted, and well secured things get exposed to the internet. There is no more quote-unquote perimeter. Everything's just open to the internet, and that's the way the paradigm is taught now with a lot of cloud providers: there isn't necessarily a concept of private stuff in the cloud versus public stuff, it's just stuff. And yeah, they talk about limiting ACLs and only opening the ports you have to, and that sort of thing.

Andrew: But I think it's super easy and super simple for people to just build something, say I've got to [00:44:00] get to it, so open up SSH, or whatever, or literally RDP, and do what they've got to do. And to your point, most of these images are not hand rolled; it's some image you grab off of some catalog and spin up, and it probably has a bunch of vulnerable stuff in it.

Andrew: But SSH we think of as safe-ish, and even security folks say, only have SSH open. To me, this speaks more and more to the fact that your attack surface still matters, and you still shouldn't be exposing things to the internet that don't need to be exposed, because you never know when something like this is going to come along, even for the quote-unquote safe protocols.

Andrew: So the less you have exposed, the less you have to worry about this. Now, I'm not saying that the only things that get attacked are the things open to the internet, we know that's not the case, but it's one more hurdle that the bad guy has to get through. And again, it buys you more time to manage things if they're not directly exposed as an attack surface to a random guy coming from [00:45:00] China.

jerry: So the recommendations coming out of this are a couple. First, obviously, update and patch the vulnerability. Second, when you are using cloud services and you're provisioning systems with a cloud provided image, make sure you are keeping them patched; even newly provisioned systems are probably missing patches, and they need to be patched post haste. Then there's limiting access: they talk about least privilege, and they talk about that on two axes.

jerry: The first axis is network access to SSH: not everything should have access to SSH. It is not a bad practice to go back to the bastion host approach, a relatively untrusted system that you use as a jumping-off point to get deeper into the network, so that you don't have every one of your systems' SSH exposed to the internet. It gives you [00:46:00] one place to patch, and it gives you a lot more ability to focus your monitoring and whatnot. Now, the other axis they point out is that, in the context of cloud providers, you can assign access privileges to systems. So if a system is compromised, the attacker inherits all the access you've given it through your cloud provider. That could be access to S3 storage buckets or other cloud resources that aren't directly on the system that was compromised, but because that system was delegated access to other resources, it provides basically seamless access for an adversary to get to them. And that's another, in my view, benefit of that relatively untrusted bastion host concept that doesn't have any of those privileges associated with it.
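
As a concrete example of the network-access axis, here is a hedged sketch of how you might flag the most obvious problem in an AWS account: security groups that leave SSH open to the whole internet. It assumes boto3 with working credentials, the region is just an example, and IPv6 ranges and result pagination are ignored for brevity.

```python
# Sketch: list AWS security groups that allow SSH (22/tcp) from 0.0.0.0/0.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

def world_open_ssh(region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    flagged = []
    for sg in ec2.describe_security_groups()["SecurityGroups"]:
        for perm in sg.get("IpPermissions", []):
            from_port = perm.get("FromPort")
            to_port = perm.get("ToPort")
            # A rule with no FromPort/ToPort (protocol "-1") covers all ports.
            covers_ssh = from_port is None or from_port <= 22 <= (to_port or from_port)
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])
            )
            if covers_ssh and open_to_world:
                flagged.append(sg["GroupId"])
    return flagged

if __name__ == "__main__":
    for group_id in world_open_ssh():
        print("SSH open to the world:", group_id)
```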

Andrew: Yeah, it’s a tough sell. I don’t think most cloud [00:47:00] architects think about it that way at all.

jerry: You are absolutely right. They don’t think about that until they’ve been breached. And then they do. Yeah. And I can authoritatively say that given where I came from.

Andrew: That's fair. And part of the goal of this show is to try to pass along lessons so you don't have to learn the hard way.

jerry: There is a better way, and no, it's not as convenient. But not everything we used to do back in the old days, when we rode around on dinosaurs, was a bad idea. There are certain things that are probably still apt even in today's cloud based world.

jerry: I think one of the challenges I've seen is, how best to describe it, the bastardized embracing of zero trust. In concept, it's a great idea. But it's like the NIST password guidance that came out a couple of years ago, where people looked at it and said, oh, NIST says I don't need to change my [00:48:00] passwords anymore. It does actually say that, but in the context of several other things that need to be in place. Zero trust likewise presupposes certain other things. I think where zero trust starts to break down is when you have vulnerabilities that allow the bypassing of those trust enforcement points.

Andrew: Yeah. Zero trust depends on being able to trust the actual authentication and authorization technology involved. I think the takeaway for me is that you can never get to zero risk, and you never know when you might have to rapidly patch something really critical.

Andrew: And are you built to respond quickly? Can you identify quickly? Can you find it quickly? And can you patch it quickly? That’s the question.

jerry: And you can make it harder or easier on yourself. Design choices you make can make that harder or easier.

Andrew: Yeah. As well as how you run your teams. One thing that I’ve often tried to instill in [00:49:00] the teams that I work with is I can’t tell you what vulnerabilities are going to show up in the next quarter, but I know something’s going to show up. So you should plan for 10 to 20 percent of your cycles to be unplanned, interrupt driven work driven by security.

Andrew: And if you don't, if you're committing all of your time to things that aren't security, then when I show up, it's a fire drill. But I know I'm going to show up, and I know I'm going to have asks, so plan for them. Even if I can't tell you what they are, a smart team will reserve that time as an insurance policy. But that's a tough sell.

Andrew: It's a tough sell. Yeah. They don't always buy into it, but that's my theory. I at least try to explain it and get them to buy in. Sometimes it works, sometimes it doesn't.

jerry: All right. I think with that, we'll call it a show.

Andrew: Given the weather gods are fighting us today.

jerry: Yeah. I see [00:50:00] that it's starting to move into my area, so it'll probably be here as well. So thank you to everybody for joining us again. Hopefully you found this interesting and helpful. If you did, tell a friend and subscribe.

Andrew: And buy something from our sponsor today, sponsored by Jerry’s llamas,

jerry: The best llamas there are. All right.

Andrew: I feel like all the podcasts need a use-our-code plug: jerrysbigllamabox.com.

Andrew: I’m just going to stop before this goes completely off the rails.

jerry: That happened about 45 minutes ago.

jerry: So just a reminder, you can follow the podcast on our website at defensivesecurity.org. You can follow Lerg at

Andrew: Lerg, L E R G, on both X slash Twitter and infosec.exchange slash Mastodon.

jerry: And you can follow me on infosec.exchange at Jerry. And [00:51:00] with that, we will talk again next week. Thank you.

Andrew: Have a great week, everybody.

Andrew: Bye bye.
