Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney. This week, we're looking at generative AI and how it's reshaping security across multi-cloud environments, plus a reality check on where we stand with zero trust and IoT security. And guiding us through these critical issues is none other than our esteemed friend, Troy Leach, chief strategy officer at Cloud Security Alliance. Excellent to have you back, Troy.

Troy Leach: Thank you, Anna. Great to be back.

Anna Delaney: And also joining me are ISMG superstars - Tom Field, senior vice president of editorial, and Mathew Schwartz, executive editor of DataBreachToday in Europe. Great to see you all.

Tom Field: I thought about sitting this one out and letting Troy have me visit via AI again, but, you know, I guess I'll be along in person for this one.

Troy Leach: It's more authentic this way. I like it.

Anna Delaney: That was a classic August 2023 episode, so for viewers who haven't watched it, please do check it out. Anyway, as you may recall, Troy, we like to ask our speakers where they are virtually. So where are you today?

Troy Leach: Well, I am in San Juan, Puerto Rico. This is where I was supposed to be for the retail service provider association's annual meeting. Unfortunately, I'm not going, but I'm still longing for a nice dive off the coast of Puerto Rico.

Anna Delaney: Me too. And Mat - not in Puerto Rico, I assume, but still by water.

Mathew Schwartz: Yes, still by water. This is Dundee, Scotland. I was out for a run earlier today, actually, and this is the remnants of Storm Jocelyn, following fast on the heels of Storm Isha. It's been a nonstop storm season here. We've been told to shelter indoors repeatedly because of risk to life, and we're seeing it play out now. This is a tidal estuary - a tidal area, anyway - where the North Sea meets the Tay River, and you can see the train going across from Dundee to Fife. It's getting very nice now, but it's been a really stormy season lately.

Anna Delaney: And Tom!
Tom Field: This was my risk to life last week. I was in Chicago, where the temperature was a robust -9°F, and I wore only this jacket for the trip. So I stepped outside just long enough to get this photo of the iconic building next to my hotel, and I knew it would be good for today's discussion.

Anna Delaney: Well, I'm transporting you to another iconic building in Amsterdam, where I was last year - the very elegant Tuschinski movie theater, which is a blend of Art Deco and Art Nouveau styles. It's a spectacular building - well worth a visit if you're in Amsterdam. So Troy, we've got some questions for you, and I'm going to hand over to Tom to start us off.

Tom Field: Beautiful! Troy, as we start this year, 2024, what are some of the real-life use cases you're seeing of organizations actually applying gen AI to their cloud security practices? I've had this discussion with some CISOs and security leaders, and I hear a lot about "can do," "going to do," "would like to do." I don't hear a lot about what they're doing, so I turn to you.

Troy Leach: Yeah. What I'm hearing - and obviously I'm hearing from some of the frontier models that are out there, and from organizations working with or piloting them - is that some of this AI has actually been around for five-plus years, and I'll give a couple of examples of that. But a lot is happening. And the reality is that cloud security is now cybersecurity. Everyone has migrated to the cloud; for a lot of organizations, more than half of their critical business assets are now in the cloud. Among the things I'm seeing in AI that are already out there and being implemented are user behavior analysis tools. Some GPTs are able to detect anomalies in employees' behavior. In one example I was told about, if someone asks for access to a particular file and it would be out of character for them to do so, or it's outside the time zone they would normally be in, the AI will go out and send a Slack message to them and have a conversation, just as ChatGPT would, to try to understand the legitimacy of the request - but then also provide immediate training if it was something they should not have been asking for.
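[Editor's note: a minimal sketch of the anomaly-plus-follow-up pattern Troy describes. The ask_llm() and send_chat_message() helpers, the baseline data and the example user are all hypothetical stand-ins for whatever model and chat integration an organization actually runs; nothing here comes from a specific product.]

    from datetime import datetime, timezone

    # Hypothetical stand-ins for a real model API and a chat integration (e.g., Slack).
    def ask_llm(prompt: str) -> str:
        return "Hi - this request looked unusual. Can you tell me why you need the file?"

    def send_chat_message(user: str, text: str) -> None:
        print(f"[chat -> {user}] {text}")

    # Toy per-user baseline: normal working hours (UTC) and the files the user usually touches.
    BASELINE = {
        "jsmith": {"hours": range(13, 22), "usual_files": {"q3_forecast.xlsx", "team_roster.docx"}},
    }

    def is_anomalous(user: str, file_path: str, when: datetime) -> bool:
        profile = BASELINE.get(user)
        if profile is None:
            return True  # unknown user: treat as out of character
        return when.hour not in profile["hours"] or file_path not in profile["usual_files"]

    def handle_access_request(user: str, file_path: str) -> None:
        now = datetime.now(timezone.utc)
        if not is_anomalous(user, file_path, now):
            return  # nothing unusual; leave it to normal access control
        # Out-of-character request: have the model open a conversation with the user,
        # and fold in a just-in-time policy reminder if the request was inappropriate.
        prompt = (
            f"User {user} requested access to '{file_path}' at {now.isoformat()}, "
            "which is outside their normal pattern. Draft a short, friendly message asking "
            "why they need it, and include a one-line reminder of the data handling policy."
        )
        send_chat_message(user, ask_llm(prompt))

    handle_access_request("jsmith", "payroll_export.csv")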
Troy Leach: So that's one thing, and I'm already seeing that being used for training in organizations. In the financial services industry - one example that's been given publicly is Discover Financial - we're starting to see call centers get more personalized training. As agents listen in on calls, the AI is no longer just that script of walking through a series of "what could the problem be"; the AI is actually listening to the problem and getting to the issue much more quickly. And that's what we're also seeing in some of the code. We see Anthropic, we see Google DeepMind, we see others that have the ability to do some of this secure code analysis. As code is downloaded and received by a company, they can sometimes reverse engineer it and find vulnerabilities that might not even be known yet - they may not have an existing CVE, a common vulnerability number - and will sometimes actually go and correct the code, if they have the authorization to do so. And with all of that, we're starting to see changes in one of the security metrics - MTTR, the mean time to remediation. I think we're already starting to see significant changes in how we are going to use that metric, because AI is making it so much more convenient. I'll plug one other thing, just because of my background with PCI and doing a lot of regulatory frameworks, including at CSA with our Cloud Controls Matrix. One of the frontier models mentioned what they're seeing with their customers who are asked for compliance and to demonstrate security within their cloud environments: they are having AI complete all these forms and then doing a manual check for validation, and coming up with about 90% or more accuracy. So there's real excitement around minimizing all the documents that people have to fill out in the security industry - helping good security people focus on hard security problems rather than just filling out form after form after form.
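[Editor's note: a minimal sketch of how a mean-time-to-remediation figure is calculated from finding timestamps; the data below is invented for illustration.]

    from datetime import datetime, timedelta

    # Each finding records when it was detected and when it was remediated.
    findings = [
        (datetime(2024, 1, 3, 9, 0), datetime(2024, 1, 3, 17, 30)),
        (datetime(2024, 1, 5, 11, 0), datetime(2024, 1, 9, 8, 0)),
        (datetime(2024, 1, 8, 14, 0), datetime(2024, 1, 8, 16, 45)),
    ]

    def mean_time_to_remediation(findings) -> timedelta:
        total = sum((fixed - found for found, fixed in findings), timedelta())
        return total / len(findings)

    print(mean_time_to_remediation(findings))  # 1 day, 10:45:00 for the sample data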
Tom Field: So given all that, Troy, where do you see the greatest opportunities to apply gen AI to bolster these security efforts across multi-cloud environments today?

Troy Leach: Yeah, it's a great question, and I think it's going to evolve over time. In fact, it's why at CSA we have four research working groups right now just on AI - both gen AI and discriminative AI. For multi-cloud environments, I think they offer assurances that the intentions in one environment are going to be able to be replicated, without error, into another cloud architecture. The biggest problem we have today - and I hear this all the time, especially in the financial services industry - is that they have the upcoming requirements of DORA, and other regulatory expectations, that say: for your cloud service providers, we want better resiliency. The U.S. Treasury came out with a report last year saying the same thing - they want to see more multi-cloud and that type of resiliency in critical infrastructure like financial services. And with that, organizations are having to double their staff, because the architectures are not the same. Azure is not like AWS, which is not like IBM, so there's a need for additional staff trained up on all the intricacies of each type of architecture. What we're seeing with AI is that it's going to support good practices: once it understands the intent and is fine-tuned and trained on it, it's going to be able to assure that the intention in one environment stays intact in another cloud environment. I think that's really exciting.
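[Editor's note: a rough sketch of the "same intent, different cloud" problem Troy describes - one provider-neutral control intent checked separately per provider. The control name and checker functions are hypothetical stubs, not CSA Cloud Controls Matrix content or real cloud SDK calls.]

    from dataclasses import dataclass

    @dataclass
    class Intent:
        """A provider-neutral statement of what a control should achieve."""
        control_id: str
        description: str

    # One intent, expressed once ...
    NO_PUBLIC_STORAGE = Intent("STOR-01", "Object storage must not be publicly readable")

    # ... but verified differently in every cloud. Real implementations would call each
    # provider's own APIs; these stubs just stand in for that provider-specific logic.
    def check_aws(intent: Intent) -> bool:
        return True   # e.g., inspect S3 public access settings

    def check_azure(intent: Intent) -> bool:
        return True   # e.g., inspect storage account public access settings

    def check_gcp(intent: Intent) -> bool:
        return False  # e.g., inspect bucket IAM bindings

    CHECKS = {"AWS": check_aws, "Azure": check_azure, "GCP": check_gcp}

    for provider, check in CHECKS.items():
        status = "OK" if check(NO_PUBLIC_STORAGE) else "DRIFT"
        print(f"{NO_PUBLIC_STORAGE.control_id} on {provider}: {status}")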
Troy Leach: And in general, I think the security we're going to see with AI is going to be very similar to how we handled general cloud architecture. What we saw with IaaS - infrastructure as a service - is probably going to map to your public and private large language models and how you manage those different types of shared responsibilities, and the APIs that work with GPTs and gen AI software as a service will, I think, be very similar to the controls we eventually created for PaaS and SaaS. This is going to be security that matures over a period of time, and we're going to see different but similar security strategies that include a shared security responsibility model. That's the biggest question I hear, whether it's in Congress or elsewhere: who has that liability and responsibility? Is it the creators of the model - the large language model? Is it those that create the APIs to engage and use the model? Is it the enterprise, and how they insert data and create their datasets? So there are going to be a lot of good questions, but I think we have a good path - a good roadmap, I should say - in how we conducted security with cloud over the last 10-15 years.

Tom Field: Excellent to hear you so bullish on the topic, Troy. Thank you. I'm turning you over now to Anna.

Anna Delaney: Thank you so much. Well, I'd like to turn to authorization and authentication. With the ongoing advancements in AI, how do you foresee its influence on well-established practices of authorization and authentication in organizational security? What are the potential changes that organizations should anticipate and prepare for, and how do they get there?

Troy Leach: You know, the biggest influence I'm seeing on authorization and authentication, in general, is how easy it is now to spoof biometric data. We talked about that in that August session - being able to capture someone's video likeness, capture their voice, and look very authentic in the asking. We've seen at least some red team exercises where CFOs have been mimicked on a Zoom call - their video, their voice, their inflections - and used to ask for wire transfers. I think that's something we're going to see quite a bit more of. So we're going to see a large spike in attacks and in the ability to manufacture these types of false images. And the reason is that we've put a lot of faith in our ability to have these as gatekeepers - you have a unique voice, you have a unique fingerprint - and AI is going to be challenging that a little bit. Also, the amount of successful phishing attacks is going to skyrocket, because malicious GPTs - WolfGPT, WormGPT, FraudGPT; I can go on and list at least 16 or more that I'm aware of - are lowering the bar to entry and making it easy to create a phishing attack. And those things that we used to use as parameters in our authentication - or even just the human checks of "this has poor grammar, it has misspellings, it's using blatantly bad domains" - are going to be very difficult for us to rely on, so organizations are going to have to combat a significant volume of new phishing attacks.

Anna Delaney: From your perspective, are there any specific AI-driven technologies or techniques that are proving particularly effective in enhancing authorization and authentication practices?

Troy Leach: Yeah, the best defense I'm hearing about is actually AI defending against AI. This is where you cue the Terminator music and Schwarzenegger is supposed to drop in. But we're going to need to use AI more quickly to evaluate source code at end-user locations - I mentioned the reverse engineering - and as a faster way to react to some of the easier traps. I think we're going to have to rely on techniques that go beyond antiquated methods of signature-based, known CVEs; the development pace of malicious software is simply moving too fast. Another technique - especially when we look at GPT risks such as evasion, extraction and poisoning, which are consistently the biggest concerns in the frameworks being built, whether at CSA or MITRE or DARPA - is developing security policies within APIs that evaluate the output and, if there's anything questionable, rerun it back through the prompt before delivering it to the end user. For example, let's say you had an output that generated PII and you're concerned about GDPR, or executable code shows up that shouldn't be released, or it was supposed to produce executable code and it doesn't. These are things some of these APIs can detect, recognize and put back through the AI before the output is received by an end user. So it'll be a little bit like Minority Report, where the AI acts like a precog and you find the vulnerability before there's a problem to exploit.
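[Editor's note: a minimal sketch of the output-checking wrapper Troy describes, with a hypothetical ask_llm() standing in for a real model API; the regexes are crude illustrations, not a complete PII or code detector.]

    import re
    from typing import Optional

    def ask_llm(prompt: str) -> str:
        # Hypothetical stand-in for whatever model API is actually in use.
        return "Contact the customer at jane.doe@example.com for details."

    # Very rough screens: an email-shaped string as a PII proxy, code markers as a code proxy.
    PII_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    CODE_PATTERN = re.compile(r"```|\bdef |\beval\(")

    def questionable(text: str) -> Optional[str]:
        if PII_PATTERN.search(text):
            return "the answer appears to contain personal data"
        if CODE_PATTERN.search(text):
            return "the answer appears to contain executable code"
        return None

    def guarded_completion(prompt: str, max_retries: int = 2) -> str:
        text = ask_llm(prompt)
        for _ in range(max_retries):
            problem = questionable(text)
            if problem is None:
                return text
            # Fold the policy concern back into the prompt and re-run the request
            # before anything reaches the end user.
            text = ask_llm(f"{prompt}\n\nRewrite your previous answer: {problem}; remove it.")
        return text if questionable(text) is None else "[withheld: output failed policy checks]"

    print(guarded_completion("Summarize this support ticket for the weekly report."))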
Anna Delaney: Love a bit of Tom Cruise. That's great. Thank you, Troy - I'm passing the baton to Mat. That's been hugely informative. Thank you.

Mathew Schwartz: Yeah, so is it going to be Mission: Impossible, do you think, to make zero trust better? And I don't know if AI factors into this discussion, because I forget how many buzzwords it was before AI, but zero trust has definitely been, and I think continues to be, a real target for organizations. I mean, they're attempting to get to a place where they can apply zero trust principles. Is AI going to help with that? Are there other things that are going to help with that? Where are we at?

Troy Leach: Yeah! Well, I think zero trust has the inverse problem of AI. AI is very complex to understand - the intricacies of how large language models actually operate, how you train them - but it's easy to implement, whereas zero trust is a very easy concept, but one that is uniquely challenging to consistently implement. In May it will be almost three years since the 2021 executive order on improving the nation's cybersecurity, where zero trust really had a light shone on it and was emphasized. So at least here in the U.S. - and I think abroad as well; we're doing a zero trust meeting in Switzerland in April for CSA - there is awareness and buy-in at the executive level. That's a pretty dramatic step in the last three years, considering John Kindervag and colleagues coined the term more than a decade before that. But like the term AI - and Mat, you touched on this - zero trust is widely overused and abused in marketing campaigns, to the point that you have CISOs completely shutting down if the words are even uttered in front of them. So I think the key is education and reminding people what the purpose is: with zero trust, you start small and identify the most critical or highest-risk business asset.
And what I'm encouraging here, from a business case perspective, is that zero trust is able to demonstrate operational efficiency. Something I've preached for many years is that it's not just a security metric but a business and financial metric: if you have truly focused security, it can streamline business processes and improve the overall health of the organization. And we're seeing that quite a bit. At CSA, we launched the Zero Trust Advancement Center last year, with a surprising number of people engaging with it and doing training. Where we're seeing the most interest - and why people need this education - is that they understand the concepts, but it's hard to grasp how you take the right cloud access controls, monitor for continuous authorization, establish the right security policies for each type of cloud service provider, establish good public cloud architecture, and then segment all of that, which is a big part of zero trust. All these things are easy on their own, but applying a zero trust philosophy across them is a lot more difficult. I'm encouraged to see that people are at least trying to educate themselves on how to go about applying zero trust. So where we are, I think, is better than we were three years ago, but with a long road ahead of us.
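[Editor's note: a toy per-request check in the spirit of the continuous authorization Troy mentions - every request re-proves identity and device health rather than relying on network location, with stricter handling for a small set of high-risk assets. All names and fields are invented for illustration.]

    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str
        mfa_passed: bool
        device_compliant: bool
        step_up_passed: bool  # e.g., a fresh re-authentication for sensitive actions
        resource: str

    # The "start small" part: begin with the most critical or highest-risk assets.
    SENSITIVE_RESOURCES = {"payroll-db", "source-code-repo"}

    def authorize(req: Request) -> bool:
        # No implicit trust from network location: every request re-proves identity and device health.
        if not (req.mfa_passed and req.device_compliant):
            return False
        # The highest-risk assets demand an extra, recent signal.
        if req.resource in SENSITIVE_RESOURCES:
            return req.step_up_passed
        return True

    print(authorize(Request("jsmith", True, True, False, "payroll-db")))  # False: step-up required
    print(authorize(Request("jsmith", True, True, False, "wiki")))        # True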
Mathew Schwartz: Well, and of course it's never static - the requirement to stay educated and ahead of things keeps changing. Which brings me to another area I wanted to ask you about, because I know that you've been keeping a close eye, conceptually, on the Internet of Things. And I feel like we keep having to talk about this, because we continue to see such an explosion in internet-connected devices of all different stripes. Just to pick one - automotive. We're seeing increasingly connected cars, which, for anyone who has been in the business as long as we have, might stoke some fear. So I just wanted to ask how CSA's efforts are touching on IoT. Do you see this deceptively complicated area getting the focus that it needs, including from a manufacturing standpoint? And you were just touching again on education.

Troy Leach: Yeah. I don't know how many years it's been since Charlie Miller, you know, took a couple of vehicles and showed how to remotely control them, taking over the CAN bus of the vehicle. I think it's something that we do need to take seriously, and there is growth. You mentioned there's major adoption happening in IoT all around the world, but today I'd say the vast majority of the 16 billion-plus IoT devices are in North America and Europe. With the accessibility of cloud services and the growth of smart devices, though, we're starting to see more adoption in India, Japan and other parts of APAC especially. And I'm also hearing a lot of revenue growth optimism - Tom said earlier that I was bullish on AI; there are a lot of people who are bullish on IoT, saying they're expecting 30% year-over-year revenue growth for the next 10 years. We'll probably see about that. I also predicted that I would keep my New Year's resolution, and that's long past. So I think it's to be determined. But I know at Cloud Security Alliance, we developed an IoT framework several years ago - currently on version 3 - and it's picking up interest in smart transportation, the automotive industry in particular. We're working with the ENX Association - an association of European vehicle manufacturers - and will likely be working with U.S.-based manufacturers as well, to have their recognized certification, TISAX, recognized by our STAR registry for IoT, and to look at their framework's applicability to IoT but also to cloud. All of this is going to be determined in the next little bit - the project kicks off this month and should run about three months. We are a nonprofit organization with volunteers, so anyone and everyone is welcome to participate in developing that next framework for how cars are going to be kept safe for the next several generations.

Mathew Schwartz: Looking forward to that safety in our automotive industry. Excellent - thank you for your efforts, and we'll have to check back on those. And I'd like to hand over now to Anna, if I may.
Anna Delaney: Excellent! Brilliant stuff. So finally, and just for fun: if you could choose an AI system to direct a remake of a classic movie, which movie would it be, and how might the AI bring a fresh perspective to the story? Troy, do you want to go first?

Troy Leach: There are a lot of good choices out there. I should have run this through ChatGPT and come up with a more creative answer, but I would go with 2001: A Space Odyssey. There was a lot of sci-fi in the 1950s and 1960s with artificial intelligence as the main antagonist - as the bad guy - and that was the view, 35 to 40 years out, of what AI would look like by 2001. Now here we are in 2024, and it would be interesting to get AI's take on itself: how it rewrites the script of how good AI could be, and whether it could put a nice little spin on what AI will look like in the future.

Anna Delaney: Nice. We like that one. Tom?

Tom Field: Slightly less cerebral. I'm going back to one of my favorites - Young Frankenstein. And imagine this: imagine if our friend the monster here, instead of being given the brain of "Abby Normal," was given artificial intelligence. What a different film that might be.

Anna Delaney: That is creative and I love it. Mat?

Mathew Schwartz: I'm looking forward to the AI hallucinations coming out in that one. So this isn't a particular film, even though it's been filmed multiple times, but Beowulf. Obviously, Beowulf is the hero, but there's a very famous novel from 1971 by the American author John Gardner which flipped it and looked at Grendel as the hero, if you will. And I think if we brought that methodology to bear on some AI films where the AI is the easy, obvious villain ... I was thinking of The Matrix: who is the Matrix? What does it want? What are its hopes and dreams? Terminator: is it just about shiny chrome killing machines, or, you know, is it secretly into puppies? So if we could just flip some of those things, we could have a really interesting re-evaluation of these movie villain tropes.
Anna Delaney: Very good. I've gone from villain to The Wizard of Oz - so there is a villain in that, sort of. Have you heard of the film Bandersnatch?

Tom Field: No.

Mathew Schwartz: Yeah. Netflix - choose your own adventure.

Anna Delaney: Exactly! I think it's Charlie Brooker's - it allows you to shape the story, so it's blurring the lines between a game and a story. And I was thinking it could be interesting to do something similar with the classic The Wizard of Oz, using deepfake technology, so you can make decisions at certain points - like choosing the challenges on the yellow brick road, or what happens to the ruby slippers; maybe they have a different fate. So, yeah, a fun take on a classic.

Troy Leach: I like that one a lot. It reminds me of going back to school days and those books where you would jump around, you know.

Tom Field: Choose your own adventure.

Troy Leach: "If you choose this, go to ..." - choose your own adventure. I love it.

Tom Field: Yeah.

Anna Delaney: Well, maybe we will be choosing our own adventures in the future with AI. But thank you so much for joining us on this adventure, Troy. You've been absolutely brilliant.

Troy Leach: I appreciate the invite. Thank you!

Anna Delaney: And thank you so much for watching. Until next time.