Anna Delaney: Hello, and welcome to the ISMG Editors' Panel. I'm Anna Delaney, and this is a weekly program dedicated to keeping you informed about the latest developments and news in the fields of information and cybersecurity. And, of course, AI. We've a merry gang of editors joining me today: Rashmi Ramesh, assistant editor, global news desk; and Mathew Schwartz, executive editor of DataBreachToday and Europe. Good to see you both.

Mathew Schwartz: Great to be here.

Anna Delaney: Well, actually, that's a stunning sky behind you, Rashmi. Where is it?

Rashmi Ramesh: Oh, this is a neighborhood lake. I live about five minutes from here, so I go there on the weekends and take some pictures.

Anna Delaney: So sunset, then. Very good. Mat, another lovely but more curious sky behind you.

Mathew Schwartz: Down and out in Dundee. Yeah, this is me down in the gutter. Actually, I was really just right over the water, trying not to fall in. But we've had so much rain here, and you get cooped up being inside during the winter, when the sun sets at like three in the afternoon, that I'll go out for long walks with my camera. A little bit like Rashmi, only instead of a gorgeous lake (we do have beautiful water, just to be clear), I found this more urban scene, where I was able to pick up some of the nighttime reflections. Actually, more like dusk.

Anna Delaney: The reflections are brilliant; very artistic, as always. Well, this is a picture taken back in October when I was in Toronto, and this is the city's Union Station. It's Canada's largest and most opulent railway station, apparently designed in the Beaux-Arts style, and it was opened by England's Prince Edward, Prince of Wales, in 1927, in a ribbon-cutting ceremony with a gold pair of scissors. So there you go. Mat, you're starting the ceremony off this week; I hope you've got your gold scissors at the ready. You were talking about the escalating use of APIs and the challenges many organizations face in managing them. So tell us more.

Mathew Schwartz: Yes, so this is an interesting report that's just come out from Cloudflare, looking at dynamic traffic flowing across the internet. So much of it these days is handled by APIs: application programming interfaces. And however sophisticated that might sound, all it really boils down to is one software component communicating with another software component. So every time you go on your phone, if you're like me and you're checking the weather in a slightly obsessive manner, every time you ping it to update, that's an API doing a call to a server saying, hey, give me the latest weather data, and it flings it back at you.
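To make that concrete, here is a minimal sketch of such an API call in Python. The endpoint URL, parameters and key are hypothetical stand-ins, not any real weather service's API.

```python
import requests

# Hypothetical endpoint; real weather services differ in URL,
# parameters and authentication scheme.
API_URL = "https://api.example-weather.com/v1/current"

def fetch_weather(city: str, api_key: str) -> dict:
    """Make one API call and return the parsed JSON response."""
    response = requests.get(
        API_URL,
        params={"city": city},
        headers={"Authorization": f"Bearer {api_key}"},  # credentialed call
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of stale data
    return response.json()

if __name__ == "__main__":
    print(fetch_weather("Dundee", api_key="demo-key"))
```

Every refresh of a weather app is, in effect, one such request-response round trip.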
So very, very pervasive. As you can imagine, it's already pervasive, and it's growing even more pervasive. So we have this interesting report that's come out, which gives us some trends. For example, Cloudflare said that the amount of API traffic it has seen crossing the internet has continued to increase and now accounts for 57% of all dynamic HTTP traffic. Dynamic means things that are generated as a one-off. So, like I said, you're going to get your weather, or you're checking your email and doing a handshake to get that back. So many different things, your bank account included, are typically handled by APIs. The visit, the location, the device: anytime that changes, that's dynamic; you're getting a different response than somebody else will. So it's huge when it comes to IoT platforms, ride sharing, or rail, as we were just talking about with Toronto, plus bus, taxi, legal services, multimedia, games, logistics and supply chains. These are industries where a huge amount of their traffic is down to APIs.

So this leads into some interesting discussions inside organizations, discussions that CIOs and CISOs should be leading. Cloudflare can analyze the kind of data that's flowing across its networks, and while some of these APIs will announce themselves correctly, many of them don't. And so that has led the organization, along with a little bit more sleuthing, to conclude that about a third of all API traffic is not being accounted for by the organizations that own it. Basically, there are 31% more API endpoints than organizations know about. Why is this a concern? Obviously, it's a concern, or we wouldn't have a report about it. But it's a concern because if you're trying to secure APIs, you need to know that they're there. And if you're using a service like Cloudflare or another DDoS defense provider (many other options exist), it really helps to know what good API traffic looks like.
One of the ways that you defend against DDoS attacks is turning down your response rate. So if, let's say, 100 API calls per second is typical, and it spikes to 10,000, and you think that's because you're having a DDoS attack, you're going to attempt to defend against it in a certain way. If, however, there's been an increase to 5,000 or 6,000 in normal traffic, maybe because the Securities and Exchange Commission's X (formerly Twitter) account got hacked with information that suddenly spot Bitcoin trading is going to be allowed, and there's this huge rush for people to trade cryptocurrency, well, maybe you need to be able to support that kind of behavior, because that is the new normal. So you need to have better insights, basically, into what is going on with your APIs.
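The throttling described here can be as simple as a sliding-window counter. Below is a minimal sketch in Python; the thresholds are illustrative, not any vendor's implementation, and the closing comment notes why a stale baseline can misclassify a legitimate surge as an attack.

```python
import time
from collections import deque

class SlidingWindowRateLimiter:
    """Allow at most `max_calls` per `window` seconds."""

    def __init__(self, max_calls: int, window: float = 1.0):
        self.max_calls = max_calls  # e.g., 100 calls/sec as the "typical" rate
        self.window = window
        self.calls = deque()        # timestamps of recently allowed calls

    def allow(self) -> bool:
        now = time.monotonic()
        # Evict timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False  # over the limit: reject or queue the call

# The hard part is choosing max_calls: leave it at the old baseline of 100/sec
# and a legitimate new normal of 5,000/sec gets throttled like a DDoS attack.
limiter = SlidingWindowRateLimiter(max_calls=100)
```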
Sounds fine in theory. But the report here, and the finding that there's just been this surge in API traffic, is a big reminder that organizations need to have some sort of formal program for looking at their APIs; not just looking at them, but discovering them. Because, as with so many things, there is a huge shadow IT component. APIs may have been stood up by parts of the business without checking in with command and control: IT management, IT administrators, the CISO, the CIO. It could be a different business unit in a different country. It could be that whoever was keeping track of it moved on, and people have forgotten that it exists. We have seen some massive data breaches because of APIs that were created, for example, to pass billing information to companies that work with healthcare entities, and somebody figures out that this API exists, and they can send out an API call and get the information back as well. Many times, these APIs are not being properly defended, for example, by requiring credentials to access them, or by limiting the organizations that can make the API calls.
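Those two controls, requiring a credential and restricting which organizations can call, can be sketched in a few lines. The header name, keys and organization names below are illustrative assumptions, not any particular product's scheme.

```python
import hmac

# Hypothetical registry mapping issued API keys to their owning organization.
API_KEYS = {"k-3f9a51c2": "billing-partner-a"}
ALLOWED_ORGS = {"billing-partner-a"}  # orgs permitted to call this endpoint

def authorize(headers: dict) -> str | None:
    """Return the calling org if the request carries a valid key from an
    allowed organization; otherwise None, and the caller gets a 401/403."""
    presented = headers.get("X-API-Key", "")
    for key, org in API_KEYS.items():
        # compare_digest avoids leaking key material via timing differences
        if hmac.compare_digest(presented, key) and org in ALLOWED_ORGS:
            return org
    return None
```

An unauthenticated endpoint, by contrast, hands the same billing data to anyone who discovers the URL, which is exactly the breach pattern described above.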
So many security components, as with all things involving cybersecurity and IT. But so many of these things, if you take the time to do some good discovery, have a good governance program and also think ahead about what you're going to do if these get misused, can pay massive dividends in the event that things break down: hackers come calling, steal data via APIs, and you're having to do cleanup very quickly and try to figure out what happened and what to do about it.

Anna Delaney: Huge problem, this API challenge for organizations. And before Christmas, I actually spoke with Sandy Carielli, who worked on the report "The Eight Components of API Security," which I recommend; it's worth a read. She highlighted actually a positive trend. She said a year ago, inquiries mostly focused on API discovery. Organizations now, she sees, recognize the need to invest in discovery as a foundational step, and the current inquiries she's receiving show a notable increase in API security testing concerns: questions surrounding protection, detection and response; ensuring correct API construction; establishing, as you say, a robust governance program; and ensuring protection throughout the API life cycle. So perhaps this shows a maturing in organizations' understanding and management of APIs, but there's still a way to go, as you've highlighted there. And to your point about governance, do you have any recommendations for organizations about how they can go about establishing a governance program, or perhaps where to begin?

Mathew Schwartz: Great question. I think if you need some muscle or some impetus to convince the board or senior management that this needs to be taken more seriously, there are some regulations that are going to require this. In the healthcare sector, we're seeing some moves to ensure organizations are paying close attention to this. Also, the payment card industry's Data Security Standard, PCI DSS v4.0, which has been circulating for a while, is set to take effect at the end of March. And that is going to require, for the very first time, API security checks, at least in the code review and the testing process. They're looking at any attempts, for example, to abuse or bypass application features and functionality via manipulating APIs; so basically, using APIs to grab data from databases, that sort of stuff. Another best practice, which isn't going to be mandatory until about 12 months from now, is knowing what API components you have, even if they're in third-party components or software that you buy. So that's a bit more of a supply chain thing. But definitely, that shows the direction of travel: you need to be keeping an eye on these things. Unfortunately, there's a lot of legacy tech; as with all things enterprise IT, lots of legacy stuff. So this move toward discovery is great. I think it does need to get extended to your supply chain, not just the stuff that you build, so that you have a sense of what is there. Because so often, there's stuff you didn't know about, which can hurt you massively, unless steps were taken to lock it down.
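The code review and testing checks mentioned above can be automated. Here is a sketch of what such checks might look like as pytest-style functions in a CI pipeline; the base URL, paths and token are placeholders for a real staging environment, and PCI DSS v4.0 does not prescribe these particular tests.

```python
import requests

BASE = "https://staging.example.com/api/v1"  # hypothetical service under test

def test_rejects_unauthenticated_calls():
    # An API handling sensitive data should refuse calls with no credentials.
    r = requests.get(f"{BASE}/billing/12345", timeout=10)
    assert r.status_code in (401, 403)

def test_rejects_cross_tenant_access():
    # A caller authenticated as tenant A must not be able to pull tenant B's
    # records by manipulating the object ID in the API call.
    r = requests.get(
        f"{BASE}/billing/record-owned-by-tenant-b",
        headers={"Authorization": "Bearer tenant-a-test-token"},
        timeout=10,
    )
    assert r.status_code in (401, 403, 404)
```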
Anna Delaney: Great advice. That's excellent. Thank you, Mat. Rashmi, The New York Times has filed a lawsuit against OpenAI and its primary backer, Microsoft, accusing them of copyright infringement. So it's getting thorny in the AI world; talk us through this case.

Rashmi Ramesh: Perfect. So, a little bit of background first. The New York Times said that OpenAI used, without permission, millions of its copyrighted articles to train large language models that power chatbots like ChatGPT. It also said that ChatGPT's responses were often nearly identical to Times articles, but also that it sometimes inaccurately attributed its responses to information sourced from The Times. The Times said that these issues have a direct impact on it. It said that OpenAI is using its content without permission to develop products that will directly compete with The Times, so that threatens its business financially by taking away users, and OpenAI, in turn, gets a free ride by not acknowledging the investments The Times has made in its journalism. It also said that if journalists stop making original and independent content, that leaves a vacuum AI cannot fill. So why is Microsoft party to this? Because it's OpenAI's biggest backer, it is intimately involved in its operations, it uses OpenAI's technology in its own products, and the LLM it uses also provides, and I'm quoting from the complaint here, "infringing content" via Bing Chat, or Copilot as we know it now. Now, the context to this complaint is also important. The Times said that it filed the case because it tried to negotiate for months and failed to get a deal with OpenAI in which the latter would pay the media house to license its content. And this stands out a bit, because OpenAI has struck deals with other media companies. Axel Springer, the publisher of Business Insider, for one, now allows OpenAI to use its data for about three years, in exchange for what they say is tens of millions of euros; we don't really know the exact amount yet. And the AP also signed a two-year deal that allows OpenAI to use some of its news content going back to 1985, I think, to train an algorithm. So basically, the idea is that the more accurate the input used to train the models, the more accurate the results will be. And they desperately need it. In my experience so far, at least, the results that I've got from ChatGPT whenever I have used it, to maybe get a summary of something or see what the background is, are riddled with inaccuracies and misattributions. Now, The Times has not sought specific damages, but it said that it wants to hold OpenAI responsible for billions of dollars in statutory and actual damages. So this is the background. The latest update, from Monday, is that OpenAI responded to the allegations. It said that The New York Times is not telling the full story. It said that it provides an opt-out process for publishers to prevent its crawlers from accessing their sites, and that NYT adopted it in August 2023. It also called regurgitation a rare bug that it's trying to fix. So we'll definitely see more of this in the coming days.

Anna Delaney: Great overview there. So what about other media platforms? I think you cited in your article that other outlets have been experimenting with AI, particularly in the context of chatbot capabilities. Tell us more.

Rashmi Ramesh: So take the AP, for example. I mean, they've been using it in various ways, with mixed results really. The AP issued guidelines on what AI can be used for and what it cannot be used for; for example, it cannot be used to create publishable content and images for the news service.
But I have seen several news platforms that are using it to generate images for their news stories. And The Guardian and Insider also published statements similar to the AP's, saying that they will not use it to create original content, but only to make journalists' content better in terms of things like structure and readability, and also to help hone their marketing strategies. The Times actually recently hired an editorial director to lead its AI initiative. So, a little bit of a self-plug moment here, but we also started an AI-focused website a few months ago, and my primary job is to write content for it. But some media firms have already gotten into a bit of hot water over how they've used AI. Sports Illustrated publisher Arena Group fired several people who were overseeing the use of AI to generate content, because stories were allegedly attributed to fake bylines. And CNET also began publishing AI-written stories and found out much later that there were errors in more than half of them. So that has been my experience as well: if you use ChatGPT or any other AI chatbot, check, check, check and check everything. I have seen it make up facts; I've seen it misattribute content, hallucinate, you name it. But that doesn't mean that it's of no help at all. It takes care of a lot of repetitive tasks, and it's great for brainstorming content ideas, finding sources, finding experts and getting a lot of background information. So definitely use it. Experiment as much as possible. But always, always, always take the results with a truckload of salt, at least at the moment.

Anna Delaney: Wise words; certainly a useful tool. So there is the question of what happens here: What's the verdict going to be? But there's a bigger question as well, perhaps on all of our minds: What does this mean for the future of journalism? Maybe it's too soon to say, but maybe you've got your own thoughts, Rashmi?

Rashmi Ramesh: So the one clear outcome that I see from all of this drama is that we'll have some idea on how AI can be used in journalism, as companies continue to experiment with it further, use OpenAI's and other companies' LLMs, and also develop their own GPT models.
And this specific case will also maybe help set clearer guidelines for companies that use journalism content to train their LLMs, and the NYT case will most likely set a precedent for future violations as well. And I know how everyone talks about how journalism is dead because chatbots can now write stories. Well, anyone, including a chatbot, can regurgitate press releases given proper instructions. But actual journalism requires legwork. It requires humans speaking with humans, and connecting all of those dots to weave a story that evokes curiosity and then sates that curiosity. So, in my opinion, AI will only help with that, not hinder it.

Anna Delaney: Let's hope so. Mat, are you of the same opinion?

Mathew Schwartz: Yeah, definitely. I mean, we've seen so many interesting use cases with AI. Just this week, I was reading about it being used to discover chemical substances that can be used as replacements for other sorts of materials. They had trained a very particular AI to be able to solve chemical problems, for example. So there's so much potential here. And I think we can get thrown off sometimes when we try to use these tools ourselves and they don't always work the way that we think they should. But we're seeing so many new ways of training them to do very specific or very complicated types of tasks that, not to be cliche, I think the sky is still the limit with a lot of the applications that we're going to be seeing.

Anna Delaney: Right. Well, let's move on to our final question, just for fun. If you could interview any historical figure about their thoughts on cybersecurity or AI, who would it be, and what question would you ask them? Go for it, Rashmi.

Rashmi Ramesh: So I would pick Salvador Dali, because his art was all about bending reality and perception, and I think he would have a very, very unique perspective and offer a new angle on a world that is equally fluid and deceptive. So I would probably just ask him: In your dreamscape, where, you know, logic literally melts and clocks drip, how would you represent the threats and defenses of the cybersecurity world?

Anna Delaney: Fantastic. I'd love to know what Dali thinks. Brilliant. Mat?
Mathew Schwartz: The only caution there is that, as a surrealist, his answer to "What's the secret to cybersecurity?" might be, like, watermelon or something, or hair; I don't know. You might not like what you hear. I think we could use a little more levity, so I would love to interview somebody like Mark Twain. There's a quote attributed to him: "If you don't read the newspaper, you're uninformed. If you do read the newspaper, you're misinformed." And there are so many wonderful quips and observations from him about not taking things too seriously, having good perspective on things, and always trying to be a good person, even though others around you might seem like scoundrels or fools, that sort of thing. And I just think, with the degradation in the discourse that we've been having, with the implosion of things like Twitter, now known as X, we need maybe a little more levity, a little more lightness, and collectively, or maybe just personally, to take things a little less seriously.

Anna Delaney: Excellent, wise words. I think it's interesting how we've all gone for creatives: writers, artists. So I've chosen the Romantic poet Lord Byron. I'd love to know his thoughts on, say, the challenges of preserving privacy, ethics and individual freedoms in this age of AI and digital interconnectedness. So I think it's interesting how we haven't gone for traditional technologists. What does that say about us? Well, thank you both so much. This has been absolutely brilliant; an excellent discussion.

Mathew Schwartz: Thank you so much, Anna, for having us.

Rashmi Ramesh: Thank you, Anna.

Anna Delaney: It's my pleasure. And thank you so much for watching. Until next time.