
How Will AI Affect Cybersecurity | EP 35

 

 

Transcript:

Connor Swalm:

Welcome to Gone Phishing, a show diving into the cybersecurity threats that surround our highly connected lives. Every human is different. Every person has unique vulnerabilities that expose them to potentially successful social engineering. On this show, we'll discuss human vulnerability and how it relates to unique individuals. I'm Connor Swalm, CEO of Phin Security, and welcome to Gone Phishing. 

Hey, everyone, welcome back. It is your host, Connor, CEO at Phin Security, and I am back today talking about how artificial intelligence will play into cybersecurity. Now, that is an incredibly large topic, and I could take it in a hundred million different ways. And I'd like to caveat all of this with one thing: I am not an expert in AI in any way, shape, or form.

I've just had many conversations with friends, with security experts, with practitioners and developers who work on artificial intelligence and other related systems. So I have, I guess, some opinions that are influenced by those conversations. First and foremost, I believe that at this point, artificial intelligence, and using AI in tools, is mostly a marketing buzzword for most companies. There's this whole concept of acronym bloat, right? The letters A and I end up in so many random tools that you're left wondering, how is this even supposed to work? It doesn't make sense. Now, there are some incredibly useful things you can do, like triaging support tickets. I've talked with a lot of partners about that, and using artificial intelligence to give recommendations based on context clues the ticket itself provides has been super useful; there's a small sketch of that idea below.
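To make that triage idea concrete, here is a minimal sketch of what it can look like: a model is handed the raw ticket text and asked to pick one of a few fixed categories. This is only an illustration, assuming the OpenAI Python SDK (openai>=1.0); the category list, prompt wording, helper name, and model choice are made up for this example, not something any particular ticketing or PSA tool actually ships.

```python
# Minimal ticket-triage sketch. Assumptions: category names, prompt wording,
# and model are illustrative; OPENAI_API_KEY must be set in the environment.
from openai import OpenAI

client = OpenAI()

CATEGORIES = ["password_reset", "phishing_report", "billing", "outage", "other"]

def triage_ticket(subject: str, body: str) -> str:
    """Ask the model to classify a support ticket into exactly one category."""
    prompt = (
        f"Classify this support ticket into exactly one of: {', '.join(CATEGORIES)}.\n"
        "Reply with only the category name.\n\n"
        f"Subject: {subject}\n"
        f"Body: {body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name for illustration
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep labels stable so routing is predictable
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(triage_ticket(
        "Can't log in",
        "I clicked a link in an email this morning and now my password doesn't work.",
    ))
```

The point is just that the ticket's own text supplies the context clues; the resulting label can drive routing or a suggested first response, with a human still reviewing anything it flags.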

But I still believe that AI, at this point, is largely marketing buzz for most companies that end up implementing it. So, a question I always ask myself: will AI have an impact on social engineering? I've always said there are largely two types of social engineering. The first is incredibly targeted social engineering. A malicious actor will choose your company based on something they see on LinkedIn. They'll create an organizational chart from publicly available information. They'll then trace contact information for the employees they'd like to target. They'll know who those employees work with, who their direct manager is, and so on. If you don't believe me, there is an incredible amount of information about most companies available on the Internet, either for free or for just a few dollars.

So finding companies to target and then gathering information is not difficult to do. The second type of social engineering I see is scattershot: there are several hundred thousand email addresses in a queue of some sort, and the malicious actor sends what should be a stupid, easy-to-recognize phish, or text message, or LinkedIn DM, or Facebook Messenger request. It's not tailored to the people, it's not unique to the company, it's not unique to that individual. The thinking is, well, if I send this to a few hundred thousand people, I'll end up getting somebody; it's a law-of-large-numbers mentality. So those are the two: incredibly targeted and incredibly untargeted. Let's start with the incredibly targeted emails and social engineering attempts.

I'm not convinced that AI, and let's use the name ChatGPT here because it can end up writing emails and text messages and such for you, I'm not convinced that these incredibly targeted forms of cyberattack are going to be impacted by it. And the reason I don't believe that is because humans are already way better at spending the time diving in and understanding who this company is, why they act the way they do, and how to specifically target and attack them. So one of the main reasons I don't believe artificial intelligence will have a huge impact on that kind of attack is that humans are already the best at it.

If you want to describe what ChatGPT is really valuable at right now, in a large sense, it's great at getting rid of the blank page, whether that's writing from a prompt or even drawing with something like DALL·E or generating a painting. It is great at removing writer's block, or the block a typical artist hits when it's just them versus the blank canvas or the blank page. And by the time a malicious individual has decided to target a company and puts the effort in to target them exclusively, the blank page is already gone. They don't need AI to provide that at all; they've already done it themselves. Will artificial intelligence have an impact on the scattershot ones? I think it'll have a small impact, and here's why.

You don't even need to raise your hand, but how many people have seen emails and text messages that they immediately qualify as phishing scams because there's a misspelling? I mean, I'm a human and I misspell things all the time. But very quickly, AI tools such as ChatGPT and other prompt-driven writing tools are going to be able to write things with better language, whether that's English or any other language. They're going to do it without an incredible number of misspellings and typos, and they'll probably get way better at grammar and syntax too.

So whereas before, grammar and syntax were incredibly hard, and English is an incredibly hard language to learn if it's not your native language, ChatGPT could help somebody who speaks English fairly well, but not great, write as if they were a much better writer than they would be on their own. And so I could see a slight uptick in the quality of the grammar, the quality of the English, and all of that, in those scattershot phishing emails and phishing text messages. Another thing a few folks have brought up, to me at least, is security concerns around ChatGPT itself. Now, take this with a grain of salt, but I believe ChatGPT is coming out with a way where you can actually keep all your data private.

But a big concern early on was that all of your prompts, anything you feed into it, and all of the information you generate, are not owned by you. In fact, they're almost completely owned by the company, and they're available to it. So if you're using your clients' information or your own information to do anything, it's not yours anymore, and it's available to whatever you've plugged it into. So data privacy was a huge concern that a lot of friends of mine have. There's also a really funny story that I love to talk about, and there will be a link to it in the show notes, about a lawyer, this poor guy, Steven Schwartz, who used ChatGPT to unintentionally make up case law. The New York Times article is basically: this lawyer used ChatGPT to ask whether there was case law to defend X, Y, or Z.

ChatGPT came up with a bunch of stuff, and then the lawyer asked it, okay, did you make up these cases, or are these all real cases? ChatGPT doubled down and said, no, we did not make up these cases, these are completely accurate. And now he's in a huge amount of trouble where he might get disbarred and not even be able to practice law, as a result of using ChatGPT to make up some BS that doesn't exist. So another concern I would have is the abuse of ChatGPT to generate what is seemingly real, seemingly true and authentic information.

And lawyers are incredibly intelligent people, on average, so if this person was able to get fooled by ChatGPT, what is an average individual going to do when they're using ChatGPT and asking it questions? Are they going to be able to tell whether it's making something up completely? Are they going to be able to use their own logic and critical thinking to understand if it's all made up? Who knows? So there is a real concern around badly generated, completely fake and unrealistic information and data that people should look out for. I find it incredibly hilarious, maybe because I'm not a lawyer, that ChatGPT made this up, he brought it in front of a judge, and the judge basically said, what the heck is this? What are you talking about?

And now this person is facing disbarment as a result of essentially lying in court. The fact that he used ChatGPT doesn't matter; what matters is that the guy lied in court. And that's a big problem. There's also the education angle. I forget exactly who I was talking to, a professor at a university, about the use of ChatGPT to generate essays and things like that. And I thought her take on it was great. It was essentially: hey, these tools are here, people are going to use them whether we like it or not, so we should get better at understanding them and teach people how to use them productively and well, so that they're not being malicious in any way, shape, or form, and so that they're not lying.

We should teach students how to use these tools productively, not tell them they can't use them at all. And I thought that was great as well: if ChatGPT, and the other tools that are probably going to come out over the next year, maybe even in the next several months, are going to exist, people are going to use them. Instead of shying away from that, we should look to adopt them as safely and effectively as possible. The last thing I would look out for is a personal pet peeve of mine; a buddy of mine, Wes Spencer, has talked about this at length online and with me privately. It's acronym bloat. Putting the letters A and I into an acronym where they didn't belong in the first place doesn't make your tool any better.

In fact, it just generates additional confusion. That's not what marketing should do, and it's not what anyone should aim to do, to just generate additional confusion. So acronyms keep getting bloated, and AI gets used as a marketing buzzword most often in places where you don't need to use the words AI at all. If you listened to the previous episode: don't sell your tools, sell the outcome. So if you're going to use AI in your marketing, if you're going to say, hey, we use tools that use AI, or we have tools that use AI and we provide services on your behalf with them, okay, why does anyone care? Does that provide a better outcome? How so? Are you making a client more secure by using that stuff? Can you defend that? Can the tool you're using, or the company you're working with, defend that?

These are the real questions that I would rather ask and answer. Buzzwords are great; they get people's eyeballs and whatnot. But if there's no substance or additional value behind them, it's just BS marketing buzzwords that don't make sense, aren't going to matter at the end of the day, aren't going to create additional security, and aren't going to generate additional outcomes. So always be aware of that. If you have any questions, comments, or concerns, you can find me on LinkedIn or find our company, Phin Security. We work with partners all the time, and I talk about stuff like this with them all the time, so I'm always happy to chat about it whenever you have a free moment. Once again, I'm Connor, CEO at Phin, host of the podcast you just spent your time listening to.

So thank you for joining, and I will see you next time. Thanks so much for tuning in to Gone Phishing. If you want to find out more about high-quality security awareness training campaigns and how to launch them in ways that actually engage employees to change their habits, then check out Phin Security at Phinsec.io. That's Phinsec.io. Or click all of the wonderful links in our show notes. Thanks for phishing with me today, and we'll see you next time.