How Should MSPs Be Using AI | EP 45

Transcript:

Connor Swalm:

Welcome to Gone Phishing, a show diving into the cybersecurity threats that surround our highly connected lives. Every human is different. Every person has unique vulnerabilities that expose them to potentially successful social engineering. On this show, we'll discuss human vulnerability and how it relates to unique individuals. I'm Connor Swalm, CEO of Phin Security, and welcome to Gone Phishing.

Hey, everyone. Welcome back to another episode of Gone Phishing. I'm your host, Connor, CEO at Phin, and I am joined by Adam Evans, security director at Simplex-IT.

So, Adam, I've got a quick question for you. AI is the buzzword of the decade.

Adam Evans:

Maybe the buzzword of my lifetime.

Connor Swalm:

I'm not quite that old just yet. Actually, I was telling you before we hopped on the call, I was at Black Hat, and I feel like you could sum up the entire vendor hall at Black Hat with "we use artificial intelligence in our threat-hunting process to fuel extended detection and response." XDR, AI, threat hunting: those were the three phrases I saw at basically every single booth. So how should MSPs react? Is AI just a buzzword? Is it valuable? How should they go about thinking about this?

Adam Evans:

So, to be short and to the point on that one: yeah, it's a pretty heavy marketing buzzword right now, but there is some value to it. What I like about this, and I did some research on this earlier in the year, is that the idea of artificial intelligence as we know it has been around for almost 100 years at this point, if not a little bit longer. Just after the Civil War, people started asking the question of, how can we artificially create a human brain, and what would that look like? Around the 1940s, Alan Turing came around, put some real meat on that argument, and came up with his theories and equations and whatnot. And then we were just playing the waiting game, waiting for technology to catch up to actually make that a reality. We started getting some good progress in the 80s. Everyone was off listening to their cyberpunk stuff and all the good 80s jams of the day, but the technology wasn't there yet. Eventually, though, the technology finally caught up to the idea.

So we've been using AI in our daily lives for a long time. By pure definition, Google's search algorithm meets the technical qualifications of an AI: it's using large amounts of data to learn and make inferences to influence a decision. But when we hear all the marketing buzzwords and stuff, no one's thinking of the Google algorithm or the Netflix recommendations as being AI. It's ChatGPT and DALL-E and all the generative AI out there. But AI is more than that. We're certainly not at the point where we've got T-1000s or whatever. We don't have Terminators or Cortanas out there trying to control our lives, right? Not yet. But it's becoming far more commonplace. So looking at these vendors and how they're handling stuff, they're using the more technical stuff around machine learning and deep learning models to help surface information to the front.

There are a lot of good vendors out there, and antivirus has been doing this for a while anyway: looking at all those technical signals and trying to find anomalies. If you see that Kathy in accounting's computer is consistently accessing QuickBooks Online from eight to five, but then it suddenly fires off a connection at 2:00 a.m., that kind of data paints the picture that the 2:00 a.m. connection is an anomaly. Maybe we should investigate it and try to figure out if there are any other relationships there, to help make it more actionable. But there's still a long way to go. Right now, everyone just saw that ChatGPT is this fancy, powerful thing, and DALL-E is making those cool pictures of weird people with swirly faces and six hands, but it's captured the imagination. So we have companies that are starting to ask about it. We have vendors starting to say they use it in their products. And if you go out into the MSP communities, MSPs are asking the question: how can I use this to make my life easier? So it gets interesting at that point. And there's the challenge of being able to cut through the marketing buzz, but also figure out the real use cases, the real risks around it, and where the value is.
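To put a rough shape on that 2:00 a.m. example, here's a minimal sketch of the kind of time-of-day baseline check a monitoring tool might run. The connection history, the z-score approach, and the threshold are all illustrative assumptions, not how any particular vendor's product actually works:

```python
import statistics

# Hypothetical connection hours (24-hour clock) for one user, e.g. "Kathy in
# accounting" hitting QuickBooks Online during the workday. In a real product
# this baseline would come from weeks of log telemetry, not a list literal.
history = [8, 9, 9, 10, 11, 13, 14, 15, 16, 17, 8, 9, 16, 17, 15]

def is_anomalous(hour: float, baseline: list, z_threshold: float = 2.5) -> bool:
    """Flag a connection whose hour sits far outside the user's usual pattern."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # guard against a flat baseline
    return abs(hour - mean) / stdev > z_threshold

print(is_anomalous(14, history))  # False: mid-afternoon fits the 8-to-5 pattern
print(is_anomalous(2, history))   # True: the 2:00 a.m. connection stands out
```

Real detection stacks fold in many more signals (geolocation, device, process lineage) and far more data, but the underlying idea is the same: learn a baseline, then surface the outliers for a human to investigate.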

Connor Swalm:

I've always tried to frame up the argument for where AI could be implemented for value the quickest. And it's always this: if you take a look at a task, the less creative the solution needs to be in order for that task to be completed, the riper it is for disruption by AI. So, at least from what I've been able to see from talking with other vendors and other partners, there seems to be a lot of push to get AI integrated into help desk, into customer support, into triaging. Triaging is more along the help desk side, but its goal is to bubble to the surface what's probably most important for you to look at, which is a very uncreative task. But what are some unknown risks? What are some risks people might not be aware of if they use a solution that has ChatGPT or some other large language model implemented into it?

Adam Evans:

The first thing I look at is the language model that my data is going into and that the AI is learning from: is that a private language model or a public one? If we think about the stuff that comes in from end-user service tickets, end users will do end-user things. Are they putting sensitive information in that service ticket, and is that being fed into those language models? We saw an instance, I believe it was Samsung, that had a security incident because their people were cutting and pasting proprietary data into ChatGPT to write their emails or whatever they were doing with it, and they exposed sensitive information to the platform. That's definitely risk number one that I see out there: are we oversharing with those models?
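To make that oversharing risk concrete, here's a minimal, hypothetical sketch of a pre-submission check that flags obviously sensitive patterns in ticket text before it ever reaches a public model. The regexes and the `flag_sensitive` helper are illustrative assumptions; a real deployment would lean on a proper DLP tool rather than three patterns:

```python
import re

# Illustrative patterns for obviously sensitive content in a service ticket.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(ticket_text: str) -> list:
    """Return the names of any sensitive patterns found in the ticket text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(ticket_text)]

ticket = "User jane.doe@example.com is locked out; SSN on file is 123-45-6789."
hits = flag_sensitive(ticket)
if hits:
    print("Do not send to a public LLM, ticket contains:", ", ".join(hits))
else:
    print("No obvious sensitive patterns found; still apply human judgment.")
```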

The second one that I look at from there is that AI is not perfect. If I go into ChatGPT right now and tell it to write me PowerShell syntax for a made-up language, it'll do it, and it'll say, hey, I think this is fake. But then when you come back and tell it, no, it's a real language, it was invented at such-and-such a time, and you feed it just enough BS to make it think it's legitimate, it'll apologize for the mistake and then carry on as if it's real. That phenomenon is known as AI hallucination: it can just make stuff up and be prone to error. So that's something to keep in mind as well, because if you start having your end users submit service tickets asking for stuff and the AI is feeding back bogus answers, that's not going to look good for the MSP or the vendor, whomever.

And the final one that I look at is this whole argument that you get out of AI what you put into it. There are two examples that I love to look at for that one. One was when Microsoft introduced their chatbot Tay onto Twitter, back in 2015 or 2016, somewhere in there. The rapscallions of the Internet realized very quickly that they had a little opportunity to exploit it, fed it a whole bunch of really vile content, and within 12 hours the chatbot became a very vile chatbot. Microsoft promptly pulled it off the Internet. The other one: the Air Force released a paper about their AI and some of their testing, in which they basically told the AI its mission was to take out surface-to-air missile sites. So the AI went out and found the most effective way to do that, with a human saying whether or not it had the kill order. The AI decided that the human operator on the friendly side was hindering its ability to do its mission, so its first task was to take out the human operator so it could operate and complete its mission successfully. When it was told not to do that, that you're not allowed and friendly fire is bad, it then said, okay, I can't take out the person, but I can take out the communication tower the person needs to communicate with me, and then I can operate with impunity. The Air Force now denies that study ever occurred. So: garbage in, garbage out. That presents its own risks as well.

Connor Swalm:

I had Jimmy Hatzell from CyberQP on the podcast. We talked a little bit about AI, and he made a great point. I think it was Jimmy who said: if you're training a large language model on enormous sets of data, and you get that data from the Internet, in this case Reddit, what kind of person, what kind of large language model, would you get if you train it only and exclusively on all of the content on Reddit? Which, from my experience and yours, is quite possibly humanity at its worst.

Adam Evans:

Yeah. Looking at just the subreddits out there, it's going to be into some very interesting content that we probably can't talk about on the podcast, and it's going to be a very bitter model. It's going to complain an awful lot about a lot of things.

Connor Swalm:

Oh, yeah. So with all these risks, and we touched on some of them: we talked about data security issues, and we talked about privacy issues in terms of help desk tickets that include sensitive information, which a lot of times they do, because a ticket is supposed to be just between you and your client. Should MSPs implement more or fewer AI-enabled solutions in their tools and stacks? Or are they going about this the wrong way and, like we talked about in our last episode, trying to cut too many corners right now?

Adam Evans:

I think right now we're seeing a lot of our MSPs cutting corners. They want ChatGPT to make nice, fancy documents for them, so they're just pasting stuff in there with abandon. I've even seen some comments in the MSP communities of people saying, I'm taking all my clients' documentation from IT Glue and formatting it through ChatGPT, and I'm like, yikes. But that's also how we're seeing the application of it in our industry; it's still pretty new. I think the better advice for MSPs is to start thinking about what that could look like and the risks inherently involved with it. Full transparency: we've used ChatGPT internally for stuff. It's things like, here's a technical blog article that's going to be public anyway, and Adam was way too nerdy about it, so can ChatGPT rewrite this to sound more focused toward business leaders than a security nerd? Or, we're trying to build out a quick communication email and I just can't figure out where to get started; ChatGPT, can you write this for me? And then we can tweak and tailor from there. Even little things like, I'm really struggling on this script, a generic script to do something that doesn't involve anything sensitive in the slightest; ChatGPT, can you check my PowerShell to make sure the dash is in the right spot or that I closed the statement? There are certainly plenty of legitimate low-risk uses, and there's value in establishing that awareness, but there are guardrails that need to be in place. I've been seeing more and more company acceptable use policies defining the use of AI tools like ChatGPT or DALL-E to make sure that we're using them responsibly.
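As a sketch of that low-risk script-review use case, here's roughly what it could look like to automate through the openai Python package instead of the chat window. The model name and prompts are assumptions; the important part is that the script being checked is generic and contains nothing client-specific:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A generic housekeeping script: no client names, credentials, or secrets.
script = r"""
Get-ChildItem -Path C:\Logs -Filter *.log |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-30) } |
    Remove-Item -WhatIf
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever your plan offers
    messages=[
        {"role": "system",
         "content": "You review PowerShell for syntax mistakes only."},
        {"role": "user",
         "content": f"Check this PowerShell for syntax errors:\n{script}"},
    ],
)
print(response.choices[0].message.content)
```

The same sensitivity test Adam applies to documentation applies here: if the script touched anything confidential, it shouldn't go into the prompt at all.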

Connor Swalm:

With all the security issues and the privacy issues that you mentioned once you put your prompt, potentially with sensitive information, into, let's just say, ChatGPT. I use ChatGPT as a stand-in because it's the most popular publicly available large language model at this point. I've always said that ChatGPT and other tools like it are great for getting rid of writer's block. They're great for getting rid of the blank canvas. They're great for, hey, I have these thoughts and I need to make something out of them; can you organize them in this way for me? But then you took it a step further: hey, that's awesome, but if you're using this to just flat out do your job, you're probably not getting what you expect out of it, and you're probably assuming some risk that you're not aware of.

Adam Evans:

Absolutely that. And I actually read an interesting blog from someone in the gaming industry who writes the stories for games and all that other stuff. At the end of the day, we still need to know what we're doing manually. ChatGPT and whatnot can make things really easy for us. But can you imagine the disaster at a company if they hired an editor for their graphical content, a graphic designer, that's the term I'm looking for today, whose only experience was how to use LLMs to come up with cool graphics? They wouldn't be a very good graphic designer, would they?

Connor Swalm:

They'd be a great prompt engineer, though, right? And could that be a future job out there in the world?

Adam Evans:

Maybe. But at the end of the day, we still need to be able to do our jobs so we can make sure the results we're getting from these services actually make sense and are good. And the other thing I consider, since you mentioned using ChatGPT to get rid of writer's block: if you're feeding something into it, is it the same thing you would post on a public forum? Would you post it on Reddit or in a Facebook group? Would you send it to a friend of yours and say, hey, can you proofread this? There are levels of sensitivity out there. If I take a highly confidential company file, say our fiscal earnings for the last calendar year, and ask it to come up with my projections, we probably don't want that to be public, so I probably shouldn't put it in ChatGPT. Rewriting a blog to make it less nerdy for a marketing post? Who cares, it's going to be public anyway.

Connor Swalm:

That makes a lot of sense. I always draw attention to the case of a New York-based lawyer who used ChatGPT to come up with case law to support the foundational arguments for his trial. I don't know all the legal terms, but he was a lawyer trying to do something, and, this is the funniest thing, I'm reading the New York Times article about it right now, he asked ChatGPT, are you sure you didn't make this up? And ChatGPT doubled down and said, these are all real. Like you said with the hallucination piece: these are all real cases, this is actual case law, and it supports your case. He went into court with this, handed it over, basically said all of this in court, and this was his argument. Turns out ChatGPT made up everything, and he's getting disbarred.

Adam Evans:

Yep, I saw that it was with the case against Southwest Airlines, I think.

Connor Swalm:

I think so, yeah.

Adam Evans:

Yep. Another example of the AI hallucinations out there and the risks involved with them.

Connor Swalm:

Yeah, it's absolutely insane. If people wanted to get in touch with you, Adam, either to learn more about you and your story or to get your perspective on a lot of the stuff we talked about, how would you like them to do that?

Adam Evans:

LinkedIn is probably the best place. That way people can follow, they can message, they can comment on stuff. I've been trying to get more posts out there around security, compliance, and all the fun stuff there, and trying to get much better at that, but you know the lovely time crunch in the MSP space that we talked about already. So that's probably the best place. It's a public profile, so you can follow as you wish, you can connect, message, all the fun stuff.

Connor Swalm:

It's a good time for you folks who are listening. We'll have a link to Adam's LinkedIn profile in the show notes. So if you'd like to reach out, connect with him, or just follow him for all of the advice he gives on LinkedIn about security, feel free to do that. But Adam, thank you so much for joining us. I had a blast talking with you about not only compliance frameworks, awareness training, and the MSP industry, but now also artificial intelligence. So really, thank you for lending all the research you've done over the years and all the experience you have to the folks that are listening. I really do appreciate it.

Adam Evans:

Thank you so much for having me. I'm always glad to share a little bit of all the blood, sweat, and tears, and all the things that are rattling around up here, with anyone else. If we can help them out even a little bit, I'm happy.

Connor Swalm:

Awesome. That makes me happy too. Once again, everyone, thank you so much for listening. I'm Connor, host of the Gone Phishing podcast, and we were joined by the wonderful Adam Evans, security director at Simplex-IT. We will see you next time.

Thanks so much for tuning in to Gone Phishing. If you want to find out more about high-quality security awareness training campaigns and how to launch them in ways that actually engage employees to change their habits, then check us out at Phinsec.io. That's Phinsec.io, or click all of the wonderful links in our show notes. Thanks for phishing with me today, and we'll see you next time.