But AI has staying power, especially as malicious actors leverage the technology more and more.
For example, at one time, there was a category of solutions called Security Orchestration, Automation, and Response (SOAR). If you detected malware on a workstation, you could kick off an orchestration playbook to quarantine it and run some remediation scripts. However, SOAR is fading and being replaced by generative AI.
At many small- and mid-sized organizations, when the IT staff goes home on Friday, an attack that night won't be discovered until Monday morning. By then, the damage is probably severe. Some organizations haven't yet seen the value of outsourcing that aspect of the business, which lets other resources monitor operations 24/7 and react quickly. AI helps accomplish that as a key component of managed security services - and MSSP Alert ranked Insight 11th on its list of the top 250 managed security services providers for 2024.
Many times, anomalous behavior is an indication of compromise. If I'm the system administrator and I usually don't log in on the weekend, but then someone does with my account, that should warrant an alert. An immediate investigation can begin instead of waiting until Monday. If that use case is automated within the security operations environment, the AI will take action to freeze the account, quarantine the workstation, and then notify the person on call via email.
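A minimal sketch of what that automated use case might look like, assuming hypothetical freeze_account, quarantine_workstation, and notify_on_call helpers standing in for whatever identity and EDR APIs a given environment actually exposes:

```python
# Sketch of the weekend-login automation described above.
# The helper functions are hypothetical stand-ins for real EDR/identity APIs.
from datetime import datetime

PRIVILEGED_ACCOUNTS = {"sysadmin"}  # accounts that rarely log in off-hours

def freeze_account(user: str) -> None:
    print(f"[action] freezing account {user}")    # e.g., disable in the identity provider

def quarantine_workstation(host: str) -> None:
    print(f"[action] quarantining {host}")        # e.g., EDR network isolation

def notify_on_call(message: str) -> None:
    print(f"[notify] {message}")                  # e.g., email or paging service

def handle_login_event(user: str, host: str, when: datetime) -> None:
    """Freeze, quarantine, and page if a privileged account logs in on a weekend."""
    if user in PRIVILEGED_ACCOUNTS and when.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        freeze_account(user)
        quarantine_workstation(host)
        notify_on_call(f"Weekend login by {user} on {host} at {when:%Y-%m-%d %H:%M}")

handle_login_event("sysadmin", "WKSTN-042", datetime(2024, 10, 26, 2, 14))
```

The value of layering AI on top of a rule like this is that the response logic doesn't have to be hard-coded for every scenario.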
This approach is more advanced than simply following a script. AI can make informed judgments on what to do next. There are also companies using AI to automate penetration testing, with products designed to emulate an attacker. These simulations go through the steps an attacker would normally take: find out which IP addresses are exposed to the internet, which ports are open, and what kind of device it is, and then determine how to exploit it.
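A minimal sketch of that first reconnaissance step, using only Python's standard library. The port list is illustrative, and you should only probe hosts you're authorized to test:

```python
# Sketch of basic attacker-emulation recon: which common ports answer on a target?
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def fingerprint(host: str) -> dict[str, bool]:
    # Real tools go much further: banner grabbing, service/version detection,
    # then mapping discovered versions to known exploits.
    return {name: probe(host, port) for port, name in COMMON_PORTS.items()}

print(fingerprint("scanme.nmap.org"))  # a host the Nmap project explicitly permits scanning
```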
Gen AI can even help a Security Operations Center (SOC) analyst conduct a forensic investigation. You could ask the AI agent on a workstation some fairly open-ended questions with your voice. It can even speak back and respond in a meaningful way. The amount of time and effort to investigate anomalous behavior in your environment drops dramatically.
AI is different from those other solutions that didn't survive. It's responsible for a pervasive shift in the way we approach everything, not just security. It brings the promise of finding things the human brain possibly can't, whether that's in the data or by learning at a much faster rate than a person. The individuals most affected by AI, possibly to the point of losing their jobs, are the ones who don't embrace it. The same goes for companies that fail to leverage AI to defend against cyberattacks that are similarly powered.
It's like an arms race, and the early AI adopters are the threat actors. So, we must match the threat: we implement AI to improve detection, improve response, and improve remediation through automation.
Also notable: we're seeing deepfakes do some pretty amazing stuff in seconds to minutes. Senior executives can be extorted with "compromising" deepfaked pictures. Instead of notifying security, they may stay quiet out of fear of the images leaking to the internet, even if it isn't really them in the photos. Malicious actors can also use an individual's voice to give a command. At a publicly traded company, senior executives have voiceprints all over annual shareholder calls. If you get a really good sample of someone, you can come up with a pretty amazing replica of them.
It will only get harder and harder to separate what's fake from what's real. There's a lot of math you can apply to a voice signature: organizations like banks try to match cadence, pitch, and how you pronounce your name. But you can train an AI solution to do a really good job of emulating someone. It's getting to the point where you can't assume a voice is genuine anymore, especially if the caller isn't following normal business processes.
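A simplified sketch of that kind of voice-signature math, assuming the librosa audio library and two placeholder recordings. Real speaker-verification systems use far richer models than mean MFCCs and pitch, which is part of why they remain spoofable by a well-trained clone:

```python
# Sketch: compare two recordings by timbre (MFCCs) and average pitch.
# Purely illustrative; not a production speaker-verification method.
import librosa
import numpy as np

def voice_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # timbre summary
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour
    return np.concatenate([mfcc.mean(axis=1), [np.nanmean(f0)]])

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Paths are placeholders: an enrolled sample vs. an incoming caller.
score = similarity(voice_features("enrolled.wav"), voice_features("incoming.wav"))
print("match" if score > 0.95 else "reject", score)
```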
If your boss calls and says, "Go out and buy 10 $1,000 gift cards and send the codes to this number," it should raise alarm bells. Still, it's easy to find examples of workers who get fooled. People comply because they think it's the CEO; it's an understandable reflex to do what you're told by a manager in a corporate setting. However, non-normal business requests should be verified. If things don't seem right and there's a high level of pressure and urgency applied by the person making the request, that's a flag.
A commonly cited statistic is that 90% of data breaches begin with phishing. For example, a law-firm client of Insight's suffered a phishing attack that introduced ransomware into its environment, compromising approximately 700 devices. With the help of Insight, which was just named the Cisco Defend and Protect Partner of 2024 in Canada, the client regained business functionality without having to pay the multi-million-dollar ransom. So, ransomware is still a big deal; we're seeing incidents on a weekly basis. Aligning with a network and security vendor that's staying current and leveraging updates from the community is always a good strategy.
Phishing is really just social engineering, though. Bad actors try to convince you to respond. They're not necessarily looking for you to click a bad link and visit a website where they harvest your credentials anymore. Increasingly, I think it's going to be about getting victims to do other things. The trick is:
It may seem fairly low-tech, but educating employees is important so they're aware this kind of stuff is happening. Educate them that, if it does happen, they shouldn't be embarrassed; they should contact the right people for help. It's the job of those people to help them and to help the company, all at once.
If you clicked on a link or some kind of executable on your laptop and it sparked something you didn't expect, you'd probably call IT. They're going to quarantine it. They may issue you a new laptop, but it's just a cybersecurity incident. That's how people need to look at gen AI and deepfakes. There's going to be more of this stuff coming out. We just need the awareness and the processes to deal with these use cases correctly, in conjunction with AI, of course.
From a cyber liability insurance perspective, the bar keeps going higher and higher. Just having antivirus on your workstation isn't adequate. You really need endpoint protection or Extended Detection and Response (XDR) capabilities to automatically identify anomalous end-user behavior, quarantine devices, and correlate things across the environment. If your organization doesn't have those capabilities, you'll have a hard time getting insurance, and you're more likely to suffer a successful attack. As attacks get increasingly sophisticated, organizations must adapt, too. The best-prepared organizations will be ready for whatever comes next in the ever-changing world of cybersecurity.
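A minimal sketch of the "correlate things across the environment" idea behind XDR, with illustrative alert records: alerts from distinct tools that cluster on one host inside a short window get escalated together instead of being triaged in isolation.

```python
# Sketch of cross-tool alert correlation; field names and thresholds are illustrative.
from collections import defaultdict
from datetime import datetime, timedelta

alerts = [
    {"host": "WKSTN-042", "source": "email",    "time": datetime(2024, 10, 26, 2, 10)},
    {"host": "WKSTN-042", "source": "endpoint", "time": datetime(2024, 10, 26, 2, 14)},
    {"host": "WKSTN-042", "source": "identity", "time": datetime(2024, 10, 26, 2, 15)},
    {"host": "SRV-007",   "source": "network",  "time": datetime(2024, 10, 26, 9, 0)},
]

def correlate(alerts, window=timedelta(minutes=15), threshold=3):
    """Escalate hosts with alerts from >= threshold distinct sources in one window."""
    by_host = defaultdict(list)
    for a in alerts:
        by_host[a["host"]].append(a)
    incidents = []
    for host, items in by_host.items():
        items.sort(key=lambda a: a["time"])
        for start in items:
            cluster = [a for a in items
                       if timedelta(0) <= a["time"] - start["time"] <= window]
            if len({a["source"] for a in cluster}) >= threshold:
                incidents.append((host, cluster))
                break
    return incidents

for host, cluster in correlate(alerts):
    print(f"escalate {host}: {[a['source'] for a in cluster]}")
```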