11/01/2024 | News release | Archived content
Should critical infrastructure orgs boost OT/ICS systems' security with zero trust? Absolutely, the CSA says. Meanwhile, the Five Eyes countries offer cyber advice to tech startups. Plus, a survey finds "shadow AI" weakening data governance. And get the latest on MFA methods, CISO trends and Uncle Sam's AI strategy.
Dive into six things that are top of mind for the week ending Nov. 1.
As their operational technology (OT) computing environments become more digitized, converged with IT systems and cloud-based, critical infrastructure organizations should beef up their cybersecurity by adopting zero trust principles.
That's the key message of the Cloud Security Alliance's "Zero Trust Guidance for Critical Infrastructure," which focuses on applying zero trust methods to OT and industrial control systems (ICS).
While OT/ICS environments were historically air-gapped, that's rarely the case anymore. "Modern systems are often interconnected via embedded wireless access, cloud and other internet-connected services, and software-as-a-service (SaaS) applications," reads the 64-page white paper, which was published this week.
The CSA hopes the document will help cybersecurity teams and OT/ICS operators enhance the way they communicate and collaborate.
Among the topics covered are:
The guide also outlines this five-step process for implementing zero trust in OT/ICS environments:
A zero trust strategy boosts the security of critical OT/ICS systems by helping teams "keep pace with rapid technological advancements and the evolving threat landscape," Jennifer Minella, the paper's lead author, said in a statement.
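The core zero trust rule the guidance applies to OT/ICS — "never trust, always verify," with access denied by default — can be sketched in a few lines. This is an illustrative toy, not anything from the CSA paper: the roles, checks, and action names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Request:
    """An access request to an OT/ICS asset (fields are illustrative)."""
    user: str
    device_compliant: bool  # device posture check passed
    mfa_verified: bool      # identity verified with MFA
    action: str

# Hypothetical role-to-permission policy for an industrial asset.
ALLOWED_ACTIONS = {
    "operator": {"read_telemetry"},
    "engineer": {"read_telemetry", "update_setpoint"},
}
ROLES = {"alice": "engineer", "bob": "operator"}

def authorize(req: Request) -> bool:
    """Deny by default; allow only when every check explicitly passes."""
    role = ROLES.get(req.user)
    if role is None or not req.device_compliant or not req.mfa_verified:
        return False
    return req.action in ALLOWED_ACTIONS.get(role, set())

# An operator cannot change a setpoint, even from a healthy, MFA-verified session.
print(authorize(Request("bob", True, True, "update_setpoint")))    # False
# An engineer can, but only when identity and device posture both check out.
print(authorize(Request("alice", True, False, "update_setpoint")))  # False
print(authorize(Request("alice", True, True, "update_setpoint")))   # True
```

The point of the sketch is the shape of the decision: no request succeeds because of where it came from; each one must positively pass identity, device, and policy checks.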
To get more details, read:
For more information about OT systems cybersecurity, check out these Tenable resources:
Startup tech companies can be attractive targets for hackers, especially if they have weak cybersecurity and valuable intellectual property (IP).
To help startups prevent cyberattacks, the Five Eyes countries this week published cybersecurity guides tailored for these companies and their investors.
"This guidance is designed to help tech startups protect their innovation, reputation, and growth, while also helping tech investors fortify their portfolio companies against security risks," Mike Casey, U.S. National Counterintelligence and Security Center Director, said in a statement.
These are the top five cybersecurity recommendations from Australia, Canada, New Zealand, the U.S. and the U.K. for tech startups:
"Sophisticated nation-state adversaries, like China, are working hard to steal the intellectual property held by some of our countries' most innovative and exciting startups," Ken McCallum, Director General of the U.K.'s MI5, said in a statement.
To get more details, check out these Five Eyes' cybersecurity resources for tech startups:
Employees' use of unauthorized AI tools is creating compliance issues in a majority of organizations. Specifically, this unsanctioned use makes it harder to control data governance and compliance, according to almost 60% of organizations surveyed by market researcher Vanson Bourne.
"Amid all the investment and adoption enthusiasm, many organisations are struggling for control and visibility over its use," reads the firm's "AI Barometer: October 2024" publication. Vanson Bourne polls 100 IT and business executives each month about their AI investment plans.
Survey question: "To what extent do you think the unsanctioned use of AI tools is impacting your organisation's ability to maintain control over data governance and compliance?" (Source: Vanson Bourne's "AI Barometer: October 2024")
Close to half of organizations surveyed (44%) believe that at least 10% of their employees are using unapproved AI tools.
On a related front, organizations are also grappling with the issue of software vendors that unilaterally and silently add AI features to their products, especially to their SaaS applications.
While surveyed organizations say they're reaping advantages from their AI usage, "such benefits are dependent on IT teams having the tools to address the control and visibility challenges they face," the publication reads.
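One common starting point for the visibility problem the survey describes is scanning outbound traffic logs for known generative-AI endpoints. The sketch below is a minimal, hypothetical example: the domain list, log format, and field positions are all illustrative assumptions, not a real detection rule.

```python
# Illustrative shadow-AI spotter: flag requests to well-known generative-AI
# endpoints in a web-proxy log export. Domains and log format are assumptions.
AI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com", "api.cohere.com"}

def flag_shadow_ai(log_lines):
    """Yield (user, domain) for each request to a listed AI service.

    Assumes each line is whitespace-separated: 'timestamp user domain'.
    """
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            yield parts[1], parts[2]

sample = [
    "2024-10-30T09:12:00 alice api.openai.com",
    "2024-10-30T09:13:10 bob intranet.example.com",
]
print(list(flag_shadow_ai(sample)))  # [('alice', 'api.openai.com')]
```

A real deployment would work from an allowlist of sanctioned tools rather than a blocklist of domains, but the log-driven approach is the same.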
For more information about the use of unapproved AI tools, an issue also known as "shadow AI," check out:
VIDEO: Shadow AI Risks in Your Company
Multi-factor authentication (MFA) comes in a variety of flavors, and understanding the differences is critical for choosing the right option for each use case in your organization.
To help cybersecurity teams better understand the different MFA types and their pluses and minuses, the U.K. National Cyber Security Centre (NCSC) has updated its MFA guidance.
"The new guidance explains the benefits that come with strong authentication, while also minimising the friction that some users associate with MFA," reads an NCSC blog.
In other words, the right MFA method depends on people's roles, how they work, the devices they use, the applications or services they're accessing and so on.
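To make one common MFA flavor concrete: authenticator apps typically implement time-based one-time passwords (TOTP, RFC 6238). The sketch below, using only Python's standard library, shows how the six-digit code is derived from a shared secret and the current 30-second time window; it is an illustration of the algorithm, not production code.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1).

    secret_b32: the shared secret, Base32-encoded (as in QR-code provisioning).
    t: Unix time to compute the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second windows since the epoch.
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the last nibble of the digest, then reduce modulo 10^digits.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59s, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

Because the code is derived from time and a shared secret, it works offline, but it is still phishable; that trade-off is exactly why guidance like the NCSC's weighs MFA types against each use case.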
Topics covered include:
To get more details, read:
For more information about MFA:
The White House has laid out its expectations for how the federal government ought to promote the development of AI in order to safeguard U.S. national security.
In the country's first-ever National Security Memorandum (NSM) on AI, the Biden administration said the federal government must accomplish the following:
"The NSM's fundamental premise is that advances at the frontier of AI will have significant implications for national security and foreign policy in the near future," reads a White House statement.
The NSM's directives to federal agencies include:
The White House also published a complementary document titled "Framework To Advance AI Governance and Risk Management in National Security," which adds implementation details and guidance for the NSM.
As the cybersecurity risks and benefits of AI multiply, most U.S. state CISOs find themselves at the center of their governments' efforts to craft AI security strategies and policies.
That's according to the "2024 Deloitte-NASCIO Cybersecurity Study," which surveyed CISOs from all 50 states and the District of Columbia.
Specifically, 88% of state CISOs reported being involved in developing a generative AI strategy, and 96% said they're involved in creating a generative AI security policy.
However, their involvement in AI cybersecurity matters isn't necessarily making them optimistic about their states' ability to fend off AI-boosted attacks.
None said they feel "extremely confident" that their state can prevent AI-boosted attacks, while only 10% reported feeling "very confident." A plurality (43%) said they feel "somewhat confident," while the rest said they are either "not very confident" or "not confident at all."
Naturally, most state CISOs see AI-enabled cyberthreats as significant, with 71% categorizing them as either "very high threat" (18%) or "somewhat high threat" (53%).
At the same time, state CISOs see the potential for AI to help their cybersecurity efforts, as 41% are already using generative AI for cybersecurity, and another 43% have plans to do so by mid-2025.
Other findings from the "2024 Deloitte-NASCIO Cybersecurity Study" include:
For more information about CISO trends: