Gigamon Inc.

09/05/2024 | Press release

Gigamon Best Practices for Event Logging and Threat Detection

Executive Overview

Security agencies from multiple nations have published a joint best-practices guide for event logging and log data retention, in an effort to establish a common baseline among organizations in those countries. For many organizations, cybersecurity may not be a core aspect of their business but rather a cost of doing business and staying operational. The joint guide offers strategic and tactical guidance on planning, building out, and maintaining that common baseline.

When helping an organization recover from a security event, agencies such as CISA or the FBI encounter inconsistent levels of information available to identify and track down initial access and determine the scope of the compromise. This is the challenge these agencies are trying to address. The best practices guide draws heavily on the U.S. federal memorandum M-21-31 but is not a copy of it; rather, it uses aspects of M-21-31 as a foundation and builds on them.

This is a guide for Gigamon customers and prospects on how to implement and solve for some of the controls described in the brief, "Best practices for event logging and threat detection," as published by the following national cybersecurity groups:

  • Australian Signals Directorate's Australian Cyber Security Centre (ASD's ACSC)
  • United States (U.S.) Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and the National Security Agency (NSA)
  • United Kingdom (U.K.) National Cyber Security Centre (NCSC-UK)
  • Canadian Centre for Cyber Security (CCCS)
  • New Zealand National Cyber Security Centre (NCSC-NZ) and Computer Emergency Response Team (CERT NZ)
  • Japan National Center of Incident Readiness and Strategy for Cybersecurity (NISC) and Computer Emergency Response Team Coordination Center (JPCERT/CC)
  • The Republic of Korea National Intelligence Service (NIS) and NIS's National Cyber Security Center (NCSC-Korea)
  • Singapore Cyber Security Agency (CSA)
  • The Netherlands General Intelligence and Security Service (AIVD) and Military Intelligence and Security Service (MIVD)

The risk landscape has changed dramatically. Threat actors are sophisticated and are often allowed, even encouraged, to operate openly from their home countries. They research systems for vulnerabilities and probe major organizations for any point of entry. National security agencies around the world are coming together to establish recommendations for a minimum level of visibility. This blog covers the aspects of that guidance Gigamon can help you satisfy to reduce operational risk.

Architect Overview

The Gigamon Deep Observability Pipeline is a layer of visibility abstraction that spans on-prem, hybrid cloud, and public cloud deployments, using the classic tap-and-aggregator deployment of packet brokers. In addition, the pipeline's deep packet inspection examines standard applications and protocols in motion. Gigamon visibility can span technical and administrative boundaries, observing the behavior of workloads and network devices regardless of their internal state, be it misconfigured, overloaded, or compromised. It is an independent source of logging and truth in the network.

This guide steps through the best practice document's guidelines on logging. Two practical considerations run through these requirements: technical depth and accuracy on the one hand, and cost on the other. The two must be balanced against each other.

M-21-31

M-21-31 originated as a United States government mandate. The broader security agency community has since endorsed its logging recommendations, which cover standardized logging formats, retention periods, and breadth of logging.

Best practices suggest 18 months of log retention, which can be difficult for many organizations to achieve. Gigamon can help in several ways (a rough storage estimate follows the list):

  • Logging application metadata instead of full packet capture offers a 98 percent reduction in log size
  • Flow slicing of long flows can further reduce log size
  • Gigamon logging can be application aware, so backup traffic, Windows updates, or low-threat applications such as Netflix need not be logged
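
To put the retention math in perspective, here is a back-of-the-envelope sketch in Python. The sustained 1 Gb/s traffic rate is a hypothetical assumption for illustration; the 98 percent reduction figure and 18-month target come from the text above.

# Back-of-the-envelope estimate: 18 months of full packet capture vs.
# application metadata. The 1 Gb/s rate is a hypothetical assumption.
GBPS = 1.0                          # assumed sustained monitored traffic, gigabits/s
SECONDS_PER_MONTH = 30 * 24 * 3600
RETENTION_MONTHS = 18               # retention target from the best practices
REDUCTION = 0.98                    # metadata vs. full capture, per the text above

bytes_per_month = GBPS * 1e9 / 8 * SECONDS_PER_MONTH
full_capture_tb = bytes_per_month * RETENTION_MONTHS / 1e12
metadata_tb = full_capture_tb * (1 - REDUCTION)

print(f"Full packet capture, {RETENTION_MONTHS} months: {full_capture_tb:,.0f} TB")
print(f"Application metadata, {RETENTION_MONTHS} months: {metadata_tb:,.0f} TB")

Under these assumptions, full capture runs to roughly 5,800 TB over 18 months, while metadata at a 98 percent reduction lands near 117 TB.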

Operational technology (OT) visibility has also been added to this requirement. OT/IoT devices are often minimally viable compute: not full-featured, and without the capacity for logging. Gigamon Advanced Metadata can recognize over 90 OT/IoT protocols and observe the behavior of a device, meeting the visibility requirement without putting load on the OT device itself.

The Gigamon Deep Observability Pipeline's view of Layers 2-7 can also aid in the discovery and monitoring of OT devices, which often have specific, well-known MAC addresses, certificates, software versions, and protocols.
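
As a simple illustration of that discovery approach, the sketch below matches the vendor prefix (OUI) of observed MAC addresses against a watch list. The OUI values and flow-record layout are hypothetical placeholders, not actual Gigamon output.

# Minimal sketch: passively spot likely OT/IoT devices by MAC vendor
# prefix (OUI). OUI table and flow records are hypothetical placeholders.
OT_OUIS = {
    "00:80:f4": "badge reader (hypothetical vendor)",
    "00:1b:1b": "PLC (hypothetical vendor)",
}

flows = [
    {"src_mac": "00:80:f4:12:34:56", "src_ip": "10.0.5.9", "app": "modbus"},
    {"src_mac": "3c:22:fb:aa:bb:cc", "src_ip": "10.0.5.20", "app": "https"},
]

for flow in flows:
    oui = flow["src_mac"][:8].lower()   # first three octets identify the vendor
    if oui in OT_OUIS:
        print(f"OT candidate {flow['src_ip']}: {OT_OUIS[oui]} speaking {flow['app']}")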

An in-depth white paper documents the contribution Gigamon makes toward meeting M-21-31.

Logging Priorities for Enterprise Networks

Network and NetFlow logging as they exist today are not application aware. Under "Logging priorities" in the best practices brief, item 2 on page 9 identifies network metadata as a priority. For Gigamon, network metadata includes the following (an illustrative record follows the list):

  • MAC address
  • DHCP options requested/supplied
  • Source IP
  • Source port
  • Destination IP
  • Destination port
  • Protocol (TCP/UDP/ICMP and others)
  • Encryption
  • Certificate
  • Protocol errors, retransmits, buffer size changes
  • Data encoding
  • Application visibility
  • Round trip and performance
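
To make these fields concrete, here is what a single metadata record might look like. The field names and values are illustrative only, not the exact Gigamon schema.

# Hypothetical application-aware metadata record covering the fields above.
# Field names and values are illustrative, not the exact Gigamon schema.
record = {
    "src_mac": "00:0c:29:ab:cd:ef",
    "dhcp_options": [1, 3, 6, 15],           # options requested/supplied
    "src_ip": "10.1.2.3",
    "src_port": 49811,
    "dst_ip": "10.1.9.7",
    "dst_port": 443,
    "protocol": "TCP",
    "encryption": "TLS 1.2",
    "certificate_cn": "internal.example.com",
    "tcp_retransmits": 4,                    # protocol errors and retransmits
    "encoding": "gzip",                      # data encoding
    "application": "https",                  # application visibility
    "rtt_ms": 12.7,                          # round trip and performance
}

A record like this is what makes the detections later in this post possible: each example assumes flows enriched with an application field rather than raw port numbers alone.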

You can observe the behavior of edge devices such as routers and firewalls. While it may not be cost-effective to log everything in parallel to those devices, watching for attempts to access the appliances via well-known services on standard and nonstandard ports can be effective. Best practices also call for monitoring certificates for indicators of compromise or device changes, as well as monitoring the DNS services used by the organization's users.

Gigamon can solve for this requirement by observing DNS requests and replies on the wire. This includes detecting rogue servers, nonstandard port usage, and DNS servers outside the organization. Layer 2-7 visibility also makes it possible to detect maximum-size DNS packets, a possible indicator of exfiltration.

Figure 1. Rogue DNS servers.
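
A minimal sketch of those checks, run against application-aware metadata records like the one shown earlier; the approved-resolver list, record layout, and size threshold are hypothetical assumptions.

# Minimal sketch: flag DNS anomalies in application-aware flow records.
# Approved resolvers, record layout, and threshold are hypothetical.
APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}  # the organization's DNS servers
MAX_DNS_BYTES = 512  # classic UDP DNS payload limit; a steady stream of
                     # maximum-size packets can hint at tunneling or exfiltration

def dns_alerts(flow):
    alerts = []
    if flow["app"] != "dns":          # application-aware: DNS on any port
        return alerts
    if flow["dst_ip"] not in APPROVED_RESOLVERS:
        alerts.append("rogue or external DNS server")
    if flow["dst_port"] != 53:
        alerts.append("DNS on a nonstandard port")
    if flow["payload_bytes"] >= MAX_DNS_BYTES:
        alerts.append("maximum-size DNS packets (possible exfiltration)")
    return alerts

flow = {"app": "dns", "dst_ip": "8.8.8.8", "dst_port": 5353, "payload_bytes": 512}
print(dns_alerts(flow))  # all three alerts fire for this flow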

With Gigamon, you can identify email servers running POP/SMTP; detect and record DHCP servers, requests, and options; and detect legacy IT assets, often through expired or old certificates, protocols, and MAC addresses. This includes devices such as printers, badge readers, network-connected thermostats, and other OT devices that are often overlooked.

Figure 2. OT/IoT communications.
Figure 3. Certificate visibility.

Logging Priorities for Operational Technologies

OT and IoT systems can be a challenge to monitor from the network, and the interdependence of IT and OT systems has only grown. This section of the recommendations calls for monitoring OT systems that are critical to safety and service delivery, including HVAC systems, door access controls, and elevator systems, to name a few. These systems are often not designed with security or robust logging in mind. Observing the behavior of such devices through external visibility can help solve some of those challenges.

Cloud Logging Considerations

Logging in the public cloud can be challenging from a cost perspective: achieving the proper level of logging can be expensive. On more than a few occasions, organizations have had to consult billing records to determine whether data had been exfiltrated because logging had been disabled or turned off. The Gigamon Deep Observability Pipeline can be a cost-effective alternative. Gigamon mirrors traffic off workloads and then performs deep packet inspection, creating network metadata that is application and performance aware. As an additional benefit, mirroring the traffic before it hits the OpenSSL library is effectively a decryption solution for the cloud with no key management: message integrity is maintained, but message confidentiality is not.

LOTL and Volt Typhoon

The goal of these logging mandates is to better spot living-off-the-land (LOTL) techniques such as those attributed to the threat actor Volt Typhoon. Normalizing operational logging will lead to better operational outcomes. When it comes to LOTL techniques and threat actors, there is a gap in the industry today: we are trying to use network logging to solve a problem it was never designed to solve. Logs are not application aware, so the lateral movement of threat actors using standard applications and protocols over standard and nonstandard ports goes mostly undetected. This has expanded to infrastructure attacks on routers and firewalls. How can you trust the network log when the NGFW is compromised and not logging the attacks used against it?

We will look at three techniques Volt Typhoon has been observed using that the Gigamon Deep Observability Pipeline is uniquely situated to solve for. Because Gigamon is a layer of visibility abstraction, it can broadly see lateral traffic.

The first technique is [T1021.001], Remote Desktop Protocol (RDP), here used between a compromised host and an Active Directory server.

Figure 4. Remote Desktop on a nonstandard port.

Gigamon will also capture RDP on nonstandard ports.
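
A minimal sketch of that check, assuming application-aware records like those shown earlier; 3389 is RDP's registered port, and the record layout is hypothetical.

# Minimal sketch: flag RDP identified by payload inspection on any port
# other than its registered port, 3389. Record layout is hypothetical.
RDP_STANDARD_PORT = 3389

def flag_rdp(flows):
    for f in flows:
        if f["app"] == "rdp" and f["dst_port"] != RDP_STANDARD_PORT:
            yield f"RDP on nonstandard port: {f['src_ip']} -> {f['dst_ip']}:{f['dst_port']}"

flows = [{"app": "rdp", "src_ip": "10.2.0.4", "dst_ip": "10.2.0.10", "dst_port": 8443}]
for alert in flag_rdp(flows):
    print(alert)

A port-based tool would likely classify the flow above as alternate HTTPS; application-aware metadata identifies it as RDP regardless of the port.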

The second technique is ping, used to facilitate system discovery [T1018]. This could be a broadcast ping within a subnet to force MAC and ARP tables to populate, or a scan or sweep that can be detected laterally within the network.
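
One way to surface such a sweep from lateral visibility is to count distinct ICMP echo destinations per source within a time window. A minimal sketch, with a hypothetical threshold and record layout:

# Minimal sketch: detect a possible ping sweep by counting distinct ICMP
# echo destinations per source. Threshold and records are hypothetical.
from collections import defaultdict

SWEEP_THRESHOLD = 50  # distinct destinations within one window (tune per network)

def detect_sweeps(icmp_flows):
    targets = defaultdict(set)
    for f in icmp_flows:
        targets[f["src_ip"]].add(f["dst_ip"])
    return [src for src, dsts in targets.items() if len(dsts) >= SWEEP_THRESHOLD]

# A host pinging every address in its /24 trips the threshold.
icmp_flows = [{"src_ip": "10.3.0.5", "dst_ip": f"10.3.0.{i}"} for i in range(1, 255)]
print(detect_sweeps(icmp_flows))  # ['10.3.0.5']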

The third technique, netsh portproxy, is one Gigamon is specifically capable of observing. Volt Typhoon configured a host to act as a proxy: it accepts any traffic on a specific port and sends it back out on a different port. Advanced Metadata Intelligence, being application aware, would start to report a lot of unusual applications on those ports.

Here is an example from CISA of a Volt Typhoon command:

cmd.exe /c netsh interface portproxy add v4tov4 listenport=50100 listenaddress=0.0.0.0 connectport=1433 connectaddress="

This says: accept any incoming traffic on port 50100 and send it back out a network interface, to a private-range IP, on port 1433. For Gigamon Advanced Metadata Intelligence, this is like lighting a signal fire in the logs: we would see all kinds of unusual application traffic on port 50100 and a lot of non-SQL traffic on port 1433, which is usually reserved for SQL Server traffic.
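
A minimal sketch of how that signal could be surfaced from application-aware records: flag well-known ports carrying the wrong application, and server ports carrying unusually many distinct applications. The expected-port map, threshold, and records are hypothetical.

# Minimal sketch: surface the portproxy signal by flagging (a) well-known
# ports carrying the wrong application and (b) ports carrying many distinct
# applications. Port map, threshold, and records are hypothetical.
from collections import defaultdict

EXPECTED_APP = {1433: "mssql", 443: "https", 53: "dns"}
APP_DIVERSITY_THRESHOLD = 3

def portproxy_alerts(flows):
    apps_by_port = defaultdict(set)
    alerts = []
    for f in flows:
        apps_by_port[f["dst_port"]].add(f["app"])
        expected = EXPECTED_APP.get(f["dst_port"])
        if expected and f["app"] != expected:
            alerts.append(f"{f['app']} on port {f['dst_port']} (expected {expected})")
    for port, apps in sorted(apps_by_port.items()):
        if len(apps) >= APP_DIVERSITY_THRESHOLD:
            alerts.append(f"port {port} carries {len(apps)} applications: {sorted(apps)}")
    return alerts

flows = [
    {"dst_port": 50100, "app": "rdp"},   # listener port sees a grab bag of apps
    {"dst_port": 50100, "app": "smb"},
    {"dst_port": 50100, "app": "ssh"},
    {"dst_port": 1433, "app": "rdp"},    # non-SQL traffic on the SQL port
]
print(portproxy_alerts(flows))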

Figure 5. Detection of proxy traffic.
Figure 6. Internal and external SSH traffic on nonstandard ports.

Conclusion

Cyber actors are highly sophisticated and are actively trying to gain access to organizations of every kind, from businesses to water treatment plants and hospitals. National security agencies are trying to move organizations to a common level of logging through their recommendations, yet threat actors can still achieve lateral movement across the network, over standard and nonstandard ports, with relative ease. The Gigamon Deep Observability Pipeline is a layer of visibility abstraction that offers broad lateral visibility and makes network logs application aware. It allows you to observe unusual application usage, workloads, and OT/IoT devices by their external behavior, regardless of internal state. This solution can broadly address nonstandard use cases, helping close risk gaps the industry has not yet gotten a handle on.
