Zero Trust and Docker Desktop: An Introduction

Today's digital landscape is characterized by frequent security breaches resulting in lost revenue, potential legal liability, and loss of customer trust. The Zero Trust model was devised to improve an organization's security posture and minimize the risk and scope of security breaches.

In this post, we explore Zero Trust security and walk through several strategies for implementing Zero Trust within a Docker Desktop-based development environment. Although this overview is not exhaustive, it offers a foundational perspective that organizations can build upon as they refine and implement their own security strategies.

What is Zero Trust security?

The Zero Trust security model assumes that no entity - inside or outside the network boundary - should be trusted by default. Every request and operation must be rigorously verified before access to resources is granted. By requiring that trust be continuously earned rather than assumed, Zero Trust significantly strengthens an organization's security posture.

The principles and practices of Zero Trust are detailed by the National Institute of Standards and Technology (NIST) in Special Publication 800-207, Zero Trust Architecture. This authoritative guide outlines the core tenets of Zero Trust, including strict access control, minimal privileges, and continuous verification of all operational and environmental attributes. Section 2.1, for example, elaborates on the foundational principles that organizations can adopt to build a Zero Trust environment tailored to their unique security needs. These guidelines give practitioners a comprehensive understanding of Zero Trust that aids in applying its principles across network architectures and strengthening an organization's security posture.

As organizations transition toward containerized applications and cloud-based architectures, adopting Zero Trust is essential. These environments are marked by their dynamism, with container fleets scaling and evolving rapidly to meet business demands. Unlike traditional security models that rely on perimeter defenses, these modern infrastructures require a security strategy that supports continuous change while ensuring system stability.

Integrating Zero Trust into the software development life cycle (SDLC) from the outset is crucial. Early adoption ensures that Zero Trust principles are not merely tacked on post-deployment but are embedded within the development process, providing a foundational security framework from the beginning.

Containers and Zero Trust

Containerization isolates applications and environments from one another, which supports Zero Trust by making it easier to apply access controls, define more granular monitoring and detection rules, and audit the results.

As noted previously, these examples are specific to Docker Desktop, but you can apply the concepts to any container-based environment, including orchestration systems such as Kubernetes.

A solid foundation: Host and network

When applying Zero Trust principles to Docker Desktop, starting with the host system is important. This system should also meet Zero Trust requirements, such as using encrypted storage, limiting user privileges within the operating system, and enabling endpoint monitoring and logging. Connections from the host to network resources should require authentication, and all communications should be secured and encrypted.

Principle of least privilege

The principle of least privilege is a fundamental security approach stating that a user, program, or process should have only the minimum permissions necessary to perform its intended function and no more. In terms of working with containers, effectively implementing this principle requires using AppArmor/SELinux, using seccomp (secure computing mode) profiles, ensuring containers do not run as root, ensuring containers do not request or receive heightened privileges, and so on.
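
For stock Docker, many of these controls can be applied directly at container start. A minimal sketch (the image, user ID, and seccomp profile path are illustrative):

# Drop all Linux capabilities, block privilege escalation, apply a
# seccomp profile, and run as a non-root user.
docker run --rm \
  --user 1000:1000 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --security-opt seccomp=./seccomp-profile.json \
  alpine:3.20 sleep 60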

Hardened Docker Desktop (available with a Docker Business or Docker Government subscription), however, can satisfy this requirement through the Enhanced Container Isolation (ECI) setting. When active, ECI will do the following:

  • Running containers unprivileged: ECI ensures that even if a container is started with the --privileged flag, the actual processes inside the container do not have elevated privileges within the host or the Docker Desktop VM. This step is crucial for preventing privilege escalation attacks.
  • User namespace remapping: ECI uses a technique where the root user inside a container is mapped to a non-root user outside the container, in the Docker Desktop VM. This approach limits the potential damage and access scope even if a container is compromised.
  • Restricted file system access: Containers run under ECI have limited access to the file system of the host machine. This restriction prevents a compromised container from altering system files or accessing sensitive areas of the host file system.
  • Blocking sensitive system calls: ECI can block or filter system calls from containers that are typically used in attacks, such as certain types of mount operations, further reducing the risk of a breakout.
  • Isolation from the Docker Engine: ECI prevents containers from interacting directly with the Docker Engine's API unless explicitly allowed, protecting against attacks that target the Docker infrastructure itself.
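
For administrators, ECI can be enabled and locked for all developers through Docker Desktop's Settings Management. A minimal sketch, assuming the admin-settings.json schema documented for Settings Management (the macOS path is shown; key names and file locations should be verified against the current documentation):

# Enable Enhanced Container Isolation for all developers and prevent
# them from turning it off (macOS path shown; Windows and Linux differ).
cat <<'EOF' | sudo tee "/Library/Application Support/com.docker/admin-settings.json"
{
  "configurationFileVersion": 2,
  "enhancedContainerIsolation": {
    "value": true,
    "locked": true
  }
}
EOF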

Network microsegmentation

Microsegmentation offers a way to enhance security further by controlling traffic flow among containers. Through the implementation of stringent network policies, only authorized containers are allowed to interact, which significantly reduces the risk of lateral movement in case of a security breach. For example, a payment processing container may only accept connections from specific parts of the application, isolating it from other, less secure network segments.

The concept of microsegmentation also plays a role for AI systems and workloads. By segmenting networks and data, organizations can apply controls to different parts of their AI infrastructure, effectively isolating the environments used for training, testing, and production. This isolation helps reduce the risk of data leakage between environments and can help reduce the blast radius of a security breach.

Docker Desktop's robust networking provides several ways to address microsegmentation. By leveraging the bridge network for creating isolated networks within the same host, or the Macvlan network driver that allows containers to be treated as physical devices with distinct MAC addresses, administrators can define precise communication paths that align with the least-privilege access principles of Zero Trust. Additionally, Docker Compose can easily manage and configure these networks, specifying which containers can communicate on predefined networks.
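
As an illustrative Compose sketch (the service names, network names, and images are placeholders), a payment service can be confined to an internal back-end network, with the web front end acting as the only bridge between segments:

# compose.yaml: 'payments' is reachable only on the internal back-end
# network; 'web' is the sole service that spans both segments.
cat <<'EOF' > compose.yaml
services:
  web:
    image: example/web:1.0        # placeholder image
    networks: [frontend, backend]
  payments:
    image: example/payments:1.0   # placeholder image
    networks: [backend]
networks:
  frontend:
  backend:
    internal: true                # no external connectivity
EOF
docker compose up -d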

This setup facilitates fine-grained network policies at the infrastructure level. It also simplifies the management of container access, ensuring that strict network segmentation policies are enforced to minimize the attack surface and reduce the risk of unauthorized access in containerized environments. Docker Desktop also supports third-party network drivers, which can be used to address this concern.

For use cases where Docker Desktop requires containers to have different egress rules than the host, "air-gapped containers" allow for the configuration of granular rules applied to containers. For example, containers can be completely restricted from the internet but allowed on the local network, or they could be proxied/firewalled to a small set of approved hosts.
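
A sketch of such a policy, assuming the containersProxy section of admin-settings.json described in Docker's air-gapped containers documentation (the PAC URL is a placeholder, and the key names should be verified against the current schema):

# Fragment to merge into admin-settings.json: container traffic is
# allowed or denied per destination by the rules in the PAC file.
cat <<'EOF' > containers-proxy-fragment.json
{
  "containersProxy": {
    "locked": true,
    "mode": "manual",
    "exclude": [],
    "pac": "http://proxy.internal.example/proxy.pac",
    "transparentPorts": "*"
  }
}
EOF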

Note that in Kubernetes, this type of microsegmentation and network traffic management is usually managed by a service mesh.

Authentication and authorization

Implementing strong authentication and role-based access control (RBAC) is crucial in a Docker-based Zero Trust environment. These principles need to be addressed in several different areas, starting with the host and network as noted above.

Single sign-on (SSO) and System for Cross-domain Identity Management (SCIM) should be enabled and used to manage user authentication to the Docker SaaS. These tools allow for better management of users, including the use of groups to enforce role and team membership at the account level. Additionally, Docker Desktop should be configured to require and enforce login to the Docker organization in use, which prevents users from logging into any other organizations or personal accounts.
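
Enforced sign-in is configured with a registry.json file on the developer's machine, typically deployed by device management tooling. A minimal sketch (the organization name is a placeholder; the macOS path is shown, and Windows and Linux use different locations):

# Require sign-in to the "myorg" Docker organization before
# Docker Desktop can be used.
cat <<'EOF' | sudo tee "/Library/Application Support/com.docker/registry.json"
{
  "allowedOrgs": ["myorg"]
}
EOF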

When designing, deploying, building, and testing containers locally under Docker Desktop, implementing robust authentication and authorization mechanisms is crucial to align with security best practices and principles. It's essential to enforce strict access controls at each stage of the container lifecycle.

This approach starts with managing registry and image access to ensure that only approved images are brought into the development process. This can be accomplished by using an internal registry and enforcing firewall rules that block access to other registries. An easier approach, however, is to use Registry Access Management (RAM) and Image Access Management (IAM) - features provided by Hardened Docker Desktop - to control images and registries.
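
Even without RAM and IAM, teams can adopt the convention of pulling only from the internal registry and pinning approved images by digest so that a tag cannot be silently repointed. A sketch (the registry hostname and image are placeholders):

# Pull a vetted base image from the internal registry, then record
# its digest so builds can pin the exact approved content.
docker pull registry.internal.example/base/python:3.12
docker image inspect --format '{{index .RepoDigests 0}}' \
  registry.internal.example/base/python:3.12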

The implementation of policies and procedures around secrets management - such as using a purpose-designed secrets store - should be part of the development process. Finally, using Enhanced Container Isolation (as described above) will help ensure that container privileges are managed consistently with best practices.
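
For local development, one way to keep secrets out of images and environment variables is Compose's file-based secrets support; a minimal sketch (names are illustrative):

# compose.yaml: the secret is mounted read-only inside the container
# at /run/secrets/db_password rather than baked into the image.
cat <<'EOF' > compose.yaml
services:
  app:
    image: example/app:1.0      # placeholder image
    secrets: [db_password]
secrets:
  db_password:
    file: ./db_password.txt     # local file, kept out of version control
EOF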

This comprehensive approach not only strengthens security but also helps maintain the integrity and confidentiality of the development environment, especially when dealing with sensitive or proprietary application data.

Monitoring and auditing

Continuous monitoring and auditing of activities within the Docker environment are vital for early detection of potential security issues. These controls build on the areas identified above by allowing for the auditing and monitoring of the impact of these controls.

Docker Desktop produces a number of logs that provide insight into the operations of the entire application platform, including the local environment, the internal VM, the image store, the container runtime, and more. This data can be redirected to and parsed or analyzed by industry-standard tooling.

Container logs should be sent to a remote log aggregator for processing. Because good development practice calls for log formats and log levels in development to mirror those used in production, this data can be used not only to look for anomalies in the development process but also to give operations teams a preview of what production will look like.
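
With stock Docker, this can be sketched with a logging driver pointed at the aggregator (the endpoint address and image are placeholders):

# Ship this container's logs to a remote syslog-compatible aggregator
# instead of keeping them only in the local json-file log.
docker run --rm \
  --log-driver syslog \
  --log-opt syslog-address=tcp://logs.internal.example:514 \
  --log-opt tag="{{.Name}}" \
  alpine:3.20 echo "hello from a monitored container"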

Docker Scout

Ensuring containerized applications comply with security and privacy policies is another key part of continuous monitoring. Docker Scout is designed from the ground up to support this effort.

Docker Scout starts with the image software bill of materials (SBOM) and continually checks against known and emerging CVEs and security policies. These policies can include detecting high-profile CVEs to be mitigated, validating that approved base images are used, verifying that only valid licenses are being used, and ensuring that a non-root user is defined in the image. Beyond that, the Docker Scout policy engine can be used to write custom policies using the wide array of data points available.
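
From the command line, this workflow can be sketched with the Docker Scout CLI bundled with recent Docker Desktop releases (the image name is a placeholder):

# Summarize vulnerabilities and base-image recommendations.
docker scout quickview example/app:1.0
# List known CVEs, filtered to the most severe findings.
docker scout cves --only-severity critical,high example/app:1.0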

Immutable containers

The concept of immutable containers, which are not altered after they are deployed, plays a significant role in securing environments. By ensuring that containers are replaced rather than changed, the security of the environment is enhanced, preventing unauthorized or malicious alterations during runtime.

Docker images - more broadly, OCI-compliant images - are immutable by default. When they are deployed as containers, they become writable while they are running via the addition of a "scratch layer" on top of the immutable image. Note that this layer does not persist beyond the life of the container. When the container is removed, the scratch layer is removed as well.

When a container is made read-only - either by passing the --read-only flag to the docker run command or by setting the read_only: true key in a Docker Compose file - Docker mounts the container's root file system read-only, preventing writes to the container file system.

In addition to making a container immutable, Docker volumes can be mounted read/write or read-only. A common pattern is to make the container's root file system read-only and then mount a read/write volume to manage write access explicitly, as in the sketch below.
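
A sketch combining these options (the image and volume names are illustrative):

# Immutable root file system, a throwaway tmpfs for scratch space,
# a read-only volume for configuration, and a single writable volume
# for application state.
docker run --rm \
  --read-only \
  --tmpfs /tmp \
  -v app-config:/etc/app:ro \
  -v app-data:/var/lib/app \
  alpine:3.20 sh -c 'touch /var/lib/app/ok'   # writes anywhere else fail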

Encryption

Ensuring that data is securely encrypted, both in transit and at rest, is non-negotiable in a secure Docker environment. Docker containers should be configured to use TLS for communications both between containers and outside the container environment. Docker images and volumes are stored locally and can benefit from the host system's disk encryption when they are at rest.
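
One place where TLS applies directly to the Docker tooling itself is the Engine API, if it is ever exposed over TCP rather than the default local socket. A sketch using the daemon's built-in TLS verification (certificate paths are placeholders):

# Expose the Docker Engine API only over mutually authenticated TLS;
# clients must present a certificate signed by the same CA.
dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/certs/ca.pem \
  --tlscert=/etc/docker/certs/server-cert.pem \
  --tlskey=/etc/docker/certs/server-key.pem \
  -H tcp://0.0.0.0:2376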

Tool chain updates

Finally, it is important to make sure that Docker Desktop is updated to the most current version, as the Docker team is continually making improvements and mitigating CVEs as they are discovered. For more information, refer to Docker security documentation and Docker security announcements.

Overcoming challenges in Zero Trust adoption

Implementing a Zero Trust architecture with Docker Desktop is not without its challenges. These include the complexity of managing such environments, potential performance overhead, and the need for a cultural shift within organizations toward enhanced security awareness. However, the benefits of a secure, resilient infrastructure far outweigh these challenges, making the effort and investment in Zero Trust worthwhile.

Conclusion

Incorporating Zero Trust principles into Docker Desktop environments is essential for protecting modern infrastructures against sophisticated cyber threats. By understanding and implementing these principles, organizations can safeguard their applications and data more effectively, ensuring a secure and resilient digital presence.