
5 Essentials for Securing Your Kubernetes Deployments 


Cloud computing has ushered in a digital transformation in the way applications are developed, deployed and operated. In the cloud-native era, applications are based on microservices and delivered via containers, enabling enterprises to continuously and efficiently update their apps while maintaining a seamless user experience. To manage their immense volumes of containers, companies large and small are turning to Kubernetes to orchestrate their containerized workloads and serve as their application delivery vehicle.

Originally developed by Google in 2014 before being donated to the Cloud Native Computing Foundation, Kubernetes is quickly gaining popularity among DevOps teams. It’s the container orchestration tool of choice for 86 percent of teams as of mid-2019, up from 57 percent a year earlier.

While Kubernetes has allowed DevOps teams to roll out new containerized applications and services at a breathtaking pace, many teams are still adjusting to the unique security challenges inherent to Kubernetes. It’s not surprising, then, that of more than 5,000 deployments scanned, 89 percent mishandled sensitive information, such as usernames, passwords, keys and tokens, in their Kubernetes clusters, exposing their environments to attack. It’s an unfortunate growing pain for the emerging technology, but with the right practices in place, Kubernetes deployments can and should be secured.

Here are some tips on how to keep your Kubernetes environment secure:

  1. Start early in the Continuous Delivery pipeline

Too often, security is an afterthought applied only at the production level. But if security isn’t a priority during container development, a single weakness in a containerized application can put the entire cluster at risk. Workloads destined for Kubernetes should be continuously assessed, tested and scanned for security issues in development, which is where security can be embedded into the engineering process and automated. Securing applications at the development level gives you more confidence that they will interoperate properly in production.

Note, however, that many Kubernetes security tools labeled CI/CD only cover the Continuous Integration part. CI and CD are different processes and present different opportunities to apply security functions; CD has to address security concerns that are not visible in CI. For example, in Kubernetes there are checks that are only meaningful against a live cluster and are not even relevant during build, and if your workloads are compromised you need to know right away. Unless your security tool has functionality that is purpose-built for CD, it won’t be enough.
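
To make the build-stage piece concrete, here is a minimal sketch, in Python, of the kind of check that could run in a CI job: it scans Kubernetes manifests for hardcoded credentials in environment variables and for privileged containers before anything reaches a cluster. The directory layout, the keyword list and the two checks are illustrative assumptions rather than any particular product, and as noted above a CD-side tool still has to run checks against the live cluster.

    # Illustrative CI-stage sketch (not a specific product): scan Kubernetes
    # manifests under ./k8s/ for a few obvious risks before deployment.
    # Assumes PyYAML is installed; the directory and keyword list are assumptions.
    import sys
    from pathlib import Path

    import yaml

    RISKY_ENV_KEYWORDS = ("PASSWORD", "TOKEN", "SECRET", "API_KEY")  # illustrative

    def check_manifest(doc: dict) -> list[str]:
        findings = []
        spec = doc.get("spec", {})
        # Deployments/StatefulSets nest the pod spec under spec.template.spec;
        # bare Pods keep it directly under spec.
        pod_spec = spec.get("template", {}).get("spec", spec)
        for container in pod_spec.get("containers", []):
            if container.get("securityContext", {}).get("privileged"):
                findings.append(f"{container['name']}: privileged container")
            for env in container.get("env", []):
                if "value" in env and any(k in env["name"].upper() for k in RISKY_ENV_KEYWORDS):
                    findings.append(f"{container['name']}: hardcoded credential in env {env['name']}")
        return findings

    def main() -> int:
        failures = 0
        for path in Path("k8s").rglob("*.yaml"):
            for doc in yaml.safe_load_all(path.read_text()):
                if not isinstance(doc, dict):
                    continue
                for finding in check_manifest(doc):
                    print(f"{path}: {finding}")
                    failures += 1
        return 1 if failures else 0  # a non-zero exit code fails the CI job

    if __name__ == "__main__":
        sys.exit(main())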

  2. Use Secrets

As mentioned, 89 percent of Kubernetes deployments aren’t taking full advantage of Kubernetes Secret resources, leaving sensitive information unencrypted and exposed in Kubernetes objects that were never designed to carry secrets. Secrets, API objects that hold sensitive information in base64-encoded form (and encrypted at rest when encryption at rest is configured on the API server), are a simple Kubernetes security feature that goes a long way toward protecting against data breaches. From a technical perspective, a worker node keeps secrets in memory only for the pods running on that node that need them, and, assuming the appropriate runtime privilege configuration, no pod can access the secrets of another pod. Within a pod, a container must request a Secret volume in its volumeMounts for the secret’s contents to be visible in that container, which lets security partitions be constructed at the pod level. Secrets can also be used by the kubelet, the Kubernetes worker-node agent, when pulling images for a pod. Communication between the user and the API server, and from the API server to the kubelets, is protected by TLS. When a pod using a Secret is deleted, the kubelet deletes its local copy of the secret data as well.
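
As a minimal sketch using the official kubernetes Python client (the namespace, Secret name, image and placeholder credential below are all illustrative), the credential lives in a Secret and is mounted read-only into the pod through a volumeMount rather than being hardcoded into the pod spec:

    # Sketch: store a credential in a Secret, then mount it into a pod.
    # Names ("demo", "db-credentials", the image) are illustrative only.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    core = client.CoreV1Api()

    # 1. Create the Secret instead of baking the credential into the pod spec.
    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name="db-credentials"),
        string_data={"password": "example-only-not-a-real-password"},
    )
    core.create_namespaced_secret(namespace="demo", body=secret)

    # 2. Mount it read-only; only containers that declare the volumeMount see it.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="db-client"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="app",
                    image="registry.example.com/app:latest",
                    volume_mounts=[
                        client.V1VolumeMount(
                            name="creds", mount_path="/etc/creds", read_only=True
                        )
                    ],
                )
            ],
            volumes=[
                client.V1Volume(
                    name="creds",
                    secret=client.V1SecretVolumeSource(secret_name="db-credentials"),
                )
            ],
        ),
    )
    core.create_namespaced_pod(namespace="demo", body=pod)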

  3. Set Kubernetes Workload Access Permissions

Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within an enterprise. Kubernetes allows admins to dynamically configure access policies through the Kubernetes API and to drive authorization decisions from them. As of Kubernetes v1.8, RBAC is enabled by default, and from a security perspective it should be the only cluster provisioning option. Since Kubernetes v1.14, the Kubernetes API discovery endpoints are inaccessible to unauthenticated users by default, greatly reducing the attack surface and increasing overall cluster security. Review and control workloads that require privileged access to the Kubernetes API server, specifically permissions that allow users to read Secret resources, create workloads of any type (pods, deployments, etc.), or create services that can potentially open the cluster to the outside. In general, the principle of least privilege should be followed, and exceptions should be reviewed and controlled to avoid opening the door to a cluster-wide or workload-level takeover.
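
The sketch below shows what least privilege can look like in practice, again using the official kubernetes Python client with dict bodies that mirror the equivalent YAML manifests; the namespace, role and service account names are illustrative. The Role grants read-only access to pods in a single namespace and nothing more: no Secret reads, no workload creation, no Services.

    # Sketch of a least-privilege Role and RoleBinding scoped to one namespace.
    # Names ("demo", "pod-reader", "reporting-sa") are illustrative.
    from kubernetes import client, config

    config.load_kube_config()
    rbac = client.RbacAuthorizationV1Api()

    # Read-only access to pods in the "demo" namespace, and nothing else.
    role = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "Role",
        "metadata": {"name": "pod-reader", "namespace": "demo"},
        "rules": [
            {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]}
        ],
    }
    rbac.create_namespaced_role(namespace="demo", body=role)

    # Bind the Role to a single service account rather than a broad group.
    binding = {
        "apiVersion": "rbac.authorization.k8s.io/v1",
        "kind": "RoleBinding",
        "metadata": {"name": "pod-reader-binding", "namespace": "demo"},
        "subjects": [
            {"kind": "ServiceAccount", "name": "reporting-sa", "namespace": "demo"}
        ],
        "roleRef": {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "Role",
            "name": "pod-reader",
        },
    }
    rbac.create_namespaced_role_binding(namespace="demo", body=binding)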

  4. Isolate your pods with microsegmentation

In the cloud-native era, container- and microservice-based deployments have rendered traditional perimeter-level network security policies irrelevant or redundant. Microsegmentation introduces fine-grained, pod-level network isolation policies at the heart of the Kubernetes cluster. For example, microsegmentation can prevent exfiltration of data from database workloads through lateral movement.
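
A minimal sketch of such a policy, assuming a CNI plugin that enforces NetworkPolicy (Calico, Cilium and others do) and using illustrative labels and namespace names: database pods accept traffic only from the API tier and may not open any outbound connections, which is what blocks a compromised pod from moving data sideways.

    # Sketch of pod-level microsegmentation with a NetworkPolicy, created via
    # the official kubernetes Python client. Labels and namespace are illustrative.
    from kubernetes import client, config

    config.load_kube_config()
    net = client.NetworkingV1Api()

    policy = {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "db-allow-api-only", "namespace": "demo"},
        "spec": {
            # Applies to the database pods...
            "podSelector": {"matchLabels": {"app": "db"}},
            "policyTypes": ["Ingress", "Egress"],
            # ...which may only receive traffic from API-tier pods on port 5432...
            "ingress": [
                {
                    "from": [{"podSelector": {"matchLabels": {"app": "api"}}}],
                    "ports": [{"protocol": "TCP", "port": 5432}],
                }
            ],
            # ...and may not open outbound connections at all, blocking a
            # compromised database pod from exfiltrating data laterally.
            "egress": [],
        },
    }
    net.create_namespaced_network_policy(namespace="demo", body=policy)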

For another example, consider a fork bomb, a denial-of-service attack in which a single pod spins up a huge number of processes that hog process IDs (PIDs) and starve legitimate processes of PIDs, potentially taking down the entire node. Kubernetes 1.14 allows the kubelet to be configured to limit the number of PIDs a given pod can consume, protecting the node from a single hostile pod.

  5. Use Kubernetes Audit Logging

Kubernetes audit logs are chronological records of each call made to the Kubernetes API server, the central touch point accessed by all users, automation and components in the cluster. These records are useful for investigating suspicious API requests. The dynamic nature of a Kubernetes cluster means that workloads are added, removed or modified at varying velocity. When it comes to database security, for example, it’s not a matter of an auditor focusing on access to a few specific workloads containing a database; it’s a matter of identifying which workloads contained a sensitive database at each particular instant in the audited time period, and which users and roles had access to those workloads. Audit logs are a window into current suspicious activity and a starting point for remediating security violations.
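
As an illustrative sketch (the log path, the allowlist of identities and the assumption that the API server writes JSON-formatted audit events at Metadata level or above are all hypothetical inputs), a few lines of Python are enough to surface Secret reads by identities you do not expect:

    # Sketch: scan a JSON-formatted Kubernetes audit log for Secret reads by
    # identities outside an expected allowlist. Path and allowlist are assumptions.
    import json

    ALLOWED_SECRET_READERS = {"system:kube-controller-manager", "admin@example.com"}

    def suspicious_secret_reads(log_path: str):
        with open(log_path) as f:
            for line in f:
                try:
                    event = json.loads(line)
                except json.JSONDecodeError:
                    continue
                # Each request is logged at several stages; keep only the final one.
                if event.get("stage") != "ResponseComplete":
                    continue
                obj = event.get("objectRef") or {}
                if obj.get("resource") != "secrets":
                    continue
                if event.get("verb") not in ("get", "list", "watch"):
                    continue
                user = event.get("user", {}).get("username", "<unknown>")
                if user not in ALLOWED_SECRET_READERS:
                    yield (event.get("stageTimestamp"), user, event.get("verb"),
                           obj.get("namespace"), obj.get("name"))

    for ts, user, verb, ns, name in suspicious_secret_reads("/var/log/kubernetes/audit.log"):
        print(f"{ts}: {user} performed {verb} on secret {ns}/{name}")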

Gadi Naor, Co-Founder and CTO, Alcide.
