Gbuck12
2026-05-04
Technology

Kubernetes v1.36: Dynamically Scale Pod-Level Resources Without Restarts (Beta)

Kubernetes v1.36 graduates In-Place Pod-Level Resources Vertical Scaling to Beta, enabling dynamic resource pool adjustments without container restarts.

Kubernetes v1.36 brings a significant enhancement to resource management: the In-Place Pod-Level Resources Vertical Scaling feature has graduated to Beta, now enabled by default. This allows you to adjust the aggregate resource budget for a running pod—often without restarting containers. Previously, you could only scale individual container resources; now, you can modify the shared pool at the pod level, simplifying operations for complex pods with sidecars. This article answers common questions about the feature, its benefits, and how to use it safely.

What is new in Kubernetes v1.36 regarding pod-level resource scaling?

In v1.36, the InPlacePodLevelResourcesVerticalScaling feature gate is enabled by default, allowing you to update the .spec.resources field of a running pod to change the aggregate CPU and memory limits. Previously, this field was immutable after pod creation. Now, you can modify it without necessarily restarting containers, thanks to the In-Place Pod Vertical Scaling mechanism that first reached GA in v1.35 for container-level scaling. This builds on the pod-level resources feature that entered Beta in v1.34. The key advancement is that you can now resize the shared budget for containers that lack individual limits—ideal for pods with sidecars that need a collective pool.
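As a minimal sketch (pod and image names are illustrative), a pod declaring a shared budget at .spec.resources looks roughly like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                    # illustrative name
spec:
  resources:                           # pod-level budget shared by all containers
    limits:
      cpu: "2"
      memory: 1Gi
  containers:
  - name: main-app                     # no per-container limits:
    image: registry.example/app:latest #   draws from the pod-level pool
  - name: sidecar
    image: registry.example/log-agent:latest
```

With the feature gate enabled, the limits under spec.resources become the mutable aggregate that a resize operation adjusts.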


Why is pod-level in-place vertical scaling useful?

Managing resources for complex pods—such as those running a main application alongside logging or monitoring sidecars—can be tedious if each container requires individual limits. The pod-level resource model simplifies this by allowing containers to share a collective pool. In v1.36, you can adjust this aggregate boundary on the fly. For example, during peak demand, you can expand the shared CPU pool without recalculating per-container values. Containers without explicit limits automatically inherit the new pod-level boundaries, making scaling seamless. This reduces downtime and manual effort, especially in environments with dynamic workloads.

How does resource inheritance and resizePolicy work?

When you initiate a pod-level resize, the Kubelet treats it as a resize event for every container that inherits its limits from the pod-level budget. To determine whether a restart is needed, the Kubelet consults each container's resizePolicy, a list of {resourceName, restartPolicy} entries. If the entry for a resource (e.g., CPU) specifies restartPolicy: NotRequired, the Kubelet attempts a non-disruptive update via the Container Runtime Interface (CRI), adjusting the container's cgroup limits in place. If it specifies restartPolicy: RestartContainer, the container is restarted to apply the new boundaries safely. Note that resizePolicy is currently not supported at the pod level; the Kubelet always defers to individual container settings.
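In a manifest, per-container resize behavior is declared like this (a minimal sketch; the image name is a placeholder):

```yaml
spec:
  containers:
  - name: main-app
    image: registry.example/app:latest
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # resize CPU in place, no restart
    - resourceName: memory
      restartPolicy: RestartContainer # memory changes trigger a controlled restart
```

Splitting the policy per resource lets a container absorb CPU changes live while still restarting for memory changes that the application cannot handle dynamically.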

Can you show an example of scaling a shared resource pool?

Consider a pod named shared-pool-app with a pod-level CPU limit of 2 CPUs and no per-container limits. Both containers (main-app and sidecar) share this pool. To double CPU capacity to 4 CPUs, apply a patch using the resize subresource:

kubectl patch pod shared-pool-app --subresource resize --patch \
  '{"spec":{"resources":{"limits":{"cpu":"4"}}}}'

Because both containers have resizePolicy: [{resourceName: "cpu", restartPolicy: "NotRequired"}], the Kubelet updates cgroup limits dynamically without restarting them. The containers immediately benefit from the expanded pool, handling higher load transparently.
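For reference, the shared-pool-app pod described in this example could be defined roughly as follows (a sketch; the images are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pool-app
spec:
  resources:
    limits:
      cpu: "2"            # the shared pool the patch resizes to "4"
  containers:
  - name: main-app        # no per-container limits: shares the pod-level pool
    image: registry.example/app:latest
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
  - name: sidecar
    image: registry.example/log-agent:latest
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired
```

Because neither container declares its own CPU limit, both inherit whatever the pod-level pool currently allows.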

What node-level feasibility and safety checks does the Kubelet perform?

Applying a resize patch is only the first step. The Kubelet performs a sequence of checks to keep the node stable: it verifies that the requested resources actually fit on the node (a resize that would overcommit the node is not admitted), updates the pod's resource accounting, and then applies the changes to the cgroups of the affected containers. If a container requires a restart (per its resizePolicy), the Kubelet schedules a controlled restart. Because all changes flow through the resize subresource, the Kubelet can sequence the cgroup updates so that no container runs against stale or inconsistent limits. This prevents resource contention and keeps behavior predictable.
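The outcome of these checks is surfaced in the pod's status conditions. As an illustrative fragment (values are examples, not guaranteed output), a pod mid-resize might report:

```yaml
status:
  conditions:
  - type: PodResizeInProgress   # the Kubelet has accepted the resize and is
    status: "True"              #   actuating the new limits
  # If the node cannot satisfy the request yet, a PodResizePending condition
  # with a reason such as Deferred or Infeasible appears instead.
```

Polling these conditions (for example with kubectl get pod shared-pool-app -o yaml) is how you confirm whether a resize has completed, is still being applied, or was rejected.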

What are limitations of the pod-level in-place resize feature?

One key limitation is that resizePolicy is not yet supported at the pod level. You must define it for each individual container. Additionally, not all container runtimes support in-place updates; the Kubelet will fall back to restarting if the runtime doesn't allow dynamic cgroup changes. Also, the feature only applies to the limits field of .spec.resources; requests are not updated. Finally, pod-level resources are intended for pods where containers share a pool—if all containers have their own limits, the pod-level resizing has no effect. Despite these constraints, the feature significantly improves operational flexibility for many common pod configurations.