Mastering Kubernetes Compute Resource Management: Problems and Solutions
Managing Kubernetes compute resources is a critical task that often presents significant challenges for organizations. This comprehensive guide will explore why it's hard to manage Kubernetes compute resources, detailing the common problems encountered, from overprovisioning and underutilization to resource contention and OOMKills. We will then present effective solutions and practical strategies to optimize your cluster's performance, cost-efficiency, and stability.
Table of Contents
- Understanding Kubernetes Compute Resources
- The Inherent Challenges of Kubernetes Resource Management
- Common Problems in Kubernetes Compute Resource Management
- Effective Solutions for Kubernetes Compute Resource Management
- Frequently Asked Questions (FAQ)
- Further Reading
- Conclusion
Understanding Kubernetes Compute Resources
Kubernetes orchestrates applications by scheduling them onto nodes, which provide the underlying compute resources. These primarily include CPU and memory. CPU is measured in "cores" or "millicores" (e.g., 1000m = 1 core), while memory is measured in bytes (e.g., MiB, GiB).
To effectively manage these resources, Kubernetes relies on two key concepts: requests and limits. A container's request specifies the minimum amount of CPU and memory it needs and is guaranteed. A container's limit defines the maximum amount of CPU and memory it can consume.
Setting appropriate requests and limits is foundational. Requests are used by the Kubernetes scheduler to decide which node can host a pod, ensuring it has adequate resources. Limits act as guardrails, preventing a runaway container from consuming all node resources and impacting other workloads.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  - name: my-container
    image: nginx
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
```
The Inherent Challenges of Kubernetes Resource Management
The dynamic and distributed nature of Kubernetes makes managing compute resources inherently complex. Workloads can be highly variable, scaling up and down based on demand. Clusters often host diverse applications, each with unique resource profiles and performance requirements.
Achieving a balance between performance, cost-efficiency, and system stability is a constant tightrope walk. Over-allocating resources wastes money, while under-allocating leads to poor application performance and instability. This challenge is further amplified in multi-tenant environments where different teams or applications share the same underlying infrastructure.
Common Problems in Kubernetes Compute Resource Management
Several recurring issues highlight why it's hard to manage Kubernetes compute resources effectively without proper strategies and tools.
Overprovisioning and Underutilization
One of the most common problems is allocating more resources than an application actually needs, known as overprovisioning. This leads directly to underutilization of expensive infrastructure. Determining the exact resource requirements for an application is challenging, and teams often err on the side of caution to avoid performance issues.
This problem translates directly into wasted cloud spending. Paying for CPU and memory that sits idle significantly inflates operational costs. Overprovisioning also reduces the density of workloads on nodes, meaning fewer applications can run on a single machine, requiring more nodes overall.
Resource Contention and OOMKills
Conversely, under-provisioning or inadequate resource limits can lead to severe issues. When multiple pods on a single node compete for limited CPU or memory, resource contention occurs. This can cause applications to slow down, become unresponsive, or even crash.
Specifically for memory, a container that exceeds its memory limit is terminated by the Linux kernel's OOM killer, an event known as an Out-Of-Memory Kill (OOMKill). The kubelet additionally evicts pods when the node itself runs low on memory. While these mechanisms protect node stability, OOMKills cause application downtime and can cause data loss, making them a critical stability concern.
Lack of Visibility and Monitoring
Without robust monitoring and observability, understanding how resources are being consumed across the cluster is nearly impossible. Teams struggle to identify bottlenecks, diagnose performance problems, or even know if their requests and limits are accurate. This lack of insight prevents proactive optimization.
Default Kubernetes metrics may provide some basic data, but a comprehensive solution requires dedicated tools. The absence of historical data and granular per-pod/container metrics makes it difficult to make data-driven decisions about resource allocation and scaling.
Manual Scaling and Inefficient Operations
Manually adjusting pod replicas or node counts in response to changing demand is time-consuming, error-prone, and reactive. It often leads to either over-scaling (wasting resources) or under-scaling (causing performance degradation) as human operators struggle to keep pace with dynamic workloads.
This operational overhead diverts engineering time from development tasks. Moreover, manual intervention often means reacting to an issue after it has already impacted users, rather than proactively maintaining optimal performance and cost.
Cost Management
Kubernetes compute resources directly translate into infrastructure costs, especially in cloud environments. Unoptimized resource allocation leads to significant and often unexpected expenses. Without clear visibility into resource consumption and its associated cost, it's challenging to justify infrastructure spend or implement cost-saving measures.
Attributing costs to specific teams, applications, or namespaces also becomes difficult. This lack of cost attribution hinders financial accountability and makes it harder to implement FinOps practices within the organization.
Effective Solutions for Kubernetes Compute Resource Management
Addressing these challenges requires a multi-faceted approach, combining Kubernetes-native features with best practices and specialized tools. Here are the key solutions to help you efficiently manage Kubernetes compute resources.
Implementing Resource Requests and Limits
The fundamental solution is to consistently define resource requests and limits for all containers. This practice forms the basis of Kubernetes' Quality of Service (QoS) classes and enables the scheduler to make intelligent placement decisions. Accurate requests guarantee resources, while limits prevent resource monopolization.
Start by setting conservative limits and monitoring actual usage. Gradually fine-tune these values based on observed performance and application profiling. This iterative process is crucial for effective resource allocation. Referencing the YAML example above is a good starting point.
Leveraging Kubernetes Autoscaling
Automating scaling decisions is vital for dynamic environments. Kubernetes offers powerful autoscaling capabilities:
- Horizontal Pod Autoscaler (HPA): Scales the number of pod replicas up or down based on CPU utilization, memory utilization, or custom metrics. HPA ensures applications can handle varying loads efficiently.
- Vertical Pod Autoscaler (VPA): Adjusts the CPU and memory requests and limits for individual containers, providing recommendations or automatically applying optimal values. VPA is excellent for rightsizing existing workloads.
- Cluster Autoscaler: Scales the number of nodes in your cluster. If pods are pending due to insufficient resources, it adds nodes. If nodes are underutilized, it removes them.
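As a sketch, an HPA targeting average CPU utilization might look like the following (the Deployment name `my-app` and the thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app               # illustrative target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above ~70% of requested CPU
```

Note that utilization here is measured against the pods' CPU *requests*, which is another reason accurate requests matter.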
Robust Monitoring and Alerting
Visibility is paramount. Implement a comprehensive monitoring stack using tools like Prometheus (for collecting metrics) and Grafana (for visualization). Collect metrics for CPU usage, memory usage, network I/O, disk I/O, and pod restarts at the cluster, node, pod, and container levels.
Configure alerts for critical thresholds, such as high CPU throttling, sustained high memory usage, OOMKills, or low available node resources. Proactive alerting allows teams to respond to issues before they impact users or costs spiral out of control.
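For illustration, two Prometheus alerting rules along these lines might look like this (a sketch, assuming kube-state-metrics and cAdvisor metrics are being scraped; the thresholds are arbitrary starting points):

```yaml
groups:
- name: kubernetes-resources
  rules:
  - alert: ContainerOOMKilled
    # kube-state-metrics reports the last termination reason per container
    expr: |
      kube_pod_container_status_last_terminated_reason{reason="OOMKilled"} == 1
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "Container {{ $labels.container }} in {{ $labels.namespace }}/{{ $labels.pod }} was recently OOMKilled"
  - alert: HighCPUThrottling
    # fraction of CFS periods in which the pod was throttled
    expr: |
      sum(rate(container_cpu_cfs_throttled_periods_total[5m])) by (namespace, pod)
        / sum(rate(container_cpu_cfs_periods_total[5m])) by (namespace, pod) > 0.25
    for: 15m
    labels:
      severity: warning
    annotations:
      summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is CPU-throttled over 25% of the time"
```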
Adopting Resource Quotas and Limit Ranges
In multi-tenant or large clusters, Resource Quotas and Limit Ranges are indispensable. Resource Quotas set hard limits on the total resources (CPU, memory, number of pods, etc.) that can be consumed within a namespace. This prevents any single team or application from monopolizing cluster resources.
Limit Ranges automatically apply default requests and limits to containers within a namespace if they are not explicitly set. They can also enforce minimum or maximum values for requests/limits, promoting consistency and preventing extreme misconfigurations.
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-dev-quota
  namespace: dev
spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: "8Gi"
    limits.cpu: "8"
    limits.memory: "16Gi"
```
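A companion LimitRange can supply defaults and bounds for containers in the same namespace; a minimal sketch (the values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limit-range
  namespace: dev
spec:
  limits:
  - type: Container
    default:             # applied as limits when none are set
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:      # applied as requests when none are set
      cpu: "100m"
      memory: "128Mi"
    max:                 # hard ceiling for any single container
      cpu: "2"
      memory: "2Gi"
```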
FinOps and Cost Optimization Strategies
Integrating FinOps principles helps align engineering and finance goals. This involves continuous monitoring of cloud spend, cost allocation, and optimization efforts. Strategies include:
- Rightsizing: Continuously adjusting resource requests and limits to match actual usage.
- Scheduling: Utilizing node selectors, taints/tolerations, and affinity rules to place workloads on appropriate nodes (e.g., cheaper spot instances for fault-tolerant batch jobs).
- Autoscaling: As mentioned, dynamic scaling reduces idle resources.
- Cost Allocation Tools: Using tools that can break down Kubernetes costs by namespace, label, or team.
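As a concrete sketch of the scheduling strategy, a fault-tolerant batch job can be steered onto spot-instance nodes with a node selector and a matching toleration. The label key `node-type: spot` and the `spot` taint below are assumptions — how spot nodes are labeled and tainted varies by cloud provider and cluster setup:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-report
spec:
  template:
    spec:
      restartPolicy: OnFailure
      nodeSelector:
        node-type: spot          # hypothetical label on spot-instance nodes
      tolerations:
      - key: "spot"              # hypothetical taint applied to spot nodes
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: report
        image: my-batch-image    # illustrative image
        resources:
          requests:
            cpu: "500m"
            memory: "512Mi"
          limits:
            cpu: "1"
            memory: "1Gi"
```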
Regular Audits and Optimization
Resource management is not a one-time task; it requires continuous auditing and optimization. Regularly review resource usage data, analyze trends, and identify opportunities for improvement. This might involve:
- Identifying over-provisioned applications by checking actual usage against configured limits.
- Reviewing OOMKill events and adjusting memory limits.
- Analyzing throttling events for CPU-bound applications and increasing CPU limits/requests.
- Updating application code to be more resource-efficient.
Frequently Asked Questions (FAQ)
Q: What are Kubernetes compute resources?
A: Kubernetes compute resources primarily refer to CPU and memory that pods request and consume on nodes. They are fundamental for allocating and scheduling workloads efficiently.
Q: Why is managing Kubernetes compute resources important?
A: Effective management ensures applications perform reliably, prevents node instability, optimizes infrastructure costs, and maximizes cluster utilization.
Q: What is a CPU request in Kubernetes?
A: A CPU request is the minimum amount of CPU guaranteed to a container. The scheduler uses this value to decide which node can accommodate the pod.
Q: What is a CPU limit in Kubernetes?
A: A CPU limit is the maximum amount of CPU a container can use. If a container tries to exceed its limit, it will be throttled, slowing down its processes.
Q: What is a memory request in Kubernetes?
A: A memory request is the minimum amount of memory guaranteed to a container. The scheduler uses this value for placement, ensuring the node has enough free memory.
Q: What is a memory limit in Kubernetes?
A: A memory limit is the maximum amount of memory a container can use. Exceeding this limit causes the container to be terminated by the kernel's OOM killer (an OOMKill).
Q: What happens if I don't set requests and limits?
A: If not set, containers inherit default limits from a LimitRange or have no guaranteed resources, potentially leading to instability, contention, or starvation on the node.
Q: What is an OOMKill?
A: An Out-Of-Memory Kill occurs when a container attempts to use more memory than its specified limit. The Linux kernel's OOM killer terminates the process to protect the node from memory exhaustion; Kubernetes then restarts the container according to the pod's restart policy.
Q: How can I prevent OOMKills?
A: Prevent OOMKills by setting appropriate memory limits based on application profiling, monitoring memory usage closely, and potentially using a Vertical Pod Autoscaler (VPA) for dynamic adjustment.
Q: What are Kubernetes QoS classes?
A: Quality of Service (QoS) classes (Guaranteed, Burstable, BestEffort) determine how pods are treated by the scheduler and Kubelet during resource contention, based on their requests and limits.
Q: What is the "Guaranteed" QoS class?
A: A pod gets the "Guaranteed" QoS class if all its containers have equal CPU requests and limits, and equal memory requests and limits. These pods have the highest priority and stability.
Q: What is the "Burstable" QoS class?
A: A pod gets the "Burstable" QoS class when it does not qualify for "Guaranteed" but at least one of its containers has a CPU or memory request or limit set. These pods can burst above their requests but may be throttled or evicted under resource pressure.
Q: What is the "BestEffort" QoS class?
A: A pod gets the "BestEffort" QoS class if no container within the pod specifies any CPU or memory requests or limits. These pods have the lowest priority and are the first to be evicted.
Q: Why is overprovisioning a problem in Kubernetes?
A: Overprovisioning leads to wasted resources, increased cloud costs, and inefficient cluster utilization, as allocated resources remain unused, reducing overall workload density.
Q: Why is underutilization a problem in Kubernetes?
A: Underutilization means your infrastructure isn't working at its full potential, leading to higher operational costs relative to actual workload output and potentially slower performance due to suboptimal scheduling.
Q: What is resource contention?
A: Resource contention occurs when multiple pods or containers on the same node compete for a limited pool of CPU or memory, leading to performance degradation for one or more workloads.
Q: How does Kubernetes handle CPU throttling?
A: When a container exceeds its CPU limit, the Linux kernel's cgroups feature actively throttles its CPU usage, slowing down its processes without terminating them, to enforce the limit.
Q: What is the Horizontal Pod Autoscaler (HPA)?
A: HPA automatically scales the number of pod replicas in a Deployment or ReplicaSet based on observed CPU utilization, memory utilization, or custom metrics, reacting to demand changes.
Q: What is the Vertical Pod Autoscaler (VPA)?
A: VPA automatically adjusts the CPU and memory requests/limits for containers in a pod, recommending optimal values or applying them directly, to improve resource utilization and prevent OOMKills/throttling.
Q: What is the Cluster Autoscaler?
A: The Cluster Autoscaler automatically adjusts the number of nodes in your Kubernetes cluster. It scales nodes up when pods are pending due to insufficient resources and scales them down when nodes are underutilized.
Q: Can HPA and VPA be used together?
A: While technically possible, using HPA and VPA on the same set of pods for CPU/memory is generally discouraged, as they can conflict. Running VPA with `updateMode: "Off"` (recommendation-only) avoids direct conflict.
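A recommendation-only VPA looks like the following sketch (this assumes the VPA add-on is installed in the cluster; `my-app` is an illustrative Deployment name):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Off"   # compute recommendations only; never evict or mutate pods
```

The recommendations then appear in the object's status (e.g. via `kubectl describe vpa my-app-vpa`) and can be applied manually.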
Q: What are Resource Quotas?
A: Resource Quotas are Kubernetes objects that limit the total aggregate resource consumption (e.g., CPU, memory, number of pods) within a given namespace, ensuring fair sharing among teams.
Q: What are Limit Ranges?
A: Limit Ranges are Kubernetes objects that constrain resource requests and limits for pods and containers within a namespace, applying default values if none are specified and enforcing min/max boundaries.
Q: How do I monitor resource usage in Kubernetes?
A: Use monitoring tools like Prometheus and Grafana, along with `kubectl top`, `cAdvisor` (integrated into Kubelet), and the Kubernetes Dashboard to visualize CPU, memory, and network metrics.
Q: What metrics are crucial for Kubernetes resource monitoring?
A: Key metrics include CPU utilization (usage, throttling), memory usage (working set, RSS, OOMKills), network I/O, disk I/O, and pod restarts, collected at various levels of granularity.
Q: How can FinOps help with Kubernetes resource management?
A: FinOps combines financial accountability with cloud operations to drive cost efficiency. It encourages right-sizing, accurate cost allocation, and continuous optimization based on financial goals and business value.
Q: What is "right-sizing" in Kubernetes?
A: Right-sizing involves accurately determining and setting the optimal CPU and memory requests and limits for applications to match their actual needs, avoiding both waste and contention.
Q: How do I determine the right resource requests and limits?
A: Start by profiling your application's resource usage under typical and peak loads, use monitoring data, observe historical trends, and iterate with VPA recommendations to refine values.
Q: What is `kube-state-metrics`?
A: `kube-state-metrics` is an add-on that listens to the Kubernetes API server and generates metrics about the state of Kubernetes objects (e.g., pod phases, resource requests) for collection by Prometheus.
Q: What is `cAdvisor`?
A: `cAdvisor` (Container Advisor) is an open-source container monitoring tool, integrated into the Kubelet, that collects resource usage and performance metrics from running containers on a node.
Q: How can I debug a pod experiencing high CPU usage?
A: Use `kubectl top pod`, `kubectl describe pod`, check container logs for infinite loops or inefficient code, and leverage Prometheus metrics for historical usage and throttling data.
Q: How can I debug a pod experiencing OOMKills?
A: Check pod events (`kubectl describe pod`), review container logs for memory leaks, examine memory usage metrics from Prometheus/Grafana, and cautiously increase memory limits after analysis.
Q: What is the impact of scheduling pods without requests?
A: Pods without requests are treated as BestEffort, meaning they won't get guaranteed resources and are the first to be evicted during node resource pressure, leading to instability.
Q: How can resource management affect multi-tenant clusters?
A: In multi-tenant clusters, strict resource quotas, limit ranges, and proper namespace isolation are crucial to prevent one tenant from monopolizing resources and ensure fair sharing.
Q: What are `Guaranteed`, `Burstable`, and `BestEffort` pods' eviction priority?
A: `BestEffort` pods are evicted first, then `Burstable` pods if they exceed their requests, and `Guaranteed` pods are evicted last, typically only if critical system processes need resources.
Q: How can I optimize resource usage for DaemonSets?
A: DaemonSets run on every node; optimize their resource usage by setting tight requests/limits and ensuring they are as lightweight as possible, as their footprint impacts all nodes.
Q: Are there tools to help with rightsizing recommendations?
A: Yes. VPA (Vertical Pod Autoscaler) is a Kubernetes-native tool that produces rightsizing recommendations. Commercial platforms like Kubecost and Datadog, as well as the open-source Goldilocks project, also offer rightsizing advice.
Q: How do I manage resources for StatefulSets?
A: StatefulSets require careful resource management due to persistent storage. Ensure sufficient compute resources for performance and consider persistent volume claims with appropriate storage classes for I/O.
Q: What role do `priorityClass` objects play in resource management?
A: `priorityClass` objects allow you to assign a priority to pods, influencing the scheduler's decisions. High-priority pods can preempt or evict lower-priority pods if there are insufficient node resources.
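A minimal PriorityClass sketch (the name and value are illustrative):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-services
value: 1000000                 # higher value = higher scheduling priority
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "For latency-sensitive production services."
```

Pods opt in by setting `priorityClassName: critical-services` in their spec.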
Q: How does network resource management fit into this?
A: While CPU and memory are primary, network bandwidth and I/O can also be a bottleneck. Monitor network metrics for pods and nodes, and use network policies for traffic shaping and isolation where needed.
Q: Can I set default resource limits for a namespace?
A: Yes, by creating a `LimitRange` object in a namespace, you can specify default CPU and memory requests/limits for containers that don't explicitly define them in their pod specifications.
Q: What are Pod Disruption Budgets (PDBs)?
A: PDBs ensure that a minimum number or percentage of pods from a given application remain running during voluntary disruptions (like node maintenance), impacting resource availability planning.
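A sketch of a PDB keeping a floor of replicas during node drains (the name and label are illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2        # keep at least 2 replicas up during voluntary disruptions
  selector:
    matchLabels:
      app: my-app        # illustrative pod label
```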
Q: How do node taints and tolerations relate to resource management?
A: Taints and tolerations help control which pods can be scheduled on specific nodes, allowing for dedicated nodes for resource-intensive workloads or specific hardware, improving isolation and predictable performance.
Q: What is a "noisy neighbor" problem in Kubernetes?
A: A noisy neighbor problem occurs when one application or pod consumes an excessive amount of resources (CPU, memory, network I/O), negatively impacting the performance of other applications on the same node.
Q: How do container images affect resource usage?
A: Large or inefficient container images can consume more memory and disk space, leading to slower startup times, increased resource requirements for caching, and potentially higher storage costs.
Q: What is the impact of excessive logging on resource usage?
A: Excessive logging can consume significant CPU, memory, and disk I/O, especially when logs are collected, processed, and shipped to external systems, potentially impacting application performance.
Q: How do I manage ephemeral storage resources?
A: Ephemeral storage limits can be set in resource definitions, controlling the amount of temporary disk space a container can use, preventing a single pod from filling a node's local disk.
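Ephemeral storage uses the same requests/limits shape as CPU and memory; a container-level snippet (values are illustrative starting points):

```yaml
# Container spec fragment: capping scratch disk usage
resources:
  requests:
    ephemeral-storage: "1Gi"
  limits:
    ephemeral-storage: "2Gi"   # the pod is evicted if its usage exceeds this
```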
Q: Should I set CPU requests lower than limits?
A: Setting CPU requests lower than limits (resulting in Burstable QoS) allows pods to burst beyond their requested CPU if resources are available, but it can lead to throttling if the node becomes highly contended.
Q: What are common pitfalls in Kubernetes resource management?
A: Common pitfalls include not setting any requests/limits, setting overly generous limits, ignoring monitoring data, manual scaling only, and failing to plan for peak loads or failure scenarios.
Q: How does Kubernetes handle resource allocation on a node with mixed workloads?
A: Kubernetes uses requests and limits, along with QoS classes, to prioritize and allocate resources. It tries to fulfill requests, limits bursts, and evicts lower-priority pods under severe pressure to maintain node stability.
Q: What is a "resource quota exceeded" error?
A: This error occurs when a pod or other resource attempts to consume more CPU, memory, or other resources than are permitted by a ResourceQuota defined for its namespace.
Q: How can application architecture impact resource management?
A: Monolithic applications can be harder to right-size and scale than microservices. Well-designed, decoupled microservices with clear resource needs are easier to manage and optimize individually.
Q: What is the purpose of `kubelet`'s eviction policy?
A: The `kubelet`'s eviction policy is to proactively evict pods from a node when it's running low on critical resources (like memory, disk space) to prevent node instability or unresponsiveness.
Q: How does `pod anti-affinity` contribute to resource management?
A: Pod anti-affinity prevents specific pods from being scheduled on the same node, which can help distribute resource-intensive workloads, reducing the chance of resource contention on a single node.
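A pod-template fragment sketching this spreading behavior (the `app: my-app` label is illustrative):

```yaml
# Pod template snippet: avoid co-locating replicas of the same app
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app
      topologyKey: "kubernetes.io/hostname"   # at most one matching pod per node
```

Using `preferredDuringSchedulingIgnoredDuringExecution` instead makes the spreading a soft preference rather than a hard requirement.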
Q: What are effective strategies for cost reduction related to Kubernetes compute?
A: Strategies include consistent rightsizing, leveraging autoscaling (HPA, VPA, Cluster Autoscaler), using spot instances for interruptible workloads, optimizing node types, and enforcing resource quotas.
Q: What is a resource "burstable" capability?
A: Burstable capability means a container is guaranteed its requested resources but can temporarily consume more than its request (up to its limit) if those resources are available on the node.
Q: How do namespaces help in managing resources?
A: Namespaces provide a logical isolation mechanism. They allow you to apply resource quotas and limit ranges per team or application, effectively segmenting and controlling resource consumption.
Q: What is the role of Prometheus Adapter in custom metric autoscaling?
A: The Prometheus Adapter allows HPA to consume custom metrics stored in Prometheus, enabling autoscaling based on application-specific metrics beyond CPU or memory utilization.
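With the adapter in place, an HPA can target a per-pod custom metric; a sketch (the Deployment name and the `queue_depth` metric are hypothetical examples of something the adapter would expose):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker          # illustrative
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: queue_depth        # hypothetical custom metric served by the adapter
      target:
        type: AverageValue
        averageValue: "100"      # target ~100 queued items per replica
```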
Q: How often should resource requests and limits be reviewed?
A: Resource requests and limits should be reviewed regularly, ideally after significant application updates, observed performance changes, or at least quarterly, using monitoring data to guide adjustments.
Q: What's the difference between node capacity and allocatable resources?
A: Node capacity is the total resources (CPU, memory) of a node. Allocatable resources are the amount available for pods, calculated by subtracting resources reserved for the Kubelet and OS from the node capacity.
Q: How can I identify idle or underutilized resources in my cluster?
A: Utilize monitoring tools to compare configured resource requests/limits against actual average and peak usage. Look for nodes or pods with consistently low CPU and memory consumption over time.
Q: What is the impact of a high number of small pods on resource management?
A: A high number of small pods can lead to increased overhead for the control plane (scheduler, API server) and Kubelet, potentially reducing overall node efficiency due to per-pod overhead.
Q: Can I use different resource requests/limits for different environments (dev, staging, prod)?
A: Yes, it's highly recommended. Development environments often need fewer resources than production. Use separate namespaces, configuration files, or GitOps practices to manage these differences.
Q: What are the benefits of using network policies for resource management?
A: While primarily for security, network policies can indirectly aid by limiting unnecessary network traffic between pods, potentially reducing network I/O overhead and freeing up some compute resources.
Q: How does Kubernetes handle CPU overcommitment?
A: Kubernetes allows CPU overcommitment: the sum of limits on a node may exceed its capacity as long as the sum of requests does not. Pods can burst beyond their requests when spare CPU is available; under contention, CPU time is shared in proportion to requests, and any container that hits its own limit is throttled.
Q: What are common indicators of resource bottlenecks?
A: Common indicators include high CPU throttling, frequent OOMKills, high average CPU/memory usage, slow application response times, increased latency, and a large number of pending pods due to insufficient node resources.
Q: How can I enforce resource management policies across my organization?
A: Enforce policies using Admission Controllers (like Gatekeeper or Kyverno) to validate or mutate pod specifications based on predefined rules for resource requests/limits and other configurations.
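As one example of this pattern, a Kyverno policy rejecting pods whose containers omit requests or limits might be sketched like this (based on Kyverno's validate-rule style; verify the exact schema against your installed Kyverno version):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits
spec:
  validationFailureAction: Enforce   # reject non-compliant pods; use Audit to only report
  rules:
  - name: check-container-resources
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "CPU and memory requests and limits are required."
      pattern:
        spec:
          containers:
          - resources:
              requests:
                cpu: "?*"        # "?*" means any non-empty value
                memory: "?*"
              limits:
                cpu: "?*"
                memory: "?*"
```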
Q: What are the risks of aggressive resource scaling policies?
A: Aggressive scaling policies can lead to "thrashing" (rapid scale-up/down cycles), increased costs (too many nodes/pods), or instability if not properly configured with cooldown periods and adequate metrics.
Q: How can I account for sidecar containers in resource management?
A: Sidecar containers should have their own resource requests and limits defined. Their resource consumption adds to the total for the pod and must be factored into the overall resource planning for the application.
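A sketch of a pod where the sidecar has its own explicit budget (the images and values are illustrative); the scheduler sums the requests of both containers when placing the pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0            # illustrative
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
  - name: log-shipper            # sidecar gets its own requests/limits
    image: fluent-bit:latest     # illustrative
    resources:
      requests:
        cpu: "50m"
        memory: "64Mi"
      limits:
        cpu: "100m"
        memory: "128Mi"
```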
Q: What is the interplay between node autoscaling and pod autoscaling?
A: Pod autoscalers (HPA, VPA) manage application instances. Node autoscaler (Cluster Autoscaler) ensures there are enough nodes for those pods. They work together: HPA creates more pods, which may trigger the Cluster Autoscaler to add nodes.
Q: What is the importance of container images in resource management?
A: Using optimized, minimal container images reduces their footprint, leading to faster startup times, lower memory consumption, and less disk I/O, all contributing to more efficient resource usage.
Q: How can I manage storage I/O resources in Kubernetes?
A: Storage I/O is managed primarily through PersistentVolumeClaims and StorageClasses, which allow you to provision storage with specific performance characteristics (IOPS, throughput) appropriate for your workloads.
Q: What is the concept of "bursting" in cloud resource management?
A: Bursting refers to the ability of a workload or instance to temporarily consume more resources than its baseline allocation when demand spikes, often leveraging unused capacity. In Kubernetes, this is achieved by setting requests lower than limits.
Q: How do resource annotations differ from requests/limits?
A: Resource annotations are metadata on Kubernetes objects that can provide additional information for external tools or custom controllers, but they do not directly enforce resource allocation like requests and limits do.
Q: What are the benefits of continuous delivery for resource optimization?
A: Continuous delivery allows for frequent, small changes, making it easier to test and iterate on resource configurations. It supports continuous monitoring and optimization, quickly adjusting to performance insights.
Q: How does Kubernetes handle resource reservation for system components?
A: Kubernetes reserves a portion of node resources (CPU and memory) for system components like the Kubelet, container runtime, and operating system. These are defined in the Kubelet configuration to ensure node stability.
Further Reading
- Kubernetes Documentation: Managing Resources for Containers
- Kubernetes Documentation: Horizontal Pod Autoscaling
- Google Cloud Documentation: Vertical Pod Autoscaler (VPA) Overview
Conclusion
Managing Kubernetes compute resources is a complex but crucial aspect of operating efficient, stable, and cost-effective cloud-native applications. By understanding the challenges of overprovisioning, contention, and lack of visibility, and implementing robust solutions like resource requests and limits, autoscaling, comprehensive monitoring, and FinOps practices, organizations can overcome these hurdles. Continuous optimization, driven by data and guided by best practices, will ensure your Kubernetes clusters deliver optimal performance and value.