DevOps System Design: Top Interview Questions & Answers Guide
Top 50 Large-Scale System Design Interview Questions for DevOps Engineers
This comprehensive study guide is crafted to help DevOps engineers excel in system design interviews. We delve into the crucial concepts, practical examples, and common questions you'll encounter when discussing large-scale system design. Prepare to master the intricacies of building robust, scalable, and resilient systems from a DevOps perspective.
Understanding Foundational System Design Principles for DevOps
System design interviews for DevOps engineers often probe your understanding of core architectural principles. These concepts form the bedrock of any successful large-scale system. Grasping them is essential for designing efficient and maintainable infrastructure.
Key Principles Explained
Scalability: The ability of a system to handle increasing load by adding resources. This can be vertical (scaling up) or horizontal (scaling out).
Reliability: The probability that a system will perform its intended function without failure for a specified period. It's about minimizing downtime and errors.
Availability: The proportion of time a system is functional and accessible. Often expressed as a percentage (e.g., "four nines" for 99.99%); the sketch after this list shows how these targets translate into downtime budgets.
Fault Tolerance: The ability of a system to continue operating even when one or more components fail. Redundancy and graceful degradation are key.
Consistency: Ensuring that all clients see the same data, even with concurrent updates. Different models like strong, eventual, and causal consistency exist.
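To make the availability figures above concrete, here is a minimal Python sketch (assuming a 365-day year and availability expressed as a percentage) that converts a target into a yearly downtime budget:

```python
# Availability targets translated into downtime budgets.
# "N nines" availability caps total downtime per year.

SECONDS_PER_YEAR = 365 * 24 * 60 * 60

def downtime_budget(availability_pct: float) -> float:
    """Return the maximum allowed downtime per year, in seconds."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99, 99.999):
    minutes = downtime_budget(target) / 60
    print(f"{target}% availability -> {minutes:,.1f} minutes of downtime/year")

# 99.99% ("four nines") allows roughly 52.6 minutes of downtime per year.
```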
Example Interview Question: Designing a Highly Available System
"How would you design a highly available web application that serves millions of users globally?"
For such a question, a DevOps engineer would focus on infrastructure redundancy and automation. This involves using multiple availability zones or regions, load balancers, auto-scaling groups, and automated failover mechanisms. The goal is to eliminate single points of failure across all layers, from DNS to databases.
Action Item: Consider how each principle applies to your current projects. Can you identify areas where scalability or reliability could be improved?
Designing Scalable Infrastructure and CI/CD Pipelines
A significant portion of large-scale system design for DevOps involves architecting the infrastructure itself and the processes that deploy applications onto it. This includes leveraging modern cloud-native technologies and robust CI/CD pipelines to ensure rapid, reliable, and automated deployments.
Infrastructure as Code (IaC) and Container Orchestration
Modern scalable infrastructure relies heavily on IaC tools like Terraform or CloudFormation to provision and manage resources declaratively. Containerization with Docker and orchestration with Kubernetes are vital for microservices architectures, enabling efficient resource utilization and portability.
Example Interview Question: Designing a CI/CD Pipeline for Microservices
"Design a CI/CD pipeline for a microservices application deployed on Kubernetes."
Your answer should cover source code management (e.g., Git), automated builds and tests (unit, integration), container image creation and tagging, vulnerability scanning, deployment to staging environments, and eventually to production. Blue/green deployments or canary releases are crucial strategies for minimizing downtime and risk in production. Automation tools like Jenkins, GitLab CI, or GitHub Actions are central.
# Conceptual CI/CD Pipeline Stages
1. **Code Commit:** Developer pushes code to Git repository.
2. **Build Stage:**
* Trigger build (e.g., `mvn clean install` for Java, `npm install && npm test` for Node.js).
* Run unit tests.
* Create Docker image.
* Tag image with commit SHA/version.
* Push image to container registry (e.g., ECR, Docker Hub).
3. **Test Stage:**
* Deploy image to a staging Kubernetes environment.
* Run integration tests, end-to-end tests.
* Perform security scans (SAST/DAST).
4. **Release Stage:**
* Approve deployment (manual or automated based on test results).
* Deploy to production Kubernetes (e.g., using Helm charts or Kustomize).
* Implement blue/green or canary deployment strategy.
* Monitor application health post-deployment.
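As one hedged illustration of the build stage above, the following Python sketch tags a container image with the current commit SHA and pushes it via the Docker CLI. The registry URL and image name are hypothetical placeholders, not part of any prescribed setup:

```python
# Build-stage sketch: tag a container image with the current commit SHA
# and push it to a registry. REGISTRY and IMAGE are hypothetical placeholders.
import subprocess

REGISTRY = "registry.example.com/myteam"  # assumed registry; adjust for your setup
IMAGE = "orders-service"                  # assumed service name

def run(*cmd: str) -> str:
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout.strip()

def build_and_push() -> str:
    sha = run("git", "rev-parse", "--short", "HEAD")   # immutable, traceable tag
    tag = f"{REGISTRY}/{IMAGE}:{sha}"
    run("docker", "build", "-t", tag, ".")             # build from the local Dockerfile
    run("docker", "push", tag)                         # publish for the deploy stage
    return tag

if __name__ == "__main__":
    print("Pushed", build_and_push())
```

Tagging with the commit SHA (rather than `latest`) keeps every deployed artifact traceable back to a specific revision, which simplifies rollbacks.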
Action Item: Review your current CI/CD pipelines. Are they fully automated? How do they handle rollbacks and progressive deployments in a large-scale system?
Implementing Robust Monitoring, Logging, and Alerting Solutions
In any large-scale system design, comprehensive observability is non-negotiable. DevOps engineers are responsible for ensuring that systems are adequately monitored, logs are collected and analyzed, and alerts are triggered appropriately. This proactive approach helps identify and resolve issues before they impact users.
The Pillars of Observability
Metrics: Numerical data collected over time (e.g., CPU utilization, request latency). Tools like Prometheus, Grafana, and Datadog are common; see the instrumentation sketch after this list.
Logs: Structured or unstructured records of events that occurred within a system. Centralized logging solutions like ELK Stack (Elasticsearch, Logstash, Kibana) or Splunk are vital.
Traces: End-to-end paths of requests as they flow through distributed systems. Solutions like Jaeger or Zipkin help visualize service interactions.
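To ground the Metrics pillar, here is a minimal instrumentation sketch using the prometheus_client Python library (`pip install prometheus-client`). The metric names, label, endpoint path, and port are illustrative choices, not prescriptions:

```python
# Metrics pillar sketch: expose a request counter and latency histogram
# that a Prometheus server can scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("http_requests_total", "Total HTTP requests", ["path"])
LATENCY = Histogram("http_request_latency_seconds", "Request latency", ["path"])

def handle_request(path: str) -> None:
    REQUESTS.labels(path=path).inc()
    with LATENCY.labels(path=path).time():     # records elapsed time on exit
        time.sleep(random.uniform(0.01, 0.1))  # simulated work

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes :8000/metrics
    while True:
        handle_request("/checkout")
```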
Example Interview Question: Monitoring a Distributed Application
"Describe how you would set up a comprehensive monitoring and alerting system for a distributed microservices application."
You would describe a layered approach, collecting infrastructure metrics (VMs, containers), application metrics (request rates, error rates, latency), and business metrics. Logs from all services should be aggregated and searchable. Alerting rules should be defined based on critical thresholds and routed to the appropriate teams via tools like PagerDuty or Slack. Implementing dashboards for real-time visualization is also key.
Action Item: Explore different monitoring tools and understand their strengths and weaknesses. Think about how you would define service level objectives (SLOs) and service level indicators (SLIs) for your critical services.
Ensuring System Resilience and Disaster Recovery
Designing for failure is a core tenet of large-scale system design. DevOps engineers must build systems that can withstand various failures and recover gracefully. This includes strategies for backups, data recovery, and maintaining high availability even during catastrophic events.
Strategies for Resilience
Redundancy: Duplicating critical components (e.g., multiple instances, redundant power supplies).
Automated Failover: Automatically switching to a standby system or component when the primary fails.
Circuit Breakers & Retries: Design patterns to prevent cascading failures in microservices; see the retry sketch after this list.
Rate Limiting: Protecting services from overload by controlling incoming request rates.
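As a companion to the retries item above, here is a minimal Python sketch of retry with exponential backoff and jitter; `call_downstream` is a hypothetical stand-in for any flaky dependency:

```python
# Retry-with-exponential-backoff sketch (with jitter) for transient failures.
import random
import time

def retry(func, attempts: int = 5, base_delay: float = 0.2):
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # retry budget exhausted; surface the failure
            # Exponential backoff plus jitter avoids synchronized retry storms.
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

def call_downstream():
    if random.random() < 0.7:
        raise ConnectionError("transient failure")  # simulated flaky dependency
    return "ok"

print(retry(call_downstream))
```

Jitter matters at scale: without it, thousands of clients retrying in lockstep can re-overload the very service that just recovered.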
Example Interview Question: Outlining a Disaster Recovery Strategy
"Outline a disaster recovery strategy for a critical application with stringent RTO/RPO requirements."
This question requires discussing Recovery Point Objective (RPO) – the maximum acceptable amount of data loss – and Recovery Time Objective (RTO) – the maximum acceptable downtime. A robust strategy might involve cross-region replication for databases, frequent backups of application data, and automated deployment of infrastructure to a secondary region. Regular disaster recovery drills are essential to validate the plan.
Action Item: Evaluate your current disaster recovery plan. How frequently are backups tested? Are your RTO/RPO targets clearly defined and met in practice?
Security and Compliance in Large-Scale Systems
Security is not an afterthought in large-scale system design; it must be ingrained from the start. DevOps engineers play a critical role in implementing security best practices, managing secrets, ensuring network security, and adhering to compliance standards across the infrastructure and application lifecycle.
DevSecOps Principles
Shift Left Security: Integrating security practices early in the development lifecycle.
Secrets Management: Securely storing and distributing sensitive information (e.g., API keys, database credentials) using tools like HashiCorp Vault or AWS Secrets Manager; see the Vault sketch after this list.
Compliance Automation: Using IaC to enforce security policies and automate compliance checks.
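As a hedged illustration of secrets management, this sketch reads a secret from HashiCorp Vault with the hvac Python client (`pip install hvac`). The Vault address, token source, secret path, and key name are all assumptions for illustration:

```python
# Secrets-management sketch with the hvac client for HashiCorp Vault.
# Assumes a KV v2 engine mounted at "secret/" and a token in the environment.
import os

import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ["VAULT_TOKEN"],  # never hardcode credentials
)

# Read a key/value secret; path and key name are hypothetical.
response = client.secrets.kv.v2.read_secret_version(path="myapp/db")
db_password = response["data"]["data"]["password"]
```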
Example Interview Question: Securing a Cloud-Native Application
"How would you secure a large-scale cloud-native application running on Kubernetes in a public cloud?"
Your answer should cover multiple layers: network security (VPC, network policies, ingress controllers), identity and access management (IAM roles for services, least privilege), secrets management, container image security (scanning, trusted registries), runtime security (network segmentation, security context), and audit logging. Implementing a Web Application Firewall (WAF) and regular security audits are also important aspects.
# Key Security Practices for Cloud-Native
1. **IAM Role-Based Access:** Grant minimal necessary permissions.
2. **Network Segmentation:** Use Kubernetes Network Policies.
3. **Secrets Management:** Utilize Kubernetes Secrets, external vaults.
4. **Image Scanning:** Integrate vulnerability scanning in CI/CD.
5. **Runtime Security:** Implement security context for pods, syscall auditing.
6. **Audit Logging:** Centralize and monitor all security-relevant logs.
Action Item: Review your organization's security posture. Are sensitive credentials hardcoded? Are regular security audits performed on your infrastructure and applications?
Frequently Asked Questions
Here are some common questions prospective DevOps engineers ask about system design interviews and preparation.
Q: What is the primary difference between system design for a DevOps Engineer vs. a Software Engineer?
A: A DevOps engineer's system design focuses more on the operational aspects: scalability of infrastructure, CI/CD, monitoring, logging, reliability, disaster recovery, and security from an infrastructure perspective. A software engineer often focuses on application architecture, data models, APIs, and algorithms.
Q: How do I prepare for a large-scale system design interview?
A: Start with foundational concepts (CAP theorem, ACID vs. BASE). Practice drawing architectures, discussing trade-offs, and explaining how you'd implement observability, CI/CD, and resilience. Read case studies of large systems (e.g., Netflix, Google).
Q: What tools are essential for system design discussions?
A: While drawing tools are helpful, the most important "tools" are your understanding of concepts and ability to communicate. Be familiar with cloud services (AWS, Azure, GCP), containerization (Docker), orchestration (Kubernetes), IaC (Terraform), and monitoring (Prometheus, Grafana).
Q: Should I memorize specific architectures for system design interviews?
A: No, focus on understanding the underlying principles and trade-offs. Interviewers want to see your problem-solving process, not just memorized solutions. Be able to justify your design choices based on requirements and constraints.
Q: How can I demonstrate my DevOps mindset during a system design interview?
A: Emphasize automation, reliability, observability, and security in your proposed solutions. Discuss how you'd ensure continuous deployment, monitor system health, and build for self-healing capabilities. Highlight the importance of collaboration between development and operations teams.
Rapid-Fire Review: 50 Large-Scale System Design Questions and Answers
To round out your preparation, work through these fifty concise question-and-answer pairs covering the concepts most frequently probed in DevOps system design interviews.
1. What is large-scale system design?
Large-scale system design focuses on building distributed, fault-tolerant, highly available systems that can handle massive traffic and data. It involves scalability, redundancy, load balancing, caching, partitioning, and observability to ensure stable performance under heavy loads.
2. What is horizontal scaling?
Horizontal scaling means adding more servers or nodes to distribute load across multiple instances. It improves availability and fault tolerance, supports auto-scaling, and prevents single-node bottlenecks in large distributed systems and cloud environments.
3. What is vertical scaling?
Vertical scaling increases the resources (CPU, RAM, storage) of an existing server to improve performance. It's simpler but limited by hardware capacity and carries downtime risk during resizing, making it less suitable for massive distributed systems than horizontal scaling.
4. What is load balancing?
Load balancing distributes incoming traffic across multiple servers to prevent overloading and ensure reliability. Tools like NGINX, HAProxy, AWS ELB, and GCP Load Balancer help route traffic efficiently using algorithms like round-robin and least connections.
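A minimal round-robin sketch in Python (the backend addresses are illustrative; production load balancers add health checks and connection tracking):

```python
# Round-robin backend selection: each request goes to the next server in turn.
import itertools

backends = itertools.cycle(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])

def pick_backend() -> str:
    return next(backends)

print([pick_backend() for _ in range(4)])  # wraps back to the first backend
```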
5. What is a CDN?
A CDN (Content Delivery Network) caches static content such as images, videos, and scripts across globally distributed edge servers. This reduces latency, improves user experience, decreases origin server load, and provides DDoS resilience and faster content delivery.
6. What is sharding?
Sharding divides large datasets into smaller horizontal partitions across multiple database servers. Each shard stores a subset of data, improving performance, scalability, and read/write throughput while reducing bottlenecks in massive applications.
7. What is caching?
Caching stores frequently accessed data in fast memory systems like Redis, Memcached, or CDN edge nodes. It reduces database load, improves response time, enhances scalability, and supports high-throughput architectures in large-scale systems.
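A minimal cache-aside sketch with the redis-py client (`pip install redis`); the host, key-naming scheme, TTL, and `fetch_user_from_db` helper are illustrative assumptions:

```python
# Cache-aside pattern: check the cache first, fall back to the source of
# truth on a miss, then populate the cache with a TTL to bound staleness.
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: int) -> dict:
    return {"id": user_id, "name": "example"}  # stand-in for a real query

def get_user(user_id: int, ttl: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: skip the database
    user = fetch_user_from_db(user_id)     # cache miss: hit the source of truth
    r.setex(key, ttl, json.dumps(user))    # populate with an expiry
    return user
```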
8. What is a message queue?
A message queue like Kafka, RabbitMQ, or SQS enables asynchronous communication between services. It decouples components, improves reliability, smooths traffic spikes, and ensures messages persist even when consumers are temporarily offline.
9. What is microservices architecture?
Microservices architecture breaks a system into small, independent services that communicate via APIs or messaging. It improves scalability, deployment agility, and fault isolation but requires strong observability, CI/CD, and distributed system management.
10. What is rate limiting?
Rate limiting controls how many requests a client can make in a given time window. It prevents abuse, protects backend systems, avoids DDoS-like traffic spikes, and ensures fair resource usage. Tools include NGINX, API gateways, and cloud WAF policies.
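A single-node token-bucket sketch in Python; the rate and capacity are illustrative, and distributed deployments typically keep this state in Redis (see Q40 below):

```python
# Token bucket: tokens refill at a steady rate; each request spends one.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # ~5 req/s with bursts of 10
print([bucket.allow() for _ in range(12)].count(True))  # burst of 10 passes, rest rejected
```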
11. What is a reverse proxy?
A reverse proxy like NGINX or Envoy sits in front of backend servers and routes client requests. It provides load balancing, SSL termination, caching, rate limiting, security filtering, and improves performance for high-scale web systems.
12. What is eventual consistency?
Eventual consistency means data will become consistent across nodes over time but may be temporarily out of sync. It is used in distributed databases like Cassandra and DynamoDB to achieve high availability and partition tolerance under the CAP theorem.
13. What is high availability?
High availability ensures systems remain accessible with minimal downtime by using redundancy, multiple instances, failover mechanisms, health checks, replication, and distributed deployments across multiple zones or regions for resilience.
14. What is fault tolerance?
Fault tolerance enables a system to operate even when components fail. It includes redundancy, replication, multi-zone deployments, retry logic, and self-healing automation, ensuring continuous service delivery in large-scale distributed systems.
15. What is distributed tracing?
Distributed tracing tracks requests across microservices to analyze latency, bottlenecks, and failures. Tools like Jaeger, Zipkin, and OpenTelemetry help visualize service flows and improve debugging in large-scale distributed applications.
16. What is auto-scaling?
Auto-scaling automatically adjusts compute resources based on load. Tools like AWS ASG, Kubernetes HPA, and GCP autoscaler help maintain performance, reduce costs, and handle traffic surges in dynamic large-scale environments.
17. What is service discovery?
Service discovery automatically detects service locations in dynamic environments. Tools like Consul, Eureka, and Kubernetes DNS eliminate manual configuration, enabling scalable communication between microservices in distributed systems.
18. What is a distributed database?
A distributed database spreads data across multiple nodes for performance, availability, and fault tolerance. Solutions like Cassandra, DynamoDB, and CockroachDB scale horizontally, support replication, and reduce latency for global applications.
19. What is CAP theorem?
CAP theorem states that during a network partition, a distributed system must choose between Consistency and Availability; it cannot guarantee all three of Consistency, Availability, and Partition tolerance simultaneously. Systems like Cassandra favor AP, while MongoDB leans CP in its default configuration.
20. What is a microservices gateway?
A microservices gateway like Kong, Ambassador, or AWS API Gateway manages traffic, routing, auth, rate limiting, and observability between clients and services. It centralizes control and improves security in large-scale distributed applications.
21. What is partitioning?
Partitioning splits data into segments to improve performance and scalability. Techniques include range partitioning, hash partitioning, and list partitioning, helping large-scale systems distribute workload and reduce bottlenecks.
22. What is replication?
Replication creates multiple copies of data across nodes for redundancy, faster reads, and high availability. Systems like MySQL, MongoDB, and Cassandra use replication to prevent data loss and support large-scale, fault-tolerant environments.
23. What is a write-through cache?
A write-through cache writes data to both the cache and the database simultaneously. It ensures strong consistency, simplifies failover, and reduces stale reads but may slightly increase write latency in large-scale systems.
24. What is a write-back cache?
Write-back caching writes data to cache first and later syncs it to storage asynchronously. It enhances write performance but risks data loss if caching nodes fail, requiring careful durability and failover strategies in distributed systems.
25. What is consistent hashing?
Consistent hashing distributes keys across nodes in a way that minimizes data movement during scaling. It is used in caching systems, databases, and load balancers to achieve stable distribution and efficient horizontal scalability.
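A minimal consistent-hash ring sketch in Python with virtual nodes; the node names and vnode count are illustrative:

```python
# Consistent hashing: keys map to the first node position clockwise on a
# hash ring; virtual nodes smooth out the key distribution.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes, vnodes: int = 100):
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.keys = [h for h, _ in self.ring]

    @staticmethod
    def _hash(value: str) -> int:
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key: str) -> str:
        # First ring position clockwise from the key's hash (wrapping around).
        idx = bisect.bisect(self.keys, self._hash(key)) % len(self.keys)
        return self.ring[idx][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
print(ring.node_for("user:42"))  # adding/removing a node remaps only ~1/N of keys
```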
26. What is a distributed cache?
A distributed cache stores data across multiple nodes to provide fast, scalable retrieval. Tools like Redis Cluster and Hazelcast allow high availability, sharding, failover, and low-latency responses for large-scale applications with heavy read workloads.
27. What is a circuit breaker pattern?
The circuit breaker pattern prevents cascading failures by stopping requests to unhealthy services. It transitions between closed, open, and half-open states, improving resilience in distributed systems. Tools like Hystrix and Resilience4j implement this pattern.
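A minimal Python sketch of the three-state breaker described above; the thresholds and timeout are illustrative, and production code would reach for a hardened library:

```python
# Circuit breaker with closed/open/half-open states: fail fast while the
# dependency is unhealthy, then probe with a single trial request.
import time

class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_timeout: float = 30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = 0.0
        self.state = "closed"

    def call(self, func):
        if self.state == "open":
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.state = "half-open"  # timeout elapsed: allow one probe
        try:
            result = func()
        except Exception:
            self.failures += 1
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.state = "closed"  # success closes the circuit again
        return result
```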
28. What is a bulkhead pattern?
The bulkhead pattern isolates system components into independent pools to prevent a failure in one area from impacting others. This improves fault tolerance, resource isolation, and reliability in large-scale microservices and distributed environments.
29. What is API throttling?
API throttling limits the number of API calls a client can make to protect backend systems from overload. It helps maintain service reliability, prevents abuse, smooths spikes, and ensures fair usage. Implemented via API gateways and rate-limiting middleware.
30. What is data replication lag?
Replication lag is the delay between updates on a primary database and their propagation to replicas. High lag can cause stale reads and inconsistent data. Monitoring, tuning, and optimizing network performance help minimize lag in distributed environments.
31. What is a read replica?
A read replica is a secondary database instance that handles read-heavy workloads to reduce pressure on the primary database. It improves performance, supports failover scenarios, and enhances scalability in large distributed applications requiring fast reads.
32. What is failover?
Failover is the process of automatically switching to a standby system when a primary component fails. It ensures high availability, minimizes downtime, and keeps applications running. Tools like load balancers and clustering software support automated failover.
33. What is chaos engineering?
Chaos engineering intentionally breaks components in a controlled environment to test system resilience. Tools like Chaos Monkey and Litmus help reveal weaknesses, validate fault tolerance strategies, and improve reliability for large-scale distributed systems.
34. What is a service mesh?
A service mesh like Istio or Linkerd manages service-to-service communication by providing observability, traffic control, retries, encryption, and service discovery. It improves reliability and security in microservices-based architectures at scale.
35. What is distributed logging?
Distributed logging centralizes logs from multiple services into a unified platform like ELK or Loki. It enables efficient debugging, correlation, searching, and monitoring of distributed microservices across environments and scaled-out architectures.
36. What is a heartbeat check?
A heartbeat check is a lightweight periodic signal sent by a service or node to indicate it is alive and functioning. Load balancers, auto-scaling systems, and service meshes use heartbeats to detect failures and trigger automated recovery actions.
37. What is multi-region deployment?
Multi-region deployment distributes applications and data across multiple geographical regions for resilience, low latency, disaster recovery, and global performance. Cloud platforms like AWS and Azure provide multi-region replication and failover controls.
38. What is blue-green deployment?
Blue-green deployment runs two identical environments where blue handles current traffic and green holds the new version. Switching traffic between them enables zero-downtime releases, easy rollbacks, and safer deployment of large-scale application updates.
39. What is canary deployment?
Canary deployment releases new updates to a small subset of users before full rollout. It helps detect issues early, reduces risk, and enables data-driven decisions. Tools like Kubernetes, Istio, Argo Rollouts, and Spinnaker support canary strategies.
40. What is distributed rate limiting?
Distributed rate limiting enforces request limits across multiple servers or regions to protect large-scale services from overload. Implemented using Redis, Envoy, or API gateways, it ensures consistent client throttling in distributed environments.
41. What is a time-series database?
A time-series database (TSDB) like Prometheus, InfluxDB, or TimescaleDB stores time-stamped metrics. It is optimized for monitoring, trend analysis, alerting, and querying time-based data, making it ideal for large-scale observability systems.
42. What is a dead letter queue?
A dead letter queue (DLQ) stores messages that cannot be processed due to errors or retries. It isolates problematic data, improves resilience, and supports debugging in large-scale messaging systems like Kafka, SQS, RabbitMQ, and Pub/Sub.
43. What is data partitioning?
Data partitioning splits large datasets into smaller segments stored on different nodes. It improves query performance, reduces load, and supports massive scalability. Common techniques include hash, range, and key-based partitioning.
44. What is synchronous vs asynchronous communication?
Synchronous communication requires the caller to wait for an immediate response, increasing coupling and latency. Asynchronous communication uses message queues, enabling background processing and resilience. Large-scale systems favor async patterns for better scalability.
45. What is a write-ahead log?
A write-ahead log (WAL) records changes before they are written to storage, ensuring durability and crash recovery. Databases like PostgreSQL and Cassandra use WAL to maintain data integrity during failures in distributed systems.
46. What is quorum in distributed systems?
Quorum is the minimum number of nodes required to agree on an operation in distributed databases. It ensures consistency and prevents split-brain scenarios. Systems like Cassandra and etcd rely on quorum to maintain strong coordination.
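A one-screen sketch of the quorum overlap rule W + R > N, which guarantees that every read quorum intersects the most recent write quorum:

```python
# Quorum arithmetic: with N replicas, choosing write quorum W and read
# quorum R such that W + R > N ensures reads see the latest write.
N = 3
for W in range(1, N + 1):
    for R in range(1, N + 1):
        guarantee = "strong" if W + R > N else "eventual"
        print(f"N={N}, W={W}, R={R}: {guarantee} read guarantee")
```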
47. What is an API gateway?
An API gateway sits in front of microservices to manage routing, authentication, throttling, logging, and transformations. Tools like Kong, AWS API Gateway, and NGINX streamline traffic management and security in large-scale architectures.
48. What are idempotent operations?
Idempotent operations produce the same result even when repeated multiple times. They prevent inconsistent states during retries, making them essential for distributed systems, APIs, payment flows, and fault-tolerant microservices.
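A minimal idempotency sketch in Python; the in-memory dict stands in for a durable store, and `charge_payment` is a hypothetical handler:

```python
# Idempotency keys make retried requests safe: a repeated key returns the
# recorded result instead of re-executing the side effect.
processed: dict[str, dict] = {}

def charge_payment(idempotency_key: str, amount_cents: int) -> dict:
    if idempotency_key in processed:
        return processed[idempotency_key]  # retry: no second charge
    result = {"status": "charged", "amount_cents": amount_cents}  # simulated side effect
    processed[idempotency_key] = result
    return result

first = charge_payment("order-123-attempt", 5000)
retry = charge_payment("order-123-attempt", 5000)  # network retry: same outcome
assert first is retry
```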
49. What is distributed consensus?
Distributed consensus ensures nodes in a cluster agree on a single source of truth. Algorithms like Raft and Paxos power coordination services such as etcd (which backs Kubernetes) and ZooKeeper, enabling leader election, configuration management, and state consistency.
50. What is observability?
Observability measures how well you can understand a system’s internal state using logs, metrics, and traces. It enables debugging, performance tuning, and reliability in large-scale distributed environments. Tools include Prometheus, Grafana, and ELK.