
KUBERNETES EXPLAINED IN 5 MINUTES | K8S ARCHITECTURE
Kubernetes (often abbreviated as K8s) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes helps manage the complexity of deploying, operating, and scaling containers, especially when working with microservices architectures or large-scale, distributed systems.
Key Features of Kubernetes:
Container Orchestration:
Kubernetes automates the deployment and management of containerized applications. It schedules containers across a cluster of machines, ensures they run as expected, and handles scaling.
Automatic Scaling:
Based on CPU usage or other metrics, Kubernetes can automatically scale applications up or down by adding or removing pods (the smallest deployable units, each wrapping one or more containers).
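For example, a HorizontalPodAutoscaler can scale a Deployment based on CPU utilization. A minimal sketch (the Deployment name, replica limits, and threshold are illustrative; the target pods need CPU requests set and the metrics server installed):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:              # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70 # add pods when average CPU use exceeds 70%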
Self-Healing:
Kubernetes monitors the state of applications and can automatically restart or reschedule failed containers, ensuring high availability and resilience.
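Self-healing is driven by probes and restart policies. A minimal sketch of a pod whose container is restarted by the kubelet when its liveness probe fails (the image and probe settings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: demo-web
spec:
  containers:
  - name: web
    image: nginx:1.25          # illustrative image
    livenessProbe:
      httpGet:
        path: /                # assumed to return 200 while the app is healthy
        port: 80
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 10        # check every 10 seconds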
Service Discovery and Load Balancing:
Kubernetes provides internal DNS to allow applications to discover and communicate with each other. It can also load balance traffic to ensure even distribution across replicas of services.
Declarative Configuration:
Users define the desired state of the application (number of instances, network settings, storage requirements, etc.) in a YAML or JSON file. Kubernetes ensures that the current state of the cluster matches the desired state.
Secret and Configuration Management:
Kubernetes can manage sensitive information such as passwords, tokens, and configuration files securely and deliver them to applications when required.
Storage Orchestration:
Kubernetes lets you mount storage into containers through persistent volumes (PVs), which can be backed by local disks, cloud storage, or network storage such as NFS.
Rolling Updates and Rollbacks:
Kubernetes supports rolling updates, ensuring zero downtime during application updates. If something goes wrong, it can roll back to a previous version.
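The update behavior is configured on a Deployment. This fragment sits under a Deployment's spec (a full Deployment sketch appears under "Deployment" below); the values are illustrative:

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired count during an update
      maxUnavailable: 0    # never take a pod down before its replacement is ready
      # a bad rollout can be reverted with: kubectl rollout undo deployment/demo-app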
Core Concepts in Kubernetes:
Pod:
The smallest deployable unit in Kubernetes. A pod wraps one or more containers and typically represents a single instance of an application. Containers in the same pod share a network namespace and can share storage volumes.
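A minimal sketch of a pod with two containers that share the pod's network and a scratch volume (names and images are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  volumes:
  - name: shared-data
    emptyDir: {}              # scratch volume shared by both containers
  containers:
  - name: web
    image: nginx:1.25         # illustrative web server
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox:1.36       # illustrative sidecar writing into the shared volume
    command: ["sh", "-c", "echo hello > /pod-data/index.html && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /pod-data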
Node:
A physical or virtual machine in a Kubernetes cluster that runs pods. Control plane nodes (historically called master nodes) manage the cluster, while worker nodes run the workloads (pods).
Cluster:
A group of nodes that work together to run containerized applications. The control plane manages the cluster, and the worker nodes handle the actual application workloads.
Deployment:
A controller that defines the desired state of an application (e.g., how many replicas should be running). It monitors and maintains this state by creating or removing pods as necessary.
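A minimal Deployment sketch that keeps three replicas of an illustrative web container running (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3                  # desired number of pod replicas
  selector:
    matchLabels:
      app: demo                # manage pods carrying this label
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx:1.25      # illustrative image
        ports:
        - containerPort: 80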
Service:
A way to expose a set of pods as a network service. Kubernetes services provide load balancing across pods and can expose services internally (within the cluster) or externally (to the outside world).
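A minimal Service sketch that load balances traffic across the pods labeled app: demo from the Deployment above (names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: demo-service           # reachable in-cluster as demo-service.default.svc.cluster.local
spec:
  selector:
    app: demo                  # route to pods carrying this label
  ports:
  - port: 80                   # port the Service exposes
    targetPort: 80             # port on the pods
  type: ClusterIP              # internal only; NodePort or LoadBalancer expose it externally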
Ingress:
A resource to manage external access to services within the cluster, typically HTTP or HTTPS, providing load balancing, SSL termination, and name-based virtual hosting.
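A minimal Ingress sketch that routes HTTP traffic for an illustrative hostname to the Service above; an ingress controller (for example NGINX Ingress) must be installed in the cluster for this to take effect:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  rules:
  - host: demo.example.com     # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service # the Service defined above
            port:
              number: 80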
Namespace:
A way to divide cluster resources between multiple users or teams. Namespaces allow isolation of resources, helping to manage large or multi-tenant environments.
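A Namespace is itself a small manifest (the name is illustrative):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
# resources are then created inside it via metadata.namespace: team-a (or kubectl -n team-a)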
ConfigMap & Secret:
A ConfigMap stores non-sensitive configuration data, while a Secret holds sensitive data like passwords and tokens. Both provide ways to inject configuration into containers.
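A minimal sketch of a ConfigMap and a Secret (keys and values are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"            # non-sensitive setting
---
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:
  DB_PASSWORD: "change-me"     # stringData takes plain text; the API server stores it base64-encoded

# Inside a container spec, both can be injected as environment variables:
#   envFrom:
#   - configMapRef:
#       name: app-config
#   - secretRef:
#       name: app-credentials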
Persistent Volumes (PV) and Persistent Volume Claims (PVC):
PVs represent storage resources in the cluster, and PVCs are requests for those resources. When a StorageClass with dynamic provisioning is configured, Kubernetes can create storage on demand to satisfy these claims.
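A minimal PersistentVolumeClaim sketch requesting 1 GiB of storage (size and class are illustrative); with a dynamically provisioning StorageClass, Kubernetes creates a matching PV automatically:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
  - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
  # storageClassName: standard # optionally request a specific StorageClass

# A pod then mounts the claim under spec.volumes:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: demo-data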
@TechWithMachines
#kubernetes #docker #devops #aws