Kubernetes



Kubernetes is a powerful open-source platform for automating the deployment, scaling, and management of containerized applications. It is flexible and highly scalable, and it can run modern applications anywhere, from on-premises data centers to public and private clouds.

At its core, Kubernetes architecture is based on a distributed system model that allows for highly resilient and fault-tolerant deployments.

[Image: Kubernetes architecture]



The platform is composed of several components that work together to provide a scalable and reliable environment for containerized applications.


The Kubernetes architecture is divided into two main parts: the control plane and the worker nodes.
The control plane is responsible for managing the entire Kubernetes cluster, while the worker nodes run the actual containers that make up the application.

The Control Plane

[Image: Kubernetes control plane]


The control plane is made up of several components that work together to manage the Kubernetes cluster. These components include:

  1. Kubernetes API Server: The API server is the central management component of Kubernetes. It provides a RESTful API interface that can be used to interact with the Kubernetes system. The API server handles all requests from users and external systems and communicates with other Kubernetes components to perform the requested operations.

  2. etcd: etcd is a distributed key-value store that stores the configuration and state data of the Kubernetes cluster. The Kubernetes API server uses etcd to store and retrieve data about the state of the cluster.

  3. kube-scheduler: The kube-scheduler is responsible for scheduling pods onto worker nodes based on resource availability and other constraints. It evaluates a set of rules to determine the best node for each pod, taking into account factors such as resource requests, node capacity, and affinity/anti-affinity rules (a small pod manifest illustrating such a constraint follows this list).

  4. kube-controller-manager: The kube-controller-manager is responsible for managing the lifecycle of Kubernetes objects such as pods, services, and replication controllers. It runs a set of controllers that monitor the state of these objects and take action to ensure that the desired state is maintained.

  5. Cloud Controller Manager: The Cloud Controller Manager is an optional component that runs cloud-specific control loops. These control loops interact with the underlying cloud provider to manage cloud-specific resources, such as load balancers and storage volumes.
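
To make the scheduler's role more concrete, here is a minimal sketch of how scheduling constraints appear in a pod manifest. The names (web, the disktype=ssd label) and the image are hypothetical; the kube-scheduler reads fields such as resources.requests and nodeSelector when choosing a node for the pod.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web                      # hypothetical pod name
    spec:
      nodeSelector:
        disktype: ssd                # only schedule onto nodes labeled disktype=ssd
      containers:
        - name: web
          image: nginx:1.25          # example image; any container image works
          resources:
            requests:
              cpu: "250m"            # the scheduler uses requests to find a node with spare capacity
              memory: "128Mi"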

Worker Nodes

[Image: Kubernetes worker nodes]


The worker nodes are the machines that run the containers that make up the application. They are responsible for executing the desired state of the Kubernetes objects, such as pods and services. Each worker node runs several Kubernetes components, including:

  1. kubelet: The kubelet is responsible for communicating with the Kubernetes API server and ensuring that the containers running on the node are in the desired state. It pulls container images from a container registry, starts and stops containers, and reports container status back to the API server (see the example pod spec after this list).

  2. kube-proxy: The kube-proxy is responsible for providing network connectivity to the containers running on the node. It manages network routing and load balancing for Kubernetes services, ensuring that traffic is directed to the correct pods.

  3. Container Runtime: The container runtime is the software that actually runs the containers on the node. Kubernetes works with any runtime that implements the Container Runtime Interface (CRI), most commonly containerd and CRI-O; Docker Engine can still be used via the cri-dockerd adapter.
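
As a rough illustration of what the kubelet acts on, the hypothetical pod spec below declares the image the kubelet should pull and a liveness probe it should run; the kubelet uses this information to start the container, restart it if the probe fails, and report status back to the API server. The name, registry, and port are assumptions for the example.

    apiVersion: v1
    kind: Pod
    metadata:
      name: api                               # hypothetical pod name
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.0 # kubelet pulls this image from the registry
          imagePullPolicy: IfNotPresent       # pull only if the image is not already cached on the node
          ports:
            - containerPort: 8080
          livenessProbe:                      # kubelet restarts the container if this check fails
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10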

Pods

[Image: Kubernetes pods]


Pods are the smallest deployable units in Kubernetes architecture. They consist of one or more containers that share the same network namespace and can communicate with each other via localhost.

Pods are scheduled onto worker nodes by the kube-scheduler.
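
A minimal sketch of a multi-container pod, assuming a hypothetical application container and a logging sidecar: because both containers share the pod's network namespace, the sidecar can reach the application on localhost:8080 without any service in between.

    apiVersion: v1
    kind: Pod
    metadata:
      name: app-with-sidecar                    # hypothetical pod name
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0   # example image
          ports:
            - containerPort: 8080
        - name: log-shipper                     # sidecar in the same network namespace
          image: registry.example.com/shipper:1.0
          # this container can reach the app at localhost:8080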

Replication Controllers

The Replication Controller is responsible for maintaining the desired number of pod replicas across the cluster: if a pod fails or is deleted, it creates a replacement, and changing the desired count scales the application up or down. Replication Controllers can be used to ensure high availability and resiliency for critical applications.
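
A minimal ReplicationController manifest might look like the sketch below; the name, labels, and image are hypothetical. The controller keeps three replicas of the pod template running at all times. In current Kubernetes, Deployments and ReplicaSets are generally used instead, but the idea of maintaining a desired replica count is the same.

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc                  # hypothetical name
    spec:
      replicas: 3                   # desired number of pod replicas
      selector:
        app: web                    # pods matching this label are counted toward the replica total
      template:
        metadata:
          labels:
            app: web                # new pods are created with this label
        spec:
          containers:
            - name: web
              image: nginx:1.25     # example image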

Services

In Kubernetes, a service is an abstraction layer that provides a stable IP address and DNS name for a set of pods.

[Image: Kubernetes Service]


Services enable other pods or external clients to access the pods using a single IP address, even if the underlying pods are replaced or rescheduled. Services provide an important mechanism for communication between different parts of a Kubernetes application.

When a service is created in Kubernetes, it is assigned a virtual IP address and a DNS name. The service is associated with a set of pods based on a label selector.

Any traffic sent to the service IP address is load-balanced across the pods that are associated with the service. This distribution is handled by kube-proxy on each node, while the service type determines how the service is exposed to clients.

There are three commonly used service types in Kubernetes: ClusterIP, NodePort, and LoadBalancer.

  1. ClusterIP: A ClusterIP service is the default service type in Kubernetes. It exposes the service on a cluster-internal IP address, which is only accessible within the Kubernetes cluster. This type of service is often used for internal communication between different parts of a Kubernetes application.

  2. NodePort: A NodePort service exposes the service on a fixed port on each worker node in the Kubernetes cluster. This allows the service to be reached from outside the cluster as well as from within it. The NodePort service type is often used for development and testing purposes (a sketch of a NodePort manifest follows this list).

  3. LoadBalancer: A LoadBalancer service exposes the service using a load balancer provided by a cloud provider. This type of service is often used in production environments to provide a stable external IP address and load balancing capabilities for the service.
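
To illustrate the NodePort type, the sketch below (with hypothetical names, labels, and ports) exposes the selected pods on port 30080 of every worker node in addition to a cluster-internal IP.

    apiVersion: v1
    kind: Service
    metadata:
      name: web-nodeport            # hypothetical service name
    spec:
      type: NodePort
      selector:
        app: web                    # pods with this label back the service
      ports:
        - port: 80                  # cluster-internal service port
          targetPort: 8080          # container port on the pods
          nodePort: 30080           # port opened on every node (default range 30000-32767)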

In addition to these service types, Kubernetes also supports ExternalName services. An ExternalName service maps the service to an external DNS name rather than to a set of pods.
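
A minimal sketch of an ExternalName service, assuming a hypothetical external hostname: clients inside the cluster that look up external-db receive a DNS CNAME record pointing at db.example.com.

    apiVersion: v1
    kind: Service
    metadata:
      name: external-db             # hypothetical service name
    spec:
      type: ExternalName
      externalName: db.example.com  # external DNS name the service resolves to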

Services are defined using a Kubernetes manifest file, which specifies the service type, the port(s) to expose, and the label selector that identifies the backing pods; the cluster IP address is normally assigned automatically.
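
For example, a minimal ClusterIP service manifest might look like the sketch below; the name, labels, and ports are hypothetical. Traffic sent to port 80 of the service IP is forwarded to port 8080 on pods matching the selector.

    apiVersion: v1
    kind: Service
    metadata:
      name: web                     # hypothetical service name
    spec:
      type: ClusterIP               # default type; exposes a cluster-internal IP
      selector:
        app: web                    # pods with this label back the service
      ports:
        - port: 80                  # port exposed on the service IP
          targetPort: 8080          # container port traffic is forwarded to

Changing the type field to NodePort or LoadBalancer exposes the same set of pods outside the cluster, as described in the list above.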

Once the service is created, it is managed by the Kubernetes API server, which ensures that the desired state of the service is maintained.

Services are an essential component of any Kubernetes application. They provide a stable IP address and DNS name for a set of pods, allowing other parts of the application to communicate with those pods in a reliable and scalable way.

By load balancing traffic across multiple pods, services ensure that applications are highly available and resilient to failures.