The last component is kubectl, a command-line tool for interacting with the API server and sending commands to the master node.
Each worker node runs Docker, which runs the configured containers: the images are downloaded, and the containers are started.
The kubelet gets the pod configuration from the apiserver and ensures that the described containers are up and running. It is the worker service that communicates with the master node.
It also interacts with etcd to read configuration details, such as information on newly created services, and to write back state.
The kube-proxy serves as a network proxy and load balancer for services on a single worker node. It handles the network routing of TCP and UDP packets.
A node is a machine, whether physical or virtual. It is not created by Kubernetes: you create it manually or with a cloud system such as OpenStack or Amazon EC2. So before you use Kubernetes to deploy your applications, you need to provision the underlying infrastructure. From that point on, however, Kubernetes can define virtual networks, storage, and so on. For example, to define networks you might use OpenStack Neutron or Romana and drive them from Kubernetes.
A pod is a group of one or more containers that logically belong together. Pods run on nodes and run together as a logical unit, so they share the same context. The containers in a pod share a single IP address and can reach one another via localhost, and they can share storage volumes. Pods, however, do not all need to run on the same machine, since containers can be spread over more than one node; a single node can run several pods.
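A pod is described declaratively in a manifest. A minimal sketch of such a manifest (the nginx image and the names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx
spec:
  containers:
  - name: nginx            # one container in this pod
    image: nginx:1.25      # image the kubelet pulls and runs
    ports:
    - containerPort: 80    # port the container listens on
```

Submitting this manifest to the apiserver is what causes the scheduler to place the pod on a node and the kubelet there to start its containers.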
Pods are cloud-aware. For example, you could spin up two Nginx instances and assign them a public IP address on Google Compute Engine (GCE). To do so, you would start the Kubernetes cluster, make sure it is configured for GCE, and then type something like:
kubectl expose deployment my-nginx --port=80 --type=LoadBalancer
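The same exposure can be expressed declaratively as a Service manifest. This is a sketch that assumes the deployment's pods carry the label app: my-nginx:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  type: LoadBalancer     # ask the cloud provider for a public IP
  selector:
    app: my-nginx        # route to pods carrying this label
  ports:
  - port: 80             # port exposed by the service
    targetPort: 80       # port on the pods
```

Either form produces a service that kube-proxy load-balances across the matching pods.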
The master node is the entry point for all administrative tasks and manages the Kubernetes cluster. There can be more than one master node in the cluster for fault tolerance. Multiple master nodes put the system in High Availability mode, with one of them acting as the main node that performs all operations.
To manage the cluster state, Kubernetes uses etcd, which all master nodes connect to.
The API server is the entry point for all the REST commands used to control the cluster. It processes the requests, validates them, and executes the related business logic. The resulting state has to be stored somewhere, which brings us to the next component of the master node.
etcd is a simple, distributed, consistent key-value store. It is used mainly for shared configuration and service discovery.
It provides a REST API for CRUD operations as well as an interface for registering watchers on specific nodes, which allows the rest of the cluster to be reliably notified of configuration changes.
Examples of data that Kubernetes stores in etcd are jobs being scheduled, created, and deployed, pod and service details and state, namespaces, replication information, and more.
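As an illustration, Kubernetes keeps this state under the /registry prefix in etcd. The exact keys vary by version, but they look roughly like this (the object names are hypothetical):

```
/registry/pods/default/my-nginx
/registry/services/specs/default/my-nginx
/registry/namespaces/default
```

Watchers registered on these keys are how the controllers and the scheduler learn about changes without polling.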
The deployment of configured pods and services onto the nodes happens thanks to the scheduler component.
The scheduler knows the resources available on the members of the cluster, as well as those required for the configured service to run, and can therefore decide where to deploy a specific service.
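The resources the scheduler weighs are declared per container in the pod spec. A minimal sketch (the names, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:            # what the scheduler reserves on a node
        cpu: "250m"        # a quarter of a CPU core
        memory: "128Mi"
      limits:              # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

The scheduler only places this pod on a node whose unreserved capacity covers the requests.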
Optionally, you can run different kinds of controllers inside the master node. The controller manager is a daemon that embeds them.
A controller uses the apiserver to watch the cluster's shared state and makes changes to the current state to bring it toward the desired state.
An example of such a controller is the replication controller, which maintains the number of pods in the system. The user configures the replication factor, and the controller is responsible for recreating a failed pod or removing excess ones.
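A replication controller is itself configured through a manifest. A minimal sketch with a replication factor of 3 (the labels and image are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3              # the replication factor the controller maintains
  selector:
    app: nginx             # pods counted toward the replica total
  template:                # pod template used to recreate missing pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
```

If one of the three pods dies, the controller notices the mismatch via the apiserver and creates a replacement from the template.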
Other examples are the endpoints controller, the namespace controller, and the service accounts controller, but we will not go into detail here.