50,000+ learners have upgraded or switched careers.
All certification preparation material is for renowned vendors such as Cloudera, MapR, EMC, Databricks, SAS, Datastax, Oracle and NetApp; these vendor certifications carry more value, reliability and recognition in the industry than any training institute's own certifications.
Note: You can choose more than one product below to have a custom package created; email hadoopexam@gmail.com to get a discount.
Do you know?
Kubernetes, commonly stylized as K8s, is an open-source container-orchestration system for automating computer application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to provide a "platform for automating deployment, scaling, and operations of application containers across clusters of hosts". It works with a range of container tools and runs containers in a cluster, often with images built using Docker. Kubernetes originally interfaced with the Docker runtime through a "Dockershim"; however, the shim has since been deprecated in favor of interfacing directly with the container runtime through containerd, or replacing Docker with a runtime that is compliant with the Container Runtime Interface (CRI) introduced by Kubernetes in 2016. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which you, as a developer or administrator, deploy your applications, and many cloud providers and software companies offer their own Kubernetes distributions. In recent years Kubernetes has become a popular and powerful technology, and it is often used as the base for managing software deployments. Because Kubernetes has an API-driven architecture, sooner or later your organization is likely to adopt it, if it has not done so already. |
Answer: Kubernetes
progressively rolls out changes to your
application or its configuration, while monitoring
application health to ensure it doesn't kill all
your instances at the same time. If something goes
wrong, Kubernetes will roll back the change for
you. Take advantage of a growing ecosystem of
deployment solutions. |
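As a rough illustration, the sketch below expresses a hypothetical Deployment as a Python dict that mirrors the YAML manifest; the name web, the nginx image and the surge/unavailability numbers are assumptions, not taken from this material. Serialized to YAML it could be applied with kubectl apply -f -, and a bad rollout could be undone with kubectl rollout undo deployment/web.

```python
import yaml  # assumes PyYAML is installed

# Hypothetical Deployment whose rolling-update strategy replaces pods gradually,
# so Kubernetes never kills all instances at the same time.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxUnavailable": 1, "maxSurge": 1},
        },
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}

print(yaml.safe_dump(deployment))  # pipe this into `kubectl apply -f -`
```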
Answer: No
need to modify your application to use an
unfamiliar service discovery mechanism. Kubernetes
gives Pods their own IP addresses and a single DNS
name for a set of Pods, and can load-balance
across them. |
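As a small, assumed example of that DNS-based discovery: the snippet below supposes a Service named web exists in the default namespace and that the code runs inside a pod of the same cluster; the URL and port are illustrative.

```python
import urllib.request

# Inside the cluster, the Service "web" in namespace "default" is reachable via the
# cluster DNS name <service>.<namespace>.svc.cluster.local; Kubernetes load-balances
# the connection across the pods matching the Service's selector.
url = "http://web.default.svc.cluster.local:80/"
with urllib.request.urlopen(url, timeout=5) as resp:
    print(resp.status, resp.read(100))
```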
Answer: Automatically
mount the storage system of your choice, whether
from local storage, a public cloud provider such
as GCP or AWS, or a network storage system such as
NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker. |
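A minimal sketch, assuming a dynamic storage provisioner is configured in the cluster: a PersistentVolumeClaim plus the pod-spec fragment that mounts it. The claim name, size, image and mount path are illustrative.

```python
# Hypothetical PersistentVolumeClaim plus the pod-spec fragment that mounts it.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "10Gi"}},
        # "storageClassName": "standard",  # depends on the cluster's provisioner (cloud disk, NFS, Ceph, ...)
    },
}

pod_spec_fragment = {
    "volumes": [{"name": "data", "persistentVolumeClaim": {"claimName": "data"}}],
    "containers": [{
        "name": "db",
        "image": "postgres:15",
        "volumeMounts": [{"name": "data", "mountPath": "/var/lib/postgresql/data"}],
    }],
}
```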
Answer: Deploy
and update secrets and application configuration
without rebuilding your image and without exposing
secrets in your stack configuration. |
Answer: Automatically
places containers based on their resource
requirements and other constraints, while not
sacrificing availability. Mix critical and
best-effort workloads in order to drive up
utilization and save even more resources. |
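To make the resource-based placement concrete, here is a hypothetical container-spec fragment; the image and the CPU/memory figures are assumptions.

```python
# Container-spec fragment declaring resource requirements; the scheduler uses the
# requests to place the pod on a node with enough capacity, while the limits cap
# its usage at runtime.
container = {
    "name": "api",
    "image": "example/api:1.0",  # hypothetical image
    "resources": {
        "requests": {"cpu": "250m", "memory": "256Mi"},  # guaranteed minimum used for scheduling
        "limits": {"cpu": "500m", "memory": "512Mi"},    # hard ceiling enforced at runtime
    },
}
```

Pods that declare no requests at all are treated as best-effort and are the first to be evicted under resource pressure, which is how critical and best-effort workloads can be mixed on the same nodes.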
Answer: In
addition to services, Kubernetes can manage your
batch and CI workloads, replacing containers that
fail, if desired. |
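A minimal sketch of such a batch workload, borrowing the pi-computation example from the upstream documentation; the name, image and backoffLimit are illustrative.

```python
# Hypothetical Job that runs a one-off batch computation to completion; Kubernetes
# replaces failed pods up to backoffLimit retries.
job = {
    "apiVersion": "batch/v1",
    "kind": "Job",
    "metadata": {"name": "pi"},
    "spec": {
        "backoffLimit": 4,
        "template": {
            "spec": {
                "restartPolicy": "Never",
                "containers": [{
                    "name": "pi",
                    "image": "perl:5.34",
                    "command": ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"],
                }],
            }
        },
    },
}
```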
Answer: Allocation
of IPv4 and IPv6 addresses to Pods and Services |
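A hedged sketch of how dual-stack address allocation might be requested on a Service, assuming the cluster has dual-stack networking enabled; the name, port and selector are illustrative.

```python
# Hypothetical dual-stack Service requesting both IPv4 and IPv6 addresses where the
# cluster supports them.
dual_stack_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web-dual-stack"},
    "spec": {
        "ipFamilyPolicy": "PreferDualStack",
        "ipFamilies": ["IPv4", "IPv6"],
        "selector": {"app": "web"},
        "ports": [{"port": 80}],
    },
}
```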
Answer: Scale
your application up and down with a simple
command, with a UI, or automatically based on CPU
usage. |
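As an assumed example of the "automatically based on CPU usage" case, a HorizontalPodAutoscaler targeting a hypothetical Deployment named web; the replica bounds and utilization target are illustrative. A one-off manual scale could instead be done with kubectl scale deployment web --replicas=5.

```python
# Hypothetical HorizontalPodAutoscaler that keeps the Deployment "web" between 2 and
# 10 replicas, targeting 70% average CPU utilization.
hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "web"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "web"},
        "minReplicas": 2,
        "maxReplicas": 10,
        "metrics": [{
            "type": "Resource",
            "resource": {"name": "cpu",
                         "target": {"type": "Utilization", "averageUtilization": 70}},
        }],
    },
}
```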
Answer: Restarts
containers that fail, replaces and reschedules
containers when nodes die, kills containers that
don't respond to your user-defined health check,
and doesn't advertise them to clients until they
are ready to serve. |
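A container-spec sketch showing the two health checks involved: a liveness probe (restart a hung container) and a readiness probe (keep the pod out of load balancing until it is ready). The image, paths and ports are assumptions.

```python
# Container-spec fragment with user-defined health checks.
container = {
    "name": "web",
    "image": "example/web:1.0",  # hypothetical image
    "ports": [{"containerPort": 8080}],
    "livenessProbe": {
        "httpGet": {"path": "/healthz", "port": 8080},  # failing this restarts the container
        "initialDelaySeconds": 5,
        "periodSeconds": 10,
    },
    "readinessProbe": {
        "httpGet": {"path": "/ready", "port": 8080},    # failing this removes the pod from Service endpoints
        "periodSeconds": 5,
    },
}
```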
Answer: Add
features to your Kubernetes cluster without
changing upstream source code. |
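One common way to do this is a CustomResourceDefinition; the sketch below follows the upstream CronTab example, and the group, kind and schema are assumptions for illustration.

```python
# Hypothetical CustomResourceDefinition that teaches the cluster a new "CronTab"
# object type without changing Kubernetes source code.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "crontabs.stable.example.com"},
    "spec": {
        "group": "stable.example.com",
        "scope": "Namespaced",
        "names": {"plural": "crontabs", "singular": "crontab", "kind": "CronTab"},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "schema": {"openAPIV3Schema": {
                "type": "object",
                "properties": {"spec": {
                    "type": "object",
                    "properties": {"cronSpec": {"type": "string"},
                                   "replicas": {"type": "integer"}},
                }},
            }},
        }],
    },
}
```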
Python programming is not only for IT developers; it is also widely used by professionals working in the finance domain, analytics, data science, research and more. |
Another good annual package, subscribed to by users who are interested in learning more technologies including Spark, Hadoop, Cassandra, Scala and much more: the annual subscription below includes any two certification preparation materials.
Answer: Kubernetes
defines a set of building blocks ("primitives"),
which collectively provide mechanisms that deploy,
maintain, and scale applications based on CPU,
memory or custom metrics. Kubernetes is loosely
coupled and extensible to meet different workloads.
This extensibility is provided in large part by the
Kubernetes API, which is used by internal components
as well as extensions and containers that run on
Kubernetes. The platform exerts its control over
compute and storage resources by defining resources
as Objects, which can then be managed as such.
Kubernetes follows the primary/replica architecture.
The components of Kubernetes can be divided into
those that manage an individual node and those that
are part of the control plane. |
Answer:
The Kubernetes master is the main controlling unit of
the cluster, managing its workload and directing
communication across the system. The Kubernetes
control plane consists of various components, each running as its own process, which can run either on a single master node or across multiple masters to support high-availability clusters. The various components of the Kubernetes control plane are as follows:
|
Answer:
A Node, also known as a Worker or a Minion, is a
machine where containers (workloads) are deployed.
Every node in the cluster must run a container runtime such as Docker, as well as the node-level components mentioned below, which communicate with the primary to configure the networking of these containers.
|
Answer:
The basic scheduling unit in Kubernetes is a pod. A
pod is a grouping of containerized components. A pod
consists of one or more containers that are
guaranteed to be co-located on the same node. Each pod in Kubernetes is assigned a unique IP address within the cluster, which allows applications to use ports without the risk of conflict. Within the pod, all containers can reference each other on localhost, but a container within one pod has no way of directly addressing another container within another pod; for that, it has to use the Pod IP address. An application developer should never use the Pod IP address, though, to reference or invoke a capability in another pod, as Pod IP addresses are ephemeral: the specific pod they reference may be assigned another IP address on restart. Instead, they should use a reference to a Service, which holds a reference to the target pod at its current Pod IP address.

A pod can define a volume, such as a local disk directory or a network disk, and expose it to the containers in the pod. Such volumes are also the basis for the Kubernetes features of ConfigMaps (to provide access to configuration through the filesystem visible to the container) and Secrets (to provide access to credentials needed to access remote resources securely, by providing those credentials on the filesystem visible only to authorized containers). Pods can be managed manually through the Kubernetes API, or their management can be delegated to a controller. |
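A minimal, assumed pod sketch with two co-located containers; because they share the pod's network namespace, the sidecar reaches the main container on localhost. Names and images are illustrative.

```python
# Hypothetical two-container pod: both containers run on the same node and share the
# pod's single IP address and network namespace.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx:1.25", "ports": [{"containerPort": 80}]},
            {"name": "sidecar", "image": "busybox:1.36",
             # Polls the main container over localhost from inside the same pod.
             "command": ["sh", "-c",
                         "while true; do wget -qO- http://localhost:80/ > /dev/null; sleep 30; done"]},
        ],
    },
}
```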
Answer: A
ReplicaSet’s purpose is to maintain a stable set of
replica Pods running at any given time. As such, it
is often used to guarantee the availability of a
specified number of identical Pods. A ReplicaSet can also be thought of as a grouping mechanism that lets Kubernetes maintain the number of instances that have been declared for a given pod. The definition of a ReplicaSet uses a selector, whose evaluation identifies all pods that are associated with it. |
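A hypothetical ReplicaSet sketch: the selector identifies the pods it owns, and the controller keeps exactly the declared number of matching pods running. Names and image are illustrative.

```python
# Hypothetical ReplicaSet maintaining three identical pods selected by app=web.
replicaset = {
    "apiVersion": "apps/v1",
    "kind": "ReplicaSet",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}
```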
Answer: A Kubernetes service is a set of pods that work together, such as one tier of a multi-tier application. The set of pods that constitute a service is defined by a label selector. Kubernetes provides two modes of service discovery, using environment variables or using Kubernetes DNS. Service discovery assigns a stable IP address and DNS name to the service, and load-balances traffic in a round-robin manner to network connections of that IP address among the pods matching the selector (even as failures cause the pods to move from machine to machine). By default a service is exposed inside a cluster (e.g., back-end pods might be grouped into a service, with requests from the front-end pods load-balanced among them), but a service can also be exposed outside a cluster (e.g., for clients to reach front-end pods). |
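A hypothetical Service sketch; the label selector picks the pods behind it, and traffic to the Service's stable address is load-balanced across them. Names and ports are illustrative.

```python
# Hypothetical Service fronting the pods labeled app=backend.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "backend"},
    "spec": {
        "selector": {"app": "backend"},
        "ports": [{"port": 80, "targetPort": 8080}],
        # "type": "LoadBalancer",  # default ClusterIP keeps it internal; LoadBalancer exposes it externally
    },
}
```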
Answer:
Filesystems in the Kubernetes container provide
ephemeral storage, by default. This means that a
restart of the pod will wipe out any data on such
containers, and therefore, this form of storage is
quite limiting in anything but trivial applications.
A Kubernetes Volume provides persistent storage that
exists for the lifetime of the pod itself. This
storage can also be used as shared disk space for
containers within the pod. Volumes are mounted at
specific mount points within the container, which
are defined by the pod configuration, and cannot
mount onto other volumes or link to other volumes.
The same volume can be mounted at different points
in the filesystem tree by different containers. |
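A sketch of a pod in which the same volume is mounted at different points by two containers, here an emptyDir volume that lives for the lifetime of the pod; names and images are assumptions.

```python
# Hypothetical pod sharing one emptyDir volume between two containers at different
# mount paths: the writer produces files that the web server then serves.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "shared-volume-demo"},
    "spec": {
        "volumes": [{"name": "shared", "emptyDir": {}}],
        "containers": [
            {"name": "writer", "image": "busybox:1.36",
             "command": ["sh", "-c", "while true; do date > /work/index.html; sleep 5; done"],
             "volumeMounts": [{"name": "shared", "mountPath": "/work"}]},
            {"name": "server", "image": "nginx:1.25",
             "volumeMounts": [{"name": "shared", "mountPath": "/usr/share/nginx/html"}]},
        ],
    },
}
```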
Answer: Kubernetes
provides a partitioning of the resources it manages
into non-overlapping sets called namespaces. They
are intended for use in environments with many users
spread across multiple teams, or projects, or even
separating environments like development, test, and
production. |
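A small, assumed example: Namespace objects for separate environments, and a pod created inside one of them.

```python
# Hypothetical Namespaces separating environments; namespaced resources are then
# created with metadata.namespace set (or with `kubectl -n dev ...`).
namespaces = [
    {"apiVersion": "v1", "kind": "Namespace", "metadata": {"name": name}}
    for name in ("dev", "test", "production")
]

pod_in_dev = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "namespace": "dev"},
    "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
}
```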
Answer:
A common application challenge is deciding where to
store and manage configuration information, some of
which may contain sensitive data. Configuration data
can be anything as fine-grained as individual
properties or coarse-grained information like entire
configuration files or JSON / XML documents.
Kubernetes provides two closely related mechanisms
to deal with this need: "configmaps" and "secrets",
both of which allow for configuration changes to be
made without requiring an application build. The
data from configmaps and secrets will be made
available to every single instance of the
application to which these objects have been bound
via the deployment. A secret and / or a configmap is
only sent to a node if a pod on that node requires
it. Kubernetes will keep it in memory on that node.
Once the pod that depends on the secret or configmap
is deleted, the in-memory copies of all bound secrets and configmaps are deleted as well. The data is accessible to the pod in one of two ways: a) as environment variables (which are created by Kubernetes when the pod is started) or b) as files on the container filesystem that are visible only from within the pod. The data itself is stored on the master, which is a highly secured machine to which nobody should have login access.

The biggest difference between a secret and a configmap is that the content of the data in a secret is Base64 encoded. Recent versions of Kubernetes have introduced support for encryption as well. Secrets are often used to store data like certificates, passwords, pull secrets (credentials to work with image registries), and SSH keys. |
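A hedged sketch of the two objects and of consuming them as environment variables (option "a" above); the names, keys and the obviously fake password are illustrative.

```python
import base64

# Hypothetical ConfigMap and Secret; note the Secret's values are Base64 encoded.
configmap = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "app-config"},
    "data": {"LOG_LEVEL": "info"},
}

secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "db-credentials"},
    "type": "Opaque",
    "data": {"password": base64.b64encode(b"s3cr3t").decode()},
}

# Container-spec fragment consuming both as environment variables; they could equally
# be mounted as files on the pod's filesystem (option "b").
container_env = [
    {"name": "LOG_LEVEL",
     "valueFrom": {"configMapKeyRef": {"name": "app-config", "key": "LOG_LEVEL"}}},
    {"name": "DB_PASSWORD",
     "valueFrom": {"secretKeyRef": {"name": "db-credentials", "key": "password"}}},
]
```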
Answer:
It is very easy to address the scaling of stateless
applications: one simply adds more running
pods—which is something that Kubernetes does very
well. Stateful workloads are much harder, because
the state needs to be preserved if a pod is
restarted, and if the application is scaled up or
down, then the state may need to be redistributed.
Databases are an example of stateful workloads. When
run in high-availability mode, many databases come
with the notion of a primary instance and secondary
instance(s). In this case, the notion of ordering of
instances is important. Other applications like
Kafka distribute the data amongst their brokers—so
one broker is not the same as another. In this case,
the notion of instance uniqueness is important.
StatefulSets are controllers (see Controller
Manager, below) that are provided by Kubernetes that
enforce the properties of uniqueness and ordering
amongst instances of a pod and can be used to run
stateful applications. |
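A rough StatefulSet sketch: pods get stable, ordered identities (db-0, db-1, db-2), stable DNS names through the headless Service named in serviceName, and per-pod storage from volumeClaimTemplates. The image, sizes and names are assumptions.

```python
# Hypothetical StatefulSet for an ordered, stateful database workload.
statefulset = {
    "apiVersion": "apps/v1",
    "kind": "StatefulSet",
    "metadata": {"name": "db"},
    "spec": {
        "serviceName": "db",          # headless Service that provides stable per-pod DNS names
        "replicas": 3,
        "selector": {"matchLabels": {"app": "db"}},
        "template": {
            "metadata": {"labels": {"app": "db"}},
            "spec": {"containers": [{
                "name": "db",
                "image": "postgres:15",
                "volumeMounts": [{"name": "data", "mountPath": "/var/lib/postgresql/data"}],
            }]},
        },
        # Each pod gets its own PersistentVolumeClaim, so state survives restarts.
        "volumeClaimTemplates": [{
            "metadata": {"name": "data"},
            "spec": {"accessModes": ["ReadWriteOnce"],
                     "resources": {"requests": {"storage": "10Gi"}}},
        }],
    },
}
```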
Answer: Normally, the locations where pods are run are determined by the algorithm implemented in the Kubernetes Scheduler. For some use cases, though, there could be a need to run a pod on every single node in the cluster. This is useful for use cases like log collection, ingress controllers, and storage services. The ability to do this kind of pod scheduling is implemented by the feature called DaemonSets. |
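A hypothetical DaemonSet sketch for the log-collection use case, reading the node's /var/log through a hostPath volume; the image and names are assumptions.

```python
# Hypothetical DaemonSet that runs one log-collection pod on every node.
daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "log-agent", "namespace": "kube-system"},
    "spec": {
        "selector": {"matchLabels": {"name": "log-agent"}},
        "template": {
            "metadata": {"labels": {"name": "log-agent"}},
            "spec": {
                "containers": [{
                    "name": "agent",
                    "image": "fluentd:v1.16-1",  # illustrative image
                    "volumeMounts": [{"name": "varlog", "mountPath": "/var/log"}],
                }],
                "volumes": [{"name": "varlog", "hostPath": {"path": "/var/log"}}],
            },
        },
    },
}
```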
Answer: Kubernetes
enables clients (users or internal components) to
attach keys called "labels" to any API object in the
system, such as pods and nodes. Correspondingly,
"label selectors" are queries against labels that
resolve to matching objects. When a service is
defined, one can define the label selectors that
will be used by the service router / load balancer
to select the pod instances that the traffic will be
routed to. Thus, simply changing the labels of the
pods or changing the label selectors on the service
can be used to control which pods get traffic and
which don't, which can be used to support various
deployment patterns like blue-green deployments or
A-B testing. This capability to dynamically control
how services utilize implementing resources provides
a loose coupling within the infrastructure. For example, if an application's pods have labels for a system tier (with values such as front-end or back-end) and a release_track (with values such as canary or production), then an operation on all of the back-end, canary pods can use a label selector such as: tier=back-end AND release_track=canary.

Just like labels, field selectors also let one select Kubernetes resources. Unlike labels, the selection is based on the attribute values inherent to the resource being selected, rather than user-defined categorization. metadata.name and metadata.namespace are field selectors that are present on all Kubernetes objects. Other selectors that can be used depend on the object/resource type. |
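As an assumed example using the official Kubernetes Python client (if it is installed and a kubeconfig is available), the same tier=back-end AND release_track=canary selection, plus a field-selector query; the namespace and label values are illustrative.

```python
from kubernetes import client, config  # official Kubernetes Python client, assumed installed

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

# Label selector: the back-end tier on the canary release track.
canary_backends = v1.list_namespaced_pod(
    namespace="default",
    label_selector="tier=back-end,release_track=canary",
)

# Field selector: selects on attributes inherent to the resource rather than labels.
running_pods = v1.list_namespaced_pod(
    namespace="default",
    field_selector="status.phase=Running",
)

for pod in canary_backends.items:
    print(pod.metadata.name, pod.metadata.labels)
```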
Answer:
A ReplicaSet declares the number of instances of a
pod that is needed, and a Replication Controller
manages the system so that the number of healthy
pods that are running matches the number of pods
declared in the ReplicaSet (determined by evaluating
its selector). Deployments are a higher level management mechanism for ReplicaSets. While the Replication Controller manages the scale of the ReplicaSet, Deployments will manage what happens to the ReplicaSet - whether an update has to be rolled out, or rolled back, etc. When deployments are scaled up or down, this results in the declaration of the ReplicaSet changing - and this change in declared state is managed by the Replication Controller. |
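A small, assumed example with the official Kubernetes Python client: scaling a hypothetical Deployment named web changes its declared state, and the controllers reconcile the ReplicaSet and the running pods to match. Rolling back a bad update could be done separately with kubectl rollout undo deployment/web.

```python
from kubernetes import client, config  # official Kubernetes Python client, assumed installed

config.load_kube_config()
apps = client.AppsV1Api()

# Scaling the Deployment updates its ReplicaSet's declared replica count; the
# controller then reconciles the number of running pods to match the declaration.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```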
Answer: Add-ons
operate just like any other application running
within the cluster: they are implemented via pods
and services, and are only different in that they
implement features of the Kubernetes cluster. The
pods may be managed by Deployments,
ReplicationControllers, and so on. There are many
add-ons, and the list is growing. Some of the more
important are:
|
Answer: Containers
emerged as a way to make software portable. The
container contains all the packages you need to run
a service. The provided filesystem makes containers
extremely portable and easy to use in development. A
container can be moved from development to test or
production with no or relatively few configuration
changes. Historically, Kubernetes was suitable only for stateless services. However, many applications have a database, which requires persistence, and this led to the creation of persistent storage for Kubernetes. Implementing persistent storage for containers is one of the top challenges for Kubernetes administrators, DevOps and cloud engineers. Containers may be ephemeral, but more and more of their data is not, so one needs to ensure the data's survival in case of container termination or hardware failure. When deploying containers with Kubernetes or containerized applications, companies often realize that they need persistent storage: they need to provide fast and reliable storage for databases, root images and other data used by the containers.

Container Attached Storage is a type of data storage that emerged as Kubernetes gained prominence. The Container Attached Storage approach or pattern relies on Kubernetes itself for certain capabilities while delivering primarily block, file and object interfaces to workloads running on Kubernetes. Common attributes of Container Attached Storage include the use of extensions to Kubernetes, such as custom resource definitions, and the use of Kubernetes itself for functions that otherwise would be separately developed and deployed for storage or data management. Examples of functionality delivered by custom resource definitions or by Kubernetes itself include retry logic, delivered by Kubernetes itself, and the creation and maintenance of an inventory of available storage media and volumes, typically delivered via a custom resource definition.

In addition to its landscape, the Cloud Native Computing Foundation (CNCF) has published other information about Kubernetes persistent storage, including a blog post helping to define the container attached storage pattern, which can be thought of as one that uses Kubernetes itself as a component of the storage system or service. More information about the relative popularity of these and other approaches can be found in the CNCF's landscape survey, which showed that OpenEBS from MayaData and Rook, a storage orchestration project, were the two projects most likely to be in evaluation as of the fall of 2019. |
Answer: The design principles underlying Kubernetes allow one to programmatically create, configure, and manage Kubernetes clusters. This function is exposed via an API called the Cluster API. A key concept embodied in the API is the notion that the Kubernetes cluster is itself a resource / object that can be managed just like any other Kubernetes resources. Similarly, machines that make up the cluster are also treated as a Kubernetes resource. The API has two pieces - the core API, and a provider implementation. The provider implementation consists of cloud-provider specific functions that let Kubernetes provide the cluster API in a fashion that is well-integrated with the cloud-provider's services and resources. |
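A very rough sketch of what a Cluster API object can look like; the field names follow the upstream cluster.x-k8s.io v1beta1 API, but the provider kinds, names and CIDR here are assumptions for illustration only.

```python
# Hypothetical Cluster API object: the cluster itself is declared as a Kubernetes
# resource, and the infrastructureRef/controlPlaneRef point at provider-specific
# implementations (e.g. a Docker- or cloud-provider-backed cluster).
cluster = {
    "apiVersion": "cluster.x-k8s.io/v1beta1",
    "kind": "Cluster",
    "metadata": {"name": "demo", "namespace": "default"},
    "spec": {
        "clusterNetwork": {"pods": {"cidrBlocks": ["192.168.0.0/16"]}},
        "infrastructureRef": {
            "apiVersion": "infrastructure.cluster.x-k8s.io/v1beta1",
            "kind": "DockerCluster",          # provider-specific kind, assumed here
            "name": "demo",
        },
        "controlPlaneRef": {
            "apiVersion": "controlplane.cluster.x-k8s.io/v1beta1",
            "kind": "KubeadmControlPlane",
            "name": "demo-control-plane",
        },
    },
}
```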
Answer: We highly recommend that you purchase the Annual Premium subscription to access all contents on this website, or purchase a subscription for this product itself. |
We have training subscribers from TCS, IBM, INFOSYS, ACCENTURE, APPLE, HEWITT, Oracle, NetApp, Capgemini, etc.
One of the testimonials from a training subscriber: