Kubernetes ReplicaSet
A Kubernetes ReplicaSet is a basic workload controller that ensures a cluster runs a specified number of identical Pod replicas. Pods, Kubernetes’ smallest deployable units, are transient and are not rescheduled on their own if a node fails, so reliable distributed systems need a controller to keep replicas running. The ReplicaSet provides redundancy, scaling, and self-healing.
How does a ReplicaSet work?
A ReplicaSet’s intelligence is powered by a reconciliation loop: an ongoing background process that continuously watches two states, the desired state and the observed state.
- Desired State: The configuration specified in the YAML manifest (for example, “run three replicas of a web server”).
- Observed State: The number of healthy Pods presently running in the cluster.
The ReplicaSet controller continuously compares these two values. If the observed count falls below the desired count, for example after a Pod crash or node failure, the controller creates new Pods. Conversely, it terminates any excess Pods to restore equilibrium. The link between a ReplicaSet and its Pods is maintained through the metadata.ownerReferences field in each Pod’s metadata, which identifies the controlling ReplicaSet.
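For illustration, the owner reference appears in a managed Pod’s metadata roughly as follows (Pod and ReplicaSet names are hypothetical, and the output is trimmed):

```yaml
# Trimmed sketch of `kubectl get pod frontend-b2zdv -o yaml`
# (names are hypothetical; a real Pod also carries a uid here):
apiVersion: v1
kind: Pod
metadata:
  name: frontend-b2zdv
  labels:
    tier: frontend
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend          # the controlling ReplicaSet
    controller: true        # marks this owner as the managing controller
    blockOwnerDeletion: true
```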
Architecture and Manifest Structure
A ReplicaSet can be thought of as a “cookie cutter” plus a target number of cookies: the Pod template specified in the manifest is the cutter, and all managed Pods are uniform, interchangeable copies stamped from it.
A standard ReplicaSet manifest requires several key fields:
- apiVersion: Specifies the Kubernetes API version (typically apps/v1).
- kind: Must be set to ReplicaSet.
- metadata: Includes information such as the name of the ReplicaSet.
- spec: Contains the core operational requirements:
- replicas: The number of Pods that should be running (defaults to 1 if not specified).
- selector: A label query used to identify which Pods to manage.
- template: The specification for new Pods, including container images, ports, and resource limits.
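The frontend.yaml example referenced later in this article is not reproduced here, so the following is a hedged reconstruction of what such a manifest might look like (image and replica count are illustrative):

```yaml
# Sketch of a minimal frontend.yaml (illustrative values):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3                # desired state: three identical Pods
  selector:
    matchLabels:
      tier: frontend         # must match the template's labels below
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: web
        image: nginx:1.25    # illustrative container image
        ports:
        - containerPort: 80
```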
When to use a ReplicaSet
A ReplicaSet keeps a set number of Pod replicas active. A Deployment, however, is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods, among other benefits. Therefore, unless you need custom update orchestration or don’t need updates at all, we advise using Deployments rather than ReplicaSets directly.
As a result, you might never need to work with ReplicaSet objects. Instead, use a Deployment and define your application in its spec section.
How ReplicaSets Overcame Past Limitations
The ReplicaSet is the direct successor to the older ReplicationController. The most notable change is the addition of set-based selectors, which are significantly more flexible than the equality-based selectors its predecessor supported.
Equality-based selectors match only exact key-value pairs (such as app: frontend).
Set-based selectors enable complex criteria, such as matching Pods where a label key exists, or where a value falls within a given set (e.g., environment in (production, qa)).
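As a sketch, a set-based selector in an apps/v1 ReplicaSet might look like this (label keys and values are illustrative):

```yaml
# Set-based selector fragment (illustrative labels):
selector:
  matchExpressions:
  - key: environment
    operator: In          # value must be one of the listed values
    values:
    - production
    - qa
  - key: tier
    operator: Exists      # the label key must be present, any value
```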
This decoupling of selector and template also makes non-template Pod acquisition possible: a ReplicaSet can “adopt” Pods that match its selector, even though it did not create them, as long as they have no controller listed as an owner reference.
Non-Template Pod acquisitions
Although you can create bare Pods without any issues, it is strongly recommended that their labels do not match the selector of any of your ReplicaSets. This is because a ReplicaSet is not limited to owning the Pods created from its template; it can acquire other Pods through the mechanisms described in the preceding sections.
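For illustration, a bare Pod like the following (name and image are hypothetical) would be adopted by any ReplicaSet whose selector matches tier: frontend:

```yaml
# Hypothetical bare Pod whose labels collide with a ReplicaSet selector.
# A ReplicaSet selecting tier: frontend will adopt this Pod, count it
# toward its replica total, and may terminate it as surplus.
apiVersion: v1
kind: Pod
metadata:
  name: lone-pod
  labels:
    tier: frontend        # matches the ReplicaSet's selector
spec:
  containers:
  - name: app
    image: nginx          # illustrative image
```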
Writing a ReplicaSet manifest
As with all other Kubernetes API objects, a ReplicaSet needs the apiVersion, kind, and metadata fields. For ReplicaSets, the kind is always ReplicaSet.
When the control plane creates new Pods for a ReplicaSet, the .metadata.name of the ReplicaSet is part of the basis for naming those Pods. The name of a ReplicaSet must be a valid DNS subdomain value, but this can produce unexpected results for the Pod hostnames. For best compatibility, the name should follow the more restrictive rules for a DNS label.
A ReplicaSet also needs a .spec section.
Pod Template
The .spec.template is a pod template which is also required to have labels in place. In our frontend.yaml example we had one label: tier: frontend. Be careful not to overlap with the selectors of other controllers, lest they try to adopt this Pod.
For the template’s restart policy field, .spec.template.spec.restartPolicy, the only allowed value is Always, which is the default.
Pod Selector
The .spec.selector field is a label selector. As discussed earlier these are the labels used to identify potential Pods to acquire. In our frontend.yaml example, the selector was:
matchLabels:
tier: frontend
In the ReplicaSet, .spec.template.metadata.labels must match .spec.selector, or the ReplicaSet will be rejected by the API.
Note:
For 2 ReplicaSets specifying the same .spec.selector but different .spec.template.metadata.labels and .spec.template.spec fields, each ReplicaSet ignores the Pods created by the other ReplicaSet.
Replicas
You can specify how many Pods should run concurrently by setting .spec.replicas. The ReplicaSet will create/delete its Pods to match this number.
If you do not specify .spec.replicas, then it defaults to 1.
Working with ReplicaSets
Step 1: Create a YAML file that defines the ReplicaSet. This file should include the number of replicas you want, the container image to use, and any other desired properties such as environment variables or resource limits.
To create the ReplicaSet, use the kubectl create command and pass the YAML file to it as an argument:
$ kubectl create -f replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: <RSName>
spec:
  replicas: <noOfPODReplicas>
  selector:               # To match Pod labels.
    matchLabels:          # Equality-based selector
      <key>: <value>
    matchExpressions:     # Set-based selector
    - key: <key>
      operator: <In/NotIn>
      values:
      - <value1>
      - <value2>
  template:
    metadata:
      name: <PODName>
      labels:
        <key>: <value>
    spec:
      containers:
      - name: <nameOfTheContainer>
        image: <imageName>
        ports:
        - containerPort: <containerPort>
Step 2: Create the ReplicaSet using the configuration in replicaset.yaml
$ kubectl create -f replicaset.yaml

Step 3: Verify that the ReplicaSet was created
$ kubectl get replicasets
Step 4: View the ReplicaSet in more detail
$ kubectl describe replicaset my-replicaset
Deleting a ReplicaSet and its Pods
To delete a ReplicaSet and all of its Pods, use kubectl delete. By default, the garbage collector automatically deletes all of the dependent Pods.
kubectl delete rs <name of the replicaset>
When using the REST API or the client-go library, you must set propagationPolicy to Background or Foreground in the -d option. For example:
kubectl proxy --port=8080
curl -X DELETE 'localhost:8080/apis/apps/v1/namespaces/default/replicasets/frontend' \
-d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
-H "Content-Type: application/json"
Deleting The Pod
Pods matching a label selector can be deleted using the following command.
kubectl delete pods --selector <key>=<value>
Isolating Pods from a ReplicaSet
Pods can be isolated from a ReplicaSet by changing their labels so that they no longer match the ReplicaSet’s selector. The labels can be modified with the following steps:
Step 1: Identify the Pod you want to isolate by listing all Pods.
kubectl get pods
Step 2: Edit the Pod’s labels so that they no longer match the ReplicaSet’s selector.
kubectl edit pod <Name of the Pod>
Step 3: Save and exit the editor; kubectl edit applies the change immediately. If you edited a local manifest instead, apply it with:
kubectl apply -f <pod-manifest>.yaml
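As a shortcut for steps 2-3, the label can also be changed in a single command with kubectl label (the Pod name and labels below are hypothetical):

```shell
# Overwrite the label so the Pod no longer matches the selector tier=frontend:
kubectl label pod frontend-abc12 tier=debug --overwrite

# The ReplicaSet notices the shortfall and creates a replacement Pod,
# while the relabeled Pod keeps running for interactive debugging:
kubectl get pods -L tier
```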
Scaling a ReplicaSet
Scaling the replicaset can be done by using two methods.
- Manually by using the command.
- ReplicaSet as a Horizontal Pod Autoscaler Target.
Manually by using the command
Manually scale the ReplicaSet by changing the replica count using this command.
kubectl scale rs <name of replicaset> --replicas=5
Here kubectl scale rs scales the named ReplicaSet to the specified number of replicas; in this example, five Pods will run.
ReplicaSet as a Horizontal Pod Autoscaler Target
A ReplicaSet can also be scaled by a Horizontal Pod Autoscaler (HPA). The HPA automatically adjusts the number of Pods to match incoming load, for example scaling up when Pod CPU utilization crosses the target threshold specified in the HPA manifest.
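A sketch of an autoscaling/v2 HorizontalPodAutoscaler targeting a ReplicaSet (names and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-scaler
spec:
  scaleTargetRef:            # the workload to scale
    apiVersion: apps/v1
    kind: ReplicaSet
    name: frontend           # illustrative ReplicaSet name
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale up above 50% average CPU
```

The same effect can be achieved imperatively with kubectl autoscale rs frontend --min=3 --max=10 --cpu-percent=50.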
Alternatives to ReplicaSet
Deployment (recommended)
A Deployment is an object that can own ReplicaSets and update them and their Pods via declarative, server-side rolling updates. While ReplicaSets can be used independently, today they are primarily used by Deployments as a mechanism to orchestrate the creation, deletion, and updating of Pods. When you use a Deployment, you don’t have to manage the ReplicaSets it generates; the Deployment owns and manages them. Therefore, when you want ReplicaSets, it is advised to use a Deployment.
Bare Pods
Unlike manually created Pods, Pods managed by a ReplicaSet are automatically replaced if they are deleted or terminated, for example after a node failure or disruptive node maintenance such as a kernel upgrade. For this reason, we recommend a ReplicaSet even if your application requires only a single Pod. Think of it as similar to a process supervisor, except that it supervises multiple Pods across multiple nodes instead of individual processes on a single node. A ReplicaSet delegates local container restarts to an agent on the node, such as the kubelet.
Job
For Pods that are supposed to end on their own, such as batch jobs, use a Job rather than a ReplicaSet.
DaemonSet
For Pods that provide a machine-level function, such as machine monitoring or machine logging, use a DaemonSet rather than a ReplicaSet. These Pods have a lifetime tied to the machine’s: they need to be running before other Pods start, and they can be safely terminated when the machine is ready to be rebooted or shut down.
ReplicationController
ReplicaSets are the successors to ReplicationControllers. The two serve the same purpose and behave similarly, except that a ReplicationController does not support set-based selector requirements, as described in the labels user guide. As such, ReplicaSets are preferred over ReplicationControllers.
Advanced Management Features
- Pod Isolation: By changing a Pod’s labels so they no longer match the ReplicaSet selector, administrators can isolate a problematic Pod for debugging. The ReplicaSet automatically spins up a healthy replacement to maintain the desired count, while the original Pod remains available for interactive troubleshooting.
- Pod Deletion Cost: During a scale-down, the controller uses a defined heuristic to decide which Pods to terminate, preferring pending or unschedulable Pods first. Users can influence this ordering with the controller.kubernetes.io/pod-deletion-cost annotation: Pods with a lower deletion cost are removed first.
- Cascaded Deletion: When a ReplicaSet is deleted, all associated Pods are deleted with it by default. Orphan deletion with --cascade=orphan removes the controller while keeping the Pods running.
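For example, the pod-deletion-cost annotation is set on individual Pods; a sketch with illustrative names and values:

```yaml
# Illustrative: a Pod marked as cheaper to delete during scale-down.
apiVersion: v1
kind: Pod
metadata:
  name: frontend-spare
  labels:
    tier: frontend
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "-100"  # lower cost: removed first
spec:
  containers:
  - name: app
    image: nginx          # illustrative image
```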
Best Practices
When using ReplicaSets, administrators should adhere to a number of best practices to guarantee stable operations:
- Prefer Deployments: Unless custom update orchestration is needed, always utilize a deployment to manage ReplicaSets.
- Resource Limits: To avoid resource depletion, provide CPU and RAM requests and limits within the Pod template.
- Readiness Probes: Configure readiness probes so that Pods receive traffic only after they have fully initialized.
- Anti-Affinity: Use Pod anti-affinity to spread replicas across multiple nodes and avoid a single point of failure.
- Monitoring: Use tools such as Prometheus and Grafana to track replica health and performance.
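The anti-affinity recommendation can be expressed in the Pod template; a sketch, assuming replicas labeled tier: frontend:

```yaml
# Fragment of a ReplicaSet spec: require each replica on a distinct node.
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                tier: frontend              # illustrative label
            topologyKey: kubernetes.io/hostname  # one replica per node
```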
In summary, the ReplicaSet underpins the reliability of stateless workloads in Kubernetes. With its robust reconciliation logic and flexible selection mechanism, it turns ephemeral containers into self-healing infrastructure.
