Kubernetes ReplicaSet

We need multiple replicas of containers running at a time because:
  • Redundancy: fault-tolerant system
  • Scale: more requests can be served
  • Sharding: computation can be handled in a parallel manner
Multiple copies of pods can be created manually, but it’s a tedious process. We need a way in which a replicated set of pods can be managed and defined as a single entity. This is what the ReplicaSet does: it ensures that the right types and number of pods are running correctly. Pods managed by a ReplicaSet are automatically rescheduled during a node failure or network partition.
When defining a ReplicaSet, we need:
  • specification of the pods we want to create
  • desired number of replicas
  • a way of finding the pods controlled by the ReplicaSet
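Putting these three requirements together, a minimal ReplicaSet manifest might look like the following sketch (the names and image are illustrative placeholders, not from any real deployment):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-rs          # hypothetical name
spec:
  replicas: 3             # desired number of replicas
  selector:
    matchLabels:
      app: myapp          # label query used to find the pods it controls
  template:               # specification of the pods to create
    metadata:
      labels:
        app: myapp        # must match the selector above
    spec:
      containers:
      - name: myapp
        image: myapp:1.0  # hypothetical image
```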
Reconciliation Loops
This is the main concept behind how ReplicaSets work and it is fundamental to the design and implementation of Kubernetes. Here we deal with two states:
  • desired state is the state you want – the desired number of replicas
  • the current state is the observed state at the present moment – the number of pods presently running
  • The reconciliation loop runs constantly to check if there is a mismatch between the current and the desired state of the system.
  • If it finds a mismatch, then it takes the required actions to match the current state with what’s desired.
  • For example, in the case of replicating pods, it’ll decide whether to scale up or down the number of pods based on what’s specified in the pod’s YAML. If there are 2 pods and we require 3, it’ll create a new pod.
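The heart of the loop can be expressed in a few lines; the sketch below is illustrative Python, not the actual Kubernetes controller code:

```python
def reconcile(desired: int, current: int) -> int:
    """Return how many pods to create (positive) or delete (negative)
    so that the current state matches the desired state."""
    return desired - current

# 2 pods running, 3 desired: create 1 new pod
print(reconcile(desired=3, current=2))   # 1
# 5 pods running, 3 desired: delete 2 pods
print(reconcile(desired=3, current=5))   # -2
```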
Benefits:
  • goal-driven
  • self-healing
  • can be expressed in few lines of code
Relating Pods and ReplicaSets
  • pods and ReplicaSets are loosely coupled
  • ReplicaSets don’t own the pods they create
  • use label queries to identify which set of pods they’re managing
This decoupling supports:
Adopting Existing Containers:
If we want to replicate an existing pod and if the ReplicaSet owned the pod, then we’d have to delete the pod and re-create it through a ReplicaSet. This would lead to downtime. But since they’re decoupled, the ReplicaSet can simply “adopt” the existing pod.
Quarantining Containers
If a pod is misbehaving and we want to investigate what’s wrong, we can isolate it by changing its labels instead of killing it. This will dissociate it from the ReplicaSet and/or service and consequently the controller will notice that a pod is missing and create a new one. The bad pod is available to developers for debugging.
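The label mechanics behind quarantining can be illustrated with a small sketch (illustrative Python, not Kubernetes source; the label names are hypothetical):

```python
def matches(selector: dict, labels: dict) -> bool:
    """True if every key/value pair in the selector appears in the labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"app": "myapp"}                 # the ReplicaSet's label query
pod = {"app": "myapp", "env": "prod"}       # hypothetical pod labels

print(matches(selector, pod))               # True: the pod is managed

pod["app"] = "myapp-debug"                  # quarantine by relabeling
print(matches(selector, pod))               # False: the controller now sees one
                                            # pod missing and creates a replacement
```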
Designing with ReplicaSets
  • represent a single, scalable microservice in your app
  • every pod created is homogeneous and interchangeable
  • designed for stateless services
ReplicaSet spec
  • must have a unique name within its namespace
  • spec section that contains:
    • number of replicas
    • pod template
Pod Templates
  • created using the pod template in the spec section
  • the ReplicaSet controller creates & submits the pod manifest to the API server directly
Labels
  • ReplicaSets use labels to filter the pods they track and are responsible for
  • When a ReplicaSet is created, it queries the API server for the list of pods, filters it by labels. It adds/deletes pods based on what’s returned.
Scaling ReplicaSets
Imperative Scaling
kubectl scale replicaset <replica-set-name> --replicas=4
Don’t forget to update any text-file configs you have to match the value set imperatively.
Declarative Scaling
Change the replicas field in the config file via version control and then apply it to the cluster.
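For example, assuming the ReplicaSet’s manifest lives in a version-controlled file (the file name here is hypothetical), the relevant excerpt changes like this:

```yaml
# replicaset.yaml (excerpt)
spec:
  replicas: 4   # changed from 3; apply with: kubectl apply -f replicaset.yaml
```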
Autoscaling a ReplicaSet
K8s has a mechanism called horizontal pod autoscaling (HPA). It is called that because k8s differentiates between:
  • horizontal scaling: create additional replicas of a pod
  • vertical scaling: adding resources (CPU, memory) to a particular pod
HPA relies on a pod known as heapster running in your cluster (in newer clusters this role is played by metrics-server). This pod keeps track of metrics and provides an API for consuming those metrics, which HPA uses when making scaling decisions.
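An HPA can also be declared in YAML; below is a sketch targeting a hypothetical ReplicaSet named myapp-rs, scaling on CPU utilization (the autoscaling/v2 API shown assumes a reasonably recent cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: myapp-rs         # the object whose replica count HPA manages
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # target average CPU utilization
```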
Note: There is no direct link between the HPA and the ReplicaSet object. But it’s a bad idea to combine imperative/declarative replica management with autoscaling: the two can fight over the replica count, which leads to unexpected behaviour.
Deleting ReplicaSets
Deleting a ReplicaSet also deletes all the pods it created and managed. To delete only the ReplicaSet object and not the pods, use --cascade=false.

Deploy docker image to Azure Kubernetes Service

In this tutorial, we will learn how to deploy a container image into a Kubernetes cluster using Azure Kubernetes Service (AKS). I am assuming that you have already pushed an image into your Azure Container Registry (ACR).

You may refer to the articles below for the Docker image & Azure Container Registry steps.

NOTE: We will use the Azure CLI, so install Azure CLI version 2.0.29 or later. Run az --version to find the version.

  1. Select your subscription. Replace <> with your subscription id.

az account set --subscription <>

  2. If you have an existing resource group, you can skip this step; otherwise, create a new resource group.

az group create --name <> --location <>

Example:

az group create --name helloworldRG --location westeurope

  3. Create the Azure Kubernetes Service cluster.

az aks create --resource-group <> --name <> --node-count <> --enable-addons monitoring --generate-ssh-keys --node-resource-group <>

Example:

az aks create --resource-group helloworldRG --name helloworldAKS2809 --node-count 1 --enable-addons monitoring --generate-ssh-keys --node-resource-group helloworldNodeRG

While creating an AKS cluster, one more resource group is also created to store the AKS resources. For more details, refer to Why are two resource groups created with AKS?

  4. Connect to the AKS cluster.

az aks get-credentials --resource-group <>  --name <>

Example:

az aks get-credentials --name "helloworldAKS2809" --resource-group "helloworldRG"

To verify your connection, run the below command to get nodes.

kubectl get nodes


  5. Integrate AKS with ACR.

az aks update -n myAKSCluster -g myResourceGroup --attach-acr <>

Example

az aks update -n helloworldAKS2809 -g helloworldRG --attach-acr helloworldACR1809

  6. Deploy the image to AKS.

Create a file hellowordapp.yaml and copy the content below into the file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworldapp
  labels:
    app: helloworldapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworldapp
  template:
    metadata:
      labels:
        app: helloworldapp
    spec:
      containers:
      - name: hellositecore2705
        image: helloworldacr1809.azurecr.io/helloworldapp:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: helloworldapp
spec:
  type: LoadBalancer
  selector:
    app: helloworldapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

You can use a YAML validator to check your yaml file.

It’s time to deploy our docker image from ACR to AKS.

kubectl apply -f <>

Example:

kubectl apply -f hellowordapp.yaml

To get the external IP run the command.

kubectl get service helloworldapp


Open a web browser and navigate to your external IP. The web page will open as shown below.

I hope this information helped you. If you have any feedback, questions or suggestions for improvement please let me know in the comments section.

Alternative to Azure SQL Server authentication

You might have faced issues when you forget your Azure SQL Server password and are unable to log in to the server. Follow the steps below to configure an admin in Azure SQL Server via the Azure Portal.
1. Go to your Azure SQL Server and navigate to Active Directory admin in the left navigation bar. Then click Set admin and add your Azure account.
2. Open SSMS, enter your SQL Server name, and under Authentication select Active Directory - Password (or Active Directory - Universal with MFA support if MFA is enabled).
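Once the admin is set, clients can also connect programmatically. As a sketch, an ADO.NET-style connection string using Active Directory password authentication might look like the following (server, database, user, and password are all placeholders you must substitute; the string is normally written on a single line):

```
Server=tcp:<your-server>.database.windows.net,1433;
Database=<your-database>;
Authentication=Active Directory Password;
UID=<user>@<your-tenant>;
PWD=<password>;
```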