Docker announces collaboration with Azure

Docker today announced that it has extended its strategic collaboration with Microsoft to simplify code to cloud application development for developers and development teams by more closely integrating with Azure Container Instances (ACI). 

Docker and Microsoft’s collaboration extends to other products as well. Docker will add integrations for Visual Studio Code, Microsoft’s widely used free code editor, to let developers who use the tool deploy their code as containers faster. With this integration, you can log in to Azure directly from the Docker CLI and then select a Docker context. If you have an existing resource group you can select it, or you can create a new one. Then you can run individual containers, or multi-container applications using Docker Compose.
docker login azure

docker context create aci-westus aci --aci-subscription-id xxx --aci-resource-group yyy --aci-location westus

docker context use aci-westus
Tighter integration between Docker and Microsoft developer technologies provides the following productivity benefits:

  1. Easily log into Azure directly from the Docker CLI
  2. Trigger an ACI cloud container service environment to be set up automatically with easy to use defaults and no infrastructure overhead
  3. Switch from a local context to a cloud context to run applications quickly and easily
  4. Simplify single-container and multi-container application development via the Compose specification, allowing a developer to invoke fully Docker-compatible commands seamlessly, for the first time, natively within a cloud container service (see the sketch after this list)
  5. Share work across developer teams through Docker Hub and persistent, collaborative cloud development environments, enabling remote pair programming and real-time collaborative troubleshooting
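A minimal sketch of what that looks like in practice (the Compose file, service name, and image are hypothetical, and the exact compose command may differ between beta builds):

# docker-compose.yml
version: "3.7"
services:
  web:
    image: nginx:alpine    # any public image works for a smoke test
    ports:
      - "80:80"

docker context use aci-westus
docker compose up          # runs the Compose application on ACI instead of the local engine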
For more information, developers can sign up for the Docker Desktop and VS Code beta here.

Cryptocurrency mining attack against Kubernetes clusters

Cryptojacking is the unauthorized use of someone else’s computer to mine cryptocurrency. Hackers are using ransomware-like tactics and poisoned websites to get your employees’ computers to mine cryptocurrencies. Several vendors in recent days have reported a huge surge in illegal crypto-mining activity involving millions of hijacked computers worldwide.

Kubernetes has been phenomenal in improving developer productivity. With lightweight, portable containers, packaging and running application code is effortless. However, while developers and applications can benefit from containers, many organizations have knowledge and governance gaps, which can create security gaps.

Some past cases of cryptocurrency mining on Kubernetes clusters:

Tesla Case: The cyber thieves gained access to Tesla’s Kubernetes administrative console, which exposed access credentials to Tesla’s AWS environment. Once an attacker gains admin privileges on the Kubernetes cluster, he or she can discover all the services that are running, get into every pod to access processes, inspect files and tokens, and steal secrets managed by the Kubernetes cluster.

Jenkins Case: Hackers used an exploit to install malware on Jenkins servers to perform crypto mining, making over $3 million to date. Although most affected systems were personal computers, it’s a stern warning to enterprise security teams planning to run Jenkins in containerized form that constant monitoring and security is required for business critical applications.

Recently, Azure Security Center detected a new crypto mining campaign that specifically targets Kubernetes environments. What differentiates this attack from other crypto mining attacks is its scale: within only two hours a malicious container was deployed on tens of Kubernetes clusters.

There are three ways an attacker can reach the Kubernetes dashboard:

  1. Exposed dashboard: The cluster owner exposed the dashboard to the internet, and the attacker found it by scanning.
  2. The attacker gained access to a single container in the cluster and used the internal networking of the cluster for accessing the dashboard.
  3. Legitimate browsing to the dashboard using cloud or cluster credentials.

How could this be avoided?

As per Microsoft’s recommendations, follow the below:

  1. Do not expose the Kubernetes dashboard to the Internet: Exposing the dashboard to the Internet means exposing a management interface.
  2. Apply RBAC in the cluster: When RBAC is enabled, the dashboard’s service account has by default very limited permissions which won’t allow any functionality, including deploying new containers.
  3. Grant only necessary permissions to the service accounts: If the dashboard is used, make sure to apply only necessary permissions to the dashboard’s service account. For example, if the dashboard is used for monitoring only, grant only “get” permissions to the service account.
  4. Allow only trusted images: Enforce deployment of only trusted containers, from trusted registries.
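As an illustrative sketch of recommendation 3 (the namespace, role name, and binding are assumptions, not from the Azure post), a read-only Role bound to the dashboard’s service account could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dashboard-readonly
  namespace: kubernetes-dashboard
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: dashboard-readonly-binding
  namespace: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
roleRef:
  kind: Role
  name: dashboard-readonly
  apiGroup: rbac.authorization.k8s.io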

Refer: Azure Kubernetes Services integration with Security Center

Source: https://azure.microsoft.com/en-us/blog/detect-largescale-cryptocurrency-mining-attack-against-kubernetes-clusters/

Sitecore Docker Installation Step by Step Set Up

Sitecore has officially announced support for Sitecore products in containers. There have been recent developments around Sitecore and Docker, so I would like to share some valuable information that you may be looking for. After going through Sitecore Docker and reading several articles, I was able to successfully complete the Sitecore provisioning in Docker. I hope this will lead to successful installations, and you can thank me later.😊

Prerequisites:

  1. Windows 10
  2. Enable Hyper-V + Containers features in Windows
  3. Git installed
  4. Valid Sitecore license
  5. Docker Desktop
  6. Sitecore certified account on Sitecore Downloads

Refer to the steps given below to perform the setup:

  1. Find Docker Desktop in your taskbar and switch it to Windows containers. This will restart Docker, and we are good to go.

SCdocker1

  2. Ensure that license.xml is available at C:\license\license.xml. Create a new folder, for example C:\SitecoreDocker, and clone the Sitecore Docker images.

git clone https://github.com/Sitecore/docker-images.git

  3. Open Windows PowerShell (run as administrator) and change directory to C:\SitecoreDocker\docker-images.
  4. Run this command to install:

.\Build.ps1 -SitecoreUsername "ashishxxx@xxxxxx.com" -SitecorePassword "xxxxxxxx"

Depending on your PC specifications and network speed, this will take from 30 minutes to about an hour. If you end up with green text similar to the image below, we’re ready to proceed.

SCdocker2

  5. Refer to the below commands to spin up Sitecore:

.\Set-LicenseEnvironmentVariable.ps1 -Path C:\license\license.xml # to update Sitecore license

cd .\windows\tests\9.3.x

docker-compose -f docker-compose.xp.yml up

SCdocker3

  6. Open your browser and navigate to http://localhost:44001/

SCdocker4

Log in to Sitecore using the credentials (admin/b)

SCdocker5

SCdocker6

SCdocker7

  7. Run the below command to list the XP instance containers.

docker container ls

SCdocker8

  8. Now it’s time to shut down all the containers.

docker-compose -f docker-compose.xp.yml down

SCdocker10

I hope this information helped you. If you have any feedback, questions or suggestions for improvement please let me know in the comments section.

Kubernetes Service Discovery

Service discovery solves the problem of figuring out which process is listening on which address/port for which service.
In a good service discovery system:
  • users can retrieve info quickly
  • informs users of changes to services
  • low latency
  • richer definition of what the service is (not sure what this means)
The Service Object
  • a way to create a named label selector
  • created using kubectl expose
  • Each service is assigned a virtual IP called cluster IP – the system load balances across this IP all the pods identified by the same selector
Kubernetes itself creates and runs a service called kubernetes which lets the components in your app talk to other components such as the API server.
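For example (the deployment name and port are placeholders), a Service with a matching label selector can be created from an existing deployment:

kubectl expose deployment alpaca-prod --port=8080
kubectl get services    # shows the cluster IP assigned to the new service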
Service DNS
  • k8s inbuilt DNS service maps cluster IPs to DNS names
  • installed as a system component when the cluster is created, managed by k8s
  • Within a namespace, any pod belonging to a service can be targeted just by using the service name
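For example, a service named alpaca-prod in the default namespace is reachable as alpaca-prod from pods in that namespace, or (assuming the default cluster domain) by its fully qualified name:

alpaca-prod.default.svc.cluster.local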
Readiness Checks
The Service object also tracks when your pods are ready to handle requests.
spec:
  template:
    spec:
      containers:
        - name: alpaca-prod
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            periodSeconds: 2
            initialDelaySeconds: 0
            failureThreshold: 3
            successThreshold: 1

If we add the readinessProbe section to our YAML for the deployment, then the pods created by this deployment will be checked at the /ready endpoint on port 8080. As soon as the pod comes up, we hit that endpoint every 2 seconds. If one check succeeds, the pod is considered ready; however, if three checks fail in succession, it is no longer considered ready. Requests from your application are only sent to pods that are ready.
Looking Beyond the Cluster
To allow traffic from outside the cluster to reach it, we use something known as NodePorts.
  • This feature assigns a specific port to the service along with the cluster IP
  • Whenever any node in this cluster receives a request on this port, it automatically forwards it to the service
  • If you can reach any node in the cluster, you can reach the service too, without knowing where any of its pods are running
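A minimal sketch of a NodePort Service (the names and ports are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: alpaca-prod
spec:
  type: NodePort
  selector:
    app: alpaca-prod
  ports:
    - port: 8080
      targetPort: 8080
      # if nodePort is omitted, Kubernetes assigns one from the 30000-32767 range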

Kubernetes DaemonSet

DaemonSets
  • replicates a single pod on all or a subset of nodes in the cluster
  • land an agent or daemon on every node for, say, logging, monitoring, etc.
ReplicaSet and DaemonSet both create pods which are expected to be long-running services & try to match the observed and desired state in the cluster.
Use a ReplicaSet when:
  • the app is completely decoupled from the node
  • multiple copies of the pod can be run on one node
  • no scheduling restrictions need to be in place to ensure replicas don’t run on the same node
Use a DaemonSet when:
  • one pod per node or subset of nodes in the cluster
DaemonSet Scheduler
  • By default a copy is created on every node
  • The nodes can be limited using node selectors which matches to a set of labels
  • DaemonSets determine which node a pod will run on when creating it using the nodeName field
  • Hence, the k8s scheduler ignores the pods created by DaemonSets
The DaemonSet controller:
  • creates a pod on each node that doesn’t have one
  • if a new node is added to the cluster, the DaemonSet controller adds a pod to it too
  • it tries to reconcile the observed state and the desired state
Creating DaemonSets
  • the name should be unique across a namespace
  • includes a pod spec
  • creates the pod on every node if a node selector isn’t specified
Limiting DaemonSets to Specific Nodes
  1. Add labels to nodes
$ kubectl label nodes k0-default-pool-35609c18-z7tb ssd=true
node "k0-default-pool-35609c18-z7tb" labeled
  2. Add the nodeSelector key in the pod spec to limit the number of nodes the pod will run on:

apiVersion: extensions/v1beta1
kind: "DaemonSet"
metadata:
  labels:
    app: nginx
    ssd: "true"
  name: nginx-fast-storage
spec:
  template:
    metadata:
      labels:
        app: nginx
        ssd: "true"
    spec:
      nodeSelector:
        ssd: "true"
      containers:
        - name: nginx
          image: nginx:1.10.0

  • If the label specified in the nodeSelector is added to a new or existing node, the pod will be created on that node. Similarly, if the label is removed from an existing node, the pod will also be removed.
Updating a DaemonSet
  • For k8s < 1.6, the DaemonSet was updated and the pods were deleted manually
  • For k8s >= 1.6, the DaemonSet has an equivalent to the Deployment object which manages the rollout
Updating a DaemonSet by Deleting Individual Pods
  • manually delete the pods associated with a DaemonSet
  • delete the entire DaemonSet and create a new one with the updated config. This approach causes downtime as all the pods associated with the DaemonSet are also deleted.
Rolling Update of a DaemonSet
  • for backwards compatibility, the default update strategy is the one described above
  • to use a rolling update set the following in your yaml: spec.updateStrategy.type: RollingUpdate
  • any change to spec.template or sub-field will initiate a rolling update
  • it’s controlled by the following two params:
    • spec.minReadySeconds, which determines how long a Pod must be “ready” before the rolling update proceeds to upgrade subsequent Pods
    • spec.updateStrategy.rollingUpdate.maxUnavailable, which indicates how many Pods may be simultaneously updated by the rolling update.
A higher value for spec.updateStrategy.rollingUpdate.maxUnavailable increases the blast radius in case a failure happens, but decreases the time the rollout takes.
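A minimal sketch of where these fields live in the DaemonSet manifest (the values are illustrative):

spec:
  minReadySeconds: 30
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1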
Note: In a rolling update, the pods associated with the DaemonSet are upgraded gradually while some pods are still running the old configuration until all pods have the new configuration.
To check the status of the rollout, run: kubectl rollout status daemonset <daemonset-name>
Deleting a DaemonSet
kubectl delete -f daemonset.yml
  • Deleting a DaemonSet deletes all the pods associated with it
  • in order to retain the pods and delete only the DaemonSet, use the flag: --cascade=false

Basic Kubectl Commands

Some of the basic kubectl commands apply to all k8s objects
Namespaces
  • used to group objects in the cluster
  • each namespace acts like a folder holding a set of objects
  • kubectl works with the default namespace by default
  • --namespace can be passed to specify a different one
Context
  • can be used to change the default namespace, manage different clusters or different users for authenticating to them
  • To change the namespace to abc, run: $ kubectl config set-context my-context --namespace=abc
  • This command just creates a context. To use it, run: $ kubectl config use-context my-context
  • This records the change in the kubectl config file, which is usually located at $HOME/.kube/config. This file also stores information related to finding and authenticating to the cluster.
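For reference, a context entry in that file looks roughly like this (cluster and user names are placeholders):

contexts:
- context:
    cluster: my-cluster
    namespace: abc
    user: my-user
  name: my-context
current-context: my-context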
Viewing Kubernetes API Objects
  • Everything is represented by a RESTful resource, called objects.
  • Each object has a unique HTTP endpoint scoped using namespaces.
  • kubectl uses these endpoints to fetch the objects
  • Viewing resources in the current namespace: kubectl get
  • Viewing a specific resource: kubectl get <resource-name> <obj-name>. Use describe instead of get for more detailed information about the object
  • To get more info than normally displayed, use -o wide flag
  • To get raw JSON or YAML objects from the API server, use the -o json or -o yaml flags.
Creating, Updating, and Destroying Kubernetes Objects
  • k8s objects are represented using YAML or JSON files (each object has its own JSON/YAML file?)
  • used to manipulate objects on the server
  • To create an object described in obj.yaml, run: kubectl apply -f obj.yaml. K8s automatically infers the type; it doesn’t need to be specified.
Labeling and Annotating Objects
  • used to tag objects
  • kubectl label/annotate pods bar color=red adds the color=red label to a pod called bar.
  • To remove a label: kubectl label pods bar color-
Debugging Commands
To see logs for a container: kubectl logs <pod-name>
  • Use -c to choose a particular container
  • Use -f to stream logs to terminal
  • kubectl exec -it <pod-name> -- bash provides an interactive shell within the container.

Kubernetes ReplicaSet

We need multiple replicas of containers running at a time because:
  • Redundancy: fault-tolerant system
  • Scale: more requests can be served
  • Sharding: computation can be handled in a parallel manner
Multiple copies of pods can be created manually, but it’s a tedious process. We need a way in which a replicated set of pods can be managed and defined as a single entity. This is what the ReplicaSet does: it ensures the right types and number of pods are running correctly. Pods managed by a ReplicaSet are automatically rescheduled during a node failure or network partition.
When defining ReplicaSet, we need:
  • specification of the pods we want to create
  • desired number of replicas
  • a way of finding the pods controlled by the ReplicaSet
Reconciliation Loops
This is the main concept behind how ReplicaSets work and it is fundamental to the design and implementation of Kubernetes. Here we deal with two states:
  • desired state is the state you want – the desired number of replicas
  • the current state is the observed state at the present moment – the number of pods presently running
  • The reconciliation loop runs constantly to check if there is a mismatch between the current and the desired state of the system.
  • If it finds a mismatch, then it takes the required actions to match the current state with what’s desired.
  • For example, in the case of replicating pods, it’ll decide whether to scale up or down the number of pods based on what’s specified in the pod’s YAML. If there are 2 pods and we require 3, it’ll create a new pod.
Benefits:
  • goal-driven
  • self-healing
  • can be expressed in few lines of code
Relating Pods and ReplicaSets
  • pods and ReplicaSets are loosely coupled
  • ReplicaSets don’t own the pods they create
  • use label queries to identify which set of pods they’re managing
This decoupling supports:
Adopting Existing Containers:
If we want to replicate an existing pod and if the ReplicaSet owned the pod, then we’d have to delete the pod and re-create it through a ReplicaSet. This would lead to downtime. But since they’re decoupled, the ReplicaSet can simply “adopt” the existing pod.
Quarantining Containers
If a pod is misbehaving and we want to investigate what’s wrong, we can isolate it by changing its labels instead of killing it. This will dissociate it from the ReplicaSet and/or service and consequently the controller will notice that a pod is missing and create a new one. The bad pod is available to developers for debugging.
Designing with ReplicaSets
  • represent a single, scalable microservice in your app
  • every pod created is homogeneous and interchangeable
  • designed for stateless services
ReplicaSet spec
  • must have a unique name (within a namespace or cluster-wide?)
  • spec section that contains:
    • number of replicas
    • pod template
Pod Templates
  • created using the pod template in the spec section
  • the ReplicaSet controller creates & submits the pod manifest to the API server directly
Labels
  • ReplicaSets use labels to filter pods it’s tracking and responsible for
  • When a ReplicaSet is created, it queries the API server for the list of pods, filters it by labels. It adds/deletes pods based on what’s returned.
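Putting the name, replica count, selector, and pod template together, a minimal ReplicaSet manifest might look like the following (the name, labels, and image are illustrative; the apiVersion may differ on older clusters):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kuard
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kuard
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
        - name: kuard
          image: gcr.io/kuar-demo/kuard-amd64:1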
Scaling ReplicaSets
Imperative Scaling
kubectl scale replica-set-name --replicas=4
Don’t forget to update any text-file configs you have to match the value set imperatively.
Declarative Scaling
Change the replicas field in the config file via version control and then apply it to the cluster.
Autoscaling a ReplicaSet
K8s has a mechanism called horizontal pod autoscaling (HPA). It is called that because k8s differentiates between:
  • horizontal scaling: create additional replicas of a pod
  • vertical scaling: adding resources (CPU, memory) to a particular pod
HPA uses a pod known as heapster in your cluster to work correctly. This pod keeps track of metrics and provides an API for consuming those metrics when it makes scaling decisions.
Note: There is no relationship between HPA and ReplicaSet. But it’s a bad idea to use both imperative/declarative management and autoscaling together. It can lead to unexpected behaviour.
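For example, to autoscale an existing ReplicaSet on CPU usage (the ReplicaSet name is a placeholder):

kubectl autoscale rs kuard --min=2 --max=5 --cpu-percent=80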
Deleting ReplicaSets
Deleting a ReplicaSet deletes all the pods it created and managed as well. To delete only the ReplicaSet object and not the pods, use --cascade=false.

Deploy docker image to Azure Kubernetes Service

In this tutorial, we will learn how to deploy a container image to a Kubernetes cluster using Azure Kubernetes Service. I am assuming that you have already pushed an image to your Azure container registry.

You may refer to the articles below for the Docker image and Azure container registry.

NOTE: We will use the Azure CLI, so install Azure CLI version 2.0.29 or later. Run az --version to find the version.

  1. Select your subscription. Replace <> with your subscription id.

az account set --subscription <>

  2. If you have an existing resource group, you can skip this step; otherwise, create a new resource group.

az group create --name <> --location <>

Example:

az group create --name helloworldRG --location westeurope

  3. Create the Azure Kubernetes Service cluster.

az aks create --resource-group <> --name <> --node-count <> --enable-addons monitoring --generate-ssh-keys --node-resource-group <>

Example:

az aks create --resource-group helloworldRG --name helloworldAKS2809 --node-count 1 --enable-addons monitoring --generate-ssh-keys --node-resource-group helloworldNodeRG

While creating an AKS cluster, one more resource group is also created to store the AKS node resources. For more details, refer to Why are two resource groups created with AKS?

  4. Connect to the AKS cluster.

az aks get-credentials --resource-group <>  --name <>

Example:

az aks get-credentials --name "helloworldAKS2809" --resource-group "helloworldRG"

To verify your connection, run the below command to get nodes.

kubectl get nodes

aks1

  5. Integrate AKS with ACR.

az aks update -n myAKSCluster -g myResourceGroup --attach-acr <acrName>

Example

az aks update -n helloworldAKS2809 -g helloworldRG --attach-acr helloworldACR1809

  6. Deploy the image to AKS.

Create a file hellowordapp.yaml and copy the below content in the file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworldapp
  labels:
    app: helloworldapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworldapp
  template:
    metadata:
      labels:
        app: helloworldapp
    spec:
      containers:
        - name: hellositecore2705
          image: helloworldacr1809.azurecr.io/helloworldapp:latest
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: helloworldapp
spec:
  type: LoadBalancer
  selector:
    app: helloworldapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

You can refer Yaml validator to check your yaml file.

It’s time to deploy our Docker image from ACR to AKS.

kubectl apply -f <>

Example:

kubectl apply -f hellowordapp.yaml

To get the external IP run the command.

kubectl get service helloworldapp

aks2

Open the web browser and navigate to your external IP. The web page will open as shown below:

aks3

I hope this information helped you. If you have any feedback, questions or suggestions for improvement please let me know in the comments section.

Kubernetes APIs and Access

The entire Kubernetes architecture is API-driven; the main agent for communication (internal and external) is the kube-apiserver. There are API groups that may have multiple versions and follow a domain-name format, with reserved names such as the empty group and names ending in .k8s.io.
View the API groups with a curl query:


$ curl https://127.0.0.1:6443/apis -k
...
    {
      "name": "apps",
      "versions": [
        {
          "groupVersion": "apps/v1beta1",
          "version": "v1beta1"
        },
        {
          "groupVersion": "apps/v1beta2",
          "version": "v1beta2"
        }
      ]
    },
...

Make the API calls with kubectl (recommended), or use curl or another program, providing the certificates, keys, and JSON string or file when required.

curl --cert userbob.pem \
  --key userBob-key.pem \
  --cacert /path/to/ca.pem \

It’s important to check authorizations. Use kubectl to check authorizations as administrator and as a regular user (i.e. bob) in different namespaces:

$ kubectl auth can-i create deployments
yes
$ kubectl auth can-i create deployments --as bob
no
$ kubectl auth can-i create deployments --as bob --namespace developer
yes

There are 3 APIs which can be applied to set who and what can be queried:
  • SelfSubjectAccessReview: Access review for any user, useful for delegating to others.
  • LocalSubjectAccessReview: Review is restricted to a specific namespace.
  • SelfSubjectRulesReview: A review which shows the allowed actions for a user in a namespace.
The use of reconcile allows a check of authorization necessary to create an object from a file. No output means the creation would be allowed.
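For example (the filename is a placeholder):

kubectl auth reconcile -f my-rbac-rules.yaml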
As mentioned before, the serialization for API calls must be JSON; all files written in YAML are converted to and from JSON.
The value of resourceVersion is used to determine API updates and implement optimistic concurrency, which means an object is not locked from the time it is read until the object is written.
The resourceVersion is backed via the modifiedIndex parameter in etcd and it’s unique to the namespace, kind, and server. Operations that do not modify an object, such as WATCH and GET, do not modify this value.
Annotations allow adding metadata to an object; they are key-value maps. Annotations can store larger amounts of information in human-readable format, while labels cannot.
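For example (the pod name and annotation are placeholders):

kubectl annotate pods bar description="longer, human-readable notes about this pod"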

Push container image to Azure container registry

In this tutorial, we will learn how to push a local container image to Azure Container Registry. You may refer to this article to learn more about container image creation: Create Container image

NOTE: We will use the Azure CLI, so install Azure CLI version 2.0.29 or later. Run az --version to find the version.

  1. Select your subscription. Replace <> with your subscription id

az account set --subscription <>

  2. Create a resource group

az group create --name <> --location <>

Example:

az group create --name helloworldRG --location westeurope

acr1

  3. Create the Azure Container Registry. Replace <> with a unique ACR name.

az acr create --resource-group <> --name <> --sku Basic

Example:

az acr create --resource-group helloworldRG --name helloworldACR1809 --sku Basic

acr2

  4. Log in to the container registry

az acr login --name <>

Example:

az acr login --name helloworldACR1809

  5. Tag the container image

To push a container image to Azure Container Registry, you must first tag the image with the registry’s login server.

Run the below commands:

docker images

image2

Tag container image

docker tag <image-name> <acr-login-server>/<image-name>:<tag>

Example:

docker tag helloworldapp helloworldacr1809.azurecr.io/helloworldapp:latest

acr3

Run docker images to validate tagging operation.

  6. Push the image

Example: docker push helloworldacr1809.azurecr.io/helloworldapp:latest

acr4

  7. Verify the container image in ACR

az acr repository list --name <> --output table

Example:

az acr repository list --name helloworldacr1809

Create Container image

In this tutorial, we will learn to create an image of an application. This application is a simple web application built in Node.js. Follow the steps below to create an image of this application:

  1. Clone the application’s repository.

git clone https://github.com/ashish993/helloworld.git

  2. Build the container image. The below command will create the container image and tag it as helloworldapp.

docker build .\helloworld -t helloworldapp

image1

  3. Check Docker images. Run the below command to list all the Docker images.

docker images

image2

  4. Run the container locally.

docker run -d -p 8080:80 helloworldapp

  5. Navigate to http://localhost:8080 in your browser. The web page will open as shown below.

image3

Creating Docker Images

The distributed systems that can be deployed by k8s are made up primarily of application container images.
Applications = language run time + libraries + source code
  • Problems occur when you deploy an application to a machine whose production OS does not have the required runtime or shared libraries available. Such a program, naturally, has trouble executing.
  • Deployment often involves running scripts which have a set of instructions resulting in a lot of failure cases being missed.
  • The situation becomes messier if multiple apps deployed to a single machine use different versions of the same shared library.
Containers help solve the problems described above.
Container images
A container image is a binary package that encapsulates all the files necessary to run an application inside of an OS container.
It bundles the application along with its dependencies (runtime, libraries, config, env variables) into a single artefact under a root filesystem. A container is a runtime instance of the image: what the image becomes in memory when executed. We can obtain container images in two ways:
  • a pre-existing image from a container registry (a repository of container images to which people can push images, and from which others can pull them)
  • build your own locally
Once an image is present on a computer, it can be run to get an application running inside an OS container.
The Docker Image Format
  • De facto standard for images
  • made up of a series of filesystem layers where each layer adds/removes/modifies files from the previous layer. It’s called an overlay filesystem.
Container images also have a configuration file which specifies how to set up the environment (networking, namespaces), the entry point for running the container, resource limits and privileges, etc.
The Docker image format = container root file system + config file
Containers can be of two types:
  1. System containers: like VMs, run a full boot process and have system services like ssh, cron, etc.
  2. Application containers: commonly run only a single app
Building Application Images with Docker
Dockerfiles
It is a file that automates the building of container images. Read more here. The book uses a demo app; the source code is available on GitHub. To run the kuard (Kubernetes Up and Running) image, follow these steps:
  1. Ensure you’ve Docker installed and running
  2. Download and clone the kuar repo
  3. Run make build to generate a binary
  4. Create a file named Dockerfile (no extension) containing the following:
FROM alpine
COPY bin/1/amd64/kuard /kuard
ENTRYPOINT ["/kuard"]
  5. Next, we build the image using the following command: $ docker build -t kuard-amd64:1 . The -t flag specifies the name and tag; tags are a way to version Docker images. The . tells Docker to use the Dockerfile present in the current directory to build the image.
alpine as mentioned in the Dockerfile is a minimal Linux distribution that is used as a base image. More info can be found here.
Image Security
Don’t ever have passwords/secrets in any layer of your container image. Deleting it from one layer will not help if the preceding layers contain sensitive info.
Optimising Image Sizes
  • If a file is added in a preceding layer and removed in a later one, it is still present in (and adds to the size of) the image, even though it is inaccessible.
  • Every time a layer is changed, every layer that comes after is also changed. Changing any layer the image uses means that layer and all layers dependent on it need to be rebuilt, re-pushed and re-pulled for the image to work. As a rule of thumb, the layers should be ordered from least likely to change to most likely to change to avoid a lot of pushing and pulling.
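As an illustrative sketch of that rule of thumb (a hypothetical Node.js app, not the book’s kuard binary), dependencies go into early layers and the frequently changing source code into the last one:

FROM node:alpine
# dependency manifests change rarely: install them in early layers
COPY package.json package-lock.json ./
RUN npm install
# application source changes often: keep it in the last layer
COPY . .
ENTRYPOINT ["node", "server.js"]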
Storing Images in a Remote Registry
To easily share container images to promote reuse and make them available on more machines, we use what’s called a registry. It’s a remote location where we can push our images and other people can download them from there. They’re of two types:
  1. Public: anyone can download images
  2. Private: authorisation is needed to download images
The book uses the Google Container Registry whereas I used the Docker Hub. After creating an account on Docker Hub, run the following commands:
  1. $ docker login
  2. $ docker tag kuard-amd64:1 $DockerHubUsername/kuard-amd64:1
  3. $ docker push $DockerHubUsername/kuard-amd64:1 Replace $DockerHubUsername with your username.
The Docker Container Runtime
The default container run time used by Kubernetes is Docker.
Running Containers with Docker
Run the following command to run the container that you pushed to Docker Hub in the previous step: $ docker run -d --name kuard -p 8080:8080 $DockerHubUsername/kuard-amd64:1 Let’s unpack everything that command does, one thing at a time:
  • -d tells Docker to run the container in detached mode. In this mode, the container doesn’t attach its output to your terminal and runs in the background.
  • --name gives a name to your container. Keep in mind it doesn’t alter the name of your image in any way.
  • -p enables port-forwarding. It maps port 8080 on your local machine to your container’s port 8080. Each container gets its own IP address and doesn’t have access to the host network (your machine in this case) by default. Hence, we’ve to explicitly expose the port.
To stop and remove the container, run:
$ docker stop kuard
$ docker rm kuard
Docker allows controlling how many resources your container can use (memory, swap space, CPU) etc., using various flags that can be passed to the run command.
Cleanup
Images can be removed using the following command: $ docker rmi <tag-name> or $ docker rmi <image-id>
Docker IDs can be shortened as long as they remain unique.

Minikube Installation

There are lots of other ways to build a kube cluster, such as kubeadm, or my favourite, Kubernetes the Hard Way. However, we will create a kube cluster locally on our workstation using Minikube. I should point out that all the things I’ll demo in this course will work the same way irrespective of how the kube cluster was built.
Earlier we talked about how to install minikube; let’s now check if it has really been installed by running a minikube command:
$ minikube version
minikube version: v1.0.1

Similarly you can check its status:
$ minikube status
host:
kubelet:
apiserver:
kubectl:

To create a new kubecluster, we run (note this can take several minutes):

$ minikube start


If you open up the virtualbox gui, you should see a new vm called minikube running. If you check the status again, you should now see:

$ minikube status
host: Running
kubelet: Running
apiserver: Running
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

Here it says that minikube has also configured kubectl; that’s done by making changes to kubectl’s config file. By default that file is located at ~/.kube/config. We’ll cover more about this file later in the course. But for now we’ll confirm that this config file is currently configured to point to minikube’s kube cluster:

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The ip address shown here is the Minikube VM’s ip address, which should match:
$ minikube ip
192.168.99.100

To check the health of your kube cluster’s control plane, you can run:
$ kubectl get componentstatus
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

Also to see how many nodes are in our kubecluster, run:
$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   4d10h   v1.14.1

This command lists out all nodes that have the kubelet component running on them, along with the kubelet’s VERSION. If you built Kubernetes the hard way then the masters won’t get listed here, since the masters don’t have the kubelet running on them. We can specify the 'wide' output setting to display a little more info:
$ kubectl get nodes -o wide
NAME       STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE            KERNEL-VERSION   CONTAINER-RUNTIME
minikube   Ready    master   20h   v1.14.0   10.0.2.15     <none>        Buildroot 2018.05   4.15.0           docker://18.6.2

Now that we have a kube cluster in place, we can run the kubectl version command, but this time without the --client filter flag:
$ kubectl version --short
Client Version: v1.14.1
Server Version: v1.14.1

By design, to stay lightweight, our minikube-based kube cluster is a single-node cluster, which acts as both the master and worker node. That’s fine in a development environment, but in production you should have a multi-node cluster for high availability, better performance, more CPU+RAM capacity, etc.
When you’re not using minikube, you can shut it down:
minikube stop
You can also delete your minikube vm:
minikube delete
The Kubernetes dashboard
You can also monitor your kube cluster via the web browser, by running:
$ minikube dashboard

This is a really cool tool that lets you view and manage your kube cluster visually. I encourage you to explore this tool as we progress through the course.

MiniKube Installation using Powershell

Minikube is a CLI tool that provisions and manages the lifecycle of single-node Kubernetes clusters running inside Virtual Machines (VM) on your local system. It runs a standalone cluster on a single virtual machine for a quick Kubernetes setup so that you can easily and quickly try your hands at deploying and testing Kubernetes on your own time.

To install and configure Minikube, run the PowerShell script below and you will have a standalone Kubernetes cluster running locally.

<#
.Synopsis
Install MiniKube + Kubectl
.DESCRIPTION
This script downloads the executables for MiniKube, Kubectl, configures Hyper-V as the hypervisor (if not configured already)
together with configuring a specific network adapter for use with the Minikube virtual machine
.EXAMPLE
Install-MiniKube

#>

## Check if running as a Administrator (needed for Hyper-V commands)
$currentPrincipal = New-Object Security.Principal.WindowsPrincipal([Security.Principal.WindowsIdentity]::GetCurrent())
$currentPrincipal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)

## Check HyperV status
$HypervState = (Get-WindowsOptionalFeature -Online -FeatureName:Microsoft-Hyper-V).State

## If missing, enable HyperV
if ($HypervState -eq "Disabled")
{
$EnableHyperV = Enable-WindowsOptionalFeature -Online -FeatureName:Microsoft-Hyper-V-Management-Powershell,Microsoft-Hyper-V-All -NoRestart

## If a restart is needed, add registry entry to continue after reboot
if ($EnableHyperV.RestartNeeded -eq $true)
{
## Set script to re-run after reboot
Set-ItemProperty -Path "HKLM:\Software\Microsoft\Windows\CurrentVersion\RunOnce" -Name "Install-MiniKube" -Value "C:\Windows\system32\WindowsPowerShell\v1.0\Powershell.exe $PSCommandPath"

## And reboot
Restart-Computer
}
}

## Get version number of latest stable release of kubectl
$KubectlVersion = (Invoke-WebRequest -uri https://storage.googleapis.com/kubernetes-release/release/stable.txt -UseBasicParsing).content.Trim()

## Turn off progress bars to speed up incoming download *sigh*
$ProgressPreference = "silentlyContinue"

## Download minikube + kubectl to temp location
$MinikubeUrl = "https://storage.googleapis.com/minikube/releases/latest/minikube-windows-amd64.exe"
$MinikubeDl = "$Env:Temp\minikube.exe"
$KubctlUrl = "https://storage.googleapis.com/kubernetes-release/release/$KubectlVersion/bin/windows/amd64/kubectl.exe"
$KubctlDl = "$Env:Temp\kubectl.exe"

Invoke-WebRequest -uri $MinikubeUrl -OutFile $MinikubeDl
Invoke-WebRequest -uri $KubctlUrl -OutFile $KubctlDl

## Restore progress bars to default
$ProgressPreference = "Continue"

## Create and copy downloads to Minikube directory in Program Files
$MinikubeDst = "$Env:Programfiles\Minikube"

New-Item $MinikubeDst -ItemType Container
Move-Item $MinikubeDl -Destination $MinikubeDst
Move-Item $KubctlDl -Destination $MinikubeDst

## Update PATH environment variable for this session
$env:Path +=\";$MinikubeDst\"

## Update PATH environment variable permanently
[Environment]::SetEnvironmentVariable("Path", $env:Path + ";$MinikubeDst", [EnvironmentVariableTarget]::Machine)

## Check for and clear out any previous MiniKube configurations
if (Test-Path -Path "$HOME\.minikube")
{
Remove-Item -Path "$HOME\.minikube" -Force -Recurse
}

## Get Network Adapter of choice for use with MiniKube
$NetworkAdapter = Get-NetAdapter | Out-GridView -OutputMode Single -Title 'Pick your network adapter to use with MiniKube'

## Configure Hyper-V Virtual Switch with Network Adapter chosen previously
New-VMSwitch -Name "Minikube" -AllowManagementOS $true -NetAdapterName $NetworkAdapter.Name

## Configure Minikube to use Hyper-V driver and Virtual Network Adapter
minikube config set vm-driver hyperv
minikube config set hyperv-virtual-switch Minikube
minikube config set memory 2048

## Start MiniKube
minikube start

Refer : https://dxpetti.com/blog/2019/installing-minikube-with-powershell/

Overview of Kubernetes

Kubernetes is an open-source system for automating deployment, scaling and management of containerized applications. source: kubernetes.io
Kubernetes was built from Google's internal project Borg. Kubernetes is all about decoupled and transient services. Decoupling means that components have been designed not to depend directly on one another. Transient means that the whole system expects various components to be terminated and replaced. A flexible and scalable environment means having a framework that does not tie one component to the next, and that expects objects to die and their clients to reconnect to their replacements.
Kubernetes deploys many microservices. Other parties (internal or external to K8s) expect that there are many possible microservices available to respond to a request, to die, and to be replaced.
The communication between components is API-call driven. Configuration is written in YAML but stored as JSON; K8s converts from YAML to JSON before storing it in the database (etcd).

Alternative solutions to Kubernetes are:
Docker Swarm
Apache Mesos
Nomad
Rancher: a container orchestrator-agnostic system. Supports Mesos, Swarm and Kubernetes.

Kubernetes Architecture:


Kubernetes is made of a central manager (master) and some worker nodes, although both can run on a single machine or node. The manager runs an API server (kube-apiserver), a scheduler (kube-scheduler), controllers and a storage system (etcd).
Kubernetes exposes an API which can be accessed with kubectl or your own client. The scheduler sees the requests for running containers coming to the API and finds a suitable node to run that container on. Each node runs a kubelet and a proxy (kube-proxy). The kubelet receives requests to run containers, manages resources and watches over them on the local node. The proxy creates and manages networking rules to expose the container on the network.
A Pod consists of one or more containers which share an IP address, access to storage, and a namespace. Typically one container in a pod runs an application, and the secondary containers support that application.
Orchestration is managed through a series of watch-loops, or controllers, that check with the API server for a particular object's state and modify the object until it reaches the declared desired state.
A Deployment is a controller that ensures that resources are available, and then deploys a ReplicaSet. The ReplicaSet is a controller which deploys and restarts containers until the requested number of containers is running. The ReplicationController was deprecated and replaced by the Deployment.
There are Jobs and CronJobs controllers to handle single or recurring tasks.
Labels are strings that are part of the object metadata and are used to manage Pods; they can be used to check or change the state of objects without having to know the name or UID. Nodes can have taints to discourage Pod assignment, unless the Pod has a matching toleration.
There are also annotations in metadata, which hold information used by third-party agents or tools.
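For example (the node name, key, and value are placeholders), a taint is added to a node with kubectl and tolerated in a Pod spec:

kubectl taint nodes node1 dedicated=special:NoSchedule

# In the Pod spec:
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "special"
    effect: "NoSchedule"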

Tools:
Minikube: runs with VirtualBox to provide a local Kubernetes cluster
kubeadm
kubectl
Helm
Kompose: translates Docker Compose files into Kubernetes manifests