Saturday, March 20, 2021

Installing Elasticsearch on Dev Kubernetes

Installing Kubernetes

I used kubeadm, as I alluded to in a previous post. The one tricky part is making sure that --pod-network-cidr is explicitly specified when running kubeadm init, in addition to turning off swap and the other usual prerequisites. I had no need to turn off or disable the firewall, though some people say they had to.
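
For reference, turning off swap usually amounts to something like this (a sketch; the exact /etc/fstab entry varies by distro):

# kubelet refuses to start while swap is enabled; turn it off for the current boot
sudo swapoff -a

# keep it off across reboots by commenting out the swap entry in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab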

Installing Elasticsearch

So to test my new Kubernetes cluster, I wanted to play with Elasticsearch. The instructions for that are relatively straightforward: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html . Once the custom resource definition was installed, I used the snippet at https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html to deploy a quick-start Elasticsearch cluster. Easy enough, I thought. Keep in mind this isn't a cloud-based environment; this is going to be on dev Kubernetes.
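
For context, installing the ECK operator at the time came down to something like the following (the version in the URL is whatever the ECK docs currently list; 1.4.x was current when I did this):

# install the CRDs and the operator in one shot (ECK 1.x all-in-one manifest)
kubectl apply -f https://download.elastic.co/downloads/eck/1.4.1/all-in-one.yaml

# watch the operator come up
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator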

Elasticsearch Didn't Find Storage -- Persistent Volumes

After I deployed my quickstart ES cluster, I saw the following error: "0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims". Essentially, the ES pods want to claim storage space (a persistent volume) but have no way to get it. I followed these steps to resolve the issue:

  1. Created a storage class. I used a local storage class since I don't have NFS set up yet; see https://kubernetes.io/docs/concepts/storage/storage-classes/#local
  2. Created a persistent volume with the same storage class as the local one I just created
  3. Tweaked the quickstart ES cluster template to:
    1. modify the volumeClaimTemplates so they use the PersistentVolume I created in step 2
    2. modify the pod template so the pod gets scheduled on the specific node that has the persistent volume (remember, this is local storage)
    3. set vm.max_map_count to at least 262144, which ES requires
  4. Deployed the three objects, and I finally had ES up and running; the commands right after this list show what that looked like
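
Concretely, assuming the three manifests in the Scripts section below are saved as local-storage.yaml, persistent-volume.yaml, and elasticsearch.yaml (the file names are mine), that deployment looked roughly like:

kubectl apply -f local-storage.yaml
kubectl apply -f persistent-volume.yaml
kubectl apply -f elasticsearch.yaml

# the HEALTH column should eventually report green
kubectl get elasticsearch quickstart
kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'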

Scripts

Local Storage

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
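
A quick note on volumeBindingMode: WaitForFirstConsumer delays binding a PersistentVolumeClaim until a pod that uses it is actually scheduled. That matters for local storage, where the volume only exists on one particular node.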


Persistent Volume

# see https://kubernetes.io/docs/concepts/storage/volumes/
kind: PersistentVolume
apiVersion: v1
metadata:
  name: quickstart-elasticsearch-pv
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Filesystem
  hostPath:
    path: /your/path/here
    type: Directory
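
Worth noting: a hostPath PV doesn't record which node the directory lives on, which is why the pod template further down needs a nodeSelector. The local storage docs instead pair the local-storage class with a local: volume source that carries the node affinity in the PV itself; a sketch of that alternative (the hostname is a placeholder):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: quickstart-elasticsearch-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /your/path/here
  nodeAffinity: # required for "local" volumes; the scheduler uses it to place the pod
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - hostname-with-the-persistent-volume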


Modified Elasticsearch Quickstart Manifest:

# I retyped this for the post, so there may be typos/bugs
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.11.2
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: local-storage # has to match the name used in the StorageClass object
    podTemplate:
      metadata:
        labels:
          app: elasticsearch-test
      spec:
        nodeSelector:
          kubernetes.io/hostname: hostname-with-the-persistent-volume # not needed if you only have a single node
        initContainers: # note: plural, "initContainer" is not a valid field
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144'] # the default Linux limit is lower than ES requires
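
Once the cluster is up, the ECK quickstart shows how to reach it: the operator generates a password for the built-in elastic user and exposes an HTTPS service, both named after the cluster (quickstart here). Roughly:

# grab the generated password for the "elastic" user
PASSWORD=$(kubectl get secret quickstart-es-elastic-user \
  -o go-template='{{.data.elastic | base64decode}}')

# forward the HTTP service locally and query the cluster
kubectl port-forward service/quickstart-es-http 9200 &
curl -u "elastic:$PASSWORD" -k "https://localhost:9200"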


When Installing Kubernetes Using Kubeadm ...

If you're going to build your own Kubernetes cluster, then kubeadm is a really good tool for that.

One word of advice: specify the --pod-network-cidr IP address range explicitly and don't rely on the defaults.

So, for instance: kubeadm init --pod-network-cidr=192.168.0.0/16, assuming that the 192.168.0.0/16 range does not conflict or overlap with any other subnet address in your network.

Better yet, consider printing the default config options to a file first using kubeadm config print init-defaults > your-config-file.yaml, then tweak those defaults to fit your needs. Once you're happy with your tweaked config file, run kubeadm init --config=your-config-file.yaml.
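
For what it's worth, the pod CIDR lives under networking.podSubnet in that config file; a minimal excerpt (the apiVersion varies by kubeadm release; v1beta2 was current around Kubernetes 1.20):

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 192.168.0.0/16   # same effect as --pod-network-cidr
  serviceSubnet: 10.96.0.0/12 # default service CIDR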