Sunday, September 5, 2021

Building OpenWRT

 Compile OpenWRT Locally

If you're following the commands here https://openwrt.org/docs/guide-developer/build-system/use-buildsystem, specifically

# Download and update the sources
git clone https://git.openwrt.org/openwrt/openwrt.git openwrt
cd openwrt
git pull
 
# Select a specific code revision
git branch -a
git tag
git checkout v19.07.8
 
# Update the feeds
./scripts/feeds update -a
./scripts/feeds install -a
 
# Configure the firmware image and the kernel
make menuconfig
make kernel_menuconfig
 
# Build the firmware image
make -j $(nproc) defconfig download clean world

and find that ./scripts/feeds update -a fails because the which dependency isn't found, then the solution is to temporarily remove the which alias.

which is a very common utility that tells you where a certain command lives (on the filesystem or as an alias). On my Linux installation, the which command is wrapped in an alias, and that is what trips up the OpenWRT dependency check.

  • Remove it temporarily. One way: go to /etc/profile.d/ and rename which2.sh (or which2.csh, depending on which shell you're using) to something that does not end in .sh (see the sketch after this list).
  • Close the current terminal/session
  • Start a new terminal and run which which and verify that you're getting the path of the which utility and not the alias
  • Re-run the build steps
  • Restore the which alias
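
A rough sketch of that workflow (the profile script name is whatever your distro ships; which2.sh is what I had):

# disable the alias wrapper
sudo mv /etc/profile.d/which2.sh /etc/profile.d/which2.sh.disabled

# in a brand-new terminal, confirm you now get the binary and not the alias
which which        # should print something like /usr/bin/which

# ... run the OpenWRT build steps ...

# restore the alias when you're done
sudo mv /etc/profile.d/which2.sh.disabled /etc/profile.d/which2.sh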

 

Saturday, March 20, 2021

Installing ElasticSearch on Dev Kubernetes

 Installing Kubernetes

I used kubeadm, as I alluded to in a previous post. The one tricky part is making sure you specify --pod-network-cidr explicitly when running kubeadm init, in addition to turning off swap, etc. I had no need to turn off/disable the firewall, though some people say they had to.

Installing ElasticSearch

So, to test my new Kubernetes cluster, I wanted to play with elasticsearch. The instructions for that are relatively straightforward: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html . Once the custom resource definitions were installed, I used the snippet here https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html to deploy a quickstart elasticsearch cluster. Easy enough, I thought. Keep in mind this isn't a cloud-based environment; this is going to run on dev kubernetes.
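
For reference, installing the operator at the time boiled down to applying a single manifest. The version/URL below is what I recall from the ECK 1.x docs, so double-check the linked page for the current command:

# install the ECK CRDs + operator (ECK 1.x style)
kubectl apply -f https://download.elastic.co/downloads/eck/1.5.0/all-in-one.yaml

# watch the operator come up
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator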

ElasticSearch Didn't Find Storage -- Persistent Volumes

After I deployed my quickstart ES cluster, I saw the following error: "0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims". Essentially, the ES pods want to claim storage space (a persistent volume) but have no way to get it. I followed these steps to resolve the issue:

  1. Created a storage class. I used a local storage class since I don't have NFS set up yet; see https://kubernetes.io/docs/concepts/storage/storage-classes/#local
  2. Created a persistent volume that uses the local storage class from step 1
  3. Tweaked the quickstart ES cluster template to:
    1. modify the volumeClaimTemplates so the claim uses the local storage class and binds to the PersistentVolume created in step 2
    2. modify the pod template so the pod gets scheduled on the specific node that holds the persistent volume (remember, this is local storage)
    3. add an init container that raises vm.max_map_count, since ES requires it to be >= 262144
  4. Deployed the three objects and finally had ES up and running
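
As an aside, that scheduling message shows up when you describe the pending pods. The pod name below follows the ECK quickstart naming convention, so adjust it for your cluster:

kubectl get pods
kubectl describe pod quickstart-es-default-0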

 Scripts

Local Storage

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

 

Persistent Volume

# see https://kubernetes.io/docs/concepts/storage/volumes/
kind: PersistentVolume
apiVersion: v1
metadata:
  name: quickstart-elasticsearch-pv
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Filesystem
  hostPath:
    path: /your/path/here
    type: Directory

 

ElasticSearch Modified QuickStart Operator Deployment:

#I retyped this for this post so I may have typos/bugs etc.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.11.2
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: local-storage # has to match what we used in the StorageClass object
    podTemplate:
      metadata:
        labels:
          app: elasticsearch-test
      spec:
        nodeSelector:
          kubernetes.io/hostname: hostname-with-the-persistent-volume # if you only have a single node this won't be needed
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144'] # this is needed because by default Linux uses a lower limit than ES requires
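
With the three manifests above saved to files, deploying and verifying looked roughly like this (the file names are my own):

kubectl apply -f local-storage-class.yaml
kubectl apply -f quickstart-elasticsearch-pv.yaml
kubectl apply -f quickstart-elasticsearch.yaml

# verify the PV binds and the ES pod gets scheduled
kubectl get storageclass,pv,pvc
kubectl get elasticsearch quickstart
kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=quickstart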


When Installing Kubernetes Using Kubeadm ...

 If you're going to build your own Kubernetes cluster, then kubeadm is a really good tool for that.

One word of advice is that you should specify the --pod-network-cidr IP address range explicitly and don't rely on the defaults.

So, for instance, kubeadm init --pod-network-cidr=192.168.0.0/16, assuming that the 192.168.0.0/16 range does not conflict or overlap with any other subnet in your network.

Better yet, consider printing the default config options to a file first using kubeadm config print init-defaults > your-config-file.yaml, then tweak those defaults to fit your needs. Once you're happy with the tweaked config file, run kubeadm init --config=your-config-file.yaml
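
A minimal sketch of that flow (the file name and CIDR are just examples):

# print the defaults, then set the pod network range before running init
kubeadm config print init-defaults > kubeadm-config.yaml

# in kubeadm-config.yaml, add or edit the pod subnet under the ClusterConfiguration's
# "networking" section, for example:
#   networking:
#     podSubnet: 192.168.0.0/16

sudo kubeadm init --config=kubeadm-config.yaml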

 

Sunday, February 28, 2021

A Docker Image from Scratch

Initially, we need a minimal root filesystem.

    mkdir base-image
    cd base-image
    #assuming we're inside the base-image dir
    chr=$(pwd)
    
    mkdir -p dev/{pts,shm}
    touch dev/console
    mkdir etc
    touch etc/hostname  etc/hosts etc/resolv.conf
    ln -s /proc/mounts etc/mtab
    mkdir ./{proc,sys}
    
    #now we copy our application that is the target of containerization, in this case it's bash and a few other utils
    mkdir -p $chr/bin
    cp -v /bin/{bash,touch,ls,rm} $chr/bin
    #copy bash dependencies, ldd will tell us what bash requires at runtime to run
    list="$(ldd /bin/bash | egrep -o '/lib.*\.[0-9]')"
    echo $list
    for i in $list; do cp -v --parents "$i" "${chr}"; done
    
    list="$(ldd /bin/touch | egrep -o '/lib.*\.[0-9]')"
    echo $list
    for i in $list; do cp -v --parents "$i" "${chr}"; done
    
    list="$(ldd /bin/ls | egrep -o '/lib.*\.[0-9]')"
    echo $list
    for i in $list; do cp -v --parents "$i" "${chr}"; done
    
    
    list="$(ldd /bin/rm | egrep -o '/lib.*\.[0-9]')"
    echo $list
    for i in $list; do cp -v --parents "$i" "${chr}"; done
    
    #run chroot to test
    sudo chroot . /bin/bash
    

Once chroot is working and we're able to jail the bash app, we've proved that bash can run in isolation; it has everything it needs.
We now create the Dockerfile, building from scratch:

        FROM scratch
        COPY bin/ /bin/
        COPY lib/ /lib
        COPY lib64/ /lib64
        COPY usr/ /usr/

        #RUN ["/bin/bash", "/bin/ls", "."]
        #ENTRYPOINT ["/bin/bash"]
        CMD ["/bin/bash"]
    

To build the image,
sudo docker build . -t bshell

To run the image
sudo docker run -it --rm bshell
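
And as a quick sanity check (my own example), the copied utilities work inside the container too; listing the container's root shows just the handful of directories we copied in:

sudo docker run --rm bshell /bin/ls /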

[1] Using Chroot https://www.howtogeek.com/441534/how-to-use-the-chroot-command-on-linux/
[2] Docker: Up & Running 2nd Edition https://learning.oreilly.com/library/view/docker-up/9781492036722/ch04.html#docker_images