Sunday, April 21, 2024

OHIF Dicom Viewer & Orthanc Server

 Running Orthanc Server & OHIF DICOM Viewer

OHIF DICOM Viewer

A viewer is available at https://github.com/OHIF/Viewers and is fairly easy to build and configure. It requires Node and Yarn, plus a little configuration (more on that below).
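
For reference, a rough build-and-run sketch. This assumes a recent Node LTS and Yarn are installed; the exact yarn scripts can differ between OHIF versions, so check the repo's package.json:

git clone https://github.com/OHIF/Viewers.git
cd Viewers
yarn install
# start a local dev server, which by default listens on http://localhost:3000
yarn run dev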

Orthanc Server [on a different machine]

Orthanc Server can be found here https://orthanc.uclouvain.be/book/ and it has a number of plugins you can enable. One of them is DICOMweb, the plugin that implements the DICOMweb standard; this is the one you need to enable for the OHIF Viewer to work with Orthanc. Orthanc also has its own RESTful API, but that is not the API the OHIF Viewer talks to; the viewer uses DICOMweb.

CORS

Orthanc Server and its DICOMweb plugin should ideally sit behind a reverse proxy/load balancer. This lets you set the headers needed to allow scripts served from one origin (the viewer) to call the DICOM API on another.

Here is an example NGINX Configuration

server {
    listen 8042;
    server_name localhost;

    location / {
        proxy_pass http://your_orthanc_server_ip_address_or_hostname:8042;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;

        proxy_set_header X-Test-Id 'some_value';
        add_header Access-Control-Allow-Origin 'http://localhost:3000';

    }
}

 This assumes your OHIF Viewer is running on http://localhost:3000.

Don't forget to configure the DICOMweb plugin of the Orthanc Server so that the Host value points to your reverse proxy rather than to Orthanc itself; otherwise, you'll run into weird CORS and/or connection errors.
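
For reference, the relevant section of orthanc.json looks roughly like the sketch below. This is from memory, and key names such as Host vary between Orthanc/DICOMweb plugin versions, so check the Orthanc Book before copying it:

{
  "DicomWeb" : {
    "Enable" : true,
    "Root" : "/dicom-web/",
    "EnableWado" : true,
    // "Host" should point at the reverse proxy, not at Orthanc itself (placeholder value below)
    "Host" : "your-reverse-proxy-hostname:8042"
  }
}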

Just some configs that worked for me. Hopefully they help.


Sunday, September 5, 2021

Building OpenWRT

 Compile OpenWRT Locally

If you're following the commands here https://openwrt.org/docs/guide-developer/build-system/use-buildsystem, specifically

# Download and update the sources
git clone https://git.openwrt.org/openwrt/openwrt.git openwrt
cd openwrt
git pull
 
# Select a specific code revision
git branch -a
git tag
git checkout v19.07.8
 
# Update the feeds
./scripts/feeds update -a
./scripts/feeds install -a
 
# Configure the firmware image and the kernel
make menuconfig
make kernel_menuconfig
 
# Build the firmware image
make -j $(nproc) defconfig download clean world

If you then find that ./scripts/feeds update -a fails because the which dependency isn't found, the solution is to temporarily remove the which alias.

which is a very common GNU utility that tells you where a certain command lives (on the filesystem, or whether it is an alias). On my Linux installation, the which command is wrapped in an alias, which confuses OpenWrt's dependency check.

  • Remove the alias temporarily. One way is to go to /etc/profile.d/ and rename which2.sh (or which2.csh, depending on which shell you're using) to something that does not end in .sh (see the sketch after this list).
  • Close the current terminal/session.
  • Start a new terminal and run which which; verify that you get the path of the which binary and not the alias.
  • Re-run the build steps.
  • Restore the which alias.
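
A rough sketch of the whole dance. The /etc/profile.d/which2.sh path comes from my distribution and is an assumption; adjust it to wherever your setup defines the alias:

# check whether which is currently an alias or a real binary
type -a which

# option 1: disable the alias for new shells by renaming the profile snippet
sudo mv /etc/profile.d/which2.sh /etc/profile.d/which2.sh.disabled

# option 2: drop the alias in the current shell only, then verify
unalias which 2>/dev/null
which which

# ... re-run the OpenWrt build steps ...

# restore the alias when done
sudo mv /etc/profile.d/which2.sh.disabled /etc/profile.d/which2.sh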

 

Saturday, March 20, 2021

Installing ElasticSearch on Dev Kubernetes

 Installing Kubernetes

I used kubeadm, as I alluded to in a previous post. The one tricky part is making sure the pod-network-cidr is explicitly specified when running kubeadm init, in addition to turning off swap etc. I had no need to turn off/disable the firewall, though some people say they had to.

Installing ElasticSearch

So, to test my new Kubernetes cluster, I wanted to play with Elasticsearch. The instructions are relatively straightforward: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html . Once the custom resource definitions and the ECK operator were installed, I used the snippet here https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html to deploy a quickstart Elasticsearch cluster. Easy enough, I thought. Keep in mind this isn't a cloud-based environment; this is a dev Kubernetes cluster.

ElasticSearch Didn't Find Storage -- Persistent Volumes

After I deployed my quickstart ES cluster, I saw the following error: "0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims". Essentially, the ES pod wants to claim storage space (a persistent volume) but doesn't know how to get it. I followed these steps to resolve the issue:

  1. Created a StorageClass. I used a local storage class since I don't have NFS set up yet, see https://kubernetes.io/docs/concepts/storage/storage-classes/#local
  2. Created a PersistentVolume with the same storage class as the local one I just created
  3. Tweaked the quickstart ES cluster template to:
    1. modify the volumeClaimTemplates so the claim binds to the PersistentVolume created in step 2
    2. modify the pod template so the pod gets scheduled on the specific node that has the persistent volume (remember, this is local storage)
    3. set vm.max_map_count to >= 262144, which ES requires
  4. Deployed the three objects and finally had ES up and running

 Scripts

Local Storage

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

 

Persistent Volume

# see https://kubernetes.io/docs/concepts/storage/volumes/
kind: PersistentVolume
apiVersion: v1
metadata:
  name: quickstart-elasticsearch-pv
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  volumeMode: Filesystem
  hostPath:
    path: /your/path/here
    type: Directory

 

ElasticSearch Modified QuickStart Operator Deployment:

# I retyped this for this post so I may have typos/bugs etc.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.11.2
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi
        storageClassName: local-storage # has to match what we used in the StorageClass object
    podTemplate:
      metadata:
        labels:
          app: elasticsearch-test
      spec:
        nodeSelector:
          kubernetes.io/hostname: hostname-with-the-persistent-volume # if you only have a single node this won't be needed
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144'] # this is needed because by default Linux uses a lower limit than ES requires
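
To deploy, apply the three manifests and watch things come up. The file names below are just whatever you saved the snippets as (hypothetical); the resource names match the manifests above:

kubectl apply -f local-storage-class.yaml
kubectl apply -f quickstart-elasticsearch-pv.yaml
kubectl apply -f quickstart-elasticsearch.yaml

# the claim should bind once the ES pod is scheduled (WaitForFirstConsumer)
kubectl get pv,pvc
kubectl get elasticsearch quickstart
kubectl get pods -l elasticsearch.k8s.elastic.co/cluster-name=quickstart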


When Installing Kubernetes Using Kubeadm ...

 If you're going to build your own Kubernetes cluster, then kubeadm is a really good tool for that.

One word of advice: specify the --pod-network-cidr IP address range explicitly and don't rely on the defaults.

So, for instance, kubeadm init --pod-network-cidr=192.168.0.0/16 assuming that the 192.168 network address does not conflict or overlap with any other subnet address in your network.

Better yet, consider printing the default config options to a file first using kubeadm config print init-defaults > your-config-file.yaml, then tweak these defaults to fit your needs. Once you're happy with the tweaked config file, run kubeadm init --config=your-config-file.yaml
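
For reference, the part of the config file that corresponds to --pod-network-cidr is the podSubnet field of the ClusterConfiguration. A sketch; the apiVersion below matches kubeadm of that era and may differ on newer releases:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"   # same value you would pass to --pod-network-cidr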

 

Sunday, February 28, 2021

A Docker Image from Scratch

Initially, we need a minimal root filesystem.

    mkdir base-image
    cd base-image
    # assuming we're inside the base-image dir
    chr=$(pwd)
    
    mkdir -p dev/{pts,shm}
    touch dev/console
    mkdir etc
    touch etc/hostname etc/hosts etc/resolv.conf
    ln -s /proc/mounts etc/mtab
    # note: no space after the comma, otherwise brace expansion breaks
    mkdir ./{proc,sys}
    # bin must exist before we copy binaries into it
    mkdir bin
    
    # now we copy our application that is the target of containerization, in this case it's bash and a few other utils
    cp -v /bin/{bash,touch,ls,rm} $chr/bin
    # copy bash dependencies; ldd tells us what bash requires at runtime
    list="$(ldd /bin/bash | egrep -o '/lib.*\.[0-9]')"
    echo $list
    for i in $list; do cp -v --parents "$i" "${chr}"; done
    
    list="$(ldd /bin/touch | egrep -o '/lib.*\.[0-9]')"
    echo $list
    for i in $list; do cp -v --parents "$i" "${chr}"; done
    
    list="$(ldd /bin/ls | egrep -o '/lib.*\.[0-9]')"
    echo $list
    for i in $list; do cp -v --parents "$i" "${chr}"; done
    
    list="$(ldd /bin/rm | egrep -o '/lib.*\.[0-9]')"
    echo $list
    for i in $list; do cp -v --parents "$i" "${chr}"; done
    
    # run chroot to test
    sudo chroot . /bin/bash
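
The four ldd/copy blocks above are identical except for the binary name; they can be collapsed into one loop, for example:

    # copy the runtime dependencies of each binary into the chroot
    for bin in bash touch ls rm; do
        for dep in $(ldd /bin/$bin | egrep -o '/lib.*\.[0-9]'); do
            cp -v --parents "$dep" "${chr}"
        done
    done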
    

Once chroot is working and we're able to jail the bash app, we've proved that bash can run in isolation: it has everything it needs.
We can now create a Dockerfile that starts FROM scratch:

        FROM scratch
        COPY bin/ /bin/
        COPY lib/ /lib
        COPY lib64/ /lib64
        COPY usr/ /usr/

        #RUN ["/bin/bash", "/bin/ls", "."]
        #ENTRYPOINT ["/bin/bash"]
        CMD ["/bin/bash"]
    

To build the image:
sudo docker build . -t bshell

To run the image:
sudo docker run -it --rm bshell

[1] Using Chroot https://www.howtogeek.com/441534/how-to-use-the-chroot-command-on-linux/
[2] Docker: Up & Running 2nd Edition https://learning.oreilly.com/library/view/docker-up/9781492036722/ch04.html#docker_images

Sunday, June 14, 2020

Building Firefox on Windows 10

Building Firefox

The build tools linked here https://firefox-source-docs.mozilla.org/setup/windows_build.html do not like spaces in their paths.

For example, the ./mach build step will fail with an error like: "ERROR: GetShortPathName returned a long path name for C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\bin\Hostx64\x64\cl.exe".

This is discussed here https://bugzilla.mozilla.org/show_bug.cgi?id=1323381 and a number of solutions are suggested there. I personally do not like enabling short (8.3) names using fsutil because it slows down directory enumeration. Here is what worked for me:
  • Create a directory junction https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/cc753194(v=ws.11). The link will have a shorter name with no spaces, for example mklink /j C:\VS2019 "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community" (note the quotes around the path with spaces). The paths shown in the example are what worked for me.

  • Add a new environment variable VC_PATH with the value C:\VS2019\VC\Tools\MSVC\14.26.28801. Make sure the value is actually a valid directory and update it to the latest MSVC version if necessary.

  • Run (or rerun) ./mach configure. You may have to delete the config.cache file. (There is probably a command to clear a config)

  • Run ./mach build. My build was successful


Summary

  • mklink /j C:\VS2019 "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community"
  • set VC_PATH=C:\VS2019\VC\Tools\MSVC\14.26.28801, or set it through the GUI to persist it (see the snippet below)
  • ./mach configure (you may have to delete the existing config.cache first)
  • ./mach build
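
To persist VC_PATH from the command line instead of the GUI, setx should do it. Note that setx only affects new shells, not the current one; the path value below is the one assumed above:

setx VC_PATH "C:\VS2019\VC\Tools\MSVC\14.26.28801"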


Saturday, April 11, 2020

Configure NuGet Cache Directories

PowerShell snippets to change where NuGet caches packages. As written, these lines add machine-wide environment variables.

[System.Environment]::SetEnvironmentVariable('NUGET_PACKAGES', 'D:\nuget\packages', [System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('NUGET_HTTP_CACHE_PATH', 'D:\nuget\v3-cache', [System.EnvironmentVariableTarget]::Machine)
[System.Environment]::SetEnvironmentVariable('NUGET_PLUGINS_CACHE_PATH', 'D:\nuget\plugins-cache', [System.EnvironmentVariableTarget]::Machine)
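
To check the result afterwards (in a new shell, since machine-level changes don't affect already-open sessions), something like:

# read the machine-level values back
[System.Environment]::GetEnvironmentVariable('NUGET_PACKAGES', 'Machine')

# or ask the .NET CLI where the caches live now
dotnet nuget locals all --list
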
For more, see https://docs.microsoft.com/en-us/nuget/consume-packages/managing-the-global-packages-and-cache-folders