Deploy your application 97% faster using this technique

Falumpaset
Published in Dev Genius
4 min read · Oct 6, 2022

The next iteration of containerless deployments distributes artifacts through OCI registries. This keeps the fast execution upside while significantly improving scalability and stability. The result resembles server-side WebAssembly: bespoke artifacts are dynamically executed within runtime containers.

Recently, I proposed an alternative way of deploying code to Kubernetes. The idea follows the principle of distributing code instead of container images. However, it introduced more moving parts to the deployment process, which made it error-prone. Consequently, questions about scalability loomed.

Replacing rsync

Therefore, I went back to the lab to improve on these issues. Both were connected to using rsync to upload the code into the cluster. So, once again, I asked myself how to distribute the artifacts.

Kudos to Reddit user temitcha, who shared his wisdom with me, which ultimately led me to a really nice solution.

I think one important advantage for containerization is the immutability of the artifact and the easy sharing of this one.

Indeed, these are valuable traits of containers, and container registries facilitate all that. Since container images are essentially tarballs, it should be possible to distribute other artifacts through them.

This train of thought ultimately led me to use Project Oras instead of rsync. Its CLI allows me to upload the build artifacts to a container registry and significantly streamline the OneMinuteDeployment idea.
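To illustrate, a push could look like the sketch below. The registry host, repository name, and media type are assumptions for the example, not taken from the original setup; the actual `oras push` is left as a comment since it needs a reachable registry.

```shell
# Illustrative sketch: publishing a build artifact with the Oras CLI.
# Registry host, repository, and file names are hypothetical.
ARTIFACT_REF="registry.example.com/myartifact:v1"

# Work in a scratch directory and fake a build output.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
mkdir dist
echo "console.log('hello');" > dist/app.js

# Package the build output as a single tarball.
tar -czf artifact.tar.gz -C dist .

# The actual push (run where oras is installed and the registry is reachable):
#   oras push "$ARTIFACT_REF" artifact.tar.gz:application/vnd.oci.image.layer.v1.tar+gzip
echo "would push artifact.tar.gz to $ARTIFACT_REF"
```

The artifact now lives next to the container images in the same registry, so no extra infrastructure is needed to distribute it.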

Dynamic loading and execution

As in a conventional approach, the deployment process builds the artifacts and uploads them into an OCI registry. However, these artifacts cannot (yet) be pulled and executed like container images.

Therefore, the artifacts need to be pulled into the runtime container after startup. This contradicts a container image’s intended use case: as pointed out earlier, a container image is supposed to be immutable, and the container runtime assumes everything is in the image and executes the predefined ENTRYPOINT or CMD.

However, I still want to dynamically define which artifact the container loads and the runargs it executes afterward.

The solution

Figure 1: Container startup

Figure 1 visualizes the current solution. The OCI registry holds two important artifacts for our deployment.

The myruntime image consists of all runtime dependencies, the Oras CLI, and one startup script. This image is defined in the deployment manifest.

As described earlier, the myartifact is the compiled source code or binary. This will later be pulled into the myruntime container and executed.

After deployment, the startup script handles the pulling and execution of the artifact. It does so by reading the IMAGE value from the container’s ENV and pulling it via the Oras CLI.
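The startup step can be sketched as a minimal script. All names here (`IMAGE`, `APP_DIR`, the run-args convention) are assumptions for illustration, not the exact names from the example repository.

```shell
#!/usr/bin/env sh
# Minimal sketch of the startup script baked into the myruntime image.
set -eu

APP_DIR="${APP_DIR:-/app}"  # where the pulled artifact lands (assumed path)

startup() {
    # Fail fast if the deployment manifest did not set IMAGE.
    : "${IMAGE:?IMAGE must reference the artifact in the OCI registry}"

    # Pull the artifact (compiled source code or binary) from the registry.
    oras pull "$IMAGE" -o "$APP_DIR"

    # Execute the runargs defined in the manifest, e.g. "yarn" "start".
    cd "$APP_DIR"
    exec "$@"
}
```

In the container, the script’s last line would be `startup "$@"`, forwarding the runargs that the manifest defines as the container’s CMD.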

The ENV is defined in the deployment manifest on the top right of Figure 1. The manifest also defines the container’s CMD.

More importantly, it describes the runargs as “yarn” “start”. These values are also read and executed by the startup script.
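Put together, the deployment manifest could look roughly like this sketch; image references, the deployment name, and the startup script path are hypothetical.

```yaml
# Hypothetical deployment manifest matching Figure 1.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myruntime
          # Runtime dependencies + Oras CLI + startup script
          image: registry.example.com/myruntime:latest
          env:
            # The artifact the startup script pulls after startup
            - name: IMAGE
              value: registry.example.com/myartifact:v1
          command: ["/startup.sh"]  # the startup script
          args: ["yarn", "start"]   # the runargs it executes
```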

This makes it possible to dynamically load any code into a predefined runtime container and execute it.

After a successful initial deployment, we can seamlessly load new artifact versions through the kubectl set env command. Kubernetes then updates the ENV, restarts the containers, and switches over the pods.
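The rollout step might look like the following sketch; the deployment name and image reference are illustrative, and the command is only printed here rather than executed, since it needs a live cluster.

```shell
# Illustrative rollout of a new artifact version. "deployment/myapp" and
# the registry path are hypothetical names.
NEW_IMAGE="registry.example.com/myartifact:v2"

# Updating the ENV triggers a rolling restart: each new pod's startup
# script then pulls and executes the v2 artifact.
echo "kubectl set env deployment/myapp IMAGE=$NEW_IMAGE"
```

Because only an environment variable changes, the runtime image itself never needs to be rebuilt for a new release.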

Try it out yourself!

This repository provides a little example of deploying a sample Node.js application using the described design. The update mechanism is showcased as well. It is still fairly basic, built on a Makefile in combination with bash scripts, but it does the job. All you need is a Kubernetes cluster.

Maturing the project

My objective is to mature the project into a stable application. And while I think that shipping artifacts is the future of deployment, I’m fully aware that it is not applicable to every scenario.

Therefore, my aim is to support any kind of deployment flow. Usually, pipelines define the deployment process, so creating a universal pipeline execution runtime is the top priority.

I imagine it resembling the Kubernetes API. It should be possible for everybody to create custom extensions and actions that plug into the deployment flow, like GitHub Actions, just in Go.

Nonetheless, I’m super intrigued by the prospects of server-side WebAssembly. And in fact, I followed its core principle of shipping compiled code within the provided example. But unfortunately, the ecosystem is not mature yet, and I’m bending containers to do the job.

So the idea of switching the container with an actual WebAssembly runtime excites me greatly.

Moreover, concepts like Krustlet provide the blueprint for a future runtime.
