Blue-green deployments on a Hetzner VM: A quick guide on releasing your apps with optimized Docker builds, an nginx proxy and fast pipelines with GitHub Actions

During my recent attempts to find a cheaper alternative to the big cloud players such as AWS or Azure, I found several options for deploying fairly simple apps to provisioned VMs while still getting some form of load balancing, HTTPS and other essentials. This varies by use case, of course; here I will focus on the “backend” part of your application, which in most cases is a single service, or a Docker container running it. For more complex workloads, it’s rather unwise to spend time building your own orchestrator when so many options are already out there. However, when a simple frontend/backend/db project needs to be deployed somewhere that’s ideally not Kubernetes, the choices are somewhat limited. There are many good ones such as Kamal, Docker Swarm, k3s and others, which will, most of the time, be perfectly enough and simple for everyone on your team to ship a prototype app or a small demo in no time.

This is where this article comes into play, hopefully showing a quick, simple way of deploying your app to a VM at almost no cost. The things we will use include:

  • Hetzner, where we buy and configure our VM
  • Nginx
  • Docker
  • GitHub Actions and GitHub Container Registry

First, make sure you create a Hetzner account. There are options to get some credits and possibly run this setup for free for a while (I have no affiliation with Hetzner, just wanted to note that). After you’ve signed up, go ahead and create your project.

Next, choose the cheapest server, preferably an ARM64 one. Make sure to select the region closest to you, pick a Shared CPU configuration, add an IPv4 address and create an SSH key. Once done, your server will be reachable at the IP you got, but since we haven’t set up any web server yet, nothing will be there. This article will not get into setting up nginx from scratch; you should already be familiar with configuring a basic upstream server, using certbot, etc. For now, let’s push our backend code to a GitHub repository and create our action.
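For reference, this setup assumes you already have something like the following site config in place; the deploy script will later rewrite the proxy_pass port on each release. The domain, port and the file path /etc/nginx/sites-available/project are assumptions, so adjust them to your setup:

```nginx
server {
    listen 80;
    server_name example.com; # your domain; certbot will extend this block for HTTPS

    location / {
        # the deploy script rewrites this port on every release
        proxy_pass http://localhost:48915;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```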

The usual stuff:

name: Hetzner backend deployment

on:
  push:
    branches: [ "main" ]
  workflow_dispatch: # manual triggers so we can test
    
permissions:
  contents: read
  packages: write

Now, the build-and-push job should log in to our registry of choice, in this case GHCR, build our app using a remote cache backend for faster builds and push the image to the registry. Note the caching settings.

env:
  REGISTRY: ghcr.io/${{ github.repository_owner }}
  CONTAINER_NAME: project-${{ github.sha }} # modify to your liking

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
          
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
          
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          platforms: linux/arm64 # make sure to align this with your Hetzner VM
          context: ./target/docker/ # path to your build context, relative to the repo root
          tags: ${{ env.REGISTRY }}/project:latest # you can add different tags
          cache-from: type=registry,ref=${{ env.REGISTRY }}/project:buildcache
          cache-to: type=registry,mode=max,ref=${{ env.REGISTRY }}/project:buildcache
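The context setting above implies a Dockerfile living in the build context. Its contents are entirely up to your stack; purely as an illustration, a multi-stage sketch for a Python app listening on port 8000 (the uvicorn entrypoint and file layout are assumptions) could look like:

```dockerfile
# Hypothetical multi-stage build for a Python app on port 8000;
# copying requirements.txt first keeps the dependency layer cacheable
# between builds, which is what the registry cache above speeds up.
FROM python:3.12-slim AS deps
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

FROM python:3.12-slim
WORKDIR /app
COPY --from=deps /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=deps /usr/local/bin /usr/local/bin
COPY . .
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```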

We have come to the fun part! First, we declare that our deploy job needs the build-and-push job to finish, then use SSH to connect to our VM.

The RANDOM_PORT variable picks a random port within the specified range, so the new container is very unlikely to land on the port the old one is bound to. After logging in to the registry, we need to note the ID of the currently running container, as it will be needed later; passing the -q flag returns IDs only. Next, we run the new container. In this case the port the app listens on is hard-coded, but you can pass it as a secret or a variable. We use ${{ env.CONTAINER_NAME }} to reference the variable set above and, optionally, pass some env variables.

NOTE: appleboy/ssh-action has known issues with passing env vars. Even though the issue is closed, it very much did not work for me, so I passed the variables to the docker command instead of exporting them into the VM’s environment (which would also be somewhat insecure).

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
  
      - name: Set up SSH
        uses: appleboy/ssh-action@v0.1.7
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.KEY }}
          script: |
            RANDOM_PORT=$(shuf -i 45000-50000 -n 1)
            
            echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin
            docker pull ${{ env.REGISTRY }}/project:latest
            
            CURRENT_CONTAINER_ID=$(docker ps -q --filter "name=project-") # filter by name prefix so unrelated containers aren't caught
            
            docker run -d -p $RANDOM_PORT:8000 --name ${{ env.CONTAINER_NAME }} \
              -e AWS_ACCESS_KEY="${{ secrets.AWS_ACCESS_KEY }}" \
              -e AWS_SECRET_ACCESS_KEY="${{ secrets.AWS_SECRET_ACCESS_KEY }}" \
              -e JWT_SECRET="${{ secrets.JWT_SECRET }}" \
              ${{ env.REGISTRY }}/project:latest

The action got a little tired, so some sleep is needed. Jokes aside, we introduce a buffer so the new container can finish starting before we run a healthcheck. This is a rather primitive implementation of blue-green deployment, but it should work for basic use cases.

Construct a curl request against your healthcheck endpoint, /health for example (the script below uses /docs). Then, assuming your server block contains something like proxy_pass http://localhost:48915;, we use sed to replace the old port with the new one and reload nginx.

Next, we check whether the old container is still running. This check is important because, for example, the previous deployment could have failed and killed the container, in which case executing the stop and rm commands would fail, as no container ID was captured.

If the new deployment fails the healthcheck for some reason, we print a message about it, remove the new container and exit with status 1.

            sleep 10
            
            if curl -s -o /dev/null -w "%{http_code}" http://localhost:$RANDOM_PORT/docs | grep -q "200"; then
              echo "New container is healthy."

              sed -i "s/proxy_pass http:\/\/localhost:[0-9]\+/proxy_pass http:\/\/localhost:$RANDOM_PORT/" /etc/nginx/sites-available/project
              systemctl reload nginx

              echo "Updated nginx configuration."
              
              if [ -n "$CURRENT_CONTAINER_ID" ]; then
                docker stop $CURRENT_CONTAINER_ID
                docker rm $CURRENT_CONTAINER_ID
              fi
              
            else
              echo "New container failed to start."
              
              docker stop ${{ env.CONTAINER_NAME }}
              docker rm ${{ env.CONTAINER_NAME }}
              exit 1
            fi
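One obvious improvement is replacing the fixed sleep 10 with a bounded retry loop that polls the healthcheck until it passes or gives up. A minimal sketch (wait_for is a hypothetical helper, not part of the action above):

```shell
# wait_for: retry a command until it succeeds, with a bounded number of
# attempts. Usage: wait_for <retries> <delay_seconds> <command...>
wait_for() {
  retries=$1
  delay=$2
  shift 2
  attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$retries" ]; then
      return 1 # gave up: the command never succeeded
    fi
    sleep "$delay"
  done
  return 0
}

# In the deploy script, this would replace `sleep 10` and the single curl:
# wait_for 12 5 curl -sf -o /dev/null "http://localhost:$RANDOM_PORT/docs"
```

With something like this in place, a slow-starting container no longer fails the deploy just because it needed more than ten seconds.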

Hopefully, this article helps you quickly set up a deployment process for your application without agonizing too much over tooling. The action can be improved in numerous ways, but the core idea behind it works, so feel free to explore it, fork it, update it and make it work for you. Here is the link to the gist.