This site is written using a simple offline stack: Hugo as a static site generator and Ansible as a deployment mechanism taking care of syncing things to my VPS.

I used Claude to carry out some of the initial scaffolding and busywork to get things up and running quickly, with a clear brief on what I wanted the workflow to look like.

the basic workflow

For writing articles, I exclusively use Neovim running with my personal combination of plugins and scripts. It’s my day-to-day editor for most things because of its extensibility and speed. All articles are written in Markdown as you might expect these days.

To preview and refine articles locally prior to publishing, I use a Docker Compose setup that runs a Hugo dev server with my local copy of the blog repository root bind-mounted into the container.

The docker-compose.yml looks like this:

services:
  # Dev: hugo's built-in server with hot reload. Edit content/, layouts/, static/ —
  # the browser refreshes automatically. No Caddy involved in dev.
  blog:
    image: hugomods/hugo:latest
    container_name: blog-hugo-dev
    # Run as the host user/group so any cache or generated-resource files
    # Hugo writes to the bind-mounted repo are owned by the host user, not
    # root. Without this, prod deploy fails when Hugo's --cleanDestinationDir
    # can't remove root-owned leftovers in public/.
    user: "${UID:-1000}:${GID:-1000}"
    working_dir: /src
    command:
      hugo server --bind 0.0.0.0 --port 80 --baseURL http://localhost:18080/ --appendPort=false
      --disableFastRender
    ports:
      - "18080:80"
    volumes:
      - .:/src
    restart: unless-stopped

Running the blog locally is just this:

docker compose up
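One gotcha with the ${UID}/${GID} substitution in the compose file: many shells define UID but don't export it (and may not define GID at all), so Compose can silently fall back to the 1000:1000 default. A small sketch of one workaround, writing the values into a .env file, which Compose reads automatically from the project directory:

```shell
# Write the current user and group IDs to .env so the
# ${UID:-1000}:${GID:-1000} substitution in docker-compose.yml
# resolves to the real host user rather than the fallback.
printf 'UID=%s\nGID=%s\n' "$(id -u)" "$(id -g)" > .env
```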

Once I’m happy with a new post — or a tweak to the site structure — I publish by running an Ansible playbook that builds the site locally and rsyncs the result to my VPS.

ansible-playbook deploy.yml

The deploy.yml used for publication looks like this:

- name: Deploy blog
  hosts: blog
  gather_facts: false

  collections:
    - ansible.posix

  vars:
    repo_root: "{{ playbook_dir | dirname }}"

  tasks:
    - name: Build Hugo site locally (Docker one-shot)
      ansible.builtin.command:
        cmd: >-
          docker run --rm -v {{ repo_root }}:/src -w /src --user {{ lookup('pipe', 'id -u') }}:{{
          lookup('pipe', 'id -g') }} hugomods/hugo:latest hugo --minify --cleanDestinationDir
      delegate_to: localhost
      run_once: true
      changed_when: true
      tags: [build]

    - name: Sync rendered site to host
      ansible.posix.synchronize:
        src: "{{ repo_root }}/public/"
        dest: "{{ web_root }}/"
        delete: true
        recursive: true
        rsync_opts:
          - "--omit-dir-times"
      become: true
      tags: [sync]

    - name: Re-apply SELinux file contexts on web root
      ansible.builtin.command:
        cmd: "restorecon -R {{ web_root }}"
      become: true
      register: restorecon_result
      changed_when: restorecon_result.stdout | length > 0
      tags: [sync]
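The playbook targets a blog host group and references a web_root variable, neither of which is defined in the playbook itself; both come from the inventory. A hypothetical sketch of what that inventory might look like (hostname, user, and path are placeholders, not my actual values):

```yaml
# inventory.yml - hypothetical; ansible_host, ansible_user, and
# web_root are placeholders for illustration only.
blog:
  hosts:
    myvps:
      ansible_host: vps.example.com
      ansible_user: deploy
  vars:
    web_root: /var/www/blog
```

With this in place, the build and sync steps can also be run independently via their tags, e.g. ansible-playbook deploy.yml --tags sync to skip the rebuild.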

In addition to deploy.yml, I also have a playbook for provisioning all the prerequisites on a given VPS, assuming the VPS runs a Fedora/DNF-based distribution.
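That playbook isn't shown here, but a rough sketch of the shape it might take (package names, paths, and the choice of Caddy as the web server are illustrative assumptions, not the real playbook):

```yaml
# provision.yml - hypothetical sketch for a Fedora/DNF host;
# package and path names are assumptions, not the actual playbook.
- name: Provision blog host
  hosts: blog
  become: true

  tasks:
    - name: Install web server and rsync
      ansible.builtin.dnf:
        name:
          - caddy
          - rsync
        state: present

    - name: Create web root
      ansible.builtin.file:
        path: "{{ web_root }}"
        state: directory
        owner: caddy
        group: caddy
        mode: "0755"

    - name: Enable and start the web server
      ansible.builtin.service:
        name: caddy
        state: started
        enabled: true
```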

This means moving between VPS providers is trivial: stand up a fresh host, point Ansible at it, done.

All authentication is carried out over SSH, with keys that live on one of my personal YubiKeys.
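For the curious, OpenSSH can generate hardware-backed keys with ssh-keygen -t ed25519-sk, which keeps the private key material on the YubiKey itself. A hypothetical client config for the deploy target (host alias, hostname, user, and key path are all placeholders, and this may not match my exact setup):

```
# ~/.ssh/config - hypothetical; host alias, hostname, user, and
# key path are placeholders for illustration only.
Host blog-vps
    HostName vps.example.com
    User deploy
    IdentityFile ~/.ssh/id_ed25519_sk
    IdentitiesOnly yes
```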