Guide: Opinionated Guide to Docker Host the Sync Server

PostgreSQL 16


First start the DB to get the UID & GID needed for the init files.

Store the secrets somewhere. E.g. in ~/docker-server-env/postgresql:
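The contents of the secrets file are not shown here; a minimal sketch, assuming variable names that match the rest of this guide (all values are placeholders):

```shell
# ~/docker-server-env/postgresql/secrets -- names are assumptions, values are placeholders
export POSTGRES_PASSWORD='change-me'   # superuser password for the container
export DB_USER='atuin'                 # app user created by the init script
export DB_PASS='change-me-too'         # password for that user
export DB_NAME='atuin'                 # app database name
```

Keep it readable only by you, e.g. `chmod 600 secrets`.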



Then let’s start the default PostgreSQL container. No volume mounts for data, as we want to discard it anyway:

cd ~/docker-server-env/postgresql
. ./secrets
docker run -d \
  --name postgresql-16 \
  -p 5432:5432 \
  --restart unless-stopped \
  --network internal-docker-net \
  -e POSTGRES_DB=mydbname \
  -e POSTGRES_PASSWORD="$POSTGRES_PASSWORD" \
  postgres:16
# POSTGRES_PASSWORD comes from the sourced secrets file

docker logs postgresql-16

Now let’s get the UID & GID needed:

# get the user ID of the user running the process
docker top postgresql-16
# was 999 for the process postgres

# get the user name
docker exec -it postgresql-16 /bin/bash
cat /etc/passwd | grep 999

# got the line for the user postgres for the group id
cat /etc/group | grep postgres

# hence, this is what we want: 999:999

# let's clean up
docker stop postgresql-16
docker rm postgresql-16

Note down the UID & GID somewhere for later.
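The grep steps above can also be collapsed into one line; a sketch that extracts UID:GID from a passwd entry (the sample line stands in for the output of `docker exec postgresql-16 getent passwd postgres`):

```shell
# sample passwd entry, as seen inside the container
line='postgres:x:999:999::/var/lib/postgresql:/bin/bash'
# fields 3 and 4 of a passwd entry are the numeric UID and GID
uid_gid=$(printf '%s' "$line" | awk -F: '{print $3 ":" $4}')
echo "$uid_gid"   # 999:999
```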

PostgreSQL init files

In the folder docker-host:/home/user/docker-server-env/postgresql/initdb.d/ create your init scripts like:

#!/bin/bash
set -e

# DB_NAME, DB_USER, DB_PASS are illustrative names, assumed to be passed
# into the container (e.g. via -e flags sourced from the secrets file)
echo "Creating database: ${DB_NAME}"

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER ${DB_USER} WITH PASSWORD '${DB_PASS}';
CREATE DATABASE ${DB_NAME} OWNER ${DB_USER};
EOSQL

and, to restore backups later when you start a new PostgreSQL version, 02_atuin_data.sql:

-- paste import statements here to apply a backup to a new db

To allow execution in the container, we need to fix the ownership to the UID & GID noted down in the last section, and the file permissions:

cd ~/docker-server-env/postgresql
sudo chown 999:999 initdb.d/*
sudo chmod u+x initdb.d/*

Start and initialise the DB

cd ~/docker-server-env/postgresql

docker volume create postgresql-data

. ./secrets
docker run -d \
  --name postgresql-16 \
  -p 5432:5432 \
  --restart unless-stopped \
  --network internal-docker-net \
  --mount 'type=volume,src=postgresql-data,destination=/var/lib/postgresql/data' \
  --mount 'type=bind,src=/home/user/docker-server-env/postgresql/initdb.d,destination=/docker-entrypoint-initdb.d' \
  -e POSTGRES_DB=mydbname \
  -e POSTGRES_PASSWORD="$POSTGRES_PASSWORD" \
  postgres:16

docker logs postgresql-16

In the logs, look out for ERRORs…
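A quick way to do that scan non-interactively; the sample log lines below stand in for real `docker logs postgresql-16 2>&1` output:

```shell
# count ERROR/FATAL lines in the (sample) startup log
log='LOG:  database system is ready to accept connections
ERROR:  role "atuin" already exists'
errors=$(printf '%s\n' "$log" | grep -cE 'ERROR|FATAL')
echo "$errors"   # 1
```

Against the real container: `docker logs postgresql-16 2>&1 | grep -E 'ERROR|FATAL'`.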

Test the connection to its DB with the newly created user:

psql -h docker-host-01 -U atuin

In case something went wrong, delete everything and start from scratch again:

docker stop postgresql-16
docker rm postgresql-16
docker volume rm postgresql-data
docker volume create postgresql-data

pgAdmin for admin tasks

# create volumes
docker volume create pgadmin

# start
docker run -d \
  --name pgadmin4 \
  --restart unless-stopped \
  --network internal-docker-net \
  --mount 'type=volume,src=pgadmin,destination=/var/lib/pgadmin' \
  -e "PGADMIN_DEFAULT_EMAIL=admin@example.com" \
  -e "PGADMIN_DEFAULT_PASSWORD=change-me" \
  dpage/pgadmin4
# email/password values are placeholders -- set your own

# to update
docker stop pgadmin4
docker rm pgadmin4
# and execute the run from before

Atuin setup

Reference: Atuin server config

Then, to run your own sync server:

cd docker-server-env/atuin-sync-server

Create the config file server.toml:

host = "0.0.0.0"  # assumed value: listen on all interfaces inside the container
port = 8888
open_registration = true
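The server also needs to know where the database lives; a hedged sketch of the extra config line (user/db names follow the init script, the password is a placeholder):

```toml
# connection string for the DB created earlier; host is the container name on internal-docker-net
db_uri = "postgres://atuin:change-me-too@postgresql-16/atuin"
```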

And start the sync server for real:

docker run -d \
  --name atuin-sync-server \
  --restart unless-stopped \
  --mount 'type=bind,src=/home/user/docker-server-env/atuin-sync-server/server.toml,target=/config/server.toml' \
  -p 8888:8888 \
  --network internal-docker-net \
  -e ATUIN_CONFIG_DIR=/config \
  ghcr.io/atuinsh/atuin \
  server start
# ATUIN_CONFIG_DIR and the image name are assumptions -- check the
# Atuin server docs for the current image

# for error search add: -e ATUIN_LOG=debug -e RUST_LOG=debug

Then log in to register a new Atuin installation against my own sync server: atuin register
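On the client side, the sync address can be pointed at this server before registering; a sketch, assuming the hostname used earlier in this guide:

```toml
# ~/.config/atuin/config.toml on the client
sync_address = "http://docker-host-01:8888"
```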

I hope this helps someone.


Any particular reason not to solidify this within a docker-compose file?

What's sshdh1 in the Atuin setup?

oh! my bad. That’s just an alias, to ssh into my server. :smiley:

I have a bunch of different dockerized services running, all with --restart unless-stopped. It was an easy way to not have to deal with systemd and the like.

For some I have written compose files and made systemd configs for them, but it was a bit much to maintain.

I just created a Swarm cluster on my server today. Guess I will slowly move everything over, as I can then mount NFS directly and also use secrets (I know, they are also available with compose, but NFS is not, as far as I know).

All good, thanks for the answers. I never stepped into Swarm. But I use a mixture of a TrueNAS SCALE NAS, which natively runs a k3s single-node cluster with their Web UI managing helm-based charts (TrueCharts is their most active community), and docker-compose running inside a systemd-nspawn sandbox on that same machine.

So I never dealt with the actual complexity of the cluster: either it is supported by the community and I use it 'as-is', or I run it with docker-compose.

I also tried to use the TrueNAS SCALE k3s for some stuff. But to be honest, I didn't have much luck with what I wanted to do.

I even locked myself out of TrueNAS by starting Traefik on port 80. I had to reset my TrueNAS config to access it again. :man_facepalming: Lost my configuration for my TimeMachine backups…

When I did an upgrade of TrueNAS, there was a warning on the Traefik config not to do it, as it would brick your NAS… really annoying.

I somehow don't like the TrueNAS GUI for configuring such services, as it adds another layer of complexity to an already complex environment. Also, from all I have heard, k8s is way more complex than Swarm.

All I need from it is secrets and NFS mounts for Docker volumes. AFAIK NFS mounts are not available with docker compose, but are on Swarm. Just re-googling this, I am not so sure anymore…

I identify with most of what you wrote, as I've gone through quite a few iterations until I finally started understanding it.

TrueNAS SCALE was my actual introduction to Kubernetes, as in the first time I actually did something with it - and to be fair, just the concepts, because the configuration and setup are all done via the UI, which at the beginning wasn't great, and now it's just good enough.

But now, after sticking with it for over 2 years, I gotta say - this is the best system I’ve ever run for myself.
And, I learned a lot about troubleshooting kubernetes which is something that I hope to apply in a work setting sometime in the future.

I must say though, I never experienced data-loss, the most that I’ve ever lost was the application setup, because the storage always remains. The OS is completely separate from the rest of the system, and with some (non trivial) work, things can be restored, even when proper backups were not made - it’s just a pain.

I am curious about swarm though, as it was assumed to be dead at one point, but today I see more and more people choosing it over k3s.

I know, for sure there should have been a way to get back to a workable system.
But I saw the “reset configuration” option after boot… The rest is history.

For sure it was not a glorious moment in my IT life… whatever.

Regarding k8s/k3s/Swarm… I have used Docker for development for years. For hosting, up to now just docker-compose. Always chickened out of going further… I always wanted to learn k8s, but it's a time issue… always too much on the plate.

Swarm was my middle ground: it doesn't seem that complex, and it is more than what I need.

Anyway. As memory is limited on my Mac mini, I am really tempted to learn Nix to have local dev environments which don't need the VM to run everything deterministically. Toyed around with it, but again, it's a time sink! :smiley:

direnv is great; I used it in my Linux development life (now I'm back in the Microsoft world for work).

Nix is fascinating, I keep trying, but never feel comfortable within it yet.

If I ever have an 'aha' moment, I'll be happy to share.


Is there a NixOS for WSL?

Same for the aha moment as well! No idea whether that will ever happen.

Maybe, but I’m more focused on the package manager.

I would rather make Nix my go-to instead of brew, or containers for running local tools.

If I master that, then switching to NixOS will start making sense to me.