I doubt I’ll be alone when I say that my server setup is a mess. It is a monolith of different services slammed together on the same machine. Some run on the bare metal, some run in a VM, some run in containers.

Recently I put some work into migrating my applications away from bare metal and into containers. While that effort is not complete just yet, it has already broken on me after podman-compose stopped creating a pod per compose file. Apparently that behaviour was a bug, and my wrapper script relied on it.

The sequence this script followed is rather simple:

  1. Remove any existing podman-generated systemd unit files from ~/.config/systemd/user
  2. Run podman-compose up
  3. Generate new system files with podman generate systemd
  4. Move all generated *.service files to the ~/.config/systemd/user directory

However, this relied on the previous podman-compose behaviour of creating one pod per compose file for all services to live in, which let the script guess the pod name and generate a full set of systemd units with a single invocation of podman generate systemd. Nifty, and it worked for a while, but now it’s time to move on.

The puzzle pieces

Let’s lay out what we want to achieve.

  • For security, containers should be rootless
  • For maintainability, containers should be defined in a docker-compose.yml file
  • For usability, containers should auto-start on boot
  • Purely for the sake of laziness we’ll throw in the extra requirement that containers should be self-updating
    • This might haunt me later. We’ll see about that, but just in case, it should be trivial to disable

Target 1: Rootless

I refuse to run my containers in a rootful environment. I strongly believe that podman was designed to remedy Docker’s root-owned daemon model, and I should take advantage of that.

My containers run as a separate podman user with virtually no privileges on the system. Most containers run the software they host under a different user as well, which means that it’s practically impossible for container X to touch container Y’s files.

This imposes challenges when you want to share folders between containers, but a little GID magic can solve that: give the containers the same GID and set a restrictive permission mask, so the actual owner gets write permission while other containers get read permission only.
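As a concrete sketch of that GID trick (the path is a placeholder, and the invoking user’s own primary group stands in for the dedicated GID you would share between the containers):

```shell
#!/bin/sh
# Sketch: a directory that one container writes to and another only reads.
# /tmp/shared-data is a placeholder path for illustration.
SHARED=/tmp/shared-data
SHARED_GID=$(id -g)   # in practice: a dedicated GID both containers are mapped to

mkdir -p "$SHARED"
chgrp "$SHARED_GID" "$SHARED"  # chgrp accepts a numeric GID as well as a name
chmod 2750 "$SHARED"           # owner rwx, group r-x, others nothing;
                               # the setgid bit makes new files inherit the group
```

In the compose file, the reading container can then join that group with the group_add key.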

Target 2: docker-compose

The standard docker-compose now works with podman v3!

A standard docker-compose.yml file should be sufficient. As an example, below is the configuration file for my Nextcloud instance:

version: '3'

services:
  db:
    image: postgres:alpine
    restart: always
    volumes:
      - /containers/nextcloud/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=VERY_SECURE_POSTGRESQL_PASSWORD_HERE

  redis:
    image: redis:alpine
    restart: always

  app:
    image: nextcloud:apache
    restart: always
    ports:
      - 81:80
    volumes:
      - /containers/nextcloud/html:/var/www/html
      - /mnt/storage/Nextcloud:/var/www/html/data
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_PASSWORD=VERY_SECURE_NEXTCLOUD_POSTGRESQL_PASSWORD_HERE
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=ncuser
      - REDIS_HOST=redis
    env_file:
      - db.env
    depends_on:
      - db
      - redis

  cron:
    image: nextcloud:apache
    restart: always
    volumes:
      - /containers/nextcloud/html:/var/www/html
      - /mnt/storage/Nextcloud:/var/www/html/data
    entrypoint: /cron.sh
    depends_on:
      - db
      - redis
      
  onlyoffice:
    image: 'onlyoffice/documentserver'
    ports:
      - '9980:80'
    environment:
      DB_TYPE: postgres
      DB_HOST: db
      DB_PORT: 5432
      DB_NAME: onlyoffice
      DB_USER: onlyoffice
      DB_PWD: VERY_SECURE_ONLYOFFICE_POSTGRESQL_PASSWORD_HERE
      JWT_ENABLED: 'true'
      JWT_SECRET: VERY_SECURE_JWT_SECRET_HERE
    volumes:
      - /containers/onlyoffice/logs:/var/log/onlyoffice
      - /containers/onlyoffice/certificates:/var/www/onlyoffice/Data
    depends_on:
      - db
    restart: always

The only thing really of note is that every service has restart: always set. The reason will become clear later.

Please note that rootless containers do not have the same privileges as rootful containers in terms of networking and the like; for example, they cannot bind ports below 1024 unless you lower the net.ipv4.ip_unprivileged_port_start sysctl. Always double-check the compose files you copy!

Prefer the podman secrets infrastructure for passwords instead of hardcoding them into this text file. A full treatment is beyond the scope of this article, so do some research!
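For reference, a minimal sketch of what that could look like, assuming your podman and podman-compose versions support compose secrets (the secret name db_password is made up for this example):

```yaml
# Create the secret once, outside the compose file:
#   printf 'VERY_SECURE_POSTGRESQL_PASSWORD_HERE' | podman secret create db_password -
services:
  db:
    image: postgres:alpine
    secrets:
      - db_password
    environment:
      # the official postgres image reads the *_FILE variant from a file
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password

secrets:
  db_password:
    external: true
```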

Target 2.1: SELinux

Fedora has SELinux enabled by default, and we don’t want to make Dan Walsh weep. Thankfully, setting up SELinux to play nice with your containers is trivial.

What I like to do is create a /containers directory (yes, in the root folder) whose owner I set to the podman user. After this it is just a matter of making sure the directory has the right SELinux type label, which is container_file_t. You can achieve this with the following command:

semanage fcontext -a -t container_file_t '/containers(/.*)?'

This records the rule so that the /containers directory and all of its sub-directories and files get the appropriate SELinux type. It does not, however, relabel existing files and directories, so apply it by running:

restorecon -Rv /containers

If all is well you should see a lot of label changes printed to the console, depending on how many files already exist in this directory.

Target 3: Autostart on boot

This took me too long to figure out, but it turns out that Podman already ships with a solution built in. Remember the restart: always that I mentioned earlier?

It turns out Podman ships with a systemd unit file specifically to start all containers marked to always restart. It is called podman-restart.service and it can be enabled as a user service!

systemctl enable --user podman-restart.service

Start a container marked with restart: always, and then reboot your system. Nothing happens. Congratulations!

By default, user units are only started when the user starts a session. Since nobody has logged in as your podman user, no session was ever started. Even worse, should someone log in as the podman user, a session will start, which will start your containers - and they will be killed again when that person logs out. That’s super inconvenient.

systemd-logind has a concept of lingering users. The loginctl man page describes it as:

… If enabled for a specific user, a user manager is spawned for the user at boot and kept around after logouts. This allows users who are not logged in to run long-running services. …

We can enable this for our podman user with the following command:

loginctl enable-linger PODMAN-USER-HERE

Set that, reboot again and you should see it all start to come together.

Target 4: Automatic updates

The tricky part. We can get automatic package updates through dnf-automatic, and Podman even has its own auto-update mechanism! However, we cannot use the latter in this setup: podman auto-update works through per-container systemd unit files, and we rely on podman-compose plus the global podman-restart.service instead.

Instead, what we can do is replicate the setup using custom unit files per docker-compose file. Create the following file at ~/.config/systemd/user/podman-update@.service:

[Unit]
Description=Podman auto-update service for %i
Wants=network-online.target
After=network-online.target

[Service]
WorkingDirectory=%h/infra/%i
Environment="DOCKER_HOST=unix://%t/podman/podman.sock"
Type=oneshot
ExecStart=/usr/bin/docker-compose pull
ExecStartPost=/usr/bin/docker-compose up -d
ExecStartPost=/usr/bin/podman image prune -f

# Uncomment the following if you want to have e-mail notifications
#ExecStopPost=/usr/bin/unit-status-mail-user.sh %n "Hostname: %H" "Machine ID: %m" "Boot ID: %b"

[Install]
WantedBy=default.target

This is a systemd template unit, which can be instantiated multiple times with different input parameters; %i expands to the string after the @ sign in the instance name.

Make sure to check the WorkingDirectory setting! It assumes your docker-compose.yml files are stored at ~/infra/[subfolder]/docker-compose.yml. Also note that DOCKER_HOST points at the user-level Podman API socket, so that needs to be running too: systemctl --user enable --now podman.socket

And add the following timer at ~/.config/systemd/user/podman-update@.timer:

[Unit]
Description=Podman update timer for %i

[Timer]
OnCalendar=daily
RandomizedDelaySec=900
Persistent=true

[Install]
WantedBy=timers.target

For every subfolder, you can now enable the timer, which will call the unit:

systemctl --user enable podman-update@nextcloud.timer

Where nextcloud is a subfolder of ~/infra. Your Nextcloud containers will now update daily at midnight, give or take the randomized delay of up to 15 minutes! You can verify the schedule with systemctl --user list-timers.

Bonus: Mail on unit start/failure

northernlights.se has a great resource on automatically mailing unit status after a unit starts or stops.

To adjust this approach for user unit files, create /usr/bin/unit-status-mail-user.sh with:

#!/bin/bash
MAILTO="root"
MAILFROM="unit-status-mailer"
UNIT="$1"

# Collect any extra arguments (hostname, machine ID, ...) one per line
EXTRA=""
for e in "${@:2}"; do
  EXTRA+="$e"$'\n'
done

UNITSTATUS=$(systemctl status --user "$UNIT")

sendmail "$MAILTO" <<EOF
From:$MAILFROM
To:$MAILTO
Subject:Status mail for unit: $UNIT

Status report for unit: $UNIT
$EXTRA

$UNITSTATUS
EOF

echo "Status mail sent to: $MAILTO for unit: $UNIT"

Then uncomment the commented ExecStopPost line in the systemd service above. Don’t forget to make the script executable and restore its SELinux context:

chmod +x /usr/bin/unit-status-mail-user.sh
restorecon -vF /usr/bin/unit-status-mail-user.sh