Failed to unmount `/home` on shutdown

I’ve been getting the error

failed to unmount /home

during shutdown for the last week or so. I'm not sure what's causing it.

Here’s some output from journalctl:

Jul 08 15:44:02 pavilion systemd[1]: Stopped Create Volatile Files and Directories.
Jul 08 15:44:02 pavilion systemd[1]: Stopped target Local File Systems.
Jul 08 15:44:02 pavilion systemd[1]: Unmounting /boot/efi...
Jul 08 15:44:02 pavilion audit: BPF prog-id=0 op=UNLOAD
Jul 08 15:44:02 pavilion audit: BPF prog-id=0 op=UNLOAD
Jul 08 15:44:02 pavilion systemd[1]: Unmounting /home...
Jul 08 15:44:02 pavilion systemd[1]: Unmounting /mnt/storage...
Jul 08 15:44:02 pavilion systemd[1]: Unmounting /run/credentials/systemd-sysusers.service...
Jul 08 15:44:02 pavilion umount[54571]: umount: /home: target is busy.
Jul 08 15:44:02 pavilion systemd[1]: Unmounting /tmp...
Jul 08 15:44:02 pavilion systemd[1]: Unmounting /var/cache/pacman/pkg...
Jul 08 15:44:02 pavilion systemd[1]: Stopping Flush Journal to Persistent Storage...
Jul 08 15:44:02 pavilion systemd[1]: home.mount: Mount process exited, code=exited, status=32/n/a
Jul 08 15:44:02 pavilion systemd[1]: Failed unmounting /home.
Jul 08 15:44:02 pavilion systemd[1]: var-cache-pacman-pkg.mount: Deactivated successfully.
Jul 08 15:44:02 pavilion systemd[1]: Unmounted /var/cache/pacman/pkg.
Jul 08 15:44:02 pavilion systemd[1]: Unmounting /var/cache...

Here’s my /etc/fstab:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system>             <mount point>  <type>  <options>  <dump>  <pass>
UUID=1A14-AF0E                            /boot/efi      vfat    umask=0077 0 2
UUID=792fd0e5-0942-45cc-b07b-400213890a85 /              btrfs   subvol=/@,defaults,noatime,space_cache,noautodefrag,compress=lzo 0 1
UUID=792fd0e5-0942-45cc-b07b-400213890a85 /home          btrfs   subvol=/@home,defaults,noatime,space_cache,noautodefrag,compress=lzo 0 2
UUID=792fd0e5-0942-45cc-b07b-400213890a85 /var/cache     btrfs   subvol=/@cache,defaults,noatime,space_cache,noautodefrag,compress=lzo 0 2
UUID=792fd0e5-0942-45cc-b07b-400213890a85 /var/log       btrfs   subvol=/@log,defaults,noatime,space_cache,noautodefrag,compress=lzo 0 2
tmpfs                                     /tmp           tmpfs   defaults,noatime,mode=1777 0 0
UUID=792fd0e5-0942-45cc-b07b-400213890a85 /var/cache/pacman/pkg btrfs subvol=/@var-cache-pacman-pkg,defaults,noatime,space_cache,noautodefrag,compress=lzo 0 2

UUID=792fd0e5-0942-45cc-b07b-400213890a85 /swap          btrfs   subvol=@swap,defaults,compress=no 0 0
/swap/swapfile none swap defaults 0 0
UUID=27454cca-524b-4918-afa5-9ae9dbb2d1ee	/mnt/storage	ext4	defaults,noatime 0 2

This normally means that something is holding the filesystem open, e.g. an open file or a running process. Do you use anything like psd (profile-sync-daemon) or asd (anything-sync-daemon), or leave anything running when you log out? Maybe try quitting all applications, logging out, and then shutting down?
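
For example, something like this from a TTY after logging out of the desktop should show whether anything is still running as your user (just a quick sketch; both commands are standard):

loginctl list-sessions            # any leftover sessions for your user?
ps -u "$USER" -o pid,stat,cmd     # processes still running under your account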

Is there a way to figure out which processes might be holding the filesystem open?

Basic process of elimination: note which applications are running when you shut down, close some of them, and if the error stops appearing, keep narrowing it down until you find the one causing it.
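
You can also ask the kernel directly. Just before shutting down, fuser or lsof will list the processes with files open under /home (an illustrative sketch; both are standard tools):

fuser -vm /home    # -m treats /home as a mount point; -v shows user, PID and command
lsof /home         # given a mount point, lists all open files on that filesystem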

Turns out it was a script that was spawning zombie processes. Now it’s fixed.
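
For anyone who hits the same symptom: zombies show up in ps with state Z, and the PPID column points at the parent that isn’t reaping them, e.g.:

ps -eo pid,ppid,stat,cmd | awk '$3 ~ /^Z/'    # list zombies and their parent PIDs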
