If autofs connections delay shutdown / reboot

If you are using autofs to mount external shares (e.g. from a NAS), you have probably run into the same issue as me: shutting down the computer takes about 3 minutes because the autofs service hangs and waits for its default timeout. This happens when, for example, NetworkManager disconnects the network while you are logging out of your session but autofs is still running. I have seen people suggest reducing the service timeout, but a better solution is to run sudo systemctl edit autofs.service and add an ExecStop= line that unmounts the shares before the service stops. The file should look like this:

[Unit]
Description=Automounts filesystems on demand
After=network.target ypbind.service sssd.service network-online.target remote-fs.target rpc-statd.service rpcbind.service
Wants=network-online.target rpc-statd.service rpcbind.service

[Service]
ExecStart=/usr/bin/automount $OPTIONS --systemd-service --dont-check-daemon
ExecReload=/usr/bin/kill -HUP $MAINPID
ExecStop=/usr/bin/umount -a -f -t nfs
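For reference, the same override can also be created non-interactively. This is a sketch only: the drop-in path below is the one systemctl edit uses by convention, and the umount filter assumes NFS shares.

```shell
# Write the drop-in directly instead of going through systemctl edit's editor.
sudo mkdir -p /etc/systemd/system/autofs.service.d
sudo tee /etc/systemd/system/autofs.service.d/override.conf >/dev/null <<'EOF'
[Service]
# Force-unmount all NFS shares before autofs is stopped at shutdown
ExecStop=/usr/bin/umount -a -f -t nfs
EOF
sudo systemctl daemon-reload
```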


Of course you may want to replace the nfs entry with whatever network filesystem you use to access the external data, such as nfs4 or cifs.

Then reload the systemd units with:
sudo systemctl daemon-reload

Now autofs should no longer cause a wait during shutdown.
I hope this is useful to some; if you have another solution, it would be awesome to hear about it.

You can use a systemd-automount in /etc/fstab or in a unit file. That is what they are for.

Here is an nfs example for /etc/fstab

server:/path /target/path nfs x-systemd.automount,x-systemd.device-timeout=10,timeo=14,x-systemd.idle-timeout=1min 0 0
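A quick way to check that systemd picked the entry up (a sketch; the unit name target-path.automount is derived from the example mount point /target/path):

```shell
sudo systemctl daemon-reload
# systemd names the generated units after the mount path: /target/path -> target-path
systemctl status target-path.automount
# The NFS share itself is only mounted on first access:
ls /target/path
```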

As a side note, never edit the files in /usr/lib/systemd/system. Either use the systemctl edit command or copy the file to /etc/systemd/system and modify it there.


Great - thank you.

I am going to add systemctl edit to my post.

Just to understand correctly… the fstab entry replaces the /etc/auto.misc file that defines all the shares and their mount points? But /etc/auto.master still needs to be defined?

The fstab entry is all you need. No other changes are required anywhere. It “just works”


Just to say, I much prefer autofs to systemd mounts. I’ve never managed to get systemd mounts to work properly: they mount EVERYTHING as root, and I need things mounted as the current user (assuming multiple users per machine) so that files and folders created on the share get the correct permissions. With autofs you can use uid=$UID,gid=$GID to do this; if there’s a way to do it with systemd mounts, I haven’t found it in any of the tutorials I’ve read.
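For comparison, this is roughly what that looks like with autofs; the file names and share path here are made up, but the $UID/$GID substitution is autofs's own macro support, expanded per the user who triggers the mount:

```
# /etc/auto.master (paths are examples)
/mnt/nas  /etc/auto.nas  --timeout=60

# /etc/auto.nas - $UID and $GID expand to the user triggering the mount
documents  -fstype=cifs,rw,uid=$UID,gid=$GID,credentials=/etc/samba/private/nas.cred  ://servername/documents
```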

I have never had this problem. What filesystem are you trying to mount this way?

cifs (smbfs), the only real choice on my NAS since it’s geared towards home users.

That definitely should work fine. Let me pull one of my old entries out of my notes and test it.

OK, depending on what exactly you are trying to solve for here are a couple of options:

This mounts it as a specific user you set:

//servername/sharename /path/to/mount cifs x-systemd.automount,x-systemd.idle-timeout=1min,rw,uid=yourusername,gid=yourgroupname,credentials=/etc/samba/private/sharename.cred,iocharset=utf8,vers=2.0 0 0

It needs root to make the initial mount but since that happens at boot time it will have root at that point regardless of who logs in.

This mounts it as the current user and doesn’t require root at all:

//servername/sharename /path/to/mount cifs x-systemd.automount,x-systemd.idle-timeout=1min,rw,user,credentials=/etc/samba/private/sharename.cred,iocharset=utf8,vers=2.0 0 0

I am using a credentials file in both examples but that isn’t required. If you would prefer, you can put the smb creds inline.
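For completeness, the credentials file referenced above is a plain key=value file that should be readable only by root; the path and names match the examples, and the domain line is optional:

```
# /etc/samba/private/sharename.cred (create with chmod 600;
# the '#' lines here are annotations, not part of the file)
username=yourusername
password=yourpassword
domain=yourdomain
```

The inline alternative is to replace credentials=... with username=...,password=... directly in the fstab options, at the cost of the password being visible in the world-readable /etc/fstab.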


I might try it again, but I seem to recall having tried that setting before, and whenever a user attempted to copy a file over to the NAS, the copy would go through but an error would pop up (in Dolphin) saying it had failed.

If you think you can help fix it, feel free to split this into a new thread… but it just doesn’t WORK on my systems with systemd-mount. I set it all up (unit files instead of fstab) and get nothing but:

[tim@sovereign soth]$ pwd
[tim@sovereign soth]$ ls documents
ls: cannot access ‘documents’: Too many levels of symbolic links
[tim@sovereign soth]$ cd documents
bash: cd: documents: Too many levels of symbolic links

And journalctl shows no errors, just lines upon lines of

May 03 09:22:13 sovereign systemd[1]: mnt-soth-documents.automount: Got automount request for /mnt/soth/documents, triggered by 5045 (bash)

Since I have anonymous browsing enabled on the NAS at home, it seems related to having both guest and user set. If I use guest, it mounts, but read-only even when rw is set. If I add user, I get those errors.

So, if I use guest with file_mode=0777,dir_mode=0777, it MOSTLY works, but with the odd behaviour I remembered: if you attempt to MOVE a file to the share, it gives an error saying it doesn’t have access, but it does indeed copy (instead of move) the file if you choose “skip all”.