Mounting a WD MyCloud

My fstab:

//192.168.0.24/Steve  /media/mycloud  cifs  uid=xircon,credentials=/home/xircon/.smbcredentials,iocharset=utf8 0 0

Doesn’t mount at boot, but if I issue sudo mount -a it does.

Journalctl grepped for mycloud:

Jun 20 10:59:16 xircon-w6567sz systemd[1]: Mounting /media/mycloud...
Jun 20 10:59:16 xircon-w6567sz systemd[1]: media-mycloud.mount: Mount process exited, code=exited, status=32/n/a
Jun 20 10:59:16 xircon-w6567sz systemd[1]: media-mycloud.mount: Failed with result 'exit-code'.
Jun 20 10:59:16 xircon-w6567sz systemd[1]: Failed to mount /media/mycloud.

Any ideas?

Might this help with problem solving? Perhaps it’s trying to start before the network is up in systemd:
https://bbs.archlinux.org/viewtopic.php?id=228685
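
One way to check what ordering systemd actually generated from the fstab line (unit name taken from the journal output above):

# show the ordering and dependencies of the generated mount unit
systemctl show media-mycloud.mount -p After,Wants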

Since the content of your fstab is converted to systemd units anyway, it makes sense to use a systemd mount unit.

Create the file

/etc/systemd/system/media-mycloud.mount

with this content (change WORKGROUP to match your system if different):

[Unit]
Description=WD MyCloud
After=network-online.target
Wants=network-online.target

[Mount]
What=//192.168.0.24/Steve
Where=/media/mycloud
Type=cifs
Options=_netdev,iocharset=utf8,rw,file_mode=0777,dir_mode=0777,user=xircon,credentials=/home/xircon/.smbcredentials,workgroup=WORKGROUP
TimeoutSec=30

[Install]
WantedBy=remote-fs.target
WantedBy=multi-user.target

Complement with an automount unit

/etc/systemd/system/media-mycloud.automount

with content

[Unit]
Description=WD MyCloud
After=network-online.target
Wants=network-online.target
ConditionPathExists=/media/mycloud

[Automount]
Where=/media/mycloud
TimeoutIdleSec=10

[Install]
WantedBy=remote-fs.target
WantedBy=multi-user.target

Since you already created the folder, you only need to enable and start the automount:

sudo systemctl enable --now media-mycloud.automount

The share will mount on first access - no matter whether it is from a file manager or a terminal:

ls /media/mycloud

Note:

  1. Mount and automount units use a mandatory naming scheme - the unit name must match the mount path: path-to-mount.{mount,automount} (see the systemd-escape example below).
  2. When creating new mount and automount units, start and stop the mount unit once - this will create the necessary path.
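
If in doubt about the required name, systemd-escape will derive it from the mount point:

# derive the mandatory unit name from the mount path
systemd-escape -p --suffix=mount /media/mycloud
# prints: media-mycloud.mount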

You shouldn’t need mount units to do all that. You can do it much more simply in /etc/fstab.

Here is the OP’s mount command turned into a systemd automount in /etc/fstab:

//192.168.0.24/Steve /media/mycloud cifs x-systemd.automount,x-systemd.idle-timeout=1min,rw,uid=xircon,credentials=/home/xircon/.smbcredentials,iocharset=utf8,vers=2.0 0 0
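
If you go this route, you can test it without a reboot - the generated unit name follows from the mount path, as above:

# regenerate units from the edited fstab
sudo systemctl daemon-reload
# start the generated automount; the share then mounts on first access
sudo systemctl start media-mycloud.automount
ls /media/mycloud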

Yep, the timeout is the thing. The “32” given by systemd in your logs pointed to that.

The mount units spew out a lot of errors:

Jun 20 13:44:45 xircon-w6567sz kernel: raid6: skip pq benchmark and using algorithm avx2x4
Jun 20 13:44:45 xircon-w6567sz systemd[1]: Condition check resulted in Rebuild Dynamic Linker Cache being skipped.
Jun 20 13:44:45 xircon-w6567sz systemd[1]: Condition check resulted in File System Check on Root Device being skipped.
Jun 20 13:44:45 xircon-w6567sz systemd[1]: Condition check resulted in Repartition Root Disk being skipped.
Jun 20 13:44:45 xircon-w6567sz systemd[1]: Condition check resulted in First Boot Wizard being skipped.
Jun 20 13:44:45 xircon-w6567sz systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
Jun 20 13:44:45 xircon-w6567sz systemd[1]: Condition check resulted in Create System Users being skipped.
Jun 20 13:44:46 xircon-w6567sz systemd[1]: Condition check resulted in First Boot Complete being skipped.
Jun 20 13:44:46 xircon-w6567sz systemd[1]: Condition check resulted in Store a System Token in an EFI Variable being skipped.
Jun 20 13:44:46 xircon-w6567sz systemd[1]: Condition check resulted in Commit a transient machine-id on disk being skipped.
Jun 20 13:44:46 xircon-w6567sz systemd[1]: Condition check resulted in Virtual Machine and Container Storage (Compatibility) being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in Rebuild Dynamic Linker Cache being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in Store a System Token in an EFI Variable being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in First Boot Wizard being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in First Boot Complete being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in Rebuild Hardware Database being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in Commit a transient machine-id on disk being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in Repartition Root Disk being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in Create System Users being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in Update is Completed being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in Manage Sound Card State (restore and store) being skipped.
Jun 20 13:44:48 xircon-w6567sz systemd[1]: Condition check resulted in SSH Key Generation being skipped.
Jun 20 13:44:56 xircon-w6567sz systemd[4578]: org.gnome.Shell@wayland.service: Skipped due to 'exec-condition'.
Jun 20 13:44:56 xircon-w6567sz systemd[4578]: Condition check resulted in GNOME Shell on Wayland being skipped.

That has nothing to do with the mount units - it is something completely different.

Well - systemd is converting the fstab to mount units - so why not use mount units in the first place?
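
You can see the conversion for yourself - the fstab generator writes real unit files under /run/systemd/generator:

# show the unit systemd generated from the fstab line
systemctl cat media-mycloud.mount
# the comment header points at /run/systemd/generator/media-mycloud.mount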

But I know - some long-time *nix users are heavily tied to fstab … use whatever you like.

@dalto @root - I get the “Condition check” errors whether I use the unit files or @dalto’s amended fstab line - take them out and I don’t get them. Reading up on it now, but what do they mean?

They are not errors - they are log messages about what the system is doing, and they are quite readable - nothing but information.

The amended fstab line is converted to mount/automount units, and again - the messages have nothing to do with the mounting.

There is nothing in those messages which refers to the mounting resulting from the newly added mount sequence.

Your initial issue is that your network is not up when you try to mount the network share - and therefore the mount fails.

The above-mentioned issue is remedied in the mount units by specifying that they are not to be processed unless the network is up.

Furthermore, the units specify when they are needed - at multi-user.target and remote-fs.target.

These conditions ensure that the system does not mount unless the network is up and a user requests it - which is only possible once multi-user.target is reached and the network is online.
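
One caveat worth checking: network-online.target only actually waits if a wait-online service is enabled - assuming NetworkManager here (systemd-networkd users would check systemd-networkd-wait-online.service instead):

systemctl is-enabled NetworkManager-wait-online.service
# enable it if disabled, so network-online.target really waits for a connection
sudo systemctl enable NetworkManager-wait-online.service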

Condition check - was this first boot? No - then wizard skipped
Condition check - was the first boot completed? Yes - then wizard skipped

You can apply the logic to the other entries as well.
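
If you want to see exactly which conditions a given unit declares, systemctl cat prints its unit file, Condition lines included - the first-boot wizard as an example:

# print the unit file, including its Condition*= lines
systemctl cat systemd-firstboot.service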


Check this out:
https://forums.centos.org/viewtopic.php?t=52507

Specifically referring to “exit status 32” which you are getting.
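
Per mount(8), exit status 32 is the generic “mount failure” code. You can reproduce it by hand and read the status directly:

# run the failing mount manually and print its exit status
sudo mount /media/mycloud; echo $?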

A bunch of reasons:

  • I have over 50 mounts; it is drastically easier to manage them in a single place.
  • Why replace a single line in fstab with two files full of config that do the exact same thing?
  • It is easier to copy and paste a line in fstab than it is to copy two files, edit them and enable the mount.
  • Using fstab is the method officially recommended by the documentation

In general, configuring mount points through /etc/fstab is the preferred approach


As I said - whether it is by choice or by habit …

We all manage the system in the way we find most effective and there is nothing wrong with either approach.

I don't care to argue for my way of doing things.

But since you wanna know …

I know what’s in the manual of systemd mount units …

I like the systemd mount unit approach because it has - for me - been far easier to troubleshoot a single mount unit than to experiment with a mount command line, convert it to an fstab options list, and fiddle with the correct sequence for automounting - when in the end all your hard work in fstab is parsed and converted to systemd mount units anyway.

It is a breeze to create and test a new unit without having to roll through endless reboots until you get the fstab options list right.
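
For example, a typical edit-and-retest cycle looks like this (unit name as above):

# reload unit files after editing, then retest just this mount
sudo systemctl daemon-reload
sudo systemctl restart media-mycloud.mount
# inspect only this unit's messages from the current boot
journalctl -b -u media-mycloud.mount --no-pager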

But that’s me …

Not trying to argue here either. You proposed an approach, I proposed an alternative approach that achieves the same goal.

Then you asked why so I answered :wink:

As a side note, you don’t need to reboot to test changes in /etc/fstab. In fact, you probably shouldn’t unless you like living dangerously. :cowboy_hat_face: I recommend testing them first before rebooting.
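
For example (findmnt --verify needs util-linux 2.29 or newer):

# sanity-check the fstab syntax without mounting anything
findmnt --verify
# then attempt all fstab mounts in place
sudo mount -a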


I just came across your other topic and was thinking - if you are indeed connecting over wlan, then use mount on demand, as it is very difficult to control when systemd actually brings the network up.

Had an unexpectedly busy day yesterday!

In the end I bodged it: I reverted to my fstab line and added a line (sudo mount -a) to a script I launch at start-up.

This works, the drive mounts and I get no weird errors.
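
For anyone copying the bodge, the start-up addition is essentially just this - the mountpoint guard is a sketched-in extra to avoid remounting needlessly, not something the single mount -a line requires:

#!/usr/bin/env bash
# start-up bodge: mount everything in fstab once the session is up
# (assumes sudo can run non-interactively at this point)
if ! mountpoint -q /media/mycloud; then
    sudo mount -a
fi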