All hail systemd. I have been to the seminar with the strange-tasting punch.
However, I have an /etc/fstab with about 200 NFS mounts in it. systemd does its job: it scans /etc/fstab and generates mount units for all of them on the fly. Unfortunately, when it tries to mount all of them at exactly the same time with 200 individual mount processes, sadness occurs on either the client or the server, and some of the mounts take longer than systemd's 90-second timeout. systemd then sends the mount a TERM, which appears to accomplish nothing: all the filesystems get mounted anyway, without a retry. I do, however, end up with a bunch of orphaned rpc.statd processes because of the TERM signal.
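As far as I can tell, that 90-second figure is systemd's global DefaultTimeoutStartSec, which could be raised for everything at once in system.conf. A blunt instrument, since it affects all units and not just mounts, but noting it for completeness (the 300s value is only an example):

```
# /etc/systemd/system.conf
# The 90s mount timeout appears to come from DefaultTimeoutStartSec.
# Raising it affects ALL units, not just the generated mounts.
[Manager]
DefaultTimeoutStartSec=300s
```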
So far my solutions to this sort-of problem appear to be:
1. Find a way to change the default timeout used for mounts systemd creates on the fly.
2. Find a way to convince systemd to handle them all with a single mount process (the way mount -a does).
3. Move the mounts out of fstab into systemd proper. I would have to make some intermediate targets to partially serialize the mounting. I could also adjust the timeout value per mount.
4. Go back to using automount which has caused us so much grief in the past it was removed from our configuration.
5. Just have a script kill all the orphaned rpc.statd processes after boot and walk away, whistling nonchalantly.
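For option 1, newer systemd versions apparently honor a per-mount x-systemd.mount-timeout option right in /etc/fstab, which becomes TimeoutSec= on the generated mount unit. Assuming our systemd is recent enough, each entry would look something like this (server name and paths are placeholders):

```
# /etc/fstab -- hostname and paths are placeholders.
# x-systemd.mount-timeout= sets TimeoutSec= on the generated unit,
# overriding the 90-second default for this one mount.
fileserver:/export/home  /mnt/home  nfs  defaults,x-systemd.mount-timeout=300  0 0
```

That would at least keep everything in fstab.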
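For option 3, each native unit would be a sketch along these lines (names are hypothetical; the unit file name has to be the systemd-escaped mount path, e.g. what systemd-escape -p --suffix=mount /mnt/home prints). The intermediate targets for partial serialization would then be wired in with After=/Wants= between batches of these units:

```
# /etc/systemd/system/mnt-home.mount -- hypothetical example unit
[Unit]
Description=NFS mount for /mnt/home
After=network-online.target
Wants=network-online.target

[Mount]
What=fileserver:/export/home
Where=/mnt/home
Type=nfs
TimeoutSec=300

[Install]
WantedBy=remote-fs.target
```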
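And for the record, option 5 would be something like the sketch below. The kill_orphans helper and its keep-the-oldest heuristic are my own invention, on the theory that only one rpc.statd is needed per host:

```shell
#!/bin/sh
# Sketch for option 5: after boot, keep the oldest instance of a
# daemon and TERM the rest.  kill_orphans is a hypothetical helper.
kill_orphans() {
    name=$1
    # pgrep -o: oldest matching process; -x: exact name match.
    oldest=$(pgrep -o -x "$name") || return 0   # none running, nothing to do
    for pid in $(pgrep -x "$name"); do
        [ "$pid" = "$oldest" ] && continue
        kill "$pid" 2>/dev/null || :   # ignore pids that already exited
    done
}

# At boot we would run:
#   kill_orphans rpc.statd
```

Then walk away, whistling nonchalantly.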
Options 3, 4, and 5 are all pretty distasteful for our organization. We also prefer to keep the mounts in /etc/fstab to make for a more homogeneous deployment across systemd and non-systemd machines.
Any brilliant ideas?