Drive Mount Failing on Reboot


Last Tuesday I got an email from Let’s Encrypt saying they needed to revoke a few million certificates they had issued, and that the certificate I was using for this website happened to be one of them. I wasn’t particularly bothered; they seemed to have a handle on things, and I generally prefer bugs that are found and fixed to bugs that are left to cause trouble.

When I got to work I logged into my server, figuring I could update the certificate quickly and be on my way. There’s no real reason to go into detail, so I’ll just say that it was not simple, and I quickly got frustrated and put it off until the next day. On the morning of March 4th, when I logged back into my server, it appeared that my nginx process had not fully stopped during a restart, and systemd couldn’t start the service again because the ports it was trying to serve on were still in use. I figured it would be easier to reboot the machine than to deal with the mess, so I thoughtlessly typed out systemctl reboot, chatted a bit on Slack, and attempted to log back on to my server. After trying to ssh in a few times it became apparent that something was wrong, since my server wasn’t responding at all. This wasn’t much of a problem – nobody looks at this website anyway. It was, however, a bit of a pain getting through the workday without my Plex library.

When I got home I plugged my server into the TV that sits next to it and got to work. Arch had booted into emergency mode and prompted me to log in as root and check the logs. Rather than tail the logs like a sane person, I skimmed through the entire session’s logs from start to finish before finding these entries:

Mar 04 17:50:28 user systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Mar 04 17:50:28 user systemd[1]: Dependency failed for Local File Systems.
Mar 04 17:50:28 user systemd[1]: Failed to mount /media.
Mar 04 17:50:28 user systemd[1]: media.mount: Failed with result 'exit-code'.
Mar 04 17:50:28 user systemd[1]: media.mount: Mount process exited, code=exited, status=12/n/a

I’ve always been surprised that the NTFS drive I have mounted at /media works at all. It’s been completely painless ever since I set it up, and Plex has been serving files from it without any extra configuration or errors. However, it had been so long since I set it up that I had completely forgotten what I did to get it working.

I pulled up my /etc/fstab file and found the entry for my media drive:

UUID=$UUID1 /      ext4    rw,relatime 0 1
/dev/sdb2   /media ntfs-3g defaults    0 0 

Well, there’s the problem. My fstab entry was attempting to mount /dev/sdb2 at /media, but when I listed my drives with fdisk -l, the drive I wanted appeared at /dev/sdd2. I rebooted the server again with systemctl reboot, and this time the drive showed up at /dev/sdc2. It had never occurred to me that these names might change between reboots, since the four hard drives connected to this machine never switch connectors. A quick skim of the Arch Wiki made me realize how wrong I was: the kernel hands out /dev/sdX names in the order it happens to detect the devices, and that order isn’t guaranteed to be stable across boots. It seems I had just gotten lucky the few times I had rebooted this machine since mounting the media drive, and was less lucky the last two days.
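The stable identifier to use instead is the filesystem UUID, which can be looked up with lsblk or blkid. A quick sketch (the /dev/sdd2 path is just whatever fdisk -l happened to report on that boot; it will differ per machine):

```shell
# List every block device with its filesystem type, UUID, and current
# mount point -- the UUID column is what belongs in /etc/fstab.
# (blkid /dev/sdd2 would print the same identifiers for one partition.)
lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT
```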

All that needed to be done was to reference the drive by its UUID like so:

UUID=$UUID1 /      ext4    rw,relatime 0 1
UUID=$UUID2 /media ntfs-3g defaults    0 2 
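For anyone wanting to avoid another blind reboot, the edited fstab can be sanity-checked first; a sketch, assuming a reasonably recent util-linux (findmnt --verify reads /etc/fstab by default):

```shell
# Check /etc/fstab for syntax errors, unknown mount options, and
# unreachable sources without actually mounting anything.
findmnt --verify

# Once that passes, `mount -a` (as root) mounts every fstab entry that
# isn't mounted yet, so the fix can be confirmed before rebooting:
#   mount -a && findmnt /media
```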

After another reboot, everything came up without an issue.