Creating a Collaborative User Space


This weekend Nicolas Knoebber asked me if I would be willing to host the web server for his dotfile project. Always looking to play at being a sysadmin, I was ecstatic when he asked. The project is primarily a CLI application that provides quick access and version control for your Linux dotfiles, such as your .bashrc, but the hope is to ship a file server with the codebase so that users can host their own remote file versions. One such server will be hosted by us at dotfilehub.com, which is the instance Nick has asked me to host from my home server for the foreseeable future.

This post documents the essential setup I went through to create a space on my server where we could both comfortably work, and where the files could be built and served (relatively) easily and reliably. The primary focus is on creating the user space – that is, the users, groups, directory ownership, and service configurations needed to make sure we could both reliably change anything on the project. If you’d like to jump around, feel free; I have a tendency to use more words than I need to convey a point.

  1. Securing SSH Access
  2. Creating a Shared User
  3. Managing Shared Directories
  4. Creating Systemd Services
  5. Creating a Reverse Proxy with Nginx
  6. Updating DNS Records

Securing SSH Access

The biggest security vulnerability in my homelab had long been that SSH access was allowed with account passwords. Now that someone else’s work was going to be subject to my security practices, I decided it was time to lock down remote access to my machine. The Arch wiki provided a few helpful links, and DuckDuckGo a few more, so you shouldn’t have any problem securing your machine. I found stribika’s guide to be the most complete, and I went through the whole process to make sure I was using protocols that stribika claims the NSA struggles with (the Snowden leaks suggest they are able to access weakly configured servers).

Nick already had a user on my server, so he and I each copied over our public RSA keys by running ssh-copy-id $USER@$SERVER, which should be included with your OpenSSH package. Afterwards I restarted the sshd daemon, and it was no longer possible to log into any user remotely with a password. I checked the journal to make sure the restart did not throw any errors, and all looked well, so I consider this a successful update.
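
For reference, the password-related directives in /etc/ssh/sshd_config end up looking something like this (a minimal sketch; stribika’s guide goes much deeper into key exchange and cipher choices):

PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes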

Creating a Shared User

This post is not organized chronologically, but in the end I found it best to create a shared user for managing builds, deploys and daemons. The setup was not difficult, and it gives us both access via sudo -su $USER to switch to the user, or sudo -u $USER $COMMAND to run a single command as that user. The most important benefit of this scheme is that it isolates the journal to a single user, so that logging from my blog, Plex server, and other services is not written to the same journal that dotfile uses.

First, I created a new user dotfile, and gave it a home directory. I specifically did not give the user a password, which will be explained further below.

useradd -m dotfile
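
Because no password was given, the account is created with its password locked. To double-check, passwd -S reports the account status, where an L flag means the password is locked:

passwd -S dotfile    # output like "dotfile L ..." confirms no usable password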

I then created the dotfilehub user group, and included all of us in it:

groupadd dotfilehub
usermod -a -G dotfilehub dotfile
usermod -a -G dotfilehub lombardi
usermod -a -G dotfilehub nicolas

The dotfile user should never be logged into directly, but we still want access to it, so I gave all users in the dotfilehub group passwordless sudo access to the account. Since the user has no password defined, su dotfile cannot be used to switch to it, but sudo -u and sudo -su will still work for members of the dotfilehub group.

To edit the sudoers file you need to run visudo. This will open the file in the vi editor by default, and requires root access to work. To allow the dotfilehub group to sudo to the dotfile user, I added the below line to sudoers:

%dotfilehub ALL=(dotfile) NOPASSWD:ALL

With this, we had password-less access to a shared user to manage our builds, deploys and services. This setup means that our server is only as secure as our user accounts are, which I think is appropriate given the increased security on our SSH access.
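
A quick way to verify the rule from either of our accounts (after logging out and back in so the new group membership takes effect):

sudo -u dotfile whoami    # runs a single command as dotfile; prints "dotfile"
sudo -su dotfile          # opens a shell as the dotfile user, no password prompt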

Managing Shared Directories

After some deliberation, I decided that /var/dotfilehub was the appropriate place to keep the files for the server. The main considerations were that:

  1. Files that are going to be changing should be stored in /var/
  2. The closer the directory is to root, the easier it is to find when I inevitably forget where I put them

Initially, I felt that they should be stored in /var/www/dotfilehub.com, in the same convention I follow to serve this blog and my image server. However, since this content isn’t static, it felt out of place when I put the files in there.

Creating the directory and managing the permissions is straightforward, so I will just show the commands:

mkdir /var/dotfilehub
chown nicolas:dotfilehub /var/dotfilehub 
chown lombardi:dotfilehub /var/dotfilehub 
chown dotfile:dotfilehub /var/dotfilehub 

My grasp of user permissions is not great – in the end I ran chown for each of us, though each call simply replaces the previous owner, so only the last one sticks. What actually matters for shared access is the dotfilehub group ownership, along with read, write and execute permissions for the group.
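
For anyone repeating this, a sketch of the group-based approach that would have avoided the repeated chowns, assuming the same dotfilehub group:

chown dotfile:dotfilehub /var/dotfilehub
chmod 2775 /var/dotfilehub    # rwx for owner and group; setgid so new files inherit the group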

I switched over to the dotfile user and created a few more directories. Nick had requested a development route to host code from the development branch, so the structure ended up in this form:

/var/dotfilehub/
|
+-- production/
|   +-- dotfile/     <-- clone of master branch
+-- development/
|   +-- dotfile/     <-- clone of development branch
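
Roughly, as the dotfile user (the repository URL here is an assumption – substitute wherever the dotfile code actually lives):

mkdir -p /var/dotfilehub/production /var/dotfilehub/development
git clone -b master https://github.com/knoebber/dotfile.git /var/dotfilehub/production/dotfile
git clone -b development https://github.com/knoebber/dotfile.git /var/dotfilehub/development/dotfile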

With this setup, all relevant users were able to read, write and execute all the relevant files.

Creating Systemd Services

Since we wanted a simple way to start and restart the services, we decided it was best to create systemd services for the production and development servers. The service files were created as dotfile user services in order to keep access restricted to the dotfile user and to maintain a clean journal under that user. This has the added benefit of keeping Nick and me from accidentally stopping or restarting the service, since we are forced to deliberately switch contexts to the dotfile user.

I created two service files in /home/dotfile/.config/systemd/user/: dotfilehub.service and dotfilehub-development.service. They are the same except for their execution path and port. Here is the production service configuration:

[Unit]
Description=Dotfile Hub Server
After=network.target

[Service]
Type=simple
ExecStart=/var/dotfilehub/production/dotfile/server -addr=localhost:6870
Restart=on-failure

[Install]
WantedBy=default.target

I can never decide on what port numbers to use, so I went with the ASCII decimal code for D followed by the code for F. Since this value would be needed in the nginx configuration as well, the scheme helped me remember the number until I got there.
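
(As a quick sanity check of the mnemonic in a shell – printf treats an argument with a leading quote as a request for that character’s decimal code:)

printf '%d%d\n' "'D" "'F"    # prints 6870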

For the development service, the directory for the ExecStart value was changed to the development directory, and the port was bumped by 1. Everything else was the same.
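
For reference, the only line that differs in dotfilehub-development.service:

ExecStart=/var/dotfilehub/development/dotfile/server -addr=localhost:6871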

Services were enabled to start on boot, and then started, all from the dotfile user:

systemctl --user enable dotfilehub
systemctl --user enable dotfilehub-development
systemctl --user start dotfilehub
systemctl --user start dotfilehub-development

It was also necessary to enable lingering for the dotfile user, so that its systemd instance starts at boot and keeps running without anyone logged in:

loginctl enable-linger dotfile
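
Checking on the services later follows the same pattern, again from the dotfile user:

systemctl --user status dotfilehub
journalctl --user -u dotfilehub -e    # jump to the end of this unit's journal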

Creating a Reverse Proxy with Nginx

I’ve gotten fluent with nginx at this point, since it was already configured on the server to serve this blog. I have a previous post on how I configured nginx, so I will spare you the boilerplate written there. All that was needed was to add two server blocks for the production and development servers inside my http block.

server {
    server_name dotfilehub.com www.dotfilehub.com;
    location / {
        proxy_pass http://127.0.0.1:6870;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    listen 80;
}

server {
    server_name dev.dotfilehub.com;
    location / {
        proxy_pass http://127.0.0.1:6871;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    listen 80;
}

These additions provide reverse proxies to local ports 6870 and 6871 for the appropriate domain names, and fall back to the default nginx error page for 500-range responses.
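
Before restarting nginx, the new configuration can be checked for syntax errors:

nginx -t    # validates the configuration files without reloading them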

I added these two blocks, restarted the nginx service, and proceeded to configure the DNS records as outlined below. Afterwards, I used Let’s Encrypt’s certbot tool to manage the SSL configuration for the server by running certbot --nginx and following the directions on the command line. This updated the nginx configuration to terminate HTTPS and redirect plain HTTP traffic to it, in front of the existing server blocks. Note that certbot has to be run after the DNS records are configured, since Let’s Encrypt must be able to reach the domain to validate ownership.
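
The invocation was along these lines (the exact domain list here is an assumption):

certbot --nginx -d dotfilehub.com -d www.dotfilehub.com -d dev.dotfilehub.com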

Updating DNS Records

Nick had already purchased the dotfilehub.com domain name through Google Domains close to a year ago, so I asked him to give me access to the domain and got to work. I began by configuring a dynamic DNS record for the base URL on the Google Domains page, and adding a record in my /etc/ddclient/ddclient.conf file.

protocol=googledomains
login=$LOGIN_PROVIDED_BY_GOOGLE_DOMAINS
password=$PASSWORD_PROVIDED_BY_GOOGLE_DOMAINS
dotfilehub.com

This record worked flawlessly, even though it was the second Google Domains record I had included in the file. I restarted the ddclient service and was able to reach the dotfilehub server by pinging dotfilehub.com.
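
A quick way to confirm the record resolves from the outside:

dig +short dotfilehub.com    # should print the home server's public IP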

After I was confident in the dynamic DNS record, I went ahead and set up subdomain forwarding for the “www” subdomain to redirect to the base address, and created a CNAME record for the dev.dotfilehub.com domain. This was essentially the same setup used on this blog for its subdomains, which I plan to include a complete post of in the near future.

Conclusion

This provided Nick and me with everything we needed to serve dotfilehub content to the public, in a way that is easy to manage for two competent developers. Deploys are managed by manually SSH’ing into the server, pulling the latest code, and restarting the daemon as the dotfile user.

Of course, this is not sufficient for a complete project. However, considering the fact that at this point all we have is an index page, it is more than enough. All the work outlined in this post was completed in one long quarantined Saturday, with communication, research, debugging, errands, this blog post, and beer drinking included.

As things progress, I would like to put together a Jenkins build server to automate some of the deployment work, as well as to manage the other sites on this server. We will also need a reasonable backup system, which, given Nick’s choice of an SQLite database, should be straightforward and efficient. The network hardware I have at the moment will not be able to handle much traffic, since it all runs through the router I lease from AT&T, but it is definitely a great start. If you’re interested in contributing, please reach out to Nick or me, or push some code to the dotfile repository!