Local Nextcloud server with Let's Encrypt certificates (nextcloud-snap)

DISCLAIMER

USE THIS GUIDE AT YOUR OWN RISK. I MAKE NO GUARANTEE OF FUNCTIONALITY OR SECURITY. YOU SHOULD BE AWARE THAT YOU ARE EXPOSING PARTS OF YOUR NETWORK TO THE PUBLIC AT LEAST TEMPORARILY. THIS IS ALWAYS A RISK.

ONCE AGAIN - BY FOLLOWING THIS GUIDE, YOU DO SO ENTIRELY AT YOUR OWN RISK AND RESPONSIBILITY.


This tutorial uses the following apps and tools to set up, and automatically back up, a snap-based Nextcloud server with Let's Encrypt certificates on an EndeavourOS system, accessible from your local network or via VPN (VPN instructions not included):

  • snap
  • apparmor
  • cronie
  • nextcloud_snap
  • dnsmasq

Prerequisites

  • A public domain/subdomain pointing to your router's public IP address (needed to obtain officially signed encryption certificates)
  • If you don't own a static IP address, you'll need a working DynDNS setup.
  • This guide assumes you're running an EndeavourOS installation with its included tools and utilities.
  • PAY ATTENTION to the firewall rules; it's assumed you're using the "public" zone as the default. If not, you might have to adjust the rules slightly.

Install snap and apparmor

yay -S snapd
sudo systemctl enable --now snapd.socket

yay -S apparmor
sudo systemctl enable apparmor

Reboot!

Open the GRUB settings, edit the following line to add the needed kernel parameters, then regenerate the GRUB config:

Open the config file:

sudo nano /etc/default/grub

Locate the following line and extend it with the 'apparmor' and 'security' parameters:

GRUB_CMDLINE_LINUX_DEFAULT="... apparmor=1 security=apparmor ..."

Update grub:

sudo grub-mkconfig -o /boot/grub/grub.cfg

Reboot!

Check if apparmor is up and running:

systemctl status apparmor
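If the aa-status utility is available (it comes with the apparmor package installed above), you can additionally list the loaded profiles:

sudo aa-status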

Enable the snap apparmor service and add classic snap support by creating a symbolic link:

sudo systemctl enable --now snapd.apparmor.service

sudo ln -s /var/lib/snapd/snap /snap

Add the snap binary installation folder to the PATH:

Create/Open:

sudo nano /etc/sudoers.d/90_snap

Add the following (insert your username):

Defaults:<your-user-name> secure_path="/usr/local/sbin:/usr/local/bin:/usr/bin:/var/lib/snapd/snap/bin"
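The sudoers entry above only covers sudo's secure_path. If you also want the snap binaries on your regular user PATH, one option is a small profile.d snippet. This is only a sketch; the filename is an example and it assumes a POSIX-compatible login shell:

echo 'export PATH="$PATH:/var/lib/snapd/snap/bin"' | sudo tee /etc/profile.d/snap-path.sh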

Reboot!

Test:

sudo snap install hello-world

This command should return “Hello World!”:

hello-world

This command should return a “Denied Error…” as a sign that apparmor is working:

hello-world.evil

As a preparation for the next steps install and enable cronie:

sudo pacman -S cronie
sudo systemctl enable --now cronie
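Check that the cron daemon is up and running:

systemctl status cronie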

Set a static IP to the server


== Set a static IP address to the server using your favorite network configuration tools. ==
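One way to do this is with NetworkManager's nmcli. This is only a sketch; the connection name and all addresses below are placeholders and have to be replaced with your own:

sudo nmcli con mod "Wired connection 1" ipv4.method manual ipv4.addresses 192.168.1.20/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
sudo nmcli con up "Wired connection 1"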

== Add port forwardings to your router's config: 80/tcp and 443/tcp pointing to your static server IP address. ==
If you need further information, consult your router's manual.


Afterwards, edit your server's firewall configuration to accept inbound traffic on ports 80/tcp, 443/tcp and 53 (tcp+udp) from your local network only.

ATTENTION! You have to change the network’s IP address to fit your own network.

sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="53" protocol="udp" accept' --permanent
sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="53" protocol="tcp" accept' --permanent
sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="80" protocol="tcp" accept' --permanent
sudo firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="443" protocol="tcp" accept' --permanent
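Rules added with --permanent only become active after a reload, so apply them and double-check the result:

sudo firewall-cmd --reload
sudo firewall-cmd --zone=public --list-rich-rules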

Install the Nextcloud server and generate Let's Encrypt certificates

Install nextcloud_snap

sudo snap install nextcloud

Go to http://localhost with your browser and follow the steps shown there!
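After finishing the browser setup you can also check the instance from the command line (nextcloud.occ is provided by the snap):

sudo nextcloud.occ status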

Create Let's Encrypt encryption certificates

First, temporarily open your server to the public by adding port openings to the runtime configuration of your firewall:

sudo firewall-cmd --zone=public --add-port=443/tcp
sudo firewall-cmd --zone=public --add-port=80/tcp

Configure nextcloud-snap to request official certificates from Let's Encrypt:

sudo -i

nextcloud.enable-https lets-encrypt

exit

Once everything has worked, remove public access to your server by reloading the firewall. This removes the temporary runtime rules and limits access to the local network again:

sudo firewall-cmd --reload
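You can confirm that the temporary port openings are gone (the list should be empty, since access is only granted by the permanent rich rules):

sudo firewall-cmd --zone=public --list-ports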

Fine-tune nextcloud_snap (Insert your domain instead of sub.domain.tld):

sudo snap connect nextcloud:removable-media
sudo snap connect nextcloud:network-observe
sudo snap set nextcloud php.memory-limit=512M
sudo snap set nextcloud nextcloud.cron-interval=5m
sudo snap set nextcloud http.compression=true
sudo nextcloud.occ config:system:set trusted_domains 1 --value="sub.domain.tld"
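Optionally, verify that the settings were applied:

sudo snap get nextcloud php.memory-limit
sudo nextcloud.occ config:system:get trusted_domains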

Set up a local DNS service

Next we set up dnsmasq, which lets us use our domain name inside the local network while still utilizing the Let's Encrypt certificate.

Edit “/etc/dnsmasq.conf” and add/uncomment/customize the following:

## exclude the libvirt virtual network (optional)
except-interface=virbr0

## insert the IP address of your server/local DNS, e.g. 192.168.1.20
bind-dynamic
listen-address=::1,127.0.0.1,<your local dns server ip>

no-resolv
domain-needed
bogus-priv

## Insert the desired public DNS servers, e.g. 8.8.8.8 / 8.8.4.4 (Google).
## You can use any other public DNS servers - you probably
## don't want Google anyway.
server=<replace with public dns-server1>
server=<replace with public dns-server2>

cache-size=1000
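Before enabling the service, you can let dnsmasq check the configuration for syntax errors:

sudo dnsmasq --test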

Edit your DNS server's hosts file (/etc/hosts) and add your server IP together with the domain name you want to associate with it:

<replace with your server ip> sub.domain.tld

Enable dnsmasq via systemd:

sudo systemctl enable dnsmasq

Reboot!
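To verify that local DNS resolution works, query your dnsmasq server directly. A quick check, assuming drill from the ldns package is installed and using the example addresses from above:

drill sub.domain.tld @192.168.1.20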


== Configure your router to provide your custom local DNS server IP to dhcp clients ==
If further information is needed, consult your router's manual.


Configure an automatic backup script executed by cron

Create three new files and add the following lines.
Edit the TARGET path and RETENTION (days) variables inside snapsnapshot.sh as needed:

cd
mkdir bin
sudo nano bin/snapsnapshot.sh
sudo nano bin/fw_open.sh
sudo nano bin/fw_close.sh

snapsnapshot.sh - slightly extended version of scubamuc's original work.

#!/bin/bash
##############################################################
#
# ATTENTION! ATTENTION! ATTENTION! ATTENTION! ATTENTION!
#
# This is a slightly extended version, adding two support scripts.
# The original version of the script can be found here:
#
# Script description  -scubamuc- https://scubamuc.github.io/
# Nextcloud-snap backup with Snap snapshot
#
# Thanks to -scubamuc- for his original work!
#
# ATTENTION! ATTENTION! ATTENTION! ATTENTION! ATTENTION!
#
##############################################################
## create target directory "sudo mkdir /mnt/Backup"
## snapshot rotation 14 days 
## create crontab as root for automation
## 0 3 * * * /home/$USER/bin/snapsnapshot.sh
##############################################################
# VARIABLES #
##############################################################

SNAPNAME="nextcloud"
TARGET="/mnt/Backup"  ## target directory
LOG="$TARGET/snapbackup-nc.log"  ## logfile
SOURCE="/var/lib/snapd/snapshots" ## source directory
RETENTION="13" ## retention in days

##############################################################
# FUNCTIONS #
##############################################################

## Timestamp for Log ##
timestamp()
{
 date +"%Y-%m-%d %T"
}

##############################################################
# SCRIPT #
##############################################################

## must be root, enter sudo password for manual snapshot
## sudo pwd

## start log  
 echo "********************************************************" >> "$LOG" ; ## log separator
 echo "$(timestamp) -- Snapbackup "$SNAPNAME" Start" >> "$LOG" ; ## start log

## optional stop snap for snapshot  
 sudo snap stop "$SNAPNAME" ;
## create snap snapshot 
 sudo snap save --abs-time "$SNAPNAME" ;
 
## open firewall for certbot/letsencrypt (the helper script is expected next to this script)
 sudo "$(dirname "$0")/fw_open.sh" ;

## optional if stopped restart snap after snapshot (it will issue certificate renewal also)
 sudo snap start "$SNAPNAME" ;

## wait 45 seconds for certbot to finish and close firewall again
 sudo sleep 45 ;
 sudo "$(dirname "$0")/fw_close.sh" ;

## find snapshot file in $SOURCE and move to $TARGET  
 sudo find "$SOURCE"/ -maxdepth 1 -name "*.zip" -exec mv {} "$TARGET"/ \; # find and move
## find old snapshots and delete snapshots older than $RETENTION days
 sudo find "$TARGET"/ -name "*.zip" -mtime +"$RETENTION" -exec rm -f {} \; # find and delete

## end log 
 echo "$(timestamp) -- Snapbackup "$SNAPNAME" End " >> "$LOG" ; ## end log 
 echo "" >> "$LOG" ;  ## log linefeed 

exit

fw_open.sh

#!/bin/bash
## Add port openings to the "Runtime" config to temporarily open them for certbot
sudo firewall-cmd --zone=public --add-port=443/tcp ;
sudo firewall-cmd --zone=public --add-port=80/tcp ;
exit

fw_close.sh

#!/bin/bash
## remove temporary port openings by reloading the firewall (remove Runtime rules)
sudo firewall-cmd --reload ;
exit

Change ownership and permissions of the script files:

cd

sudo chown root:root bin/snapsnapshot.sh
sudo chown root:root bin/fw_open.sh
sudo chown root:root bin/fw_close.sh

sudo chmod 700 bin/snapsnapshot.sh
sudo chmod 700 bin/fw_open.sh
sudo chmod 700 bin/fw_close.sh

Edit the crontab of the root account

Edit the root crontab:

sudo crontab -e

Add the following line (change $USER to your username):

0 3 * * * /home/$USER/bin/snapsnapshot.sh
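You can list the root crontab afterwards to confirm the entry was saved:

sudo crontab -l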

This example will create a daily snapshot at 03:00 AM, temporarily expose ports 80/443 (tcp) to the public for a 45-second time window so that the certificates can be renewed with Let's Encrypt, then close the ports to the public again and finally move the backup to your backup directory.

The chosen command order in "snapsnapshot.sh" relies on the fact that certbot automatically tries to renew the certificates with Let's Encrypt right after the services, which were stopped for the backup, are started again.

Enjoy your personal Cloud!


I recommend you take a look at this:
https://github.com/nextcloud-snap/nextcloud-snap/wiki


Other useful commands and tips

Check if the certificate renewal with Let's Encrypt is working correctly

To see if the certbot/Let's Encrypt renewal worked, have a look at the log. Certbot only starts attempting renewals about 30 days before the certificates expire; before that it won't even try:

sudo journalctl -u snap.nextcloud.renew-certs.service
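You can also check the validity dates of the certificate that is actually being served. A quick check using openssl (replace sub.domain.tld with your domain):

echo | openssl s_client -connect sub.domain.tld:443 -servername sub.domain.tld 2>/dev/null | openssl x509 -noout -dates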

I wonder if and how the "Nextcloud snap team" is affiliated with Nextcloud. Greetings to my Bavarian neighbours anyway!

What makes me a little suspicious is that no relation to the official Nextcloud project is mentioned anywhere I could find. The link shown in the Snap Store (github.com/nextcloud/nextcloud-snap) suggests an affiliation but redirects to github.com/nextcloud-snap/nextcloud-snap. This may all be legit, but it could also be some people's personal fork; I don't know. It would be nice if it were clearly communicated.

This Snap variant might be nice for people using Snap and wanting to get used to what Nextcloud has to offer (a lot!), but my personal thoughts are:

  • For me, Arch is not a server OS.
  • I wouldn't use Snap. I'm actually phasing out my Ubuntu servers in favour of Debian, for exactly that reason.
  • Nextcloud itself is a good thing, and I actually run my public server on Nextcloud-AIO (All-In-One). Yes, I’m lazy, too…

So the above, albeit well-done, is not for me. It might be for you.

Just my 2¢.


It's listed under community projects. And yes, it's not an official release.

Ah, that closes the circle! Trust level immediately increases. Maybe you should note that somewhere in the GitHub repo or the wiki?

Thanks for clarifying!

To be clear, I am not involved in the project, I have simply written down an installation process here. :slightly_smiling_face:


I have had my own Nextcloud running for years and I can only recommend Docker for running it. That is by far the easiest way to get it going and to maintain it. I personally very much prefer docker compose: you just need one YAML file to define the image and get it started.

I would not turn to snap for such a thing.


In the end it's all personal preference. Nobody forces you to do this or that; that's the beauty of the Linux world.


Absolutely. You are not forced to use snap :rofl:

I just wanted to mention to people reading your tutorial that with Docker there is a much easier way to get Nextcloud installed and maintained, even on a rolling release distro like Arch. And this also includes an nginx reverse proxy with a Let's Encrypt certificate.

Once you have Docker installed, you can use it for other stuff too, like a Pi-hole DNS server, a Jellyfin media server, etc.

But I do not want to hijack your thread. Sorry for that.


You're free to post, this thread is open to the public. Constructive criticism is highly welcome. Perhaps you can give some details on what is hard to maintain with a snap package? I'm interested in learning new stuff and alternatives.

And yes, I'm aware that snap isn't everybody's darling :smiley: especially outside the Ubuntu circle.

I started my Nextcloud journey by installing Nextcloud from scratch, including apache, php, nextcloud, redis, php-fpm, etc. This worked, but it was prone to breakage because Arch is a rolling release and Nextcloud broke every now and then when php, apache, etc. were updated.

Then I started to use virtualbox/vmware to run a Debian server with Nextcloud. But in the end that was too complicated and too resource-hungry. Then I started to use Docker. I had no experience with it and had to learn some, but I am not looking back. It is great.

There is a fundamental difference between snapd and Docker: Docker containers are completely isolated from the host system, while snaps are executed in a sandbox environment which has moderate access to the host system. Therefore snap is more of a package manager, while Docker is a real container environment.

Secondly, snapd is under the control of Ubuntu, and so are all the snaps. The Docker image for Nextcloud, on the other hand, is officially provided: Nextcloud GmbH maintains an all-in-one Docker image (AIO).

Thirdly, with Docker I also have nginx-proxy-manager installed, which puts all my outside services (Nextcloud, Jellyfin) behind https with Let's Encrypt certificates. Again, very easy with zero impact on my host system.

The initial start is very easy:

  1. Install Docker and configure the storage location
  2. Create / download the docker-compose.yaml file for Nextcloud
  3. docker compose -f /path/to/docker-compose.yaml pull
  4. docker compose -f /path/to/docker-compose.yaml up -d
    and if you want to shut it down
  5. docker compose -f /path/to/docker-compose.yaml down

That's it. And for even more convenience I use "portainer" to manage the Docker environment.

In a nutshell: Docker is a professional solution for servers. snapd is package management for home users.

But regardless of the professionalism, Docker is very easy to set up, and updating environments like Nextcloud is simple. From my point of view, Docker is not used as much as it should be.

PS
And if you want to learn and play around with it, I strongly recommend perplexity.ai as the search engine to ask questions about Docker commands. Perplexity helped me a lot. It looks to me as if it had special training on Docker :wink:


Well, that's very similar to the experience I had over the past years :smiley:, which is what led me to nextcloud-snap more recently.

Managing the snap package isn't much more complicated:
starting, stopping, taking snapshots and restoring snapshots are single-line commands.

The package also contains all the components, which are then automatically updated in a coordinated manner by the snap package maintainers. And if really needed you can perform rollbacks relatively easily.

In a nutshell: Docker is a professional solution for servers. snapd is package management for home users.

I would underline this.

You're absolutely right, Docker definitely seems to be a much better option in a professional environment, especially because of the stricter separation from the host system and its suitability for multiple use cases.

I personally have never felt the urge to deal with Docker in the last few years, even though I’ve heard of it. Maybe I’ll take this opportunity to tackle this now. I’ll definitely check it out.

Thank you for sharing your perspective on the matter in detail. I highly appreciate it. :slightly_smiling_face: