By: Daniel Rosehill
Backups are one of those things that nobody likes to think about. Until it’s too late, that is. My Linux backup routine, if I may say so myself, has been in pretty good standing for years, as I have recently documented on GitHub. I have two backup drives on my computer and use Timeshift and Clonezilla to take both incremental fast-restore backups and full bare-metal disk images (the latter reserved for when things truly go pear-shaped).
Although my current backup approach was certainly good, there was, as ever, room for improvement.
For one, backing up remote sources onto anything other than a home server isn’t ideal. Unless you run your machine 24/7, it’s difficult to ensure that your backups run on a precise automatic schedule (although most backup programs can run missed jobs at the next power-on). There’s also a good chance that your backups live on the same drive of the same machine unless you’re putting them on an external drive. This applies especially to laptop users and, of course, violates a key tenet of the 3-2-1 principle of backups: keep three copies of all your data, two on different storage media, and one offsite.
And the final tenet of that principle, the offsite backup, is a little trickier from a desktop or laptop too. Residential upload speeds remain generally slow and represent a constant bottleneck to moving backups offsite. As with most things backup-related, scripting is the answer. But since heavy offsite pushes can easily take days or even weeks (longer still if the machine is not left constantly running), a server or NAS is a much more sensible intermediate medium.
So this was the good-but-imperfect state of my Linux desktop affairs before Synology reached out to me a few weeks ago with an offer to try out the DS920+ — the successor to their DS918+ Network Attached Storage (NAS). It has ended up making both my local and cloud backups substantially easier.
I’ve Come to Like NASs … Sort Of
Firstly, a word about the biases that I brought into this testing process. As I confessed in a Medium post I wrote shortly after unboxing the product, NASs were always those hardware devices that I never quite understood the need for.
As a Linux user (like all of us, I think) I prefer the DIY approach. I’m also not much of a movie buff and I try to do as much as I can (and store as much as possible) on the cloud. As I’ve explained, I knew a local server would be helpful for improving my backup strategy, but a self-built server, rather than a NAS, was always what I had envisioned operating. Why would you want to run a proprietary and bloated OS when you could have Ubuntu Server installed in a few minutes and customize it to your heart’s content?
After five days spent pushing and pulling all sorts of cloud and local backups of my Linux desktop to and from my NAS — using SSH, rsync, Cloudberry, and the download manager to get all manner of data pools where I wanted them — my perceptions have definitely changed.
Firstly, I assumed that Synology’s OS, DiskStation Manager (DSM), which runs off an embedded web server, would be tightly locked down. And I did indeed find it a little too restrictive for my liking. For one thing, there’s no terminal in DSM (although you can, of course, SSH into the device), which I thought was strange given how much else it offers.
But that wasn’t really an impediment to customization and I didn’t have to look far to find a way to spin up a VM on the NAS or even to configure cron jobs (for the latter, you need to go into the advanced settings). The Cloud Sync tool handles all the awkward pushing and pulling to the cloud automatically. And after setting up all my backup scripts I can (mostly) set it and forget it.
What DSM does well is greatly simplify things from an administrative perspective, particularly when it comes to creating volume shares and setting access permissions for other users on the network. And the packages that can be downloaded from the package manager are easy to use too. Setting up the cloud syncs is trivially easy: a matter of connecting storage sources, creating connections with the right local volumes, and choosing whether to run a local-to-remote, remote-to-local, or bidirectional sync.
Configuring the most common things you might want to do on a local server, like running a mail server, is a simple matter of setting up an app from Package Center, Synology’s internal software repository. Adding more storage, and selecting the desired RAID configuration, is a matter of slotting drives into available bays and then picking one of the available options. Synology NAS devices are a powerful but user-friendly alternative to running your own local backup server. Consider my mind changed.
Speaking of backups, here are the various tools I tried out for the purpose of backing up my Linux desktop. I’ll make note of what tools they are replacing too.
Grsync is a basic graphical user interface (GUI) frontend to rsync. Finding the NAS on my network to set it as a backup destination was not particularly difficult. Using PCManFM, LXDE’s built-in file manager, I simply browsed the network, navigated into the NAS, and retrieved a list of all my shared volumes.
While grsync is a nice frontend for easily running rsync jobs, it lacks some key features I needed to make it a viable replacement for Timeshift, my current basic snapshotting tool of choice for my desktop. Most pressingly, I couldn’t find any scheduling functionality.

I already use MSP360™ (CloudBerry) Backup for Linux for pushing offsite backups of my Linux desktop to the cloud. So I thought it only logical to check whether I could set up a network device as a backup destination.
Setting this up was simple: I just added the NAS as an SFTP share. (Note that rsync, SSH, and several other protocols aren’t enabled on the Synology by default, so make sure to enable these before trying to initiate backups to it over the LAN.)
After telling Cloudberry what not to back up, I had a full disk backup running to the NAS over the local network. Unlike grsync, I could put this on a schedule. Like rsync, it only backs up changes. Several backup plans can be set to run at different intervals in order to create multiple snapshots / restore points, just like Timeshift.
Of course, the rsync command-line interface (CLI) and cron are all that is truly needed to get a basic backup strategy in place. As with the other approaches, you can take a variety of full snapshots, and you can also create incremental and differential backups using the --compare-dest option. Synology uses RAID to bundle the connected storage media into one logical device, so the only thing I had to remember was the rsync daemon syntax user@nas-address::StorageVolume/path. As in:
```shell
sudo rsync -av / user@nas-address::LinuxDesktop/
```
Users can create weekly and monthly snapshot folders and run rsync jobs between these on the NAS itself (advantage: the computer needn’t be on). To set up cron jobs, look for the ‘Task Scheduler’ icon within DSM.
Alternative: Bare Metal Disk Imaging with Clonezilla
A good backup approach will often combine incremental backups, for fast restores, with a bare-metal tool like Clonezilla for restoring the system when things have really gone astray.
Clonezilla is perfectly well-equipped to handle a NAS device as a remote destination.
It can create backup images, over the LAN, to:
- An SSH server
- A SAMBA server
- An NFS server
All these protocols are supported by the NAS — although, as mentioned, they first need to be enabled.
As before, I created a separate backup volume to keep these images apart from the other backups. This process can effectively replace storing Clonezilla images on my local drive.
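If you want the NAS share permanently reachable for browsing or restoring Clonezilla images, an fstab entry is one way to keep the mount a single command away. The host address, export path, and mount point below are all hypothetical; substitute your own.

```
# /etc/fstab entry (hypothetical host, export, and mount point) for the
# NAS volume holding Clonezilla images; "noauto,user" lets an ordinary
# user mount it on demand with:  mount /mnt/nas-clonezilla
192.168.1.50:/volume1/clonezilla  /mnt/nas-clonezilla  nfs  defaults,noauto,user  0  0
```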
A Total Linux Backup Solution
Of course, grsync, Cloudberry, and Clonezilla are only three of many backup tools for Linux. Unfortunately, my favorite first-line defense against data loss (Timeshift) doesn’t support network remotes at this time, but the tools I used should be more than enough to let you run a solid strategy for mitigating the threat of data loss. Using a NAS sacrifices some of the flexibility inherent in running your own server, but I found that the convenience more than made up for that.
For more photos from the unboxing process, please visit my Medium page.