Problem to solve:

If you have a few Raspberry Pis at home and you like tinkering with them like I do, then you probably know that feeling when all of a sudden nothing seems to work as it used to and you want to roll back to a known good state and just start over again.

The solution:

My solution is an NFS share on a Synology (it can also be any other server with a shared volume) and a backup script that lives on that share and is available and executable to every Raspberry in the network. The Pis mount that volume on startup and run the script as a cronjob, in my case every night. The script creates a full .img file of the Pi's entire SD card. So if something goes wrong you can just take the SD card out of the misbehaving Pi and flash the .img again with Etcher or another imaging tool. The Pi then simply starts in the state it was in when the image was created.
And to keep the volume from filling up with .img files, the script has a retention option (retention_days=7) that automatically deletes .img files older than 7 days.
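If you prefer the command line over Etcher, the restore can also be done with dd from any Linux machine. A minimal sketch, assuming the card reader shows up as /dev/sdX (check with lsblk first; the image name is just an example):

lsblk
sudo dd if=/path/to/mypi.20240101.img of=/dev/sdX bs=1M status=progress
sync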

Sounds more complex than it is, so let's get started!

Prepare the Synology share volume:

Go to your Synology and open the Control Panel.

First, activate the NFS service under "File Services" and save.

Under Shared folders create a new shared folder. Let's call it "pibackup".
Also add the necessary NFS rule under the NFS permissions tab. For our case it's enough to leave everything at the default settings; just add the IP of your Pi and allow read/write access.
If you have more than one Pi you need to add each of them as well. If you like, you can also add a whole network range, like "192.168.x.0/24" (important: the /24 suffix is mandatory). That grants every IP in that range access to the share.
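Once the rule is saved you can already check from a Pi whether the export is visible. This assumes nfs-common is installed (on Raspberry Pi OS it usually is, otherwise install it with sudo apt install nfs-common) and that 192.1XX.XX.XX is the IP of your Synology:

showmount -e 192.1XX.XX.XX

It should list /volume1/pibackup as an exported path.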

Then go to "Users" and create a new user. The most easy way is to give the user the same name and password than your pi has. (sure you can also do that differently). I created a user "pi". Under permissions tab give him read/write access to our recently created shared folder "pibackup" and disallow all other folders.
For sure you can create these permissions also with a group or suitable to your setup.

Setup your Raspberry:

SSH into the running Pi you want to set up for the backup schedule.
First create a mount point with full permissions:

sudo mkdir -m 777 /mnt/backup

Then mount the NFS share on the Pi:
IMPORTANT: "192.1XX.XX.XX" is the IP address of your Synology or other NFS server you use!

sudo mount 192.1XX.XX.XX:/volume1/pibackup /mnt/backup

If you don't get an error, confirm that it worked with a listing, which should show the (still empty) contents of the share:

ls -l /mnt/backup/
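You can also double-check that the path is really an NFS mount and not just an empty local folder; df should show the Synology export as the filesystem behind /mnt/backup:

df -h /mnt/backup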

Now make sure the folder gets mounted when the system starts:

sudo nano /etc/fstab

At the end of the file, add this as the last line:
IMPORTANT: "192.1XX.XX.XX" is the IP address of your Synology or other NFS server you use!

192.1XX.XX.XX:/volume1/pibackup /mnt/backup nfs rsize=8192,wsize=8192,timeo=14,intr,noauto,x-systemd.automount 0 0

Close and save the file with CTRL + X, then Y to confirm overwriting the file.
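The cleanest way to test the new fstab entry is simply a reboot, since the manual mount from before does not survive it anyway. After the Pi is back up, SSH in again and the listing should work without mounting anything by hand:

sudo reboot
ls -l /mnt/backup/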

Next we create the backup script on the NFS volume that every Pi will use later on:

sudo nano /mnt/backup/system_backup.sh

This creates and opens an empty .sh file. Now we copy and paste our script, which sets the backup command and the default retention (feel free to adapt it to your needs):

#!/bin/bash
#
# Automate Raspberry Pi Backups
#
# Kristofer Källsbo 2017 www.hackviking.com
#
# Usage: system_backup.sh {path} {days of retention}
#
# Below you can set the default values if no command line args are sent.
# The script will name the backup files {$HOSTNAME}.{YYYYmmdd}.img
# When the script deletes backups older than the specified retention
# it will only delete files with its own $HOSTNAME.
#

# Declare vars and set standard values
backup_path=/mnt/backup
retention_days=7

# Check that we are root!
if [[ ! $(whoami) =~ "root" ]]; then
   echo ""
   echo "**********************************"
   echo "*** This needs to run as root! ***"
   echo "**********************************"
   echo ""
   exit 1
fi

# Check to see if we got command line args
if [ ! -z "$1" ]; then
   backup_path=$1
fi

if [ ! -z "$2" ]; then
   retention_days=$2
fi

# Create trigger to force file system consistency check if image is restored
touch /boot/forcefsck

# Perform backup
dd if=/dev/mmcblk0 of="$backup_path/$HOSTNAME.$(date +%Y%m%d).img" bs=1M

# Remove fsck trigger
rm /boot/forcefsck

# Delete old backups
find "$backup_path" -name "$HOSTNAME.*.img" -mtime +$retention_days -type f -delete

Close and save the file with CTRL + X, then Y to confirm overwriting the file.

Make the script executable:

sudo chmod +x /mnt/backup/system_backup.sh
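As the header of the script mentions, it also accepts a path and a retention period as command line arguments. For example, to keep backups for 14 days instead of 7 (the path stays the default here, it just has to be passed first):

sudo /mnt/backup/system_backup.sh /mnt/backup 14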

Last but not least, create a cronjob to run the backup task automatically. In this example we will set it to daily at 3 am. Feel free to adapt that to your needs as well.
If you open crontab for the first time, it will ask which editor to use; I recommend option 1 for nano.

sudo crontab -e

and add in the last line of the file:

0 3 * * * /mnt/backup/system_backup.sh
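To verify that the entry was saved, you can list the root crontab afterwards; the new line should show up at the bottom:

sudo crontab -l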

Now we are all set and the Pi will back itself up every night at 3 am to our Synology share!

If you don't want to wait for the cronjob to run, you can also test it manually with this command:

sudo /mnt/backup/system_backup.sh

IMPORTANT: Until the script has written out the full .img file, which can take a while, your terminal will look like it's stuck. If you cancel or close it, the backup will stop. You can always check the progress by looking at the growing file in your file browser.
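If you'd rather watch the progress in a terminal, open a second SSH session and keep an eye on the file size as it grows (watch refreshes the listing, here every 30 seconds):

watch -n 30 ls -lh /mnt/backup/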

Otherwise you can also run the script in the background by appending an & sign, so your terminal session does not get blocked:

sudo /mnt/backup/system_backup.sh &
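If you do that, the script's output will still land in your terminal once it finishes; one option is to redirect it into a log file instead (the file name is just an example):

sudo /mnt/backup/system_backup.sh > ~/backup.log 2>&1 &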

Happy tinkering and until next time!