Duplicity error – BackendException: No connection to backend

Recently I have been testing out Duplicity to set up backups for my personal server hosting this site. Duplicity is quite complex when you first start using it, but fortunately there are various tools and projects to help set it up. I have been using the excellent Stouts backup Ansible role, which installs Duplicity along with Duply.

Tonight I noticed that the backups on my test Vagrant box, which had been working perfectly before, suddenly stopped working.

I was seeing the following error:

BackendException: No connection to backend

I found a blog post describing a similar problem, which suggested setting the AWS signature version to 4 in the duply configuration file.
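If it helps anyone, that suggestion amounts to adding something along these lines to the duply profile conf (this is my recollection of the duplicity boto backend's environment variable, and in any case it didn't solve my problem):

```
# In the duply profile conf (it is sourced as shell) – ask duplicity's
# boto backend to use AWS signature version 4
export S3_USE_SIGV4="True"
```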

Unfortunately, this didn’t help, but after some experimenting I noticed that s3cmd had also stopped working…

[root@dev vagrant]# s3cmd ls
ERROR: S3 error: 403 (RequestTimeTooSkewed): The difference between the request time and the current time is too large.

I then checked the date on my Vagrant box and noticed it was 2 days in the past…

[root@dev vagrant]# date
Tue 10 Nov 01:25:12 EST 2015

So I installed NTP using another Ansible role, and forced an update of the time.
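For the record, the fix was along these lines (package and service names assume a CentOS/RHEL 7 box, which is what the Ansible role did for me; the install and sync commands are left commented as they need root):

```shell
# One-off fix for a badly skewed clock on CentOS/RHEL:
# yum install -y ntp            # install the NTP daemon and ntpdate
# ntpdate -u pool.ntp.org       # force an immediate sync against a public pool
# systemctl enable --now ntpd   # keep the clock in sync from now on
date -u                         # confirm the clock now reads correctly
```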

S3cmd started working…

[root@dev vagrant]# s3cmd ls
2015-11-08 23:38 s3://something-backups
2008-12-19 18:47 s3://something-test

As did duply and duplicity…

[root@dev vagrant]# /usr/local/bin/duply /etc/duply/mysql backup
Start duply v1.9.1, time is 2015-11-12 21:35:28.
Using profile '/etc/duply/mysql'.
Using installed duplicity version 0.6.24, python 2.7.5, gpg 2.0.22 (Home: ~/.gnupg), awk 'GNU Awk 4.0.2', bash '4.2.46(1)-release (x86_64-redhat-linux-gnu)'.
Signing disabled. Not GPG_KEY entries in config.
Checking TEMP_DIR '/tmp' is a folder (OK)
Checking TEMP_DIR '/tmp' is writable (OK)
TODO: reimplent tmp space check
Test - Encryption with passphrase (OK)
Test - Decryption with passphrase (OK)
Test - Compare (OK)
Cleanup - Delete '/tmp/duply.27408.1447364128_*'(OK)
--- Start running command PRE at 21:35:28.678 ---
Running '/etc/duply/mysql/pre' - OK
--- Finished state OK at 21:35:28.955 - Runtime 00:00:00.276 ---
--- Start running command BKP at 21:35:28.969 ---
Reading globbing filelist /etc/duply/mysql/exclude
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Mon Nov 9 20:18:33 2015
--------------[ Backup Statistics ]--------------
StartTime 1447364129.99 (Thu Nov 12 21:35:29 2015)
EndTime 1447364130.11 (Thu Nov 12 21:35:30 2015)
ElapsedTime 0.13 (0.13 seconds)
SourceFiles 1
SourceFileSize 777926 (760 KB)
NewFiles 0
NewFileSize 0 (0 bytes)
DeletedFiles 0
ChangedFiles 1
ChangedFileSize 777926 (760 KB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 1
RawDeltaSize 211 (211 bytes)
TotalDestinationSizeChange 315 (315 bytes)
Errors 0
--- Finished state OK at 21:35:30.834 - Runtime 00:00:01.864 ---
--- Start running command POST at 21:35:30.852 ---
Running '/etc/duply/mysql/post' - OK
--- Finished state OK at 21:35:30.889 - Runtime 00:00:00.036 ---

Voila. Problem solved!

Comma separated list of EU country codes

I’ve been doing some work with SQL this afternoon, and needed to query some data for EU countries. I couldn’t easily find a comma-separated list of EU-28 two-digit country codes, so here is one for future reference:

'BE', 'BG', 'CZ', 'DK', 'DE', 'EE', 'IE', 'EL', 'ES', 'FR', 'HR', 'IT', 'CY', 'LV', 'LT', 'LU', 'HU', 'MT', 'NL', 'AT', 'PL', 'PT', 'RO', 'SI', 'SK', 'FI', 'SE', 'UK'

For VAT purposes, the United Kingdom is referred to by GB instead of UK, so you may need this list instead:

'BE', 'BG', 'CZ', 'DK', 'DE', 'EE', 'IE', 'EL', 'ES', 'FR', 'GB', 'HR', 'IT', 'CY', 'LV', 'LT', 'LU', 'HU', 'MT', 'NL', 'AT', 'PL', 'PT', 'RO', 'SI', 'SK', 'FI', 'SE'

This list was derived from the EU’s Interinstitutional style guide and UK Gov’s VAT EU country codes. These codes are a subset of ISO 3166-1 alpha-2 codes, with a couple of exceptions: Greece is referred to as EL (rather than ISO’s GR), and the United Kingdom as UK (rather than GB) outside of VAT contexts.
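To show the list in use, here is a quick sketch (SQLite via Python, with a made-up `orders` table; the `IN` clause with the EU-28 codes is the point):

```python
import sqlite3

# Hypothetical table, just to demonstrate filtering on the EU-28 codes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, country_code TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "DE"), (2, "US"), (3, "UK"), (4, "JP")])

eu_codes = ['BE', 'BG', 'CZ', 'DK', 'DE', 'EE', 'IE', 'EL', 'ES', 'FR',
            'HR', 'IT', 'CY', 'LV', 'LT', 'LU', 'HU', 'MT', 'NL', 'AT',
            'PL', 'PT', 'RO', 'SI', 'SK', 'FI', 'SE', 'UK']

# Build one "?" placeholder per code, so the list is passed as parameters.
placeholders = ", ".join("?" * len(eu_codes))
rows = conn.execute(
    "SELECT id, country_code FROM orders "
    "WHERE country_code IN (%s)" % placeholders,
    eu_codes).fetchall()
print(rows)  # [(1, 'DE'), (3, 'UK')]
```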

Enabling WordPress Automatic Background Updates after using Version Control (svn)

Today I decided to enable automatic background updates on my WordPress blog. Previously I had been using SVN to keep WordPress up to date, but this was a manual process and meant that I could sometimes be a few weeks behind when a security update was released.

Since version 3.7, WordPress has been able to keep itself up to date whenever a new version is released. By default, it only applies minor security releases which theoretically shouldn’t break your blog.
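As I understand it, if you also want major releases applied automatically, core honours a constant in wp-config.php (this is not needed for the default minor-updates behaviour described above):

```php
define( 'WP_AUTO_UPDATE_CORE', true ); // true = all updates, 'minor' = default, false = disabled
```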

So if, like me, you have previously checked out your WordPress via version control, and you wish to enable auto updates, you will need to follow these steps.


First make a backup of your WordPress, in case this goes wrong!

cp -Rp /path/to/wordpress /path/to/wordpressBACKUP

Next remove .svn folders

find /path/to/wordpress -type d -name .svn | xargs rm -rf

Then set the ownership so that the files and folders are writable by Apache. BE VERY CAREFUL with this one:

chown -R apache /path/to/wordpress

Now go to your WordPress admin and install the WordPress Background Update Tester plugin. If all has gone well, all its tests should pass, giving you output like the following:

  • PASS: Your WordPress install can communicate with WordPress.org securely.
  • PASS: No version control systems were detected.
  • PASS: Your installation of WordPress doesn’t require FTP credentials to perform updates.
  • PASS: All of your WordPress files are writable.

How to stop MySQL ASCII tables / column separators from being lost when redirecting bash output

Today I needed to write a quick bash script to send a monthly report to a colleague. The report required running a few MySQL queries, concatenating the output into a file, and e-mailing it to them. Normally when you run a MySQL query from the command line, the output is shown within a handy ASCII table.

# mysql --table -e "SELECT 1+1";
+-----+
| 1+1 |
+-----+
|   2 |
+-----+

Unfortunately, it seems that when you redirect the output to a file, the ASCII formatting is lost…

# mysql -e "SELECT 1+1" > /tmp/test
# cat /tmp/test
1+1
2

It took me a while to find it, but the solution is really simple: add the --table parameter to the MySQL command:

# mysql --table -e "SELECT 1+1" > /tmp/test
# cat /tmp/test
+-----+
| 1+1 |
+-----+
|   2 |
+-----+

And that’s it! Hopefully this post will save someone else some time in the future.
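For what it’s worth, the report script ended up shaped roughly like this (the database, queries and e-mail address here are placeholders, so the mysql and mailx lines stay commented out):

```shell
#!/bin/sh
# Sketch of a monthly MySQL report script; the mysql/mailx invocations
# are illustrative placeholders.
REPORT=/tmp/monthly-report.txt
: > "$REPORT"                                # start with an empty file
echo "Monthly report for $(date +%Y-%m)" >> "$REPORT"
# mysql --table -e "SELECT ..." mydb >> "$REPORT"  # --table keeps the ASCII grid
# mysql --table -e "SELECT ..." mydb >> "$REPORT"
# mailx -s "Monthly report" colleague@example.com < "$REPORT"
cat "$REPORT"
```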

Setting up bridged networking for libvirt on CentOS / RedHat

By default, libvirt is set up to use NAT-based networking for any guests created, which keeps them isolated from the rest of the physical network in the sense that they can only connect outbound, and inbound connections from other machines on the physical network will fail (other guests in the same virtual network on the same hypervisor can still connect). The hypervisor acts as a router, and each guest is given its own IP address in the 192.168.122.* range from libvirt’s built-in DHCP server.

If you would like your guests to be part of your main network, so they get an IP address from your main DHCP server, then you need to set up bridged networking. With bridged networking enabled, all the guests behave as if they are connected directly into the main network without any firewall or router in between.

First make a backup of your existing eth0 config – just in case!

cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/backup-ifcfg-eth0

Then run this small script to update the eth0 config to make it part of the bridge. This script looks for the MAC address in your existing eth0 config, and then writes out a new one using this MAC. NB: This script assumes you were using DHCP on eth0 and that there are no VLANs involved. If you have a more complex network, you will need to write your own custom config – the key point is to add BRIDGE=br0.

eth0_mac=`grep HWADDR /etc/sysconfig/network-scripts/ifcfg-eth0 | grep -i -o '[0-9A-F]\{2\}\(:[0-9A-F]\{2\}\)\{5\}'`
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<EOF
DEVICE=eth0
HWADDR=$eth0_mac
ONBOOT=yes
BRIDGE=br0
EOF

Now create the bridge config...

cat > /etc/sysconfig/network-scripts/ifcfg-br0 <<EOF
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
EOF

Then restart the network...

service network restart

If all goes well you should still have network access, and the new bridge should show up in the output of brctl show:

[root@centos-latest-gpt-basic ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.009c02a46912	no		eth0
virbr0		8000.5254007ac74f	yes		virbr0-nic

To get your virtual machines to use this bridge, you need an interface definition like the following in your guest XML file:
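A minimal interface stanza would be along these lines (standard libvirt domain XML; the MAC address here is made up, and libvirt will generate one if you omit it):

```xml
<interface type='bridge'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```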


Setting up a PXE boot server on Synology DSM 4.2 beta

I was excited to see that Synology have recently integrated a PXE solution into the latest version of DiskStation Manager – DSM 4.2 beta. This makes their NAS devices even more ideal for a home virtualisation lab, as they are both cheap to buy and to run (the DS212 unit that I own consumes less than 20W in use), and easy to configure, offering a wide range of storage and network services such as CIFS / AFP / NFS / iSCSI, LDAP, PXE, TFTP, VPN and DNS.

They also offer more powerful Enterprise versions of their NAS devices, which run the same operating system but with much faster hardware. I’ve yet to test them in a production environment, but given my experience in the lab, I am sure they would be a competitive solution.

In this post I will show you how to set up a PXE boot server on your Synology NAS that will let you perform a network installation of CentOS 6.3.

What is PXE?

PXE (pronounced pixie) stands for Preboot eXecution Environment. It’s a technology that can be used to boot a computer into an operating system from its network card, without needing anything to be installed on the computer’s local storage in advance. Most modern servers come with PXE support as standard.

It’s incredibly useful if you wish to automate the deployment of many servers without having to attend each one with an installation CD / DVD / USB stick. With a little work, you can also configure custom kickstart files to be served to each server, to save having to enter all the installation options manually.

How to set up your Synology NAS as a PXE boot server

Step 1 – Install DSM 4.2

Upgrade your Synology device to DSM 4.2 beta if you haven’t already. Follow the download links for your region, download the appropriate firmware for your model of device, then upload it via the DSM admin panel – Control Panel – DSM Update screen.

Step 2 – Set up the DHCP Service on your NAS

I would recommend you set up the DHCP server on your Synology first and test that it works. If you are running this on your main LAN, you will need to disable the DHCP server on your router so the two don’t conflict. You can download the DHCP Server package in Package Center.

You will need to configure the relevant primary and secondary DNS, start and end IP addresses, netmask and gateway settings.

Synology DSM DHCP Settings

Once you are happy this is working, you can move on to configure the TFTP and PXE servers.

Step 3 – Set up the TFTP and PXE Services.

Tick the Enable TFTP service box. You also need to specify a folder somewhere on your NAS that can be used as the TFTP root folder.

Tick the Enable PXE service box. In the boot loader box type ‘pxelinux.0’. Fill out the remaining fields using the same settings you used for DHCP in step 2. This will override the DHCP service settings.

Synology DSM TFTP & PXE Server


This sets up a DHCP service that sets DHCP option 67 (boot filename) in its DHCP offers to PXELINUX.0. If the server making the DHCP request is performing a PXE boot, it will attempt to retrieve and load this file via TFTP from the DHCP server’s IP address. It is possible to point the server at a different TFTP server using DHCP option 66 – but this is not necessary in our case, because the Synology NAS performs both functions.

Step 4 – Upload the PXELINUX scripts and PXE menu to your tftp folder.

In order to get PXE boot working, we now need to upload pxelinux.0 and a few associated files from the SYSLINUX project to the TFTP share. I’m sure you could use other boot loaders, but I have never tried any, so I’m going to stick to what I know!

According to the CentOS wiki, the minimum required files to perform a PXE network installation of CentOS 6.3 are:

  • pxelinux.0
  • menu.c32
  • memdisk
  • mboot.c32
  • chain.c32
  • pxelinux.cfg/default
  • path/to/your_kernel_of_choice
  • path/to/your_init_ramdisk_of_choice

You could download these yourself and edit pxelinux.cfg/default as necessary, but that is beyond the scope of this post, so to speed things up I have created a GitHub repository with all the files necessary for a CentOS 6.3 install.
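For orientation, a pxelinux.cfg/default for a CentOS 6.3 network install looks something like this (the kernel/initrd paths and kickstart URL here are illustrative, not the exact contents of my repository):

```
DEFAULT menu.c32
PROMPT 0
TIMEOUT 300
MENU TITLE PXE Boot Menu

LABEL centos63
  MENU LABEL CentOS 6.3 x86_64 network install
  KERNEL images/centos63/vmlinuz
  APPEND initrd=images/centos63/initrd.img ks=http://example.com/ks.cfg
```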

Simply download this repository as a ZIP file and copy the files into your tftp folder.

This performs a network install using a kickstart I’ve created, which sets up CentOS 6.3 with a few KVM packages for use as a hypervisor. NB: The default password is changeme1122

Step 5 – Attempt to PXE boot a server.

All you need now is a server. Ensure the server is connected to the same LAN as your Synology NAS, then power it on and instruct it to perform a network boot. It should make a DHCP request to the NAS, and then perform a PXE boot using the files that we copied to the TFTP server.

If you want to load a different operating system, you need to copy across the relevant kernels / initial ramdisks for the distribution of your choice and then edit the PXE menu in pxelinux.cfg/default. You may also wish to either remove the kickstart parameter, or refer to a different kickstart of your own creation.



Setting up SSH authorized_keys with SELinux enabled

If you have ever added your SSH key to an authorized_keys file on a server running SELinux, but for some reason you still can’t connect with your key, it may be because the SELinux contexts have not been correctly set on the .ssh folder and authorized_keys file. This normally causes the following error on your SSH client:

Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

And you may see an error like this in the audit log (/var/log/audit/audit.log) on the server…

type=AVC msg=audit(1358012203.073:43414): avc: denied { read } for pid=5945 comm="sshd" name="authorized_keys" dev=dm-1 ino=25583 scontext=system_u:system_r:sshd_t:s0-s0:c0.c1023 tcontext=unconfined_u:object_r:admin_home_t:s0 tclass=file

The way to fix this is to run…

restorecon -R -v /root/.ssh

… substituting /root/ if necessary for the relevant home dir.

How to set up SSH public key authentication with SELinux enabled

The full steps to set up an authorized_keys file from scratch would therefore be:

1) Create the .ssh folder

mkdir -p /root/.ssh
chmod 755 /root/.ssh/

2) Set up the authorized_keys file (remember to paste in the relevant key in vim)

vim /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys

3) Fix the SELinux file contexts

 restorecon -R -v /root/.ssh


Creating a bootable USB stick from OSX

I was having some trouble getting my ML110 Proliant lab server to boot from a USB drive that I had created with UnetBootin on my Mac. Initially I thought it was a problem with the ML110 server, but it turns out that UnetBootin does not currently make a fully bootable USB stick in OSX. The trick is that you have to set up the master boot record correctly yourself, using Disk Utility, fdisk and an MBR file from the SysLinux project. In this tutorial I will show you how.

If you would like to create a bootable USB drive from OSX, you will need a USB drive, the UnetBootin application and the mbr.bin file from the SysLinux project – all covered in the steps below.

Step 1 – Format the disk in Disk Utility, with the correct MBR

Assuming you have already inserted your USB drive into your Mac…

a) Open Disk Utility

b) Select the USB device

c) Click Partition

d) Select 1 partition in the partition layout

e) Select “Master Boot Record” in the options

f) Select MS-DOS (FAT) as the format type.

g) Click Apply, then Partition

h) Close Disk Utility

This will wipe the USB disk and set it up with the correct boot record.

Step 2 – Install the MBR binary from the SysLinux project

Open up a terminal and then

a) Use the command line diskutil to find the device name for your USB drive.

diskutil list

b) Unmount the USB drive from the command line. NB: Be sure to swap the device reference (in my case it is /dev/SOMEdisk2) with the correct one for your USB key that you identified in the previous step – this will differ for each machine.

diskutil unmountDisk /dev/SOMEdisk2

c) Mark the partition active, then unmount the drive again

sudo fdisk -e /dev/SOMEdisk2
f 1
diskutil unmountDisk /dev/SOMEdisk2

d) Download SysLinux

mkdir -p ~/Documents/BootableUSB
cd ~/Documents/BootableUSB
curl -L -O http://www.kernel.org/pub/linux/utils/boot/syslinux/syslinux-5.00.zip
unzip syslinux-5.00.zip -d syslinux-5.00
cd syslinux-5.00/mbr

e) Install the MBR – NB: Update the device name (/dev/SOMEdisk2) to the one you identified in the first step!

sudo dd conv=notrunc bs=440 count=1 if=mbr.bin of=/dev/SOMEdisk2

Step 3 – Use UnetBootin to install your OS install files

a) Download and install UnetBootin if you haven’t already from http://unetbootin.sourceforge.net

b) Load the application, choose your preferred distribution, and then click OK.

c) When it’s finished, eject the USB key and use it!


Thanks to a tip I found on http://perpetual-notion.blogspot.co.uk/2011/08/unetbootin-on-mac-os-x.html

Showing total disk use on Linux – a.k.a How to sum the output of df

If you want to find the total amount of disk space used on Linux and other Unix-based systems (such as OSX), you can do so quite easily with the following one-liner…

df -lP | awk '{total+=$3} END {printf "%d G\n", total/2^20 + 0.5}'

What this does is…

  • df -lP … shows a disk report of all local disks in POSIX format (i.e. one line per volume, with sizes in 1024-byte blocks)
  • | awk '{total+=$3} END {printf "%d G\n", total/2^20 + 0.5}' … takes the output of the df command and pipes it to awk, which sums the 3rd column (the Used column) into a variable called total, and when it’s finished prints out this number converted to gigabytes. To get to gigabytes we divide by 2^20 (1024*1024, since the input is in 1K blocks), and we also add 0.5 so that the result is effectively rounded to the nearest whole number.
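You can see the arithmetic in isolation by feeding the awk part a fabricated df -lP report (two volumes with exactly 1 GB and 2 GB used):

```shell
# Fake df -lP output: a header line plus two volumes, with the Used
# column in 1K blocks. 1048576 KB = 1 GB and 2097152 KB = 2 GB,
# so the sum should print "3 G".
printf 'Filesystem 1024-blocks Used Available Capacity Mounted on\n/dev/sda1 1048576 1048576 0 100%% /\n/dev/sdb1 2097152 2097152 0 100%% /data\n' \
  | awk '{total+=$3} END {printf "%d G\n", total/2^20 + 0.5}'
```

Note that the non-numeric header line contributes zero to the total, which is why the one-liner gets away without skipping it.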

This is particularly helpful if you have a lot of volumes on a system.