Secure browsing on an insecure network – the easy way!

In my post yesterday, I talked about how to securely browse the web on an insecure Internet connection. The method I used was to install a proxy server (squid) on a trusted machine and ssh tunnel to it. However, one of my developers, Andy, kindly pointed out to me that there is a much easier way – just use SSH’s dynamic port forwarding as a SOCKS proxy.

To create the tunnel:

ssh -D 3128 user@yourserver.example.com
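If you want to check that the tunnel is working before changing any system settings, you can point curl at the local SOCKS port (any reasonably recent curl should support this; --socks5-hostname also sends DNS lookups through the tunnel):

# should print 200 if the page was fetched through the tunnel
curl --socks5-hostname 127.0.0.1:3128 -s -o /dev/null -w "%{http_code}\n" http://www.google.com/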

To configure OS X to use the proxy, go to System Preferences > Network > Advanced > Proxies

[Screenshot: OS X SOCKS proxy settings]

Tick SOCKS Proxy, and specify the server as 127.0.0.1 port 3128, then click OK and Apply on the following screen, and that’s all you need to do!
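If you prefer doing this from a terminal, the same setting can be applied with networksetup. The service name below assumes your wireless connection is called “AirPort” in System Preferences, so adjust it to match your setup:

# enable the SOCKS proxy for the AirPort service (service name may differ)
sudo networksetup -setsocksfirewallproxy "AirPort" 127.0.0.1 3128
# turn it off again when you are back on a trusted network
sudo networksetup -setsocksfirewallproxystate "AirPort" off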

[Image: secure browsing on an insecure network with my MacBook]

UPDATE: There is a much easier way to achieve a secure tunnel/proxy that doesn’t require squid to be installed. I’ve blogged it here. The method described on this page may be useful if you want to log the pages you visit. Also, if you wanted to block out ads, you could swap out squid for another proxy such as Privoxy.

I’m currently on holiday in Avoriaz in France, and I’m browsing the Internet via an open wireless hotspot. Given how easy it is to intercept traffic on an open WLAN, this could have posed a bit of a security problem, as a lot of the website admin panels I access (including my blog’s WordPress admin) are in an insecure (HTTP) area.

However, there is a solution that ensures that all my traffic (not just https) is encrypted, at least until it gets back to a more trusted part of the Internet.

The solution involves setting up a proxy server (squid) on a trusted server somewhere (e.g. a datacentre, or your home or office) and then connecting to this server via an SSH tunnel.

For this particular howto you will need the following:

  • An Apple laptop running OS X 10.5 (Leopard)
  • A Linux server (preferably running CentOS/RHEL) in a trusted location

Installing Squid on your Linux Server

Firstly, install squid using your preferred package manager… I have a CentOS 5 server, so I’m going to use yum:

[root@pablo ~]# yum install squid

Next, edit the squid config to allow requests from the local IP addresses on that server:

[root@pablo ~]# vim /etc/squid/squid.conf

I added a line to allow my server’s public IP. NB: at this point we aren’t permitting your laptop’s IP, only the local IP addresses on your server.

acl localhost src 127.0.0.1/255.255.255.255
acl localhost src 87.124.70.62/255.255.255.255
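For reference, the relevant part of my squid.conf ended up looking something like this. The stock CentOS config should already contain the http_access rule, so only the second acl line is new:

acl localhost src 127.0.0.1/255.255.255.255
acl localhost src 87.124.70.62/255.255.255.255
http_access allow localhost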

Now set up the runlevels for squid so that it starts when your server boots:

[root@pablo ~]# chkconfig squid on

If that worked, it should be set to on for run levels 2, 3, 4 and 5:

[root@pablo ~]# chkconfig --list squid
squid 0:off 1:off 2:on 3:on 4:on 5:on 6:off

Finally start squid if it isn’t already running:

[root@pablo ~]# service squid start
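To double-check that squid is up and listening on its default port (3128), you can run:

[root@pablo ~]# netstat -lntp | grep 3128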

Setting up your laptop to use the secure proxy

To get the laptop using our secure proxy, we must do two things: open an SSH tunnel to the proxy, and then set up Safari (or your browser of choice) to use this proxy for any required connections.

To set up the secure SSH tunnel from port 3128 on your laptop to port 3128 on the squid server, just run the following command:

paul-macbook:~ paul$ ssh -L 3128:localhost:3128 user@yourserver.example.com

Then all you need to do is configure Safari (or Firefox) to use port 3128 on your local machine as its proxy, and all traffic will be routed via this secure tunnel before being passed on to the rest of the Internet. Of course, this doesn’t secure your browsing beyond the trusted server, but you can at least be sure that it is not being intercepted by fellow users of the Wi-Fi hotspot.

So click on the Safari menu at the top of the screen, and then click Preferences (or press Cmd + ,). This will open up the Safari preferences. Make sure you have the Advanced tab open.

[Screenshot: Safari Advanced settings]

On this menu, click the Change Settings button next to Proxies. This will take you to the System Preferences Proxy menu.

[Screenshot: OS X Leopard proxy settings]

Select the protocols you wish to enable the proxy for (in my case I just chose HTTP), then fill out the proxy server address, which is 127.0.0.1 (localhost) and the port, 3128.
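If you’d rather script this step, the same setting can be applied from a terminal with networksetup, assuming your connection is the “AirPort” service:

# point the HTTP proxy for the AirPort service at the local end of the tunnel
sudo networksetup -setwebproxy "AirPort" 127.0.0.1 3128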

And that’s it! You should now be able to browse the web as if you were using your Linux server directly. This method has the added advantage that it can be used to bypass geographic IP-based restrictions, as it makes you appear to be where your server is located.

Downloading iPlayer MP4 streams on Linux

Last week, the BBC made their iPlayer content available for the iPhone, and by doing so they unwittingly made all their content available to download DRM-free as an MP4 stream.

The process is simple: change your browser’s user agent to mimic an iPhone, and you will then be able to view and download the MPEG-4 videos.

Download MP4 iPlayer videos in 2 steps

In this example I am going to use wget to download the files via the command line.

1. First you need to look up the URL for the MP4 stream. The easiest way to do this is to use a web tool that extracts program information from an iPlayer URL (e.g. Eastenders). Paste the iPlayer URL you want to download into the search box on that page and submit, then right-click the MP4 video download link and copy the URL.

2. Now fire up a terminal and run wget, replacing the URL with the URL you copied from the first step:

wget --user-agent="Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3" http://www.bbc.co.uk/mediaselector/3/auth/iplayer_streaming_http_mp4/b0094z1j

An alternative method is to use this ruby script, which takes an iPlayer URL directly and does everything for you.

BBC Reaction

So far the BBC haven’t said a lot about this revelation. Currently, their official line is as follows:

The BBC iPlayer on iPhone and iPod Touch is currently in beta, which enables the BBC to pick up on these issues and find a solution that ensures the content is delivered to users in a secure way before the service is rolled out

According to their technology blog, they will be posting a fuller response in the next few days. My hope is that they don’t do a U-turn on the MP4 format. If any staff from the BBC Media team read this post, here is my message to you:

Dropping the DRM from your mp4 streams for the iPhone is a fantastic step forward, so please embrace it!

Using an open standard will allow licence-paying users of any platform to enjoy the content they have a right to view, with minimal additional development costs to yourselves.

There is no DRM when people save shows on Sky+ or their video/DVD recorders (or even straight to their computer via a DTV tuner), so why cripple the iPlayer service with it?

At the very least you could make your in-house productions available on MP4 to all, whilst you get the third party producers on board.

Downloading MySQL rpms with a Linux one liner

I love Linux. It’s almost one year now since I switched my main work desktop machine to Linux from Windows XP, and I’ve not looked back. Windows was slow, unreliable (regular blue screens) and lacked many of the advanced features that Linux has out of the box (or should I say, off the web).

Linux’s features are too numerous to list, but every now and then I use one that just reminds me how superior it is to its proprietary rival. In this case I needed to download the latest MySQL 5.1.23 RPMs for installation on an NDB cluster.

Normally I would go to MySQL’s download page and manually right-click to save each one individually, but since I have been doing this so frequently recently, and was likely to need to do it again in the future, I thought there must be a better way.

The answer lay with a few bash commands strung together with pipes:

wget -O - http://mirror.fubra.com/www.mysql.com/Downloads/MySQL-5.1/ | grep -o -P 'href=".+5.1.23-0.glibc23.x86_64.rpm"' | grep -o -P 'MySQL[^"]+' | xargs -I {} wget http://mirror.fubra.com/www.mysql.com/Downloads/MySQL-5.1/{}

The command above (which should be all on one line) does the following:

  • First we use wget to download a directory listing of all MySQL 5.1 downloads from our local MySQL mirror (but this could be any mirror). We use the -O - option to direct the output of the web page to STDOUT rather than a file.
  • The output from wget is piped to grep, which uses a Perl regular expression to look for links to all RPMs of the particular version of MySQL we want, in this case generic 5.1.23 for x86_64 machines. This returns every link from the HTML source that points to one of these files.
  • The output from grep is piped to a second grep, which strips the surrounding href="" from the links so we are left with just the filename.
  • The tidied list of filenames is piped to xargs, which runs wget for each one, prepending the full download path to the filename.

And that’s it. We end up with each rpm being downloaded to the current working directory.
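If the one-liner is hard to read, here is exactly the same pipeline split over several lines:

# grab the directory listing, pull out the matching rpm filenames,
# then download each one from the same mirror
wget -O - http://mirror.fubra.com/www.mysql.com/Downloads/MySQL-5.1/ \
  | grep -o -P 'href=".+5.1.23-0.glibc23.x86_64.rpm"' \
  | grep -o -P 'MySQL[^"]+' \
  | xargs -I {} wget http://mirror.fubra.com/www.mysql.com/Downloads/MySQL-5.1/{}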

I know that it is technically possible to do things like this with Windows PowerShell and Cygwin, but they are not native solutions available on every machine by default, as they are on all *nix machines.

Real media to Xvid (divx) and cropping with Ubuntu

If you liked my last post on how to save RealPlayer streams and convert them to MPEG-4 AVI files, then you might find this post useful.

Today I needed to save a RealPlayer stream of last night’s 10 o’clock news from the BBC and then convert it into an Xvid file. I also wanted to crop it down to just the particular news item we were interested in (a feature about one of our sites). This is how I did it:

  1. First follow the steps on how to install mencoder and then dump the ram stream as per my previous post.
  2. Next convert the dump file to xvid with the following command:

    mencoder stream.dump -o bbc-10oclocknews.avi -ovc xvid -xvidencopts bitrate=128 -oac mp3lame

  3. Now use Avidemux to crop the video file. If you haven’t already got this program, you can install it with:

    sudo apt-get install avidemux

  4. Open the video file with Avidemux, and then ensure the video and audio are in sync by selecting Audio > Build VBR timemap from the menu.
  5. Use the selection markers to chop out the bits you don’t want (or select the bit you do want), and then save the resulting video with File > Save > Save Video.

And that’s it!

Converting RealPlayer .rm files to MPEG (mp4) .avi with Ubuntu

You will need mplayer and mencoder, as well as the win32 codecs in order to convert .rm files to their mpeg equivalent. So if you haven’t already got them installed you can get them with:


sudo apt-get install mplayer mencoder
wget http://www3.mplayerhq.hu/MPlayer/releases/codecs/essential-20071007.tar.bz2
tar jxfv essential-20071007.tar.bz2
sudo mkdir -p /usr/lib/win32
sudo mv -i essential-20071007/* /usr/lib/win32/

Next you need to download the RealPlayer stream you want to convert. First download the .ram file, then view it to see the Real Time Streaming Protocol (RTSP) URL inside:


wget http://www.bbc.co.uk/england/realmedia/politicsshow/south/bb/politicsshow_16x9_bb.ram
cat politicsshow_16x9_bb.ram
# displays something like rtsp://rm-acl.bbc.co.uk/england/politicsshow/south/bb/politicsshow_16x9_bb.rm

Now you can use mplayer to dump the stream to your local disk:


mplayer -dumpstream rtsp://rm-acl.bbc.co.uk/england/politicsshow/south/bb/politicsshow_16x9_bb.rm

This will take as long as it would take to view the stream normally. Once it’s finished, you can use mencoder to convert the dumped stream (saved as stream.dump by default) to your required file format. For H.264 (MPEG-4) video and MP3 audio you would use:


mencoder stream.dump -o bbc-politics.avi -ovc x264 -oac mp3lame

You can get a list of all supported video codecs with mencoder -ovc help (and audio codecs with mencoder -oac help).

Multi-process CLI scripts with PHP

I’ve been wanting to write a multi-process command line script in PHP for a while now, and tonight I finally got round to it. proc_open() is really useful if you want to run a batch of commands simultaneously, such as querying the A records for multiple domain names, or running a bunch of whois commands.

Anyway, in the following example I’ll show you how to look up the A records of a number of Google’s domain names in parallel:
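A minimal sketch of the idea (the domain list is just for illustration, error handling is omitted, and it assumes the dig utility is installed) looks something like this:

<?php
// Look up A records for several domains in parallel using proc_open().
$domains = array('google.com', 'google.co.uk', 'google.de', 'google.fr');

$descriptors = array(
    0 => array('pipe', 'r'),  // stdin
    1 => array('pipe', 'w'),  // stdout
    2 => array('pipe', 'w'),  // stderr
);

$procs = array();
$pipes = array();

// First loop: launch all the commands without waiting for any output
foreach ($domains as $domain) {
    $procs[$domain] = proc_open(
        'dig +short A ' . escapeshellarg($domain),
        $descriptors,
        $pipes[$domain]
    );
}

// Second loop: read each command's output in turn until EOF
foreach ($domains as $domain) {
    $output = '';
    while (!feof($pipes[$domain][1])) {
        $output .= fgets($pipes[$domain][1]);
    }
    fclose($pipes[$domain][0]);
    fclose($pipes[$domain][1]);
    fclose($pipes[$domain][2]);
    proc_close($procs[$domain]);
    echo $domain . ":\n" . $output . "\n";
}
?>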

You’ll notice there are two foreach loops. In the first we simply “launch” our commands, without waiting for any response from them. In the second, we iterate through and grab the output of each command in sequence until end-of-file is reached.

Upgrading WordPress with Subversion

Besides being a useful tool for software developers to keep track of their source code versions, Subversion also provides a quick and easy way for users to install/upgrade software such as blogs, wikis and forums that are constantly being updated with new features and security patches.

However if, like me, you initially installed WordPress manually by downloading the zip file and extracting the files, you will first need to do a little bit of work to link your installation into the Subversion repository.

Assuming your WordPress installation is in a folder called blog, you can get it working with the following steps:


# backup the original blog folder
cp -Rp blog blogBACKUP
# create a new folder
mkdir blogNEW
# checkout the latest version of wordpress from their subversion repository
svn co http://svn.automattic.com/wordpress/tags/2.3.1/ blogNEW/
# copy in any custom changes in wp-content and also the wp-config.php file
cp -Rp blog/wp-con* blogNEW/
# copy in the .htaccess file if you have one
cp -Rp blog/.htaccess blogNEW/
# delete the original blog
rm -rf blog
# move the new blog to your blog location
mv blogNEW/ blog
# finally run the http://yourdomain/blog/wp-admin/upgrade.php script in your web browser
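You can confirm that the new folder is tracking the right tag with svn info, e.g.:

# should show a URL ending in /wordpress/tags/2.3.1
svn info blog | grep URL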

Once you’re linked into a Subversion repository, future updates can be applied by simply running the following from inside the blog directory:


svn switch http://svn.automattic.com/wordpress/tags/2.3.2/

Where the URL given is the Subversion repository location of the new version you wish to upgrade to.

NB: After every update, you should go to http://yourdomain/blog/wp-admin/upgrade.php in your web browser as there may be some database tables that need upgrading.

Google’s mobile strategy (Android) is spot on

It’s not often that Google (NASDAQ: GOOG) launch a new software development kit with a $10 million prize for developers and a video explanation/press release from one of its founders. Yet that is exactly what has happened today with Android’s official SDK release.

Before you read on, you should probably watch Sergey Brin’s YouTube video.

There has been a lot of fanfare with this latest announcement, but it’s easy to see why; the stakes are huge. By 2010, just over 1 billion people will have access to a computer, but around 4 billion will have access to a mobile phone, with over 1 million new subscribers every day! Google know that the potential for advertising to this vast market is enormous. They already lead the way with their contextual and search based adverts, but with mobile they will be able to target them to location as well.

Now I must admit that I’ve not been particularly enthusiastic about Internet on mobile phones until this year. Previously devices were clumsy to use, access speeds were slow, data transfer was expensive, and sites optimised for the small screen were few and far between.

However, fast forward to now and we have flat-rate data packages, phones that can cope with complex websites as easily as your desktop browser can, and pioneering new interfaces such as Apple’s (NASDAQ: AAPL) multi-touch technology. The prospects for the Mobile Internet are looking up.

And this is great news for Linux. I think Android will do for Linux on mobile phones what Ubuntu has done for Linux on the desktop. If Google’s powerful brand can help get handset makers to write drivers for their hardware then the community as a whole will benefit.

The scope for software developers is enormous. By the end of next year, most handsets will have built in GPS and Android developers will therefore be able to craft a wide range of innovative location based applications. Think free sat-nav and local business enquiries via Google maps!

So, have Google missed out by not launching a single “gPhone”, and focusing on a software platform instead? In my mind, not at all. Linux’s growth is due to its open source nature and the fact that it can run on an enormous range of hardware, and I think Android will benefit from the same.

If Android will run on a mobile phone, then why not run it on a PC as well? Say hello to Google OS.

Querying postfix’s queue size

We use Zabbix to monitor our servers, but recently the monitoring agent has been causing some problems of its own.

About once a week we send a fairly large mailshot out to our users. Zabbix monitors the size of the Postfix mail queue on each of our mail servers, and then stores this in its database so it can draw graphs and send us an alert if the mail queue gets too big. But here’s the problem: the action of counting the mail queue itself is quite intensive, and it seems to be locking up the server when it runs.

After some investigation I found (in /etc/zabbix/zabbix_agentd.conf) that we were using the following command to measure the mailq:

[root@mx1 ~]# time mailq | grep -c '^[0-9A-Z]'
34619
real 0m6.590s
user 0m2.144s
sys 0m0.289s

As you can see it took 6.59 seconds to run on a queue size of about 35,000. You could also run the postqueue command and look at the end of the output:

[root@mx1 ~]# time postqueue -p | tail -5
-- 158346 Kbytes in 34621 Requests.
real 0m5.668s
user 0m0.075s
sys 0m0.225s

But, again this takes over 5 seconds for 35,000 mails. So a much quicker way would be to use:

[root@mx1 ~]# time find /var/spool/postfix/deferred/ /var/spool/postfix/active/ /var/spool/postfix/maildrop/ | wc -l
34640
real 0m0.033s
user 0m0.030s
sys 0m0.022s

Using find is over 100 times faster than the other two methods. Each of those commands reports a slightly different queue size, but they are pretty close. If anyone knows of an even quicker way to measure the queue size then please let me know!
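If you want to wire the faster find method into the Zabbix agent, a UserParameter along these lines should work in /etc/zabbix/zabbix_agentd.conf (the key name postfix.mailq.size is just what I picked, and the agent user will need read access to the queue directories):

# report the number of entries in the Postfix queue directories
UserParameter=postfix.mailq.size,find /var/spool/postfix/deferred/ /var/spool/postfix/active/ /var/spool/postfix/maildrop/ | wc -l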