Jeremy Day (Incorporating Jeremy Dennis) is Now Live

I think I have got version 1 finished (obviously Jeremy has the final word on whether that is the case), and I have set up the old site to redirect to the new one.

It isn’t a particularly complex site, but here’s an outline of how it’s set up.


The main sections to include are:

  1. The Weekly Strip, a webcomic series she started in 2001. On the old site this was static HTML, auto-generated by a Makefile and a mixture of Tcl and Python code. The new version is a Django app that uses the same input files and generates the strip pages on demand.

  2. A list (incomplete) of Jeremy’s other projects on the Web. This is actually an application of my little tiny Django app Spreadsite to present a browseable list.

  3. An archive of Jeremy’s drawing and photography mini-sites that she originally hosted on GeoCities. This is all static HTML files and is now hosted at

  4. A front page that incorporates the latest entries from her Twitter and LiveJournal sites.
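Roughly speaking, the on-demand generation in section 1 works like this. This is a simplified sketch, not the site’s actual code: the one-line-per-strip input format and the function names here are my invention for illustration.

```python
def parse_strip_index(text):
    """Parse an index with one 'date title filename' line per strip
    (an assumed format, standing in for the real input files)."""
    strips = []
    for line in text.strip().splitlines():
        date, title, filename = line.split(None, 2)
        strips.append({"date": date, "title": title, "image": filename})
    return strips


def render_strip_page(strips, number):
    """Render a strip's page when it is requested, rather than
    pre-generating static HTML for every strip up front."""
    strip = strips[number - 1]
    return "<h1>{title}</h1><img src='{image}' alt='{title}'>".format(**strip)


# The index is parsed once; pages are produced per request.
index = "2001-05-14 Beginnings beginnings.png\n2001-05-21 Next next.png"
strips = parse_strip_index(index)
page = render_strip_page(strips, 1)
```

The payoff over the old Makefile approach is that a new strip only needs a new line in the input file, with no regeneration step.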

I also want to use plenty of appropriate caching so as to make the best use of the tiny GNU/Linux node.


The pages are generated using a Django app, which in turn gets its data about the strips from the Python library I wrote when generating the old version. I changed it so that there is an optional parameter specifying the base URL of the image files—during development this was, and once I had copied the files to the new server I could change it to
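The base-URL change amounts to something like the following (a hypothetical sketch: the function name and the default path are placeholders, not the library’s real interface):

```python
def strip_image_url(file_name, base_url="/strips/images/"):
    """Build the URL for a strip's image file. The base URL is a local
    path during development and can be switched to the production
    server's address once the files have been copied across.
    The default path here is an assumption for illustration."""
    if not base_url.endswith("/"):
        base_url += "/"
    return base_url + file_name


dev_url = strip_image_url("2001-05-14.png")
live_url = strip_image_url("2001-05-14.png", base_url="http://example.org/strips/")
```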

If only because it is easy to do, I added RDF resources for the strips as well: visit in your browser and it should redirect you to the usual HTML view; visit it with an RDF application and you should get a metadata summary, either or In theory this might be useful to some semantic web enthusiast out there. In contrast to the Atom feed (discussed in a previous note), I generated these pages using RDFLib, a Python library for handling RDF. This is still a work in progress.
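The choice between the HTML redirect and the RDF view is plain HTTP content negotiation on the Accept header. Here is a deliberately naive illustration of the dispatch decision only (real negotiation also honours q-values, and the serialization itself is RDFLib’s job, not shown here):

```python
# Media types treated as a request for RDF (a representative pair,
# not necessarily the exact set the site accepts).
RDF_TYPES = ("application/rdf+xml", "text/turtle")


def negotiate(accept_header):
    """Return 'rdf' if the client asks for an RDF serialization,
    else 'redirect' to the usual HTML view. Naive on purpose:
    q-values in the Accept header are ignored."""
    for media_type in accept_header.split(","):
        media_type = media_type.split(";")[0].strip()
        if media_type in RDF_TYPES:
            return "rdf"
    return "redirect"
```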

Twitter and LiveJournal

Both of these use JavaScript (with the jQuery library) to acquire the most recent entries and insert the results into the page. The Twitter feed works directly with Twitter’s search API via the JSONP conventions.

The LiveJournal equivalent is a little trickier. I ended up creating a Django view that acquires a copy of the page by making its own HTTP request, and then extracts the part I want to display, which it then returns as a JSON object. The point of returning JSON rather than just the HTML fragment is that it means I have a way to tell the caller whether the extract was obtained successfully. If you just return HTML, there is no way for the JavaScript code to tell an error message from a successful page.

The extraction of the post content from the LiveJournal page is achieved using Beautiful Soup. I hope the LiveJournal developers are not too offended that I am not trusting them to return valid XHTML under all circumstances.
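The shape of that view is roughly as follows. This is a self-contained sketch rather than the real code: the live site does the extraction with Beautiful Soup, whereas here I use the standard library’s html.parser so the example has no dependencies, and the `entry` class name is an assumption about the LiveJournal markup.

```python
import json
from html.parser import HTMLParser


class EntryExtractor(HTMLParser):
    """Collect the text inside the first <div class="entry">.
    (Assumed class name; void elements like <br> inside the entry
    would confuse the depth count, which Beautiful Soup handles
    properly on the real site.)"""

    def __init__(self):
        super().__init__()
        self.depth = 0
        self.text = []

    def handle_starttag(self, tag, attrs):
        if self.depth:
            self.depth += 1
        elif tag == "div" and ("class", "entry") in attrs:
            self.depth = 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.text.append(data)


def extract_as_json(page_html):
    """Wrap the extract in JSON so the caller can tell success from
    failure -- the point made above about not returning bare HTML."""
    parser = EntryExtractor()
    parser.feed(page_html)
    content = "".join(parser.text).strip()
    if content:
        return json.dumps({"ok": True, "html": content})
    return json.dumps({"ok": False, "error": "entry not found"})
```

The JavaScript on the front page then checks the `ok` flag before inserting anything into the DOM.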

The LiveJournal page uses a ridiculous amount of nesting to allow for flexibility when reformatting the page with CSS. Yahoo!’s YSlow analysis suggests I could do with stripping out some of the redundant DOM nodes so as to reduce the complexity of the page and make it render faster. Something for Jeremy Day version 1.1, perhaps.


The external-facing web server is Nginx. This serves the static domains directly, and delegates the dynamic pages to a FastCGI server. The FastCGI server is implemented directly by Django’s utility, using the Flup library. Keeping the FastCGI server running is the responsibility of daemontools (there is an Ubuntu package for this in the Universe repository).
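The relevant part of the Nginx configuration looks something like this. A hypothetical sketch only: the domain name, paths, and FastCGI port are placeholders, not the site’s real settings.

```nginx
server {
    listen 80;
    server_name example.org;  # placeholder domain

    # Static files are served straight from disk.
    location /static/ {
        root /var/www/jeremyday;  # assumed path
    }

    # Everything else goes to the Django FastCGI server,
    # which daemontools keeps running.
    location / {
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_param SCRIPT_NAME "";
        fastcgi_pass 127.0.0.1:8000;  # assumed port
    }
}
```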

I have already written on how I update the files on the server using Git. Generally this means I add a new strip by adding a line to the file, testing on the local version, then doing git push followed by

git pull
sudo svc -du /etc/service/jeremyday

on the server.


There are probably a few glitches that will emerge now I have declared it live. We’ll see …