I think that I have got http://jeremyday.org.uk/ version 1 finished (obviously Jeremy has the final word on whether this is the case), and I have set up the old www.jeremydennis.co.uk to redirect to the new site.
It isn’t a particularly complex site, but here’s an outline of how it’s set up.
The main sections are:
The Weekly Strip, a webcomic series she started in 2001. On the old site this was static HTML, auto-generated by a Makefile and a mixture of Tcl and Python code. The new version is a Django app that uses the same input files and generates the strip pages on demand.
A list (incomplete) of Jeremy’s other projects on the Web. This is actually an application of my little tiny Django app Spreadsite to present a browseable list.
A front page that incorporates the latest entries from her Twitter and LiveJournal sites.
I also want to use plenty of appropriate caching so as to make the best use of my tiny GNU/Linux node.
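To illustrate the sort of caching involved, here is a minimal sketch of a time-limited cache wrapped around a feed fetch. The decorator name, the timings, and the `latest_entries` function are my own invented examples, not the site’s actual code:

```python
import time

def ttl_cache(seconds):
    """Cache a zero-argument fetch function's result for `seconds`."""
    def decorator(fetch):
        state = {"value": None, "expires": 0.0}
        def wrapper():
            now = time.monotonic()
            if now >= state["expires"]:
                state["value"] = fetch()
                state["expires"] = now + seconds
            return state["value"]
        return wrapper
    return decorator

# Hypothetical use: avoid hitting LiveJournal on every front-page view.
@ttl_cache(seconds=300)
def latest_entries():
    # Real code would fetch and parse the journal page here.
    return ["entry one", "entry two"]
```

Django has its own page and fragment caching, of course; the point of the sketch is just that a short expiry keeps the front page cheap without going stale for long.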
The pages are generated using a Django app, which in turn gets its data about the strips from the Python library I wrote when generating the old jeremydennis.co.uk version. I changed it so that there is an optional parameter specifying the base URL of the image files: during development this was http://www.jeremydennis.co.uk/, and once I had copied the files to the new server I could change it to point at the new site.
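The base-URL parameter can be sketched like this. The field layout of the records (`id|date|title|image`) is an assumption for illustration; the real tws.data format is not shown in this post:

```python
def load_strips(lines, base_url="http://jeremyday.org.uk/"):
    """Parse one-strip-per-line records into dicts.

    The pipe-separated layout here is a guess for illustration only.
    """
    strips = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        strip_id, date, title, image = line.split("|")
        strips.append({
            "id": strip_id,
            "date": date,
            "title": title,
            # The base URL is the optional parameter mentioned above:
            # switch it when moving between the old and new servers.
            "image": base_url + image,
        })
    return strips
```

The same parsed data then feeds both the old Makefile-driven static generation and the new on-demand Django views, which is what made the migration cheap.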
Only because it is easy to do, I added RDF resources for the strips as well: visit http://jeremyday.org.uk/tws/r/strip183 in your browser and it should redirect you to the usual HTML view; visit it with an RDF application and you should get a metadata summary, either http://jeremyday.org.uk/tws/data/strip183.xml or http://jeremyday.org.uk/tws/data/strip183.n3. In theory this might be useful to some semantic web enthusiast out there. In contrast to the Atom feed (discussed in a previous note), I generated these pages using RDFLib, a Python library for handling RDF. This is still a work in progress.
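The redirect-by-client behaviour amounts to simple content negotiation on the Accept header. Here is a plain-Python sketch of the decision; the real site presumably does this in a Django view, and the HTML path returned in the default case is a made-up placeholder (the post only gives the /tws/data/ URLs):

```python
def negotiate(accept_header, strip_id):
    """Pick a redirect target for /tws/r/<id> from the Accept header.

    Simplified: a real implementation would parse q-values rather than
    substring-match. The /tws/data/ URL patterns come from the post;
    the HTML fallback path is hypothetical.
    """
    accept = accept_header or ""
    if "text/n3" in accept or "text/rdf+n3" in accept:
        return "/tws/data/%s.n3" % strip_id
    if "application/rdf+xml" in accept:
        return "/tws/data/%s.xml" % strip_id
    return "/tws/%s" % strip_id  # default: the usual HTML view
```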
Twitter and LiveJournal
The extraction of the post content from the LiveJournal page is achieved using Beautiful Soup. I hope the LiveJournal developers are not too offended that I am not trusting them to return valid XHTML under all circumstances.
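The site does this with Beautiful Soup; as a self-contained illustration of the same extraction idea, here is a sketch using the standard library’s html.parser (the `entry-text` class name is invented, not LiveJournal’s real markup):

```python
from html.parser import HTMLParser

class EntryExtractor(HTMLParser):
    """Collect the text inside <div class="entry-text"> elements.

    The class name is a made-up example. Beautiful Soup does this job
    far more conveniently, and tolerates invalid markup, which is the
    reason for using it on a page you don't control.
    """
    def __init__(self):
        super().__init__()
        self.depth = 0          # nesting depth inside a matching div
        self.entries = []
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            if self.depth:
                self.depth += 1
            elif ("class", "entry-text") in attrs:
                self.depth = 1

    def handle_endtag(self, tag):
        if tag == "div" and self.depth:
            self.depth -= 1
            if self.depth == 0:
                self.entries.append("".join(self._buf).strip())
                self._buf = []

    def handle_data(self, data):
        if self.depth:
            self._buf.append(data)
```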
The LiveJournal page uses a ridiculous amount of nesting to allow for flexibility when reformatting the page with CSS. Yahoo!’s YSlow analysis suggests I could do with stripping out some of the redundant DOM nodes so as to reduce the complexity of the page and make it render faster. Something for Jeremy Day version 1.1, perhaps.
The external-facing web server is Nginx. This serves the static domains (such as lastcentury.jeremyday.org.uk) directly, and delegates the dynamic pages to a FastCGI server. The FastCGI server is implemented directly by Django’s manage.py utility, using the Flup library. Keeping the FastCGI server running is the responsibility of daemontools (there is an Ubuntu package for this in the Universe repository).
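The Nginx side of this split might look roughly as follows. This is a sketch, not the site’s actual configuration: the document root and the FastCGI port are invented examples.

```nginx
server {
    server_name lastcentury.jeremyday.org.uk;
    root /srv/www/lastcentury;       # invented path, for illustration
}

server {
    server_name jeremyday.org.uk;
    location / {
        # Port is illustrative; fastcgi_params ships with Nginx.
        include fastcgi_params;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        fastcgi_pass 127.0.0.1:8000;
    }
}
```

The matching FastCGI process would be started with something along the lines of `./manage.py runfcgi host=127.0.0.1 port=8000` (arguments illustrative), with daemontools supervising that command so it is restarted if it dies.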
I have already written on how I update the files on the server using Git. Generally this means I add a new Weekly Strip strip by adding a line to the file tws.data, testing on the local version, then doing git push, followed by

git pull
sudo svc -du /etc/service/jeremyday

on the server.
There are probably a few glitches that will emerge now I have declared it live. We’ll see …