21 entries tagged ‘web’
(Sunday night.)
Still nothing up for you to see yet, I’m afraid. (Apart
from anything else, I need to ask
my host to install a few
Python packages...) But I do now have the start of the
second CGI script, the one that accepts readers’ votes for
the current round of pictures. These votes are later used to
decide which picture to use for that panel of the comic strip.
At present the script accepts your vote but does not display
it in any way.
If you vote again, your previous ballot is silently overwritten.
I plan
to support Approval
Voting in future by having a page where you have a checkbox
for each candidate picture and can select as many as you like.
The word ‘your’ is a little misleading; we use
people’s IP addresses as their identifiers, which sort of
works most of the time, but means that people sharing a proxy
server will end up sharing a vote. The alternative (requiring
users to register in order to vote) is not likely to work
because no one will want to register.
Update (Monday night):
The voting form now shows you the pictures with checkboxes.
When you first visit the page, the picture you clicked on is
ticked, but then you can tick as many more as you like. Because
of the way HTML forms are processed, each form parameter is
potentially a sequence anyway, so the code for each time around
the voting form can be exactly the same. The code that adjusts
the totals is very simple:
def vote(self, uid, pns):
    """Register a vote from the user identified by uid.

    uid is an integer, uniquely identifying a voter.
    pns is a list of picture numbers.
    """
    oldPns = self.userVotes.get(uid, [])
    if pns == oldPns:
        return
    for pn in oldPns:
        self.pictures[pn].nVotes -= 1
    for pn in pns:
        self.pictures[pn].nVotes += 1
    self.userVotes[uid] = pns
The first line retrieves that user’s old ballot, if any.
The first for statement reverses the effect (if any) of their
former vote; the second counts the new vote.
Finally the ‘ballot’ is saved for later. Behind the
scenes, ZODB
takes care of reading the old data in off disc and
(when the transaction is committed) saving the updated
data.
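For anyone following along, here is a minimal, self-contained sketch of the same replace-the-ballot logic, with plain dicts standing in for the ZODB-persisted objects (the class name and layout here are my own invention, not the game’s actual code):

```python
class Poll:
    """Stand-in for the game's vote store: plain dicts, no ZODB."""

    def __init__(self, picture_count):
        self.nVotes = {pn: 0 for pn in range(picture_count)}
        self.userVotes = {}

    def vote(self, uid, pns):
        old = self.userVotes.get(uid, [])
        if pns == old:
            return
        for pn in old:           # undo the previous ballot, if any
            self.nVotes[pn] -= 1
        for pn in pns:           # count the new approval ballot
            self.nVotes[pn] += 1
        self.userVotes[uid] = pns


poll = Poll(3)
poll.vote(42, [0, 2])   # approval ballot: pictures 0 and 2
poll.vote(42, [1])      # re-voting silently replaces the old ballot
print(poll.nVotes)      # → {0: 0, 1: 1, 2: 0}
```

Because a re-vote first reverses the old ballot, the totals always reflect exactly one ballot per voter, which is the property the real vote() relies on.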
My paid job involves writing a web application as well, except
this one uses Microsoft ASP .Net linked via ADO .Net to Microsoft SQL
Server® 2000. To do a similar job to the above
snippet, I would be writing two SQL stored procedures (one to retrieve
the existing ballot, one to alter the ballot). Invoking a
stored procedure is several more lines of code in the C♯ or VB .Net layer as you create a Command
object, add parameters to it, execute it, and dispose of the
remains. (Or you can create DataSet objects which are even
worse, but have specialized wizards to help you draft the code.)
The actual algorithm (the encoding of the business logic) would
be buried in dozens of lines of boilerplate. By comparison, the Python+ZODB implementation is a
miracle of concision and clarity. The
ZOPE people deserve much kudos.
Feel Mark Pilgrim’s distress at the excision of cite from
XHTML 2.0’s Text module. The irony is that cite is one of the
‘semantic’ tags (‘logical’ tags, as they used to be called)
that is actually used and supported by web browsers. Meanwhile
fossils like dfn, kbd and samp are retained.
The case for cite
I once visited a real printing house, and discovered that the
keyboards actually have two quotation-mark keys: one for the
apostrophe (’) and one for the inverted comma
(‘). Alas! That such simplicity was denied to us by, well,
by Apple.
A tragicomic tale of the lost
punctuation
We have a client who decided that a shared ‘document
library’ was the way to collaborate on a bug list. They
gave me a URL and said to log in with my full name.
So what went wrong?
Apparently cite was not intentionally deleted from the XHTML draft. Mark
Pilgrim has decided to spend some time being a ‘late
adopter’ for a while anyway. He probably deserves the
holiday, and a change
is as good as a rest...
Mark uses cite differently from me: I follow the semantics of
@cite in Texinfo, where it is used
for titles of publications you are citing, where it might be
translated as italic text or as a quoted title. Mark also uses
it for author’s names; I suppose you could
rationalize that by saying that personal weblogs have the
author’s name as their alternative title. It’s
necessary if his automatic list of citations is to work.
There is a general problem with (X)HTML text styles: their
meaning is not well specified. That’s why I fall back on
the Texinfo definitions.
Responding to my quotation-marks rant, an anonymous poster
points out some browsers do support q elements. Well, they
sort of do.
Here is a silly proposal for a solution
to the problem of English
punctuation and conventional keyboards: define a new
character encoding that blesses the Knuthian convention used on
Unix systems. Obviously this would need support in web browsers
(they would have to allow charset=Latin-1p
and add
an extra table to their built in character maps). But a few web
sites already use the `...' convention, and Latin-1p pages would
work no worse than they do.
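To make the idea concrete, here is a sketch of the one-line translation a browser honouring the hypothetical charset=Latin-1p would perform: the typewriter grave accent and apostrophe become real quotation marks, exactly as Knuthian `...' markup is rendered on Unix:

```python
# What a (hypothetical) Latin-1p decoder would add to a browser's
# built-in character maps: ` and ' become directional quotation marks.
KNUTHIAN = {
    ord('`'): '\u2018',   # ` → left single quotation mark
    ord("'"): '\u2019',   # ' → right single quotation mark / apostrophe
}

def decode_latin1p(text):
    """Translate typewriter quotes to curly ones, Unix-roff style."""
    return text.translate(KNUTHIAN)

print(decode_latin1p("`Hello,' she said."))
# → ‘Hello,’ she said.
```

Since the apostrophe and the closing quote are the same character in this convention, the mapping is unambiguous, which is what makes the proposal workable at all.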
One objection to sorting out the
problem of computer keyboards’ lack of English
punctuation (or rather restriction to typewriter-style
punctuation) is that typewriter punctuation is OK, so why
bother?
On with the slippery slope
argument...
I have added a rudimentary subject-tagging scheme to the system
I use to publish these web pages. Not Faceted Metadata, not Topic Maps, just subject
elements in the style of the
Dublin Core. My ‘database’ of entries is just
files on disc, and they can now have dc:subject
elements using topic names from an ad-hoc taxonomy (that is a
fancy way of saying I just make up the topic names as
I go along). The
Tcl script that generates subjects.html scans all the files for
such elements and builds up its database of links in-memory. It
then writes all the index pages automatically.
Only entries I have taken the time to tag with subjects
will be included, of course.
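The scanning pass is simple enough to sketch here (in Python, though the real script is Tcl; the file layout and element spelling are my assumptions):

```python
import re
from collections import defaultdict

# Entries are assumed to contain <dc:subject>NAME</dc:subject> elements.
SUBJECT_RE = re.compile(r'<dc:subject>([^<]+)</dc:subject>')

def build_index(entries):
    """Map each topic name to the list of entry names that carry it.

    entries: dict of {entry_name: file_contents}
    """
    index = defaultdict(list)
    for name, text in sorted(entries.items()):
        for topic in SUBJECT_RE.findall(text):
            index[topic].append(name)
    return dict(index)


demo = {
    'e1.xml': '<entry><dc:subject>web</dc:subject></entry>',
    'e2.xml': '<entry><dc:subject>web</dc:subject>'
              '<dc:subject>comics</dc:subject></entry>',
}
print(build_index(demo))
# → {'web': ['e1.xml', 'e2.xml'], 'comics': ['e2.xml']}
```

With the whole index held in memory, writing subjects.html and the per-topic pages is then just a matter of looping over the dictionary.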
In a discussion of quoting in weblogs I found a link to a
note by Lore
Sjöberg on one of the things mentioned by Tim Berners-Lee
in his ancient Style Guide
for online hypertext, namely that when writing
hypertext you should make it make sense without the links.
More on linking styles
I have been scraping the syndicated version of my
RSS
feed on LiveJournal in
order to add comments links to my articles (not that anyone
does). They recently changed the format, so that
(a) readers must click through to a second LJ page to find
the link to the post itself, and (b) my scraper
broke. But that’s their prerogative, and offering a
comment service to strangers who aren’t even LiveJournal
members is hardly part of their core mission, so I cannot fault
them for it!
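The scraping itself is nothing exotic, which is why a format change is all it takes to break it. A sketch of the extraction step (the sample feed and URLs are invented; the real script fetches the LiveJournal page over HTTP):

```python
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<rss version="2.0"><channel>
  <item><title>Entry one</title><link>http://example.org/~pdc/1234.html</link></item>
  <item><title>Entry two</title><link>http://example.org/~pdc/1235.html</link></item>
</channel></rss>"""

def item_links(rss_text):
    """Return the per-item links found in an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [item.findtext('link') for item in root.iter('item')]

print(item_links(SAMPLE_RSS))
# → ['http://example.org/~pdc/1234.html', 'http://example.org/~pdc/1235.html']
```

Anything that depends on the shape of someone else’s pages rather than a published format is fragile by nature; scraping the RSS at least fails less often than scraping the HTML would.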
They have also switched to using ‘cool’
URLs (in the
sense described by Tim Berners-Lee
in his Style
Guide to Online Hypertext) of the form
~pdc/1234.html rather than
talkread.bml?this=that&thother=1234. Apart from
making the URLs shorter, this change means that the mechanism
used to serve the files is now invisible, and can be altered
without having to change the URLs in future. It could even be
(gasp!) static files generated once a night when they scan
my RSS feed.
Happy New Year.
This is a test of how badly my web-site-generating software
fails during the year-end changeover.
Read more
I recently stumbled across the tag
URI scheme, a convention
that does a lot of what urn
and http
-based identifiers do with
less ambiguity and confusion. But perhaps I had better explain what I
mean by URI first.
Read more
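The tag: scheme is so simple that minting an identifier is just string assembly; a sketch (the authority and date here are made-up examples, not identifiers I actually use):

```python
def tag_uri(authority, date, specific):
    """Mint a tag: URI from a domain you control, a date on which you
    controlled it, and a locally-chosen specific part."""
    return 'tag:%s,%s:%s' % (authority, date, specific)

print(tag_uri('example.org', '2004-01-01', 'web/21'))
# → tag:example.org,2004-01-01:web/21
```

The date removes the ambiguity that plain domain names have: even if the domain later changes hands, the identifier still names what its owner on that date meant by it.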
The Picky Picky game had a bug which mainly affected
Teacake (badasstronaut): pictures submitted for the current
panel would instead turn up in last week's panel (the one being voted
on), or even once in the panel from the week before!
Read more
There has been all sorts of trouble with web developers being unable to
cause their web servers to issue the correct Content-type
headers.
Most recent fallout was Mark Pilgrim's essay on XML.com.
Read more
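One place a developer unambiguously can control the header is a CGI script, which emits it ahead of the body; a sketch (the function and defaults are mine, purely for illustration):

```python
def cgi_response(body, content_type='text/html; charset=utf-8'):
    """Return a complete CGI response with an explicit Content-Type.

    The charset parameter is exactly the part that server defaults
    so often get wrong.
    """
    return 'Content-Type: %s\r\n\r\n%s' % (content_type, body)

print(cgi_response('<p>hello</p>'))
```

When the script states the type and charset itself, there is nothing left for the server configuration to mangle.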
Nowadays my web presence (such as it is) is split over several sites
(Flickr, Delicious, LiveJournal, etc., as well as
this site). I want my home page to include pointers to these other sites--such as
the Flickr badge I described in the previous article.
To do this I need to download feeds from these other sites in order to
mix them in to my home page. I wrote a Python program called
wfreshen
to do this.
Read more
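One courtesy any such feed-fetching tool should offer is conditional GET: remember each feed’s ETag and Last-Modified values and send them back, so the server can answer 304 Not Modified instead of the whole file. A sketch of that bookkeeping (the cache layout is my assumption, not necessarily how wfreshen stores things):

```python
def conditional_headers(cache_entry):
    """Build request headers from what we remembered about a feed
    last time we fetched it."""
    headers = {}
    if cache_entry.get('etag'):
        headers['If-None-Match'] = cache_entry['etag']
    if cache_entry.get('last_modified'):
        headers['If-Modified-Since'] = cache_entry['last_modified']
    return headers

print(conditional_headers({'etag': '"abc123"'}))
# → {'If-None-Match': '"abc123"'}
```

On a 304 response the cached copy is reused unchanged, which keeps the nightly mixing cheap for both ends.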
Web servers started as a solution to getting information from other sites. Then
it became convenient to use HTML and HTTP on one's local-area network, and for
some reason we had to call that idea an 'intranet' to make people pay attention.
Sometimes it is useful to run a mini-server on the same computer as your desktop
application; in this note I'll discuss this idea in the context of an
application written to Microsoft's .Net platform, since that's what we use at
work.
Read more
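The core of the idea fits in a few lines; here it is in Python for brevity (the note itself is about .Net): bind to the loopback address only, so the desktop application’s server is invisible to the rest of the network, and let the OS pick a free port.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'hello from the desktop app'
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the console quiet
        pass

# 127.0.0.1 keeps it local-only; port 0 asks the OS for a free port.
server = HTTPServer(('127.0.0.1', 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()
print('serving on http://127.0.0.1:%d/' % server.server_address[1])
```

The desktop application then points its embedded browser (or the user’s real one) at that URL, and the mechanism behind the pages is free to change.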
I’m still abuzz after the Oxford Geeks Nights #1 last night. Working as I do in the slime pits of Microsoft Windows ASP.NET, it was exciting just seeing so many Macs lined up on the stage, their little apple logos glowing in the semidarkness; the only non-Macs I spotted were running Ubuntu Linux or similar.
Read more
My current project at work has involved a quick-and-dirty crash-course in Drupal, a content-management system written in PHP. Here are some of my initial impressions.
Read more
Some friends and I started a small comics convention in Oxford called
CAPTION a few years ago—so long ago, in fact, that it predates web sites
for events, or anything like that. I started creating brochure sites for
CAPTION from 1998 (though I think that SpaceCAPTION1999 was the first to have
a real promotional site). Reading through the old archives it’s interesting
seeing the capabilities of web styling—and my facility with them—improving from
year to year. So here is a slideshow!
Read more
I created a miniature web app for decoding Morse code called Morseless
for the fun of it yesterday. It makes for a nice little case study in
progressive enhancement.
Read more