From: Simon Willison (firstname.lastname@example.org)
Date: Sat Sep 07 2002 - 12:25:09 BST
At 13:14 07/09/2002 +0200, mort wrote:
>On Sat, 2002-09-07 at 12:46, Simon Willison wrote:
> > What do
> > people think of the system I specced out here?
> > http://www.aquarionics.com/misc/archives/blogite/0033.html
>I think it's a great framework!
>I made a couple of questions earlier but they went unanswered
My apologies - I meant to answer that mail but Eudora froze and by the time
I reloaded it I had forgotten what I was doing :/
>so if my blogware doesn't have a local copy of this server repository,
>downloads it from a sort of canonical source. Then, either
>a) the canonical(s) server(s) ping my blogware when any record is
>inserted / deleted / updated. My blogware can then choose to sync itself
>incrementally or not.
>b) my blogware downloads a fresh copy of the repository at certain
>times (every hour / day / week / month) to be sure it's up to date.
I think it will have to be a client-pull rather than a server-push thing.
If the server had to ping every blog on the list whenever a new blog was
added /to/ the list, things could get messy. Clients downloading the full
list once a week would seem to be a more sensible option.
There's also a third option: the central server has a method to return
all PingBack servers that are "interested" in a URL. Blog clients can
therefore send a list of URLs in a post and get back a list of servers that
they should ping (which they can add to their local copy so that next time
they "see" those URLs they won't have to make a request).
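To sketch how that third option might work on the client side: the local copy acts as a cache in front of the central server. The method name and data below are purely illustrative (nothing here is part of any published spec), and the central-server call is faked rather than made over XML-RPC:

```python
# Sketch of the "interested servers" lookup with a local cache.
# query_central_server() is a stand-in for a hypothetical XML-RPC
# method on the central repository - the name and data are invented.

local_cache = {}  # target URL -> PingBack server endpoint

def query_central_server(urls):
    # A real client would use xmlrpc.client against the repository's
    # endpoint; here we fake the repository's answer for illustration.
    fake_repository = {
        "http://example.org/post/1": "http://example.org/pingback",
    }
    return {u: fake_repository[u] for u in urls if u in fake_repository}

def servers_to_ping(urls_in_post):
    # Check the local copy first, and only ask the central server
    # about URLs we have not seen before; answers are cached, so the
    # next post containing those URLs needs no request at all.
    unknown = [u for u in urls_in_post if u not in local_cache]
    if unknown:
        local_cache.update(query_central_server(unknown))
    return {u: local_cache[u] for u in urls_in_post if u in local_cache}
```

The point of the cache is the last clause above: a URL is only ever looked up once per client, which keeps the load on the central server proportional to new URLs rather than to posting volume.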
>Other than that, i also think that auto-discovery systems could log the
>servers they find and transmit those logs to the repository. So the
>repository can discard the already known ones and add those really "new"
>This way the repository works for the blogging systems and the blogging
>systems also work for the common wealth :)
Good idea. <link> auto discovery could also theoretically be handled by a
spider / crawler operating on behalf of the central repository. The problem
here is that the central server method relies on knowing the "URL
patterns" that are accepted by each pingback server - my server, for
example, only accepts URLs matching a specific pattern.
In its current format, the <link> element does not provide this
information. I can think of two methods of fixing this, but both require
changes to the spec (which is not much of a problem at this early stage but
could become one later on). Firstly, pingback servers could implement a
method - pingback.getURLPatterns() - which returns an array of the URL
patterns that the server will accept.
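As a rough sketch of how a crawler might use such a method, assuming the patterns are glob-style (the pattern below is illustrative, not my server's actual one, and the call to the server is faked):

```python
from fnmatch import fnmatch

def get_url_patterns():
    # Stand-in for an XML-RPC call to the proposed
    # pingback.getURLPatterns() method; this pattern is invented
    # for illustration.
    return ["http://www.bath.ac.uk/~cs1spw/blog/*"]

def crawler_accepts(url, patterns):
    # The crawler tests each candidate URL against the advertised
    # glob-style patterns before bothering to send a ping.
    return any(fnmatch(url, pattern) for pattern in patterns)
```

With this in place the crawler never has to guess which URLs a server cares about - it can filter the URLs it finds entirely on its own side.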
The crawler could then call this method on any <link>-indicated PingBack
servers it finds to get a list. The other option is to change the <link>
element so that instead of pointing directly to the server it points to a
descriptive XML file:
<link rel="pingback" type="text/xml" href="/pingback.xml" />
<server host="www.bath.ac.uk" path="/~cs1spw/pingback.php" port="80">
This matches the format described in my earlier mail.
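For what it's worth, reading such a descriptor takes only a few lines of standard XML parsing. The snippet below uses the example <server> element from above (self-closed so it parses as a complete document); a real crawler would fetch it from the /pingback.xml URL rather than use a literal string:

```python
import xml.etree.ElementTree as ET

# The example <server> element from above, inlined for brevity.
descriptor = ('<server host="www.bath.ac.uk" '
              'path="/~cs1spw/pingback.php" port="80" />')

server = ET.fromstring(descriptor)

# Reassemble the pieces into the server endpoint the client should ping.
endpoint = "http://%s:%s%s" % (
    server.get("host"), server.get("port"), server.get("path"))
```

One nice property of the descriptor-file approach over getURLPatterns() is that the crawler gets everything with a plain HTTP GET - no XML-RPC round trip is needed just to discover the server.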
--
Web Developer, www.incutio.com
Weblog: http://www.bath.ac.uk/~cs1spw/blog/

Message sent over the Blogite mailing list.
Archives: http://www.aquarionics.com/misc/archives/blogite/
Instructions: http://www.aquarionics.com/misc/blogite/
This archive was generated by hypermail 2.1.5 : Sat Sep 07 2002 - 13:05:00 BST