Getting with the program

del.icio.us is a “social bookmarks manager,” or in plain English, a web page that lets you keep a list of interesting websites. What makes it interesting is that it lets you use tags to classify your links in a rough-and-ready sort of way (this kind of undisciplined tagging is now sometimes called “folksonomy”), lets you see links from other people with the same tags (or any tags), and shows you how many other people link to a given URL.

I’ve been keeping a “hit and run” blog for some time, and it fulfills the same role for me as del.icio.us would, but I had been unwilling to switch over to it for a couple of reasons: 1. The data doesn’t live on my machine; 2. It’s not easy to control the presentation: it is possible to republish your links on your own page, but you’re kind of stuck in terms of how they look. There are ways to get at the data programmatically, but that involves programming, and that means work, and I’m lazy.

But I finally decided to sit down and figure it out (as a way to avoid something even harder: my current translation job). Somebody has already provided a library of PHP tools for messing with del.icio.us, and I know just enough about PHP to get myself in trouble. Here’s what I did [caution: entering geek mode].

I took the sample script provided, hacked around with it, and made one modification to the postsByCount function, adding the following line:

'extended' => $child_node->get_attribute('extended')
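For context, that line belongs in the array the function builds for each post as it walks the XML returned by del.icio.us. I don’t have the library open in front of me, so names here are assumed from the snippet above (the get_attribute call suggests PHP 4’s domxml extension), but the per-post loop presumably looks something like this:

```php
// Sketch of the per-post loop inside postsByCount (element and key
// names are assumptions, not the library's actual code):
foreach ($post_nodes as $child_node) {
    $posts[] = array(
        'href'        => $child_node->get_attribute('href'),
        'description' => $child_node->get_attribute('description'),
        'tags'        => explode(' ', $child_node->get_attribute('tag')),
        // the added line, picking up the "extended" comment field:
        'extended'    => $child_node->get_attribute('extended')
    );
}
```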

This makes it possible to get the snarky comment, which the library’s author mysteriously omitted. To reduce load on the server, I modified the script to write its output to a stub file, and the stub file is included in this page. The script is curl’d once an hour by a cron job. For the record, here’s the script:

$thefile = "/path/to/stub/file.html"; // change this
$tagurl = ''; // this is just a convenience
$del = new DeliciousData('username', 'password'); // change username, password
$myPosts = $del->postsBycount('', 10); // gets 10 posts, all tags
$delstub = '<dl id="hitnrun">';
foreach ($myPosts as $post) {
  $delstub .= '<dt><a href="' . $post['href'] . '">' . $post['description'] . '</a></dt>';
  $delstub .= '<dd class="xt">' . $post['extended'] . '</dd>';
  $delstub .= '<dd class="tags"><ul>';
  foreach ($post['tags'] as $tag) {
    $delstub .= '<li><a href="' . $tagurl . $tag . '">' . $tag . '</a></li>';
  }
  $delstub .= '</ul></dd>';
}
$delstub .= '</dl>';
$file = fopen($thefile, "w+"); // write the stub out for inclusion
fwrite($file, $delstub);
fclose($file);
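Pulling the stub into the page is then just an include, and the hourly refresh is one line in the crontab. The path and URL below are placeholders, not my real ones:

```php
// In the page template: pull in the pre-generated stub.
include('/path/to/stub/file.html'); // same path as $thefile above

// And in the crontab, fetch the script once an hour, e.g.:
// 0 * * * * curl -s http://example.com/delicious-stub.php > /dev/null
```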

[Later] To generate valid HTML, it would be a good idea to wrap the variables with the htmlspecialchars function, like this: htmlspecialchars($post['href'])
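Applied to the loop above, that escaping would look something like this (a sketch, using the same `$post` fields):

```php
// Escape attribute values and text before building the markup, so a
// stray quote or angle bracket in a bookmark can't break the HTML.
$delstub .= '<dt><a href="' . htmlspecialchars($post['href']) . '">'
          . htmlspecialchars($post['description']) . '</a></dt>';
$delstub .= '<dd class="xt">' . htmlspecialchars($post['extended']) . '</dd>';
```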

3 thoughts on “Getting with the program”

  1. Very nice. Looks like a simpler solution than what I did, which was to hack something together using aggrss, an RSS aggregator (with caching!) in PHP.

    On the plus side, once I’d done it for my feed I could also use it to make my two blogs show each other’s headlines in a sidebar. (Yep. Two. I know.)

  2. Yeah, I know that this does not solve problem 1, but I am going to start running a variation on the above script to generate a local cache of all entries, perhaps once/day. I’ve already learned about an improved caching strategy that doesn’t rely on a curl from cron, and is handled entirely in the script. I’ll be adding that in RSN.

    And FWIW, my blog’s front page shows headlines from my “long form” blog as well.
