technology

Macintel

I caught up with the big news a day after everyone else, since I was travelling. When I read it, my jaw sagged open, and I checked the date more than once, on the off chance that it was April 1.

I have mixed feelings about the move. The PowerPC architecture is, IMO, more elegant than the x86 architecture. And I believe that having more than one platform in circulation is good for the industry as a whole. But. There are a couple of big “buts”: Although PPC may be more elegant than x86, Intel seems to be better at actually making its chips run fast. Real-world performance beats out theoretical elegance 10 times out of 10. Also, Motorola (now Freescale) and IBM both seem to have bigger fish to fry than catering to Apple’s needs. Freescale obviously has had problems pushing the speed limit with its chips. IBM has done better, but apparently would rather make chips for video games than desktops.

Many people have wondered why–if Apple is switching to x86–they aren’t going with AMD. My own take on this is that Intel execs would rape their own mothers if doing so would take market share away from AMD. I would not be surprised if Intel is practically paying Apple to take its chips rather than have Apple turn to AMD. Supply lines, roadmaps, etc, all seem secondary to this.

I also wonder if Apple is going to use Itaniums (Itania?), and give Intel a way to get rid of some of them–they may be technically great, but have sold poorly because they aren’t x86-based. Since Apple is switching to a new platform, there’s no added penalty in switching to Itaniums (other than optimizing another compiler). Then again, Apple has hinted that people will be able to run Windows on their Macintels, which would mean that Itanium isn’t in the picture.

Technological slumming

Everyone knows that Lego-based movies are a medium for low-budget cinematic self-expression, covering every genre from politics to gay porn (of course, these days, the two aren’t that different). One particularly popular genre is Star Wars fanfilms.

And while there is certainly a big overlap between Star Wars fans and videogame players, and there have been many successful Star Wars-themed games to date, one thing that action games are definitely not known for is their humble production values. So what are we to make of Lego Star Wars: The Video Game? Here’s a game from a major developer that takes all that texture-mapping inverse-kinematic razzmatazz and puts it to work rendering…low-budget Lego animation.

Multiple iTunes libraries, one music folder

What follows is a solution to a problem that has annoyed a lot of people for some time now.

Suppose you are in a household with two Macs. Each person has a copy of iTunes installed. They both want access to the same music directory, but they both want it to be part of their own library.

iTunes already makes it easy to share your music over a LAN, which is nice up to a point, but doesn’t give you much flexibility: you can’t assign star ratings to someone else’s music, make playlists, or load up an iPod with it. What you really want is for all that music to be yours (and all your music to be similarly available to your cohabitant).

Here’s the recipe. I’ll assume you have a LAN set up already.

  1. On each computer, go into System Preferences : Sharing : Services and enable “Remote Apple Events”
  2. Designate one computer as the “music host”; the other will be the “music client.”
  3. On the client, connect to the host and mount the volume on the host that contains the iTunes music folder. Then go into iTunes Preferences : Advanced on the client and set the iTunes music folder to the same folder the host uses (the one on the host’s disk).
  4. In the interest of good file management, you probably want to go into iTunes Preferences : Advanced on the host and enable “Keep iTunes Music folder organized” and “Copy files to iTunes Music folder when adding to library”. However, on the client machine, I think you will need to disable these (otherwise multiple computers will contend over where and how the files should be organized). If the client already has music files stored locally, relocate those files to the host and remove them from the client. Add those tracks to the library of the host computer manually.
  5. Find and remove the files “iTunes Music Library” and “iTunes Music Library.xml” (or create an archive of them) from the folder ~/Music/iTunes on the client machine. Manually add all the tracks on the host machine to the client’s copy of iTunes by dragging them into the iTunes window. For very large collections, you should probably do this in chunks (iTunes seems to get confused otherwise). I added all the artists starting with A at once, then B, etc. Took a while, but it worked.
  6. Now both users have access to the same music directory, can make their own playlists, set their own ratings, load up their own iPod, etc. The problem is that the situation is static–if anyone adds a new track, things get out of sync, and only that user will have access to that track (without additional futzing).
  7. That is where the following mystical-magical script comes in. This was pretty much written by “deeg” (with some nudging from me) in the AppleScript for iTunes forum at iPod Lounge.
    (*=== Properties and Globals===*)
    property theDateofLastSync : "" -- date of last sync
    property theOtherMachine : "" -- ip address of other machine
    
    (*=== Main Run ===*)
    
    if theDateofLastSync is "" then set theDateofLastSync to ((current date) - 1 * days) -- force date of last sync to sometime ago for first run
    if theOtherMachine is "" then
     display dialog "Please enter address of other Mac" default answer "eppc://"
     set theOtherMachine to text returned of the result
    end if
    
    -- chat with other machine
    set GotsomeTracks to true
    try
     with timeout of 30000 seconds
      tell application "iTunes" of machine theOtherMachine
       using terms from application "iTunes"
        activate
        set theListofTracks to location of (file tracks of library playlist 1 whose date added > theDateofLastSync)
       end using terms from
      end tell
     end timeout
    on error
     set GotsomeTracks to false
    end try
    
    -- back to this Machine
    set SyncedOK to false
    if GotsomeTracks then
     set SyncedOK to true
     try
      tell application "iTunes"
       if (count of items of theListofTracks) is greater than 0 then
        repeat with alocation in theListofTracks
         add alocation to library playlist 1
        end repeat
       end if
      end tell
     on error
      set SyncedOK to false
     end try
    end if
    
    -- save sync date if all ok
    
    if SyncedOK then set theDateofLastSync to current date
    
  8. Copy this script and save it as “sync libraries” to the directory ~/Library/iTunes/Scripts (if you don’t already have a Scripts folder there, create it). Relaunch iTunes and it will be available under the Scripts menu. You can now run this script manually on each computer to update its library against the host. Better yet, use a timed macro (or cron job, which you can set up easily with cronnix) to launch the script in the wee hours. This assumes that each computer will be turned on when the script executes.
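If you go the cron route, the entry might look something like the following. This is a sketch: the filename and `.scpt` extension assume you saved the script as described in step 8, and `osascript` is OS X’s standard command-line script runner.

```shell
# Hypothetical crontab entry (edit with CronniX or `crontab -e`):
# run the library sync at 4:15 every morning, while nobody is using iTunes.
# Adjust the path to wherever you actually saved the script.
15 4 * * * osascript "$HOME/Library/iTunes/Scripts/sync libraries.scpt"
```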

Additional notes:

  • Assuming that different computers will have different user accounts, you will need to specify the other user’s username and password in the “please enter the address” dialog that appears when first running the script. The URL format looks like this: eppc://username:password@machinename.local. I’m not sure how to deal with spaces in the computer name (perhaps a backslash \ before the space–my machines all have one-word names). You can change the computer’s name in System Preferences : Sharing.
  • Likewise, it should be possible to sync libraries between two user accounts on a single machine using the above format. This probably requires that both users are always logged in (using Fast User Switching).
  • This script only works for one host and one client. It should be possible to modify it to deal with multiple clients. I will leave that as an exercise for the reader.

Update: With recent versions of iTunes, this is all unnecessary. Although it’s not entirely automated, there is a much simpler way to deal with this problem.

As above, treat one Mac as the host and one as the client. On the client, go into Preferences:Advanced and make sure that “Keep iTunes music folder organized” and “Copy files to iTunes music folder when adding to library” are both unchecked. This is important.

Make sure the host’s disk is mounted on the client mac. Again, in iTunes, select the menu item “File:Add to Library…” and select the music folder on the host disk. This will scan the entire directory and add all the files to the client’s iTunes database. The client’s database will need to be updated whenever new files are added on the host (new files should only be added on the host); to do this, just repeat this process. It takes a few minutes.

Shuffling along

Apple’s release of the iPod Shuffle created a lot of buzz, as would just about anything new from Apple. And it is interesting that Apple would take the interface they developed for the bigger iPods–which is one of the aspects of the iPod that really sets it apart–and rather than try to shrink it down to fit a smaller unit, simply discard it.

It’s interesting as a reflection of the changing way we listen to music. It used to be that we listened to albums, sometimes with the liner notes laid out in front of us, and there were only about six tracks per side to remember before you had to flip the record. Some people would make mix-tapes, but that was fairly arduous. And of course there’s always been the radio. I get the impression (I can’t back this up) that more and more radio is talk, though, and the music programming that remains is increasingly narrow, with two conglomerates pushing uniform formats to radio stations all over the country, and very little variation within those formats. If you want to listen to something different now, you have to listen to something other than radio.

Apple was already partly responsible for changing the way we listen, thanks to iTunes and the iTunes music store. iTunes and programs like it make it trivially easy to rip your music to your hard drive and put together a mix CD, taking individual tracks out of the context of their original albums. Or listen to customized or randomized playlists at your computer (or on your iPod). And the iTunes music store (and other online music vendors) sell tracks individually, so you may never have the whole album to start with (and this has been a point of contention for some artists, who refuse to sell tracks individually). And of course there’s that whole P2P thing, not that I would know anything about that. The existence of collections like Massive Attack’s Singles 90/98, with four or five different mixes of a given song, mocks the idea of listening to an album straight through, and invites shuffling with unrelated tracks.

So I was initially dubious when I saw the iPod shuffle, sans display, but I realized that I already listen to a lot of music from my own collection without being able to identify what it is, so the lack of a screen might not be that big of a deal after all. Right now in my car, I have the CD changer loaded with 6 CDs filled with random stuff from my music collection. I suspect most runners, pedestrians, and people riding public transit or in cars don’t check the screens on existing MP3 players much. What is most interesting about the iPod shuffle is not that it innovates (deleting features isn’t exactly an innovation) but that it is the first to acknowledge reality.

MP3 Sushi

It’s getting pretty common to have all or much of your music on a hard disk. This in theory makes it possible to do all kinds of nifty things with it. One nifty thing is listen to it remotely. It seems obvious: if your computer is online, and your music is on your computer, you should be able to get at your music over the Internet. But how?

If you use a Mac, the answer is simple: MP3 Sushi. This is actually a bundle of open-source Unix tools packaged up with a nice Mac interface. It sets up a music server you can access over the web, with handy features like live downsampling of high-bitrate music, creating m3u streams, etc.

I’ve got a fixed IP number, which makes it a little easier, but there’s a solution for dynamic IP as well.

My music is online, but is hidden behind a password to limit access. Ask me if you want to listen in.

Overglobed

Memo to icon designers: Look. I get it. I’m on the Internet, and the whole world is all interconnected, and my computer is this global information nexus, and it’s cool. Do half my application icons need to remind me of this? My dock looks like a freaking warehouse full of UN flags.

These are all the globe-themed application icons I could find on my hard-drive in 3 minutes. There are a couple representing other planets as a bonus.

Date authentication

In the slow-motion controversy over the gaps in the record of GW’s Texas Air National Guard Duty, the latest wrinkle has been the emergence of some damning documents that some people are concerned might be forgeries. While I’d be delighted to see Bush publicly embarrassed for shirking his military duty, I have to admit that the documents do look suspicious, and if they are forgeries, whoever is responsible is really fucking stupid.

But enough about all that. This got me thinking: today in the electronic world, there are ways to prove that you are the author of a document. But is there a way to prove that you authored the document on a certain date?

Currently, I don’t think there is a verifiable way to do this. But I can imagine a system that would make it possible.

First, we need to review the general ideas behind public-key cryptography (the supporting system of keys and key-servers is often called PKI, for “public-key infrastructure”). Traditional cryptography encrypted a text using a single key, and both sender and recipient had to have copies of this key. Moving the keys around securely was obviously a very serious problem.

PKI solves this. Everybody has two keys: a public key and a private key. The operations of these keys are complementary: a document encrypted with your public key can only be decrypted with your private key. So anybody can look up your public key and encrypt a document so that only you can read it. Conversely, a document encrypted with your private key can only be decrypted with your public key. This allows you to “sign” a document electronically: your public key can be considered well-known, and it is only paired to your private key, so if a document can be decrypted by your public key, that’s evidence that it was encrypted with your private key–either you wrote it or you left your private key lying around for someone to abuse.
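The complementary behavior of the two keys can be demonstrated with a toy RSA sketch. The primes here are tiny and there is no padding, so this is nothing like production crypto, but the symmetry is the real thing:

```python
# A toy RSA keypair. Real keys use ~2048-bit primes and padding schemes;
# these numbers are chosen only to make the complementary property visible.
def make_keys():
    p, q = 61, 53                # two small primes (toy values)
    n = p * q                    # the shared modulus
    phi = (p - 1) * (q - 1)
    e = 17                       # public exponent
    d = pow(e, -1, phi)          # private exponent (modular inverse; Python 3.8+)
    return (e, n), (d, n)

pub, priv = make_keys()

# Encrypt with the public key; only the private key reverses it.
message = 42
cipher = pow(message, pub[0], pub[1])
assert pow(cipher, priv[0], priv[1]) == message

# "Sign" with the private key; anyone holding the public key can check it.
signature = pow(message, priv[0], priv[1])
assert pow(signature, pub[0], pub[1]) == message
```

Swapping which exponent encrypts and which decrypts is all that separates secrecy from signing here.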

Another important concept is the “secure hash.” A secure hash is a relatively short string of gibberish that is generated based on a source text. Each hash is supposed to be unique for each source text. It is trivial to generate the hash from the source text, but effectively impossible to work out what the source text might be based on the hash. Hashes can be used as fingerprints for documents. (Recently, a “collision” was discovered in a hashing algorithm, meaning two source texts resulted in the same hash, but it would still be effectively impossible to work out the source text or texts from any given hash.)
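Python’s standard hashlib module makes the fingerprint idea concrete. (SHA-256 is a later algorithm than the MD5/SHA-1 of this essay’s era, but it behaves the same way for our purposes; the sample text is arbitrary.)

```python
import hashlib

# A secure hash is a fixed-size fingerprint: easy to compute from the text,
# effectively impossible to invert.
doc = b"the quick brown fox jumps over the lazy dog"
fingerprint = hashlib.sha256(doc).hexdigest()
print(len(fingerprint))  # -> 64 hex characters, whatever the input size

# Changing even one byte of the source produces an unrelated fingerprint.
altered = hashlib.sha256(doc + b"!").hexdigest()
assert fingerprint != altered
```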

Now, PKI is fine for authenticating authorship, but doesn’t authenticate date of authorship. Not without some help.

PKI relies on key-servers that allow you to look up the public keys of other crypto users. Imagine if we set up trusted date-servers to authenticate that a document was actually written when we claim it was written. It might work something like this: an author wishing to attach a verifiable date of authorship to a document sends a hash of that document to a trusted date-server. The date-server appends the current time and date to the hash, encrypts the result under its own private key, and sends it back as a “dateprint”. The author can then append the dateprint to the original document. If anyone ever doubts that the document was authored on the claimed date, they can decrypt the dateprint using the date-server’s public key; this will give them the claimed date and the document hash. The skeptic then takes a hash of the current document and compares it to the hash contained in the dateprint: if they match, then the current document is identical to the one submitted for dateprinting.
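Here is a minimal sketch of that exchange. A real date-server would sign with its private key; an HMAC under a server-held secret stands in for that signature here, just to show the message flow, and all names, values, and formats are invented for illustration:

```python
import hashlib
import hmac

# Stand-in for the date-server's private key (illustration only).
SERVER_SECRET = b"date-server-private-key-stand-in"

def dateprint(doc_hash: str, when: str) -> str:
    # Server side: bind the submitted document hash to a timestamp.
    payload = (when + "|" + doc_hash).encode()
    tag = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return when + "|" + doc_hash + "|" + tag

def verify(document: bytes, stamp: str) -> bool:
    # Skeptic side: check the server's "signature", then re-hash the
    # document and compare it to the hash the server vouched for.
    when, claimed_hash, tag = stamp.split("|")
    payload = (when + "|" + claimed_hash).encode()
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and \
        hashlib.sha256(document).hexdigest() == claimed_hash

doc = b"Memorandum, 1st August 1972"
stamp = dateprint(hashlib.sha256(doc).hexdigest(), "1972-08-01")
assert verify(doc, stamp)             # unchanged document checks out
assert not verify(doc + b"!", stamp)  # any alteration breaks the match
```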

The new iMac

Everyone else is talking about it, so why not me?

There are two categories of reactions to Apple products: emotional and rational. Most technology companies don’t evoke much of an emotional reaction, and when they do, I suspect it’s more often negative than otherwise. But Apple’s got the kavorka. You can look at the spec sheets and form a reasoned opinion of their machines, but before you do that, you have to get through the visceral response.

My gut reaction to the new iMac was mild disappointment. Don’t get me wrong–in the grand scheme of things, I like it. But the fact that so many people did such a good job of predicting what the new machine would look like suggests a lack of inspiration at Apple. The new design is clean, uses almost no desk space, and probably will prove to have a host of merits once people start getting them on their desks. But it doesn’t wow me the way its “iLuxo” predecessor did: that machine, although the base did look a little clunky, had an innovative, unexpected design. Another surprising disappointment about the new iMac is that it is plainly a step backwards in terms of ergonomics: the iLuxo’s screen could be moved in three degrees of freedom; the new, in one (two if you put it on a lazy susan). This may have been a cost-cutting move (those swingarms must have been expensive). Apple may have discovered that most people didn’t really take advantage of all that adjustability, and chose to invest in other features. I wonder.

I’d been planning on making my next Mac a powerbook, but I could see using this iMac instead. Which brings me to my other point: the rational side. It’s interesting looking at the tradeoffs Apple made in speccing this machine, to reach a price point and/or to avoid cannibalizing sales from other machines. In many ways, the iMac seems to be best compared to the 17″ Powerbook in terms of value for money. They both have the same screen, which accounts for a disproportionate amount of their price. Here’s a quick comparison of some major features for the base 17″ iMac and the 17″ Powerbook (the better spec marked with an asterisk):

                 iMac                     Powerbook
  CPU            1.6 GHz G5 *             1.5 GHz G4
  Ethernet       10/100                   10/100/1000 *
  Firewire       400                      800 *
  Video          NVIDIA GeForce FX 5200   ATI Mobility Radeon 9700 *
  Video out      Analog, mirror           Digital, 2nd display *
  Portability    OK                       Good *
  Bluetooth      Optional                 Standard *
  Wifi           Optional                 Standard *
  Optical drive  Combo drive              SuperDrive *
  Price          US$1300 *                US$2900

Updating the iMac to add Bluetooth, Wifi, and a SuperDrive gets it up to about $1600, still a lot less than the powerbook. Apple is charging a huge premium for portability (which is kind of weird, because the iBook is a pretty good deal) and a few geeky features. The G5 chip itself probably could command a premium for its performance benefit, but in reality is cheaper than the G4 (though the supporting circuitry may not be). This suggests to me that Apple’s pricing on the 17″ Powerbook is out of line.

More on music storage

Some time ago, I wrote an essay on music storage options (mostly on how bad they are).

We’re at a point today where even a big music library–say, 1,000 CDs–can be easily archived on a single hard drive using high-quality MP3s–say, 192 Kbps encoding. Some people claim this encoding rate is indistinguishable from CDs; others claim it’s barely adequate for listening. Whatever. It sounds good to me. In any case, at this rate, one hour of music is encoded as about 83 MB, meaning that 1,000 CDs (which are usually somewhat under an hour) will fit onto the 160 GB hard drives that are now available (as bare mechanisms) for under $100, with plenty of room to spare.

Purists will argue that lossy encoding is a bad compromise. We don’t need to use lossy encoding–a lossless format called Shorten has been around for years, and Apple’s iTunes now comes with something called “Apple Lossless Encoding.” These can shrink a CD’s data down to a little less than half its original size, meaning about 250 MB for one hour of music. The fact that ALE is built into iTunes means you have a nice interface for dealing with these tracks (as opposed to the more arcane software required to deal with Shorten files), making lossless encoding a practical option. I have no idea if there are converters that recode ALE as Shorten to avoid lock-in.

Anyhow, at that rate, it would take three 160-GB hard drives (and some kind of enclosure) to store a 1,000-CD music collection, but assuming Moore’s Law holds, in a few years, we’ll be back at the $100 mark.
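A quick back-of-the-envelope check of these figures (treating a megabyte as 2^20 bytes; with decimal megabytes the numbers come out a few percent higher):

```python
# Sanity-checking the storage estimates above.
kbps = 192
bytes_per_hour = kbps * 1000 / 8 * 3600   # 192 kilobits/s -> bytes per hour
mb_per_hour = bytes_per_hour / 2**20
print(round(mb_per_hour))                 # -> 82 (the "about 83 MB" above)

cds = 1000                                # call each CD an hour; usually less
mp3_gb = cds * mb_per_hour / 1024         # fits a 160 GB drive with room to spare
lossless_gb = cds * 250 / 1024            # ~250 MB/hour for lossless encoding
print(round(mp3_gb), round(lossless_gb))  # -> 80 244
```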

Smaller MP3s still have their uses, though: If you have an in-car MP3 player that reads MP3 CDs, you’ll still need to recode your lossless files to MP3 in order to take advantage of it. If you have a portable MP3 player for jogging, likewise (though if you splash out on an iPod, you won’t need to bother).

Simultaneous invention

Liebniz and Newton invented the calculus at roughly the same time.

Alexander Graham Bell and Elisha Gray both invented the telephone at the same time, and filed with the patent office a few hours apart.

While it takes smart and insightful people to make these things happen, inventions are also the product of their time, and of other trends that are more or less well-known. Often the invention is a matter of recombining existing technologies in a novel way.

Apple’s recent demo of Tiger got a lot of people thinking “gosh, Dashboard looks an awful lot like Konfabulator. Apple must have ripped off those Konfabulator guys.”

This created a stir on Mac sites, with some claiming it’s a ripoff, some suggesting that the gracious thing for Apple to do would at least be to compensate the Konfab people for pulling the rug out from underneath them, and others pointing out that in fact, there was plenty of prior art to Konfabulator. John Gruber, astute as usual, pointed out that Apple in fact is not ripping off much of anything.

What do all these things do? They give you a simple way to script mini-applications–both Dashboard and Konfab use JavaScript–and a method for skinning them. Konfabulator uses a somewhat unfriendly XML format; Dashboard uses straight HTML/CSS, but the two are pretty similar.

Here’s the thing: neither one is even a little bit original. Mozilla uses this idea already: it has a markup language called XUL for painting the browser’s “chrome,” and Microsoft is working on its own version of this, XAML. These use markup to describe application interfaces and JavaScript to handle user interactions. Gosh, that sounds familiar. In fact, when MS announced XAML, there was some hand-wringing over how it was ripping off XUL.

Ideas like skinning, making scripting more accessible to more people, using standard markup languages to generate interfaces, etc, are ideas whose times have all come in the computing world. Lightbulbs lit up over a lot of people’s heads, and they combined these ideas in similar ways. Konfabulator clearly beat Dashboard to market (though I find the product all but unusable), but it is original only in the sense that its creators had the idea on their own (if in fact that is so), not in the sense that its creators are the only ones to have the idea.

Perils of porting

Friday night, I got together with some of my blogger friends for dinner at the Mongolian BBQ downtown. Don had just ported from Sprint to T-mobile and gotten the same phone as me, partly at least on my recommendation.

While we’re sitting there, I get a call from someone claiming that someone else had just called him from my number. It was a bad connection, so I didn’t hear everything he said, but it was very odd. Then it happens again. And again. Five times in about twenty minutes. Don is wondering if I’ve given him a bum steer.

About an hour later, I get another call. The caller ID shows as “unknown.” It turns out to be a Sprint operator, who asks for me (for a change). She explains that Sprint had just given out my old number (which was, and is, still my number) to a new subscriber, and that she was taking care of the problem.

Clearly, another bump on the road to seamless number portability.

iPodlet redux

I wrote before that we’d see interesting things come of these matchbox-sized hard drives, now that they’ve got pretty serious storage capacity. And now we have. Apple doesn’t call it the iPodlet, though.

At the risk of sounding churlish, I’m a bit disappointed in how big it is. It’s the size of a business card. OK, that’s churlish. But I really thought they could get a microdrive-based iPod down to about half that size.

Later: There’s been a shitload of virtual ink spilled over this thing. My thoughts:

The iPod mini is probably using the Hitachi 4 GB Microdrive. Hitachi also makes 1 GB and 2 GB units. I am guessing that after Apple fleeces the early adopters, they’ll contemplate bringing out downmarket versions for a little less. Either that or the Microdrive capacities will ratchet up, and the 4 GB model will itself become the downmarket version. People condemn the mini for its capacity, but seem to forget that the original iPod had only 5 GB capacity.

People bitch about the price. Considering that a bare 4 GB Microdrive retails for about $500, I think it’s a steal.

People bitch about the capacity. I suppose that if I wanted to use my iPod as my primary storage for MP3s, I would too. But that’s what my desktop computer’s hard drive is for–I wouldn’t need to have all my music on my iPod, and Apple has done a lot of work to make it easy to move MP3s between the computer and iPod, to generate random playlists, and generally to keep the iPod full of whatever music you want. There are two limiting factors on how much music you can play on any portable player: the memory capacity (coupled with your tolerance for listening to the same thing repeatedly, I suppose) and the battery capacity. It’s a happy non-coincidence that the same device you use to charge the iPod is what you use to transfer music to it. Having 100 hours of music on a portable player is overkill if you’ve got 8 hours of playtime on the battery. I might want more than 8 hours of music so that I don’t need to pre-select exactly what I will want to hear, but I don’t think I’d need 12.5x more to satisfy my desire for variety. I get about 48 hours of music into 4 GB–or 36 hours plus my entire home directory–and I rip my music at a higher bitrate than the iTunes default. A music-time to battery-time ratio of at least 3:1 and as much as 5:1 sounds about right.
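For what it’s worth, the arithmetic backs this up. Here are the hours of music that fit in 4 GB at a few common MP3 bitrates (the ~48-hour figure above corresponds to a bitrate a bit above 192 kbps):

```python
# Hours of music that fit on the 4 GB iPod mini at common MP3 bitrates.
GB = 4 * 2**30                        # 4 GB, counted in binary
for kbps in (128, 160, 192):
    bytes_per_second = kbps * 1000 / 8
    hours = GB / bytes_per_second / 3600
    print(kbps, round(hours))         # -> 128 75 / 160 60 / 192 50
```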

People suggest that it’s foolish to buy a 4 GB unit when one can buy a 15 GB unit that’s $50 more. If I were getting an iPod, I’d get the mini. I don’t need a 15 GB player. I do need portability, and the original design of the iPod, clever though it is, just isn’t as small as I’d want.

People bitch about the design. That’s a matter of taste, and de gustibus non est disputandum. I admit to being a little disappointed in the dimensions, and unthrilled by the styling myself. But I still look forward to playing with one.

Still later: Now I read about Toshiba’s 0.85″ drive. Obviously Apple wouldn’t have had time to engineer the new mini around this, but it suggests we could see even smaller iPods, or that the mini in its current form will get a capacity boost sooner rather than later.


Also interesting is GarageBand. I watched The Keynote, and Jobs made a couple of wry references to file-sharing. In the back of my mind, I mused that with GarageBand, he might be taking the ultimate end-run around the RIAA. Surely this has nothing to do with Apple’s decision to develop and market this program. Surely not.

But it’s fun to contemplate.

Ten years on the web

Macworld San Francisco begins today. I am sure there will be some interesting announcements that send the Mac cognoscenti a-nattering. But for me, it’s an occasion to think back.

I attended Macworld SF in 1994, staying with a friend from my days in Japan, Robin Nakamura, who attended as well and was also a bit of a Mac geek. It was fun. The big thing was CD-based entertainment, like The Journeyman Project. The hottest Mac you could buy was a Quadra 840av, and I remember watching a demo of an amazing image-editing app called Live Picture, which looked set to beat the pants off of Photoshop at the time.

On the plane ride back, I was reading a copy of Macweek that had been handed out at the show, and got to talking with a guy in a nearby seat, Greg Hiner. Turns out he worked at UT developing electronic course material; he invited me to drop by his office to check out this new thing on the Internet called the World Wide Web. I had an Internet account at that time, and was acquainted with FTP, Gopher, and WAIS, but hadn’t heard of this Web thing.

So a few days later, I stopped by his office, and we huddled around his screen as he launched Mosaic. It immediately took us to what was the default home page at the time, on a server at CERN, in Switzerland. I noticed the “.ch” address of the server in the status bar and said excitedly, “we’re going to Switzerland!” A gray page with formatted text and some pictures loaded. This was cool. This was not anonymous, monospaced text, like you get with Gopher. He clicked on some blue text that took us to Harvard, I think, and I commented “now Boston!” This was exciting. This was big, and I knew it was going to be really, really big.

I’ve still got a few of the earliest e-mails we exchanged, in which we traded links, and I am tickled to see that (at least through redirects) some of those sites are still live (see: mkzdk, John Jacobsen Artworks).

I quickly figured out how to write HTML and put up a web page to serve as a resource for my fellow Japanese-English translators, who I knew would want to latch onto this Web thing and just needed something to help them get started (ironically, the page is too old to be included at the Internet Archive).

And here we are today. I am writing this in a program that runs on my computer, and communicates over a (relatively) high-speed connection with a program that runs on my server to create and manage web pages. Many of my friends do the same, and I’ve made new friends just because of this simple activity. The boundary between one computer and another, between my hard drive and the Internet, is, if not blurry, at least somewhat arbitrary. I’m watching Steve Jobs’ Macworld keynote in a window in the background as I type. Things have changed a lot. And I feel like we’ve barely gotten started.

More toys!

As if getting a new (and amazingly gadgety) cellphone weren’t enough, I just upgraded to OS X 10.3. So far so good: despite doing a wipe-and-install, with manual restoration of preferences, things have gone pretty smoothly (the one unaccountable and annoying problem is the complete loss of my NetNewsWire Lite subscription list).

This is a big upgrade: Apple was modest in adding .1 to the version number (as they were with 10.2). My mac is much more responsive now, the interface, for the most part, has been subtly improved, and there are a lot of obvious new “bullet-point” features that really are useful. It’s encouraging to see that Apple has not been resting on its laurels after the success of the 10.2 release–that was a pretty good OS, and they could have gotten away with tweaks for this one, but it’s clear that either they had a lot of stuff in the pipeline already that couldn’t be vetted in time for 10.2, or they still see OS X as an unfinished work (which is true of all software). It’s also interesting to see how much headroom is apparently left for system optimizations, and I wonder what we’re in for when they eventually release XI.

I have some beefs–I’m not sold on the metallic Finder, or the sidebar (which is resizable, but the resizing apparently doesn’t take, and which can’t be manipulated from the keyboard like other columns, as far as I can tell). And so on. But it’s good.

Phone phun

More on the saga of phone switching.

Two days ago, I received my Belkin Bluetooth dongle. Plugged it in and my Mac instantly recognized it.

Yesterday, I got the Sony Ericsson (I always forget whether to double the C or the S) T610 and Jabra headset. Reactions:

  • The phone has very nice industrial design. The buttons are small, but spaced so that I haven’t really had any fat-finger problems, and I really appreciate that they’re laid out as a keypad, not in the amorphous formations Nokia has favored of late. The joystick doesn’t always respond predictably to the “push in” action but is a nice idea. Screen is ok. Color screens on cellphones create more problems than they solve, but it does look pretty. Some have complained that it washes out badly in sunlight–this is a little bit of a problem, but tolerable. The phone’s overall size is a little taller than my old phone when folded, but as slim as the old phone when unfolded.
  • This is my first candybar phone. The previous one was a flip phone, and the one before that was a sort of hybrid that was a candybar–or rather brick–shape with a flap over the keys (this remains my favorite phone shape). This is the first phone I’ve had where I need to worry about accidental key activation–it’s already been making calls without me realizing it. This can be prevented using the key lock feature, but another problem cannot be: touching a key activates the screen; if the keys are constantly being pressed, the screen is always on, and with a color phone, that means the battery gets run down very quickly. The key lock should really be a dedicated slider, rather than a combination of regular keypresses. Time to get some kind of holster. One odd quirk is that the numeric keys can get hooked under the faceplate–when this happens, the soft keys and joystick stop working. Very frustrating and mystifying until I noticed the wayward key.
  • I was surprised that this phone doesn’t auto-discover the time.
  • As you can see, the camera on the phone takes amazingly shitty pictures. And that picture was taken in “high-quality” mode.
    [Image: sample photo from phone, depicting wacky Chinese space-babies]
  • Bluetooth is a hoot. I love it. It’s a little fussy getting two devices to recognize one another, but from a security standpoint, that’s as it should be. Moving all my contacts from my address book on my Mac to the phone proceeded smoothly–everything is properly tagged, though I would have preferred that the “company” field be ignored. I suppose there must be a way to hack that… Apart from contacts, files can be moved back and forth between phone and computer via a little file browser. This is OK, but really, the phone should appear on my desktop like just another device. Using the Salling Clicker is great fun, in an incredibly nerdy way.
  • After following these very helpful instructions, I succeeded in connecting my Mac to the Internet via the phone. Slow, but usable. This is pretty nifty. Some bandwidth tests: I found a bandwidth-testing WAP page that works in the phone’s WAP browser: a pathetic 1.43 Kbps. (I have actually owned 300-baud and 2400-bps modems. Funny how these things come around.) Using the phone as a modem, and loading the 2wire bandwidth page, I get a more respectable 31.4 Kbps. By way of comparison, that page shows my DSL connection as yielding 1596.2 Kbps. Incidentally, this is much higher than DSL’s nominal 384 Kbps, but roughly in line with similar tests. The more informative Speakeasy tests show the following:
            GPRS       DSL
    Up      9 Kbps     214 Kbps
    Down    26 Kbps    1200 Kbps
  • The phone’s voice dialing works just well enough to be frustrating.
  • Sound quality seems OK. I can’t really comment on reception: there’s a T-Mobile tower within rock-throwing distance of my home, so I always get 4 bars here, but at Gwen’s, I rarely get even two bars, as her neighborhood is poorly served by T-Mobile (you hear that, guys?).
  • My speech coming through the Jabra headset sounds poor, but incoming sound is fine. The headset doesn’t feel very secure on my head, and apparently will not pair with my Mac, but it works. The phone comes with a wired headset that’s also OK.
  • The phone has a huge array of bells and whistles–both literally and figuratively. There are scads of annoying ringtones, and if you don’t like those, you can import more, or even compose them on a little in-phone music sequencer. No kidding: it has a four-track display (drums, guitar, keyboard, horns) with 32 canned snippets for each; you lay down one snippet per measure for each, and keep building up measures until you’ve got a song. I’m pretty sure Moby has traded in his studio for this phone.
  • Apart from that, this phone is complicated enough that you really need the manual. I couldn’t figure out how to put the phone into vibrate mode without navigating through four layers of menus; it turns out that once you change one setting, you can hold down the C key whenever you want to switch to vibrate mode. The phone is also set up to encourage you to use its Internet connectivity more than you might expect–it has a dedicated Internet button on the side, and several menu options put Internet-based content higher up than content inside the phone. This strikes me as a bit cheesy, but I can live with it.
  • One interesting tool for managing the complexity of this phone is a feature called “profiles” (there’s a similar feature on the Mac called “Location,” which would obviously be a problematic name if applied to a mobile phone). A profile is a group of settings for use in different situations–at home, in your car, walking around, at the office, etc. Switching profiles changes a bunch of settings all at once. Good idea, poor execution. How?
    • The phone comes with several canned profiles; to change one, you select the profile as your working profile, and then edit everything. This makes it harder to reuse your existing settings and modify them–much better would be an option to save the current settings as a new profile.
    • Although many features can be subsumed under a profile, there are some that cannot–for example, the key lock, which is handy when out and about, but useless at home.
    • Profile switching is mostly a manual affair. The phone does come with a headset, and it automatically switches to a handsfree profile when the headset is plugged in (but apparently not with the Bluetooth headset), so clearly there’s some ability to switch automatically. This approach should be extended: I’d like the phone to go into “at home” mode when it is within reach of my computer (as discoverable through Bluetooth), or perhaps when plugged in. This idea could be taken a step further by placing (or discovering) “Bluetooth buttons” at other locations one regularly visits, so I could have one in my car, one at my coffee shop, etc. A Bluetooth button needn’t be more than a transponder that identifies itself with a name and perhaps a GPS position.
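
As a sketch of how that automatic switching might work, here’s a hypothetical profile chooser keyed on the names of discoverable Bluetooth devices. All device and profile names here are invented for illustration:

```python
# Hypothetical sketch: pick a phone profile based on which Bluetooth devices
# are currently discoverable. All names below are invented for illustration.

# Known Bluetooth "beacons" and the profile each one implies.
BEACON_PROFILES = {
    "my-powerbook": "At home",
    "car-transponder": "In car",
    "coffeeshop-button": "Out and about",
}

def choose_profile(discovered, default="Normal"):
    """Return the profile implied by the first recognized beacon."""
    for name in discovered:
        if name in BEACON_PROFILES:
            return BEACON_PROFILES[name]
    return default

# Within reach of my computer, the phone should go into "at home" mode:
print(choose_profile(["some-headset", "my-powerbook"]))  # At home
print(choose_profile(["mystery-device"]))                # Normal
```

The phone would run a scan every minute or so and switch profiles only when the result changes; the interesting design question is how to break ties when several beacons are in range at once.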

My old number has not been ported to the new phone yet, but I have initiated the process. I wound up speaking with four different operators yesterday, each of whom told me I needed to talk to a different department (except for the last one), and each of whom encouraged me to bring my phone and an old Sprint bill in to a local T-Mobile office in person (including the last one, but I insisted on doing it over the phone, so she relented). By the way, “port” seems to be the magic word–anyone who is transferring service from one carrier to another will save a couple minutes by using that word rather than “transfer,” etc. But the new phone does work, and is providing me with much amusement.

So, what would make the phone better? Better reception. A camera that’s actually worth using. A memory-card slot. Perhaps an MP3 player (though that would probably be politically unpopular at Sony). A slider to control key-locking, and making ring volume and silent ring part of the volume controls (why they are not is a mystery). The UI could do with a few tweaks.

I’ll update this entry as news develops.

Bitrot

Simson Garfinkel writes about bitrot, saying, in so many words, that it won’t be that big of a problem. Jeremy Hedley warns, Cassandra-like, that for individuals, it might be pretty bad indeed.

I side with Garfinkel.

Like pretty much everyone who has been using a computer for more than 15 minutes, I’ve lost data. The problem of bitrot is one that is pretty widely recognized by now, even if we’re not sure exactly how best to guard against it. This awareness in itself is probably going to help minimize the problem: we may look back on the period from, say, the fifties to the nineties as an anomaly when we didn’t routinely plan on making data available to our future selves.

Bitrot is a three-layered problem:

The physical layer
If you can’t read a floppy, or whatever physical medium you’re using, you are sunk. This really breaks down into a couple of sub-layers: the media itself may have degraded (all media have a lifespan before they start losing data; for some, like floppies, it’s pretty short); or the drive may require a connector and/or software drivers that no device you still have access to supports.
The data layer
Fine, so by some chance your floppy is still good, but back in 1993 you were using MS Works 2 to store your business data, and there aren’t any programs that can read those files.
The cultural layer
This ties in with the data layer–some formats will almost certainly be well supported in the future, at least to the extent that format translators will exist to convert Ye Olde Data Phyle into the sleek and modern DataFile 3000. This comes down to how popular a format is/was, and whether it is clearly and publicly specified. The file format used by Word 2000, for example, is not publicly specified but is so widely used that a number of programmers have done pretty good jobs of reverse-engineering it. The PDF and RTF formats are publicly specified and very widely used. But MS Works 2? Nope.

So what can we do to avoid the heartbreak of bitrot in our own lives? A few things.

Back up
This should be obvious. My own backup strategy is to back up my home folder to an external hard drive daily, and to a magneto-optical disk (estimated to have 50-year data integrity) weekly.
Save files in publicly specified formats
As I wrote to a friend recently, “every time I save a file in Word format, I’m afraid I’m doing something that will come back to haunt me.” From now on, I’m saving my work as RTF. Plain text would be better, but RTF strikes a balance between preserving formatting and universality.
Move forward
This does not mean jumping on the bleeding edge and buying every gadget that comes along. It means recognizing when a physical or data format is on the way out, finding a safe successor, and moving to that. As long as you’ve got data you can read on a hard drive that works with your computer, and a backup you can read somewhere else, you should be in the clear indefinitely. Eventually we will see net-based storage that is convenient and affordable (we’re not quite there yet), and at that point, we won’t have any excuse for failures at the physical layer.
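
As a minimal sketch of the first rule, here’s what a dated-snapshot backup might look like. The paths and the `snapshot` function are hypothetical, and a real setup would use an incremental tool like rsync rather than a full copy:

```python
# Minimal sketch of the "back up" rule: copy a source folder to a dated
# snapshot under a backup volume. Paths and the function name are
# hypothetical; a real setup would point at your home folder and an
# external drive, and would copy incrementally (e.g. with rsync).
import shutil
from datetime import date
from pathlib import Path

def snapshot(source, backup_root):
    """Copy `source` into a dated subfolder of `backup_root`; return its path."""
    dest = Path(backup_root) / f"snapshot-{date.today().isoformat()}"
    shutil.copytree(source, dest)  # fails if today's snapshot already exists
    return dest
```

In practice you’d run something like this from a scheduler (cron, on a Mac of this vintage), and keep the slower, longer-lived copy–the magneto-optical disk mentioned above–off the machine entirely.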

The iPodlet

When the iPod was new, it was a breakthrough product. It wasn’t the first MP3 player, nor the first MP3 player based on a hard drive, but it managed to find a sweet spot in terms of storage capacity and physical size that no previous product did. This was mostly because of its 1.8″ hard-drive mechanism, which only became available at about the same time as the iPod itself, and partly because of some good industrial design by Apple.

The first iPods had 5 GB of capacity–probably nowhere near enough to contain the entire collection of a music buff, but probably enough for 50-100 CDs-worth of music. Plenty for a road trip.

Today the smallest iPod is 10 GB, and the largest is 40 GB. I’ve got over 500 CDs, and I could fit my entire collection on a 40-GB iPod with plenty of room to spare. This makes the iPod something fundamentally different: when I can put all my music, all my digital pictures (about 500 MB), and my entire home directory (about 1 GB, including everything I’ve written on my computer for the past 13 years, and a lot of old e-mail) on it, the iPod can be a primary repository for all my personal stuff, rather than a very capacious place to carry around music and maybe some other files temporarily. Can be, but perhaps shouldn’t be–the whole idea behind the iPod is that it is more portable than other hard-drive MP3 players. Meaning you’ll carry it around. Meaning you might lose it, or at least leave it lying around where someone could copy personal data off it (and thanks to that FireWire port, it wouldn’t take long). Encryption would be one obvious step to take.

But just as the iPod has graduated to being something else, something else could graduate to be the iPod. Microdrives–tiny 1″ hard drives–maxed out at 340 MB when they were introduced. Just like all other hard drives, though, they store a lot more now, and they’re available in 4 GB and even larger today–the original iPod’s territory, but a lot smaller. The difference between 1″ and 1.8″ may not sound like much, but it’s the difference between a matchbox and half a sandwich.

A microdrive-based MP3 player might be wearable as a chunky wristwatch. Or be embedded into a set of headphones. Or hung around the neck as a high-tech pendant. I’d be more interested in a gadget that can effectively disappear than one I need to consciously carry around. I’m looking forward to seeing interesting things happen with these 1″ mechanisms.

Chilly processor units

Anyone who has used a laptop atop a lap is intimately familiar with the heat that a modern CPU can generate. Every watt of that heat is wasted.

Talking with Dave earlier, he mentioned that he had converted one of his PCs to liquid cooling, silencing at least some of the fans that had made the thing sound like a damned airplane. He explained how the cooling system used an aquarium pump to circulate water; I hypothesized that the pump was probably redundant–the CPU itself probably put enough energy into the system for natural convection to circulate the water adequately, as long as there’s a one-way valve in the plumbing somewhere. He was skeptical.
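
A quick back-of-envelope calculation suggests the pump really might be overkill: only a trickle of water is needed to carry away a CPU’s heat. This sketch assumes, purely for illustration, a 60 W CPU and a 10 K temperature rise across the water block:

```python
# Rough arithmetic behind the convection hypothesis: how much water flow
# does it take to absorb a CPU's waste heat? Numbers are illustrative.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

def required_flow(cpu_watts, delta_t_kelvin):
    """Mass flow of water (kg/s) needed to absorb cpu_watts while the
    water warms by delta_t_kelvin as it passes over the chip."""
    return cpu_watts / (SPECIFIC_HEAT_WATER * delta_t_kelvin)

# A 60 W CPU, water warming 10 K across the block:
flow = required_flow(60, 10)
print(f"{flow * 1000:.2f} g/s")  # 1.43 g/s -- a trickle
```

Whether natural convection can actually move even that trickle depends on the plumbing geometry, but the required flow rate is at least encouragingly small.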

Anyhow, if it hasn’t been done already, it would make a good project for a casemodder. But after thinking about it a bit, I realized this idea had a lot of potential. Take it a step further: rather than using energy to cool the system, actively scavenge the processor’s waste heat. I can imagine a couple ways to do this:

  1. Install a water turbine in the radiator. This could drive a generator to produce a little juice, or be mechanically coupled to a fan. This would be quite elegant: the computer would become a homeostatic system that cooled itself down as a natural consequence of heating up.
  2. Install a Stirling engine in the case. Again, this could be coupled to a fan or a generator.

Imagine the steampunk/geek-cred you’d earn by having a functioning Stirling engine installed in your case.

Mice

When I got my current computer, a Power Mac G4, I started using the mouse that came with it, a featureless polycarbonate ovoid. It’s a beautiful mouse–nice heft, nice materials, nice shape. But a one-button mouse. These days, many mice have four buttons plus a scroll-wheel that can also be clicked, and OS X is built to recognize two buttons plus a clickable scroll-wheel with no extra software required (unless you want to remap the button functions).

Perhaps out of exasperation with my button-deficient mousing lifestyle, my über-geek friend Drew just flat-out gave me a new-in-box Microsoft mouse with two buttons and a clickable scroll-wheel that he happened to have lying around (this isn’t that surprising–as near as I can tell, he’s got 8 computers in varying levels of service). So I plugged it in this morning and have been using it.

My findings: In terms of shape, materials, and build-quality, Apple wins hands-down. As to the extra controls, I’m actually a bit ambivalent. The scroll-wheel is a big advantage, no question. But it clicks along in steps, not smoothly, which is actually jarring when following a screenful of text. The second button is also handy (though not as frequently so), but it forces me to pay more attention to what I’m doing. And despite that, having two extra buttons doesn’t quite feel like enough. With a one-button mouse setup, I use the keyboard more to make up for the mouse’s deficiencies. If I’m going to really use the mouse, I need to be able to use it more. Chorded input would give me 7 possible clicks from three buttons, but I’m not sure if that can be accomplished, and I am loath to install Microsoft’s bloated (4.9 MB) mouse software to find out. A mouse with more physical buttons would do the trick, though perhaps at the expense of a lot more mistaken clicks. There are other mice out there that offer multiple buttons with better ergonomics, and I might break down and try one out.
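
For what it’s worth, the chording arithmetic checks out: three buttons can be pressed in 2³ − 1 = 7 distinct non-empty combinations. A quick enumeration (the button names are just labels):

```python
# Enumerate the chords available from three mouse buttons: every non-empty
# subset of buttons is a distinct "click," giving 2**3 - 1 = 7 in all.
from itertools import combinations

BUTTONS = ["left", "right", "wheel"]

chords = [
    combo
    for size in range(1, len(BUTTONS) + 1)
    for combo in combinations(BUTTONS, size)
]
print(len(chords))  # 7
```

Three single clicks, three two-button chords, and one three-button chord. Whether fingers can reliably hit the chords is, of course, a separate question.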

Necesito más trabajo, Mister Roboto

A discussion on metafilter led me to a Sony robot demo. It’s quite uncanny to watch the robot, winsomely called Qrio, moving around, righting itself from a fall, or waving hello. As nifty as Sony’s Aibo is, this makes it seem like a Furby by comparison. Or perhaps even a Weeble.

But what’s with Japan’s fascination with robots, especially anthropomorphic ones? One might snarkily cite the fact that the guys designing these robots grew up with Tetsuwan Atom and Japan’s other robot-heroes, but what made those characters so popular in the first place? I don’t know.

Japan’s big electronic companies trot out the problem of the country’s rapidly graying society as something that robots can solve–the idea being that robots will do all the scut work for society, especially looking after incontinent oldsters. This seems like the most complex solution possible in search of a problem. Relaxing immigration policies would be the blindingly obvious solution, except that Japan’s isolationism is even stronger than most countries’.

Beyond that, though, the economics of a robotic workforce make my mind reel. Robots wouldn’t be cheap to purchase or maintain. In a society with a high proportion of old people receiving government assistance, those seniors would probably be hard-pressed to pay for these robots out of their own pockets. So would the government: the tax burden would fall harder on the dwindling (and perhaps resentful) younger population. And if we assume that there are W_N man-hours of work to be done by society in general per year, with robots doing some amount W_R, and humans doing the rest (W_L), such that 0 < W_R < W_N, then the higher the value of W_R, the lower the government’s tax revenues (I assume robots would not be paying taxes). In short, the government would literally need more warm bodies to tax to pay for all those cold bodies.
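
The man-hours argument above can be put as a toy model, assuming (purely for illustration) a flat tax levied on human labor only:

```python
# Toy model of the argument above: society needs a fixed amount of work done,
# robots do part of it, humans do the rest, and only human labor is taxed.
# The 30% rate is an arbitrary illustration.

def tax_revenue(w_n, w_r, rate=0.3):
    """Revenue from taxing only the human share of w_n man-hours of work."""
    assert 0 < w_r < w_n, "robots do some, but not all, of the work"
    w_l = w_n - w_r  # human labor
    return rate * w_l

# The more work robots absorb, the less labor there is to tax:
print(tax_revenue(100, 20))  # 24.0
print(tax_revenue(100, 60))  # 12.0
```

Revenue falls linearly as the robots’ share rises, while the cost of buying and maintaining the robots presumably climbs with it: a squeeze from both sides.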

In a country with full employment, the economics must be different–but with a large fraction of the population on the dole, the economics get all screwy. Ironic that the root of the word “robot” is in the Czech word for drudge-work.