
Pulldock

The “unofficial Apple weblog” had a post calling on readers to submit their wishlists for future iPhone OS features, which got me thinking.

Multitasking is an obvious shortcoming of the iPhone right now. It is technically possible: some of Apple’s own apps run in the background, and there are jailbreak apps that let third-party apps run in the background and let the user switch between running apps. But Apple does not allow App Store apps to run in the background at all, presumably because of performance and battery-life problems.

I believe that multitasking on the iPhone can be broken down into two functional categories: apps that you want to run persistently in the background, and what I’m calling “interruptors”: brief tasks that only take a few seconds to complete, and where you don’t want to break out of your current app. I’m concerning myself with the latter case here.

A jailbreak Twitter app, qTweeter, has the kernel of an approach to presenting these interruptors: it pulls down from the top of the screen like a windowshade, and is accessible any time.

This approach could be generalized to present multiple apps in what I’m calling a pulldock. There could be one pulldock that pulls down from the top and another that pulls up from the bottom, together presenting up to eight interruptors.

I envision these interruptors being stripped-down interfaces to existing apps or services, such as twittering or text messaging, that would appear in some kind of HUD-like view superimposed over the running app. Interruptors should be lightweight enough that they wouldn’t overburden the phone. I can also imagine new ways of passing information between a regular app and an interruptor—such as launching a camera interruptor while in the mail app as a way to take a photo and insert it into a mail message, which would save a few steps.

Here’s a screencast:

Yeah, there’s a lot of “umms” and sniffing in there. It’s the first screencast I’ve ever done. The visuals were done in Keynote using the template from Mockapp.

Search tip

A couple of nights ago, Gwen used the phrase “Googling for something on America’s Test Kitchen” instead of “searching for…”, which just reinforces that Google has become a synonym for search.

Google search results are often polluted by irrelevant links to commercial websites like bizrate and dealtime, though. Wouldn’t it be nice if there were a way to avoid that? There is: use Give me back my Google.

It would be even nicer if you could search via GmbmG right from the search field in your browser. And in fact you can, but you’ll need to set it up first.

Safari

Safari does not let you customize your search field out of the box, but there are some hacks like Glims that add this capability. Once you’ve installed Glims, you’ll need to add GmbmG to it as a custom search engine and teach it the specific search syntax that GmbmG uses. It is:
http://www.givemebackmygoogle.com/forward.php?search=<search terms>

Firefox or Internet Explorer 7+

These browsers support something called the “OpenSearch description document,” which makes adding a new search engine dead simple. I have no idea how this works in IE, but in Firefox, just install this plugin (which I created, not the creator of GmbmG—the plugin is currently listed as experimental, but it’s perfectly innocuous, I promise) and it will add that site to the list of search engines your browser uses.
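For the curious, an OpenSearch description document is just a small XML file. A minimal one for GmbmG might look something like this (the template URL follows the search syntax given above; the names and description are my own placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/">
  <ShortName>GmbmG</ShortName>
  <Description>Give me back my Google</Description>
  <!-- {searchTerms} is replaced by whatever the user types in the search field -->
  <Url type="text/html"
       template="http://www.givemebackmygoogle.com/forward.php?search={searchTerms}"/>
</OpenSearchDescription>
```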

Mobile Hub

Years ago, Apple promoted the Mac as a “digital hub” for media. Today we take for granted that computers can be used as hubs for media.

Two of the numerous points covered in Apple’s recent demo of version 3.0 of the iPhone OS were that Apple was finally giving developers access to the dock-connector port, as well as unblocking Bluetooth. These points have largely gone unremarked, but in the long run, I suspect they’ll be especially significant. I think Apple is positioning the iPhone to be a mobile hub.

Right now, the iPhone/iPod Touch has a sharper display, more processing power, and better input affordances than many of the gadgets that we deal with on a day-to-day basis. I predict that some manufacturers will take note of this and start producing headless products that will only work with an iPhone snapped onto the front to take care of these functions and/or provide new functions. This could be a nice moneyspinner for Apple, because of their “iPod tax” on products marked as compatible, and because it would make the iPhone that much more appealing, thus increasing demand for it. It could also be a profitable niche for manufacturers to exploit, since they could sell a product that is more functional than conventional equivalents but cheaper to build.

For example:

  • Car stereos: A car stereo with no faceplate, but simply a bracket and plug for an iPhone strikes me as the most obvious category. Many aftermarket car stereos already have detachable faceplates anyhow, and this would be a logical extension of that idea. Giving direct access to music on the phone would be the obvious benefit, but it would allow tidier integration of phone functions into the car, and add navigation as a lagniappe. A custom app might give simplified access to phone, nav, and music, and perhaps would have a radio controller that communicated with an actual radio in the car stereo over the port.
  • Bike computers: There are already a number of interesting apps that can use the GPS chip on the iPhone to track performance and routes over a bike ride, but these are hampered by some of the phone’s limitations. One is that there’s no way to continue logging GPS data when the app is not running, for example, when answering a call. I’m not sure if 3.0 will change that. (A friend who works at Apple suggests it will, but I’m skeptical. All that would really be needed would be a simple background daemon that logged GPS data at regular intervals, allowing the actual app to pick up where it left off.) Apart from that, a bike-mounted cradle for the phone would permit wheel and heart-rate monitor data to be fed into the phone through the jack and could provide extra battery power to the phone—so far, the only way to get heart-rate monitor data into the phone has been through a specialized product that uses wifi. This is a clever hack, but bluetooth or some kind of low-power radio communicating with the cradle would be more appropriate.
  • Cameras: I’m not the first person to observe that serious cameras could use an interface more like an iPhone’s. Snapping an iPhone directly onto a camera back would be ungainly, but tethering it over a wire could make a lot of sense, making it a tool for managing captured photos, backing them up, appending metadata (including GPS), and transmitting them.
  • Trackpads: Again, there are already apps for the iPhone that allow it to function as a trackpad (or as a tenkey input, etc), but again, these work over wifi. Apple has done a lot to make the trackpad on laptops not only a tolerable alternative to the mouse but in many ways a superior one. I can imagine a keyboard with a snap-in slot for the iPhone that turns it into a trackpad, giving those advantages to people using desktops.

What is not clear so far is whether a new port connection can trigger an app on the iPhone. This would be helpful—if not necessary—to create a seamless experience. Ideally one would simply snap one’s phone onto one’s car stereo in order to put it into car-stereo mode—the phone would recognize what it was connected to, and an application would have been registered to automatically launch when that connection is made.

Later: Looks like I’m not the only person to think about this. See iPhone 3.0 As the Accessory to …? and PC 1.0, iPhone 3.0 and the Woz: Everything Old is New Again.

Museum screens

Gwen recently got a new MacBook, and I’m thinking mighty hard about getting a new iMac. One controversial feature of both is the glossy screen.

These days, most laptops and many desktop monitors have glossy screens—Apple held the line for a long time, but has gone over to the shiny side (with the exception of the 17″ MacBook Pro, where the anti-glare option costs $50 extra). Glossy screens look great—better than the anti-glare screens—but only when there’s no glare. That’s an environmental condition that is hard to control, especially when out and about with a laptop. In the presence of direct light, the glare from the screen can make the screen image almost invisible, and causes a lot of eyestrain. Add-on anti-glare films do exist, but seem to get negative reviews, and strike me as an imperfect solution.

There is another way, and I’m surprised nobody has tried it yet: museum glass. This is a specialty product that I’ve only seen at framing shops. Its appearance is startling: the glass is just invisible. No glare, no visible matte coating, nothing—the only way you can tell it’s there is by poking at it and getting fingerprints on it. It’s worth going to a framing shop just to check it out. As far as I’m aware, it’s never been used for computer monitors, and I don’t doubt it would be a somewhat spendy upgrade, but it’s one that I suspect many buyers would gladly spring for—I know I would.

I know that the glass on the iMac is removable, with some difficulty. I wonder if it would be possible to get a piece of museum glass cut to fit it.

A humble case against everything buckets

Alex Payne recently wrote The Case Against Everything Buckets, which earned a rebuttal from Buzz Andersen.

Alex Payne’s post is ranty and prescriptivist, but there’s a nub of a good point buried in there: “Computers work best with structured data…With an Everything Bucket, you … miss out on opportunities to do interesting things with data”

What Alex Payne means by an “everything bucket” is a notebook-style application that you dump all your random notes, clippings, web links, pictures, etc into. There are a lot of independent software developers making interesting apps that fall in this general category. I don’t use one myself, mostly because I don’t need to manage big piles of notes.

I’ve always gravitated towards structured data—I put my contacts’ info in my address book, my links on delicious, and so on. And this can pay dividends—on a Mac, if you use the Address Book, other apps know where to look for your contact info and can do “interesting things” with it—like sync it to your phone, or check whether incoming e-mail is from someone you know. That’s what Alex Payne means by “interesting things.”

Here’s what’s funny, though: the distinction between the everything bucket and structured data may be a false dichotomy. The two approaches differ in how you get to the endpoint, but the endpoint is the same: being able to do interesting things with your data. Those two paths are what Mark Pilgrim referred to as million-dollar markup vs million-dollar search.

Macs today (and also about ten years ago, right before the switch to OS X) come with “data detectors,” which will notice when a chunk of unstructured text contains something that looks like, say, a date, and will offer to create an iCal entry based on it.
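As a rough illustration of what a data detector does, here is a toy version in Python. The patterns are my own simplifications for this sketch, not Apple’s actual implementation:

```python
import re

# A toy "data detector": scan unstructured text for chunks that look
# like structured data (dates, e-mail addresses). Illustrative only.
DATE_PATTERN = re.compile(
    r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+"
    r"\d{1,2}(?:,\s*\d{4})?\b"
)
EMAIL_PATTERN = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def detect(text):
    """Return (kind, match) pairs for anything that looks structured."""
    hits = [("date", m.group()) for m in DATE_PATTERN.finditer(text)]
    hits += [("email", m.group()) for m in EMAIL_PATTERN.finditer(text)]
    return hits

note = "Lunch with Simson on Mar 14, 2009 -- confirm at simson@example.com"
print(detect(note))  # [('date', 'Mar 14, 2009'), ('email', 'simson@example.com')]
```

Once a chunk is recognized, the app can offer to hand it off to the appropriate structured store (an iCal event, an Address Book entry), which is exactly the bridge between the bucket and the database.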

Long before that, Simson Garfinkel wrote an app called SBook that looks like an everything bucket, but also attempts to do interesting things with your data. This is pretty much limited to contacts and related notes, but the idea is there.

Google searches can recognize mathematical formulas to give their results, personal names to give their contact details, musical groups to give their discographies, and so on.

If the software is smart enough—perhaps with a little coaxing from a person—to recognize the structure into which a chunk of data might fit, it shouldn’t really matter whether everything gets tossed into an everything bucket or meticulously sorted into multifaceted, hierarchical, schematized structures. The tools aren’t quite there yet, but there’s no technical reason it wouldn’t work.

Right now, though, it doesn’t work, and the benefits of those interesting things outweigh whatever cognitive load is associated with context-switching between different containers for different kinds of data.

My gripes about translation memory

I recently tweeted that I was experimenting with OmegaT, a translation-memory tool. When asked by one of its proponents how I liked it, I responded

@brandelune do not like omegaT. really only works with plain text. ugly. burdened w/ typical java on mac shortcomings. not customizable.

That barely begins to cover what I don’t like about OmegaT. I’ve been thinking about what I would like in a translation tool for a while now. My desires break down into two categories: the translation-memory engine, and the environment presented to the translator.

Thoughts on job tracking and invoicing

I’ve been looking for a good job-tracking and invoicing program for a long time. I’ve looked at just about everything, and nothing suits my particular needs. My needs are a little different—almost none of my work is billed by time; it’s piecework instead (support for this kind of thing is often an afterthought), and I need support for multiple currencies (which is rare).

I’ve tried using Apple’s Numbers spreadsheet app. Spreadsheets in general have one big thing in their favor: they impose no assumptions on you. The flipside of that is that you have to build everything from scratch. There are a few aspects of job tracking where you’d prefer for your program to have made those assumptions for you. A bigger problem I had with Numbers in particular is that it corrupted my job log spreadsheet, which is a colossal PITA. A more philosophical problem is that spreadsheets are basically flat-file databases, and a proper job tracker really needs a relational database behind it.

Watching Gwen figure out the billing on a letterpress printing project was a lesson in how very different the job-tracking requirements for two solo freelancers can be. Ideally, one app would have the flexibility to meet these different needs. I’ve been giving the subject some thought, and what follows is my attempt to crystallize those thoughts.

Tunnelling towards the truth

tunnelling spelling trouble

Check out the two screenshot crops above. Observe that in each, one instance of the word has been flagged as misspelled, and the other has not.

I ran into this problem while working on a translation job. The client had instructed me to overtype an existing Japanese document in order to preserve the formatting. After nosing around for a minute, I discovered that the upper line was marked as US English, and the lower line was marked as UK English (I prefer the UK spelling in this case). Not sure how those language settings got into a Japanese document.

iTunes overload

Media types in the iTunes library

iTunes is overloaded.

iTunes first came out in 2001. At the time, you could use it to rip CDs, burn CDs, play ripped files, and organize those files. You could also use it to copy files to an MP3 player (the iPod didn’t exist at that time).

The personal computing landscape has changed a lot since then, and so has iTunes. It’s being called on to do a lot more.

I’ve said before that the Mac works best when you “drink Apple’s kool-aid,” that is, organize your contacts in Apple’s address book, your appointments in iCal, etc, because these apps act as a front-end to databases that other apps can easily tap into. The same goes for iTunes: Apple nudges you into using it to organize not only your music, but also your video files, and the iTunes database becomes almost like a parallel file system for media files. iPhoto is the manager for, well, photos.

When I first got an iPod a couple years ago, it seemed a bit odd that I could sync my contacts and calendars to it—through iTunes. While it makes sense to manage one’s iPod through iTunes, there was already a subliminal itch of cognitive dissonance.

With the iPhone, that cognitive dissonance is breaking out into a visible rash. The media types that it manages now include applications for the iPhone. iPhoto is the mechanism for selecting photos to copy to the iPhone, and either it or Image Capture is used to download photos taken with the iPhone’s camera. Curiously, there’s no way to get photos off the iPhone within iTunes; this feels like an oversight, or perhaps someone in Apple was feeling a bit of that itch as well, and felt unwilling to load up iTunes with another function even further from its central purpose.

While I’m not aware of any yet, at some point there will be apps from independent developers that need to exchange files between the desktop and the iPhone other than those handled by iTunes—it’s easy to imagine word-processing files, PDFs, presentation decks, etc, being copied back and forth. It’s not clear how that will happen. It could all happen via the Internet, although that would be indirect both physically and in terms of the user’s experience. For large files, it would be annoying, and for people without unlimited-data plans, potentially expensive. Apple does offer programmers a bundle of functions called “sync services,” but this requires that the desktop application be written to support syncing in the first place. For a lot of the file transfers I envision, syncing wouldn’t be the appropriate mechanism. There’s not even a way to get plain text files from Apple’s own Notes app off the iPhone. It’s widely speculated that cut and paste is absent from the iPhone because Apple hasn’t figured out a good interface for it. I suspect it’s the same thing here: they haven’t figured out a good, general mechanism for moving files between iPhone and desktop.

At some point, Apple is going to have to re-think the division of labor in its marquee apps, to separate organizing files from manipulating or playing them.

Japanese input on the iPhone

I don’t expect to do a lot of Japanese text entry on my iPhone, but I’m glad that I have the option, and I’ve been enjoying playing around with that feature.

Any Japanese text-entry function is necessarily more complex than an English one. In English, we pretty much have a one-to-one mapping between the key struck and the letter produced. Occasionally we need to insert åçcéñted characters, but the additional work is minimal. In Japanese, the most common method of input on computers is to type phonetically on a QWERTY keyboard, which produces syllabic characters (hiragana) on-screen—type k-u to get the kana く, which is pronounced ku; after you’ve typed in a phrase or sentence, you hit the “convert” key (normally the space bar), and software guesses what kanji you might want to use, based on straight dictionary equivalents, your historical input, and some grammar parsing. So for the Japanese for “International,” you would type k-o-k-u-s-a-i; initially this would appear on-screen as こくさい, and then after hitting the convert key, you would see 国際 as an option. Now, that’s not the only word in Japanese with that pronunciation—the word for “government bond” sounds exactly the same and would be typed the same on a keyboard. To access that, you’d go through the same process, and after 国際 appeared on-screen, you’d hit the convert key again to get its next guess, which would be 国債, the correct pair of kanji. Sometimes there will be more candidates, in which case a floating menu will appear on-screen.
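To make the mechanics above concrete, here is a toy sketch of the romaji-to-kana-to-kanji pipeline in Python. The conversion tables are tiny illustrative fragments made up for this example; a real input method ships full syllabaries and a weighted, history-aware dictionary.

```python
# Minimal illustrative fragments of the two lookup tables an IME needs.
ROMAJI_TO_KANA = {
    "ko": "こ", "ku": "く", "sa": "さ", "i": "い",
}

# Candidate kanji per kana reading, best guess first (cf. 国際 vs 国債).
KANJI_CANDIDATES = {
    "こくさい": ["国際", "国債"],
}

def to_kana(romaji):
    """Greedy longest-match conversion of romaji to hiragana."""
    out, i = "", 0
    while i < len(romaji):
        for length in (3, 2, 1):  # try the longest syllable first
            chunk = romaji[i:i + length]
            if chunk in ROMAJI_TO_KANA:
                out += ROMAJI_TO_KANA[chunk]
                i += length
                break
        else:
            raise ValueError("no kana for input at position %d" % i)
    return out

def convert(kana, presses=1):
    """Each press of the convert key cycles to the next kanji candidate."""
    candidates = KANJI_CANDIDATES.get(kana, [kana])
    return candidates[(presses - 1) % len(candidates)]

kana = to_kana("kokusai")  # こくさい
print(convert(kana, 1))    # first guess: 国際 (international)
print(convert(kana, 2))    # second guess: 国債 (government bond)
```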

For most English speakers, the first exposure to the problem of producing more characters than your input device has keys came with cellphones. T9 input on a keypad is a good analogue to Japanese input on QWERTY: you type keys that each represent three or four letters, and when you hit the space key, T9 looks up the words you might have meant, showing a floating menu.
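The T9 lookup just described can be sketched in a few lines; the word list here is made up for illustration:

```python
# Each digit key covers several letters; the dictionary disambiguates.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
WORDS = ["good", "gone", "home", "hood", "hello"]  # toy dictionary

def digits_for(word):
    """Map a word to the digit sequence you'd type for it."""
    return "".join(d for ch in word
                   for d, letters in KEYPAD.items() if ch in letters)

def candidates(digits):
    """All dictionary words matching a digit sequence, like T9's menu."""
    return [w for w in WORDS if digits_for(w) == digits]

print(candidates("4663"))  # 'good', 'gone', 'home', 'hood' all share 4-6-6-3
```

The ambiguity on display here (four words for one key sequence) is exactly the multiple-candidate problem that Japanese users deal with constantly.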

The iPhone, of course, uses a virtual QWERTY keyboard for English input, which is pretty good, especially considering the lack of tactile feedback and tiny keys. It guesses what word you might be trying to type based on adjacent keys. It does not (as far as I can tell) give multiple options, and isn’t very aggressive about suggesting finished words based on incomplete words. For English, at least, I’m guessing Apple decided that multiple candidates for a given input are too confusing. In general, the trend for heavy English text input on mobile devices seems to be towards small QWERTY keyboards, despite the facility some people have with T9. I’m wondering how many people are put off by the multiple-candidate aspect of T9, and if that’s why Apple omitted that aspect, or if it’s simply that not enough English-speakers are accustomed to dealing with multiple candidates.

Japanese input on the iPhone is different. It is aggressive about suggesting complete words and phrases. It does show multiple options (which is necessary in Japanese, and which Japanese users are accustomed to). In fact, it suggests kanji-converted phrases based on incomplete, incorrect kana input. Here’s an example based on the above:

iphone Japanese input sample - 1

Here, I typed k-o-k-u-a-a-i (note the intentional typo), which appears as こくああい. It shows a bunch of candidates, including the corrected and converted 国際, a logical alternative 国内, and some much longer ones, like 国際通貨基金—the International Monetary Fund. Since it has more candidates than it has room to display, it shows a little → which takes you to an expanded candidates screen. Just for grins, I will accept 国際通貨基金 as my preferred candidate. Here’s another neat predictive trick: immediately after I select that candidate, it shows sentence-particle candidates like に, が, etc.

iphone Japanese input sample - 2

Let’s follow that arrow and see what other options it shows:

iphone Japanese input sample - 3

I’m going to select を as my candidate. It immediately shows some verbs as candidates:

iphone Japanese input sample - 4

Here, 参ります is a verb I used previously, but 見 and 食べ are just common verbs—I’m guessing they’ve been weighted by the input function as likely for use in text messages (the phrase 国際通貨基金を食べ is somewhat unlikely in real life, unless you are Godzilla).

The iPhone also has an interesting kana-input mode, which uses an あかさたな grid with pie menus under each letter for the rest of the vowel-row. It looks like this:

iPhone Japanese input sample - 5

To enter an -a character, just tap it:

iPhone Japanese input sample

To enter a character from a different vowel-line, slide your finger in the appropriate direction on the pie-menu that appears and release:

iPhone Japanese input sample - 7

You can also get at characters from a different vowel-line using that hooked arrow, which iterates through them. I haven’t figured out what that forward arrow is for. It’s usually disabled, and only enabled momentarily after tapping in a new character. Tapping it doesn’t seem to have any effect.

This method offers the same error-forgiveness and predictive power as Japanese via QWERTY. I don’t find it to be faster than QWERTY, though; perhaps that’s just because I’m not used to it.

One thing I haven’t found is a way to edit the text-expansion dictionary directly. This would be very handy. I’m sure there are a few more tricks in store.

Also, a fun trick you can use on your Mac as well as on an iPhone to get at special symbols: enter ゆーろ (“euro”) to get €. Same with ぽんど (“pound”), やじるし (“arrow”), ゆうびん (“postal mark”), etc.

Update Apparently the mysterious forward-arrow breaks you out of iterating through the options under one key, as explained here. Normally if you press あ あ, this would iterate through that vowel-line, and produce い. But if you actually want to produce ああ, you would type あ→あ (Thanks, Manako).

Obligatory iPhone rhapsodizing

The day after the iPhone 3G was released, I got one. So did Gwen. It’s very nice. I feel like I’ve entered the future. It’s not fair to compare it to any other cellphone I’ve ever used—the difference is almost as stark as the one between the Mac I’m typing this on and a vintage 1983 DOS computer. I played with a friend’s Palm phone recently, and that was perhaps on the order of Windows 3.1 by comparison. Others have spilled gallons of electrons writing about this thing, so I’ll just offer a few random observations.

Out of the box, it is the source of enough wonder and delight to keep you going for quite a while, but the big deal now is that there’s an official path for independent developers to put software on it, which multiplies its value. The fact that these apps will be able to tie into location data, the camera, the web, etc, suggests any number of interesting possibilities. More than any other gadget I’ve played with in a long time, the iPhone seems full of promise and potential—and not just through software. Having a nice screen, good interface, reasonably powerful processor, and interesting ancillary functions suggests all kinds of hardware hookups to me. Two that I would really like to see:

  1. A car stereo that uses the iPhone as its faceplate. I imagine a home-screen alternative with direct access to four functions: GPS, phone, music, and radio (controlling a radio built into the stereo via USB).
  2. A bike computer mount. With the right interfaces, an iPhone as a bike computer could do a lot of interesting things: capture location-data breadcrumbs, capture performance data (heart-rate monitor, power monitor, cadence), capture photos and voice memos for ride logs. This would be a boon to bike racers and tourists alike.

One of the glaring problems everyone mentions with the iPhone is the lack of cut-and-paste. This is a problem, but another one that sticks out for me is the lack of a keystroke expander. There’s already predictive text input built in, so this wouldn’t be a new feature–there just needs to be a front end to the predictive-text library so that users can set up explicit associations between phrases and triggers. If any developers out there are listening, I’ve got my credit card ready.

Here’s a little interface quirk with the iPhone: One of the few physical controls on the device is a volume rocker switch. When viewing Youtube videos (which are always presented in landscape view), the rocker is on the bottom, with down-volume to the right, up-volume to the left. Check out this screenshot of what happens when you change volume using the rocker switch:
iphone volume control screenshot
The volume HUD appears, showing a volume “thermometer” on the bottom. Here’s what’s quirky: as you press the left rocker, the thermometer advances towards the right, and vice versa. This is counter-intuitive. The obvious way to avoid this would be for Youtube videos to be presented 180° rotated from their current position (that is, with the rocker on top), but for whatever reason, they only appear in one orientation. This is an extremely minor issue, but it stands out because the interface generally shows great attention to detail and an emphasis on natural interaction.

iPhone announcement as cultural event

Apparently it comes as news to nobody that Apple announced the second-generation iPhone yesterday. This is interesting.

I’ve got plenty of friends who were aware of the rumored announcement for weeks before it came. And not just pathetic geeks who spend all their spare time huddled over Apple rumor sites–these are regular people who use technology but aren’t obsessed with it. One such friend referred to her own phone as a “Fisher-Price Phone,” which cracks me up. A few hours after the announcement, another friend dropped me a line asking “so are you going to buy an iPhone now?”

I’m guessing most of these people heard the rumors that a new iPhone was imminent from their nerdier friends. It’s not unusual that nerds would know the rumors, or that they’d discuss them with less nerdy friends, but it is interesting that so many people would have heard it, been interested enough to actually file it away mentally, and brought it up in conversation unprompted. That a rumor about an announcement to be made at a developers conference would just become part of the zeitgeist is remarkable.

Incidentally, yes, I am going to buy an iPhone now. T-Mobile’s service has been going down the crapper lately. I’m conflicted (to put it mildly) about doing business with AT&T, but in this case I’ll compromise my principles for teh shiny.

Time Machine to NAS: Not quite there

I recently upgraded my Mac to Leopard, whose marquee feature is Time Machine, a nice backup mechanism.

I already had a NAS box. I originally got this primarily as a backup target. It’s got a half-terabyte hard drive in it, and it supports AFP, so it seems like a logical target for Time Machine backups. And apparently in the betas of Leopard, it was possible to use a hard drive attached to an Airport Extreme as a Time Machine target. This was disabled in the shipping version, but there’s a simple hack to re-enable it. Which I applied: as it happens, this also made it possible to use Time Machine with my NAS box.
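For reference, the hack is a one-line preference change. This is the command that circulated widely for Leopard; apply it at your own risk, since Apple disabled this path on purpose:

```shell
# Tell Time Machine to show (and accept) unsupported network volumes
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
```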

One critical difference between my NAS box and a hard drive hanging off an Airport Extreme is the disk format. Time Machine requires an HFS+ disk. My box is using something else. Time Machine actually deals with this cleverly by creating a disk-image file on the target drive, but that’s also the root of the problem: Mounting this disk image over the network (even GigE) gets slower and slower as the file gets bigger and bigger. I had set up a very stripped-down backup profile (home directory only, no media files), but still, after a couple of weeks, it had gotten to 42 GB and took forever to mount. Eventually it took so long to mount that Time Machine would stop waiting for it and give up.

So until I get a Time Capsule or something, I’m back to using my previous backup app, Synk. Even after that, it might be worth using Synk to back up my media files, which don’t need quite the obsessive hourly backups that Time Machine offers.

Clicking it old-school

Datadesk 101e keyboard

I am typing this post from my spanking new, and yet very old (in computer terms) Datadesk 101e keyboard. This keyboard is so old it has an ADB port instead of USB—I need to use an adaptor to hook it up to my Mac.

I love it.

I used Datadesk keyboards for years, but when I bought my current computer, my old one was looking especially crusty, and I felt like it was time to enter the modern era. I’d read good things about the Matias Tactile Pro, and so I decided to get one of them. I was never entirely happy with it. Some combinations of keys and modifier keys were simply dead, making some of my preferred MS Word shortcuts impossible. Matias even addresses this issue, saying in short, “all keyboards have this problem.” (I never had that problem with the 101e.)

After a few years of service, my Matias keyboard was starting to misbehave, and it was looking appallingly crusty. So I decided to replace it with the keyboard I really wanted all along, another 101e.

Since no online retailer carries these keyboards anymore, I called Datadesk directly, and spoke with someone who’s apparently in a position of responsibility there. We had a long and interesting (if you’re a Mac nerd) conversation about the history of Apple computers. He tried to talk me out of ordering the 101e, since it doesn’t have USB. I told him I had an adaptor. He laughed, and found there were still about a dozen new-old stock 101es on hand. So he sold me one.

He also told me that the people at Datadesk have been kicking around the idea of updating the 101e for the modern age, but aren’t sure whether to update the electronics to USB and give it slightly updated cosmetics without changing the plastics (which he said would be pretty easy), or to undertake a more extensive physical makeover (which would be a bigger commitment). I think either one would be a viable option.

I’m a keyboard snob. I like keys that have a long stroke and solid action. Not many keyboards these days offer that. And frankly, I’m surprised that more people aren’t keyboard snobs. Until we get direct neural hookups, keyboards are going to remain the primary text input device for many of us. We tap on them thousands of times a day, and even a tiny improvement multiplied out over thousands of repetitions per day adds up to a pretty big improvement. It’s a mystery to me that well-engineered aftermarket computer mice are as popular as they are, but not keyboards.

Although most keyboards sold today are cost-engineered disposable crap with lousy feel, there is clearly a market for keyboards with quality engineering. The Matias, despite my problems with it, is much better than most. There’s also the even more retro PC Keyboard, and the intimidating Das Keyboard.

Compared to the Tactile Pro, the 101e is much quieter, though still louder than most modern keyboards. It weighs much more: it stays where you set it on your desk. It’s bigger in every dimension. I don’t mind the fact that it takes up a little more desk real-estate, but it would be nice if the total height were a little lower; a rounded front edge on the space bar would also make it more comfortable to use. But I’m very happy with it. When you push down on a key, it goes straight down. With the Matias, sometimes the keys felt like they were trying to veer off to one side.

If you’re a snob about keyboards and don’t mind using a Griffin iMate, get one of the 11 remaining 101es. Or perhaps let Datadesk know that you’d be interested in getting an updated version of the 101e.

Update: Numbers don’t lie (even if statistics do). My best score at keybr.com was about 48 WPM with my old keyboard. 64 WPM with my new one. And I don’t even touch-type.

Another update: According to a fellow old-school keyboardista, although there are other USB-ADB adaptors out there, they can cause problems, so you really want to use the Griffin iMate.

Yet another update: Gruber and Benjamin discuss old keyboards on an episode of The Talk Show, and make sidelong references to the 101e, though they don’t mention it by name.

A still further update: NPR recently did a story on a kindred keyboard, the Unicomp, which carries on the legacy of the old IBM Model M.

MacBook Air reaction

The interesting thing about the MBA (heh) is that it is intended as an “outrigger” computer. While it could just barely stand on its own, the idea seems to be that anyone owning one would have a bigger computer somewhere else. That’s a reasonable assumption, and the outrigger market is a reasonable one to serve. But if that was Apple’s starting point, they’ve made some weird choices.

  • Price: $1800 is a big commitment for a secondary computer.
  • Size: It’s small, but it’s not that small; its footprint is big enough that it clearly bothers a lot of people. And for that matter, it seems that they could have shaved an inch off the width and a half-inch off the depth without cutting into screen or keyboard.
  • Power: It’s not exactly a powerhouse, but it’s pretty high-spec for a secondary machine.

There is an emerging trend of cheap and cheerful devices that aren’t practical as fully functioning standalone computers, but are fine for web-surfing, media playback, and lightweight work. Things like the Nokia N810 or the Asus Eee. Apple seems to be borrowing the outrigger aspect of these devices without picking up on their other features—low-power CPU, small screen, limited keyboard, etc—features that make them less than workhorses, but easier to schlepp around and longer running. The MBA is a more or less full-power serious work machine and fashion statement that isn’t quite self-sufficient but doesn’t quite embrace its second-computer status either.

It’s been widely speculated that Apple would, eventually, introduce something that would fit somewhere between a laptop and the iPhone. Like a tablet. It may be that the iPhone is Apple’s tablet, but the choices behind the MBA leave room at the low end of the market for something else. Some people are already filling that void by installing OS X on the Asus Eee. I don’t think the MBA is going to be it for a lot of people.

Leopard initial reactions

Rather than buying a new computer, I’m updating my old one right now, and installed Leopard yesterday.

Normally when I install a major upgrade, I do a “clean install”—reconstructing my old environment by manually importing old files and recreating preferences is admittedly laborious, but it gives me a chance to re-examine what’s on my hard drive and jettison stuff I never use. I cloned my boot drive to an external drive, and selected the erase-and-install option in the Leopard installer. After that finished, it offered to import my old setup from my external drive. For some reason, I chose this option, and regretted it, as it faithfully imported every bit of cruft from my old system, some of which caused Leopard to lock up. Apart from that, I have to admit it did a sterling job—every jot and tittle was in place. It would be nice if I had more control over what got imported and what did not.

Tried again with the clean install, followed by manual copying of specific folders and files. I had a little trouble importing my old Mail folders, and discovered that I had to export my Address Book data (using Address Book running on a different computer) before I could import it to Leopard. And then I discovered one of those annoyances only a geek could love. For whatever reason, my short user name was now adamjrice. It has always been adamrice in the past, and this change was, of course, unacceptable. The path to my $HOME directory had changed similarly. One new and appreciated feature in Leopard is that it’s actually easy to change this: right-click on your username in the Accounts prefpane sidebar and it gives you the “advanced options” to fix this. Nice. However, it makes this change by creating a new $HOME directory with defaults rather than moving the old one, and instantly, silently migrates you to it. This causes weird and unwanted results. My advice: if you are going the clean-install route, check to make sure you are happy with your short user name before you do any customization. Fix it if need be, and log out and back in.

Other than these breaking-in pains, so far I’m happy. My computer is noticeably faster (not just subjectively—apps open faster, and Second Life, a poky pig, ran at about 2x the framerate making it almost tolerable), although this may have as much to do with blowing out some crufty haxies as anything else. Network throughput likewise seems to be faster, but I haven’t measured this.

QuickLook is probably worth the price of admission all by itself, especially if you can get plugins for the files you use the most. Last night, Gwen was trawling through a directory full of EPSs with meaningless names. Even though she’s still running Tiger, I mounted her drive, installed a QuickLook plugin for EPS, and was able to browse most (not all) of those files with previews in a couple of minutes. Big win. Coverflow in Finder, which seems like a frill, is useful in the same way QuickLook is, especially when, say, trawling through a directory full of meaninglessly-named EPS files.

As others have mentioned, Spotlight has gone from sucking to not-sucking. I reiterate that fact simply because the transformation is so stark.

So far, I’m calling this a success.

Technology on the bike

2007 hasn’t been a good year for cycling, at least not for me. I recently made a birthday resolution that in 2008 I would ride more.

Along with riding less, I’ve paid less attention to bike technology, which is something that’s always interested me. Lately I’ve been paying a little more attention. I just read about a prototype bike-computer/rear-view camera called Cerevellum. This looks like a great idea—plug-in modules for different functionality. One idea I’ve never seen implemented on a bike is a crash camera. Now, this might be more of a concern for me than for most people, but the overhead would be slight and the potential benefits considerable.

I envision the system including rearward-looking and forward-looking cameras, an accelerometer, and some flash-memory storage. It wouldn’t need much—just enough to capture about a minute’s worth of video and audio (about 180 MB for good-quality video—chump change by today’s standards). If the accelerometer detected any sudden movement (indicative of a crash), the cameras would save the preceding 30 seconds and the following 30 seconds. That should be enough to capture license plate numbers, the circumstances of the crash, etc. A manual trigger to save would also make sense, as would manual still photography capture. Equipment like this does exist for cars, but nothing miniature enough for a bike. With 1 GB memory cards as common as dirt, one could record several minutes of footage per ride, as well as a lot of still photos, which could be interesting in ways other than crash documentation.
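The save-the-last-minute behavior is essentially a ring buffer with a trigger. Here’s a minimal sketch of the idea; the frame rate, the g-force threshold, and the storage stand-in are all assumptions of mine, not specs of any real product:

```python
from collections import deque

FPS = 15                  # assumed camera frame rate
BUFFER_SECONDS = 60       # rolling minute: 30 s before + 30 s after a crash
CRASH_G_THRESHOLD = 3.0   # assumed: accelerations above ~3 g count as a crash


class CrashRecorder:
    """Keeps a rolling minute of frames; on a crash trigger, records
    30 more seconds and then flushes the whole minute to storage."""

    def __init__(self):
        self.buffer = deque(maxlen=FPS * BUFFER_SECONDS)
        self.post_crash_frames_left = None  # None = no crash pending
        self.saved = None

    def on_frame(self, frame, accel_g):
        self.buffer.append(frame)
        if self.post_crash_frames_left is None and accel_g >= CRASH_G_THRESHOLD:
            # Crash detected: keep recording for another 30 seconds,
            # so the buffer ends up spanning crash-30s .. crash+30s.
            self.post_crash_frames_left = FPS * 30
        elif self.post_crash_frames_left is not None:
            self.post_crash_frames_left -= 1
            if self.post_crash_frames_left == 0:
                self.save()

    def save(self):
        # Stand-in for writing the clip out to flash memory.
        self.saved = list(self.buffer)
        self.post_crash_frames_left = None
```

Because the deque’s `maxlen` silently discards the oldest frame on every append, the memory footprint stays fixed no matter how long the ride is.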

Word annoyance du jour

screenshot of MS Word with two open documents

There are lots of reasons to be annoyed with Word. I’ve just discovered another one.

Look at the screenshot above. It shows two open documents in Word. Which one is the active window? (Too small? You can see the full-size image.)

Trick question. Neither is active. Even though these are the only two documents open in Word, neither is active. Keystrokes will not be sent to either one. Note how the title bar of the left window makes it appear to be active, while the scroll bar of the right window suggests it is active.

This condition arises erratically when the command-` systemwide shortcut is used to cycle through the windows of the current app. It only seems to happen when windows of other apps are also visible. I’m working on a job where I’m copying timecodes from the left window into the right, and the fact that I can’t use command-` to cycle is slowing me down measurably—almost as much as taking time out to blog about the problem is doing.

Google Crowdsourcing Machine Translation

Screenshot of google translation crowdsourcing interface

I clicked through a link from a gadget site to a machine-translated press release for a new car-stereo head unit. I noticed that when my cursor hovered over a block of text, one of those floating mock-windows that are so popular in web2.0 appeared. It permits readers to enter their own translation for that sentence or chunk of text.

This is interesting, and something I hadn’t noticed before. It raises all kinds of questions. Most obviously, how do they vet these reader-submitted translations? But it’s also fascinating as a machine-translation paradigm. There are two general approaches to MT. One is basically lexical and grammatical analysis and substitution: diagramming sentences, dictionary lookup, etc. The other is “corpus based”: having a huge body of phrase pairs, where one can be substituted for the other. There is also a hybrid of the two that uses the corpus-based approach, but with some added smarts permitting a given phrase to serve as a pattern for novel phrases not found in the corpus (this is also pretty much how computer-assisted translation, or CAT, works).

I wonder how these crowdsourced submissions work back into the MT backend—whether they’re used strictly in a corpus-based translation layer, or get extrapolated into patterns. I’m skeptical that they’re getting a significant number of submissions through this system, but if they did, the range of writing styles, language ability, and so on feeding into the system would seem to make it incredibly complicated. Perhaps a huge jump forward over older MT systems…or perhaps a huge clusterfuck of unharmonized spammy nonsense.
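As a rough illustration of the corpus-based approach—and of how a crowd submission could feed straight back into it—here is a toy phrase-table translator. The phrase pairs and function names are invented for the example and have nothing to do with Google’s actual system:

```python
# A toy corpus of phrase pairs (source -> target). In a real system
# this would be millions of aligned phrases mined from bilingual text.
phrase_table = {
    "head unit": "unité principale",
    "press release": "communiqué de presse",
}


def translate(sentence: str) -> str:
    """Greedy longest-match substitution over the phrase table;
    anything not in the corpus passes through untranslated."""
    words = sentence.split()
    out, i = [], 0
    while i < len(words):
        match = None
        # Try the longest chunk starting at position i first.
        for j in range(len(words), i, -1):
            chunk = " ".join(words[i:j])
            if chunk in phrase_table:
                match = (phrase_table[chunk], j)
                break
        if match:
            out.append(match[0])
            i = match[1]
        else:
            out.append(words[i])
            i += 1
    return " ".join(out)


def submit_correction(source: str, target: str) -> None:
    # The crowdsourcing hook: a reader's fix goes straight into the corpus.
    phrase_table[source] = target
```

This is the purely corpus-based layer; the hybrid approach would go further and generalize a submitted pair into a pattern that also covers phrases never seen verbatim. It also makes the vetting problem concrete: in this sketch, a spammy submission overwrites the table entry with no review at all.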