Update to “survey of iPhone bike-computer apps”
I’ve updated my earlier post to include discussion of a couple more apps: Outdoor Pro and OutFront.
It is widely rumored that Apple will be introducing some kind of tablet gadget about half a day after I write these words. It is also widely rumored that a key aspect of this introduction will be deals with major print-media publishers, who will be offering electronic versions of their books and periodicals on the mythical tablet.
Ben Hammersley has been writing about electronic media and the future of journalism, though from a different angle: the original act of creating stories. I don’t doubt he’s been thinking about the production side too.
What tool will that be and where will it come from? I doubt it will be Adobe’s GoLive, although that might work. I suspect (assuming all these other suppositions are correct) that Apple will be announcing their own software, taking another dig at Adobe. If all this is correct, it’s going to become an important software market.
And while Apple gets dinged (often justifiably) for a walled-garden approach to their products and services, in this case a win for Apple would be a win for the public interest. A publication format based on existing standards lowers the barrier to entry for other players; if Amazon decides they want to support this new format in the Kindle, they’ll just need to ensure they’ve got a standards-compliant HTML engine on it, and publishers will just retarget the Kindle with the same output. The formats may involve some kind of quirky or proprietary wrappers, but these would get laid on at the last step in the production process. It would be trivial to re-wrap the same payload for multiple devices. For any of these devices to succeed, of course, is another matter entirely.
I’ve written before about the iPhone’s potential and drawbacks as a bike computer. And there are a lot of bike-computer apps available for it right now. Let’s take a look at them.
I’ve gone on a bit of a kick lately and tried out four different ones. There are one or two others that I haven’t gotten around to yet. I hope to eventually, and will report on them in this space when I do.
Executive summary: Rubitrack for iPhone and Cyclemeter are clearly oriented towards performance cyclists; right now I’d give the nod to Cyclemeter. GPSies seems almost like a toy, but might be of use to hikers. Motion-X is for GPS otaku.
The “unofficial Apple weblog” had a post calling on readers to submit their wishlist for future iPhone OS features, which got me to thinking.
The lack of multitasking is an obvious shortcoming on the iPhone right now. Multitasking is technically possible: some of Apple’s own apps run in the background, and there are jailbreak utilities that let third-party apps do the same and let the user switch between running apps. But Apple does not allow App Store apps to run in the background at all, presumably because of performance and battery-life problems.
I believe that multitasking on the iPhone can be broken down into two functional categories: apps that you want to run persistently in the background, and what I’m calling “interruptors”: brief tasks that only take a few seconds to complete, and where you don’t want to break out of your current app. I’m concerning myself with the latter case here.
A jailbreak Twitter app, qTweeter, has the kernel of an approach to presenting these interruptors: it pulls down from the top of the screen like a windowshade, and is accessible any time.
This approach could be generalized to present multiple apps in what I’m calling a pulldock. There could be one pulldock that pulls down from the top, another that pulls up from the bottom, to present up to eight interruptors.
I envision these interruptors being stripped-down interfaces to existing apps or services, such as twittering or text messaging, that would appear in some kind of HUD-like view superimposed over the running app. Interruptors should be lightweight enough that they wouldn’t overburden the phone. I can also imagine new ways of passing information between a regular app and an interruptor, such as launching a camera interruptor while in the mail app as a way to take a photo and insert it into a mail message, which would save a few steps.
Here’s a screencast:
Yeah, there’s a lot of “umms” and sniffing in there. It’s the first screencast I’ve ever done. The visuals were done in Keynote using the template from Mockapp.
A couple of nights ago, Gwen used the phrase “Googling for something on America’s Test Kitchen” instead of “searching for…”, which just reinforces that Google has become a synonym for search.
Google search results are often polluted by irrelevant links to commercial websites like bizrate and dealtime, though. Wouldn’t it be nice if there were a way to avoid that? There is: use Give me back my Google.
It would be even nicer if you could search via GmbmG right from the search field in your browser. And in fact you can, but you’ll need to set it up first.
Safari does not let you customize your search field out of the box, but there are some hacks like Glims that add this capability. Once you’ve done that, you’ll need to add GmbmG to Glims as a custom search engine and teach it the specific search syntax that GmbmG uses. It is:
http://www.givemebackmygoogle.com/forward.php?search=<search key>
Firefox and IE support something called the “OpenSearch description document,” which makes adding a new search engine dead-simple. I have no idea how this works in IE, but in Firefox, just install this plugin (which I created, not the creator of GmbmG; the plugin is currently listed as experimental, but it’s perfectly innocuous, I promise) and it will add that site to the list of search engines your browser uses.
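For illustration, here’s a minimal sketch of what that search syntax amounts to: the query is simply URL-encoded and appended to the `search` parameter. (The helper function name here is my own, not part of GmbmG or Glims.)

```python
from urllib.parse import quote_plus

GMBMG = "http://www.givemebackmygoogle.com/forward.php?search="

def gmbmg_url(query: str) -> str:
    # GmbmG forwards the query to Google with the spammy comparison-shopping
    # sites filtered out; the terms travel in the "search" parameter.
    return GMBMG + quote_plus(query)

print(gmbmg_url("museum glass monitor"))
# http://www.givemebackmygoogle.com/forward.php?search=museum+glass+monitor
```

This is exactly the kind of URL template a Glims custom search engine or an OpenSearch description encodes for you.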
I have become slightly obsessed with the idea of using an iPhone as a bike computer. What follows will be of little interest to anyone except gadget-nerd cyclists.
Years ago, Apple promoted the Mac as a “digital hub” for media. Today we take for granted that computers can be used as hubs for media.
Two of the numerous points covered in Apple’s recent demo of version 3.0 of the iPhone OS were that Apple was finally giving developers access to the dock-connector port, and that it was unblocking Bluetooth. These points have largely gone unremarked, but in the long run, I suspect they’ll be especially significant. I think Apple is positioning the iPhone to be a mobile hub.
Right now, the iPhone/iPod Touch has a sharper display, more processing power, and better input affordances than many of the gadgets that we deal with on a day-to-day basis. I predict that some manufacturers will take note of this and start producing headless products that will only work with an iPhone snapped onto the front to take care of these functions and/or provide new functions. This could be a nice moneyspinner for Apple, because of their “iPod tax” on products marked as compatible, and because it would make the iPhone that much more appealing, thus increasing demand for it. It could also be a profitable niche for manufacturers to exploit, since they could sell a product that is more functional than conventional equivalents but cheaper to build.
What is not clear so far is whether a new port connection can trigger an app on the iPhone. This would be helpful, if not necessary, to create a seamless experience. Ideally one would simply snap one’s phone onto one’s car stereo to put it into car-stereo mode: the phone would recognize what it was connected to, and an app registered for that connection would launch automatically.
Looks like I’m not the only person to think about this. See “iPhone 3.0 As the Accessory to …?” and “PC 1.0, iPhone 3.0 and the Woz: Everything Old is New Again”.
Gwen recently got a new MacBook, and I’m thinking mighty hard about getting a new iMac. One controversial feature of both is the glossy screen.
These days, most laptops and many desktop monitors have glossy screens. Apple held the line for a long time, but has gone over to the shiny side (with the exception of the 17″ MacBook Pro, where the anti-glare option costs $50 extra). Glossy screens look great (better than the anti-glare screens), but only when there’s no glare, an environmental condition that is hard to control, especially when out and about with a laptop. In the presence of direct light, the glare can make the screen image almost invisible, and generates a lot of eyestrain. Add-on anti-glare films do exist, but seem to get negative reviews, and strike me as an imperfect solution.
There is another way, and I’m surprised nobody has tried it yet: museum glass. This is a specialty product that I’ve only seen at framing shops. Its appearance is startling: the glass is just invisible. No glare, no visible matte coating, nothing; the only way you can tell it’s there is by poking at it and getting fingerprints on it. It’s worth going to a framing shop just to check it out. As far as I’m aware, it’s never been used for computer monitors, and I don’t doubt it would be a somewhat spendy upgrade, but it’s one that I suspect many buyers would gladly spring for. I know I would.
I know that the glass on the iMac is removable, with some difficulty. I wonder if it would be possible to get a piece of museum glass cut to fit it.
Alex Payne recently wrote The Case Against Everything Buckets, which earned a rebuttal from Buzz Andersen.
Alex Payne’s post is ranty and prescriptivist, but there’s a nub of a good point buried in there: “Computers work best with structured data… With an Everything Bucket, you… miss out on opportunities to do interesting things with data.”
What Alex Payne means by an “everything bucket” is a notebook-style application that you dump all your random notes, clippings, web links, pictures, etc. into. There are a lot of independent software developers making interesting apps that fall in this general category. I don’t use one myself, mostly because I don’t need to manage big piles of notes.
I’ve always gravitated towards structured data: I put my contacts’ info in my address book, my links on delicious, and so on. And this can pay dividends: on a Mac, if you use the Address Book, other apps know where to look for your contact info and can do “interesting things” with it, like sync it to your phone, or check whether incoming e-mail is from someone you know. That’s what Alex Payne means by “interesting things.”
Here’s what’s funny, though: the distinction between the everything bucket and structured data may be a false dichotomy. The two approaches differ in how you get there, but they arrive at the same endpoint: being able to do interesting things with your data. Those two paths are what Mark Pilgrim referred to as million-dollar markup vs. million-dollar search.
Macs today (and also about ten years ago, right before the switch to OS X) come with “data detectors,” which will notice when a chunk of unstructured text contains something that looks like, say, a date, and will offer to create an iCal entry based on it.
Long before that, Simson Garfinkel wrote an app called SBook that looks like an everything bucket, but also attempts to do interesting things with your data. This is pretty much limited to contacts and related notes, but the idea is there.
Google searches can recognize mathematical expressions and give their results, recognize personal names and give their contact details, recognize musical groups and give their discographies, and so on.
If the software is smart enoughâ€”perhaps with a little coaxing from a personâ€”to recognize the structure into which a chunk of data might fit, it shouldn’t really matter whether everything gets tossed into an everything bucket or meticulously sorted into multifaceted, hierarchical, schematized structures. The tools aren’t quite there yet, but there’s no technical reason it wouldn’t work.
Right now, though, it doesn’t work, and the benefits of those interesting things outweigh whatever cognitive load is associated with context-switching between different containers for different kinds of data.
I recently tweeted that I was experimenting with OmegaT, a translation-memory tool. When asked by one of its proponents how I liked it, I responded:
@brandelune do not like omegaT. really only works with plain text. ugly. burdened w/ typical java on mac shortcomings. not customizable.
That barely begins to cover what I don’t like about OmegaT. I’ve been thinking about what I would like in a translation tool for a while now. My desires break down into two categories: the translation-memory engine, and the environment presented to the translator.
I’ve been looking for a good job-tracking and invoicing program for a long time. I’ve looked at just about everything, and nothing suits my particular needs, which are a little different: almost none of my work is billed by time; it’s piecework instead (support for which is often an afterthought), and I need support for multiple currencies (which is rare).
I’ve tried using Apple’s Numbers spreadsheet app. Spreadsheets in general have one big thing in their favor: they impose no assumptions on you. The flipside of that is that you have to build everything from scratch. There are a few aspects of job tracking where you’d prefer for your program to have made those assumptions for you. A bigger problem I had with Numbers in particular is that it corrupted my job log spreadsheet, which is a colossal PITA. A more philosophical problem is that spreadsheets are basically flat-file databases, and a proper job tracker really needs a relational database behind it.
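To make the relational point concrete, here’s a hypothetical sketch of a piecework, multi-currency job log in SQLite. The schema and all the names in it are my own invention for illustration, not any real app’s:

```python
import sqlite3

# Minimal, hypothetical schema for a piecework job tracker.
# Jobs reference clients; each job is billed per unit
# (characters, words, pages...) in the client's currency.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        currency TEXT NOT NULL        -- e.g. 'USD', 'JPY', 'EUR'
    );
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY,
        client_id INTEGER NOT NULL REFERENCES clients(id),
        description TEXT,
        unit TEXT NOT NULL,           -- 'word', 'character', 'page'
        unit_count INTEGER NOT NULL,
        rate_per_unit REAL NOT NULL   -- in the client's currency
    );
""")
conn.execute("INSERT INTO clients VALUES (1, 'Acme Translation', 'JPY')")
conn.execute("INSERT INTO jobs VALUES (1, 1, 'Manual, ch. 3', 'character', 12000, 8.0)")

# An invoice line falls out of a join -- something a flat
# spreadsheet makes you rebuild by hand on every sheet.
row = conn.execute("""
    SELECT c.name, j.unit_count * j.rate_per_unit, c.currency
    FROM jobs j JOIN clients c ON c.id = j.client_id
""").fetchone()
print(row)  # ('Acme Translation', 96000.0, 'JPY')
```

The join is the whole argument: once jobs and clients are separate related tables, piecework totals in the right currency are a query, not a formula you copy between sheets.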
Watching Gwen figure her billing on a letterpress printing project was a lesson in how very different the job-tracking requirements of two solo freelancers can be. Ideally, one app would have the flexibility to meet these different needs. I’ve been giving the subject some thought, and what follows is my attempt to crystallize those thoughts.
Check out the two screenshot crops above. Observe that in each, one instance of the word has been flagged as misspelled, and the other has not.
I ran into this problem while working on a translation job. The client had instructed me to overtype an existing Japanese document in order to preserve the formatting. After nosing around for a minute, I discovered that the upper line was marked as US English, and the lower line was marked as UK English (I prefer the UK spelling in this case). Not sure how those language settings got into a Japanese document.
iTunes is overloaded.
iTunes first came out in 2001. At the time, you could use it to rip CDs, burn CDs, play ripped files, and organize those files. You could also use it to copy files to an MP3 player (the iPod didn’t exist at that time).
The personal computing landscape has changed a lot since then, and so has iTunes. It’s being called on to do a lot more.
I’ve said before that the Mac works best when you “drink Apple’s kool-aid,” that is, organize your contacts in Apple’s address book, your appointments in iCal, etc., because these apps act as front-ends to databases that other apps can easily tap into. The same goes for iTunes: Apple nudges you into using it to organize not only your music, but also your video files, and the iTunes database becomes almost like a parallel file system for media files. iPhoto is the manager for, well, photos.
When I first got an iPod a couple of years ago, it seemed a bit odd that I could sync my contacts and calendars to it, through iTunes. While it makes sense to manage one’s iPod through iTunes, there was already a subliminal itch of cognitive dissonance.
With the iPhone, that cognitive dissonance is breaking out into a visible rash. The media types that it manages now include applications for the iPhone. iPhoto is the mechanism for selecting photos to copy to the iPhone, and either it or Image Capture is used to download photos taken with the iPhone’s camera. Curiously, there’s no way to get photos off the iPhone within iTunes; this feels like an oversight, or perhaps someone in Apple was feeling a bit of that itch as well, and felt unwilling to load up iTunes with another function even further from its central purpose.
While I’m not aware of any yet, at some point there will be apps from independent developers that need to exchange files between the desktop and the iPhone other than those handled by iTunes: it’s easy to imagine word-processing files, PDFs, presentation decks, etc., being copied back and forth. It’s not clear how that will happen. It could all happen via the Internet, although that would be indirect both physically and in terms of the user’s experience. For large files, it would be annoying, and for people without unlimited-data plans, potentially expensive. Apple does offer programmers a bundle of functions called “sync services,” but this requires that the desktop application be written to support syncing in the first place. For a lot of the file transfers I envision, syncing wouldn’t be the appropriate mechanism. There’s not even a way to get plain-text files from Apple’s own Notes app off the iPhone. It’s widely speculated that cut and paste are absent from the iPhone because Apple hasn’t figured out a good interface for them. I suspect it’s the same thing here: they haven’t figured out a good, general mechanism for moving files between iPhone and desktop.
At some point, Apple is going to have to re-think the division of labor in its marquee apps, to separate organizing files from manipulating or playing them.
I don’t expect to do a lot of Japanese text entry on my iPhone, but I’m glad that I have the option, and I’ve been enjoying playing around with that feature.
Any Japanese text-entry function is necessarily more complex than an English one. In English, we pretty much have a one-to-one mapping between the key struck and the letter produced. Occasionally we need to insert åçcéñted characters, but the additional work is minimal. In Japanese, the most common method of input on computers is to type phonetically on a QWERTY keyboard, which produces syllabic characters (hiragana) on-screen: type k-u to get the kana く, which is pronounced ku. After you’ve typed in a phrase or sentence, you hit the “convert” key (normally the space bar), and software guesses what kanji you might want to use, based on straight dictionary equivalents, your historical input, and some grammar parsing. So for the Japanese for “international,” you would type k-o-k-u-s-a-i; initially this would appear on-screen as こくさい, and then after hitting the convert key, you would see 国際 as an option. Now, that’s not the only word in Japanese with that pronunciation: the word for “government bond” sounds exactly the same and would be typed the same on a keyboard. To access that, you’d go through the same process, and after 国際 appeared on-screen, you’d hit the convert key again to get its next guess, which would be 国債, the correct pair of kanji. Sometimes there will be more candidates, in which case a floating menu will appear on-screen.
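The convert-key cycle described above can be caricatured in a few lines. This is strictly a toy: the candidate dictionary is invented for this sketch, whereas a real IME uses large dictionaries plus grammar parsing and per-user input history to order the candidates.

```python
# Toy sketch of kana-to-kanji conversion: a reading maps to an
# ordered list of candidates, and each press of the convert key
# advances to the next candidate, wrapping around at the end.
CANDIDATES = {
    "こくさい": ["国際", "国債"],  # kokusai: "international", "government bond"
}

def convert(kana: str, presses: int = 1) -> str:
    """Return the candidate shown after `presses` hits of the convert key."""
    options = CANDIDATES.get(kana, [kana])  # unknown readings stay as kana
    return options[(presses - 1) % len(options)]

print(convert("こくさい", 1))  # 国際
print(convert("こくさい", 2))  # 国債
```

The iPhone’s Japanese input adds one more wrinkle on top of this: the lookup is fuzzy, so even a misspelled reading still finds the intended candidates.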
The first exposure most English speakers have had to the problem of producing more characters than your input device has keys came with cellphones. T9 input on a keypad is a good analogue to Japanese input on QWERTY: you type keys that each represent three letters, and when you hit the space key, T9 looks up the words you might have meant, showing a floating menu.
The iPhone, of course, uses a virtual QWERTY keyboard for English input, which is pretty good, especially considering the lack of tactile feedback and tiny keys. It guesses what word you might be trying to type based on adjacent keys. It does not (as far as I can tell) give multiple options, and isn’t very aggressive about suggesting finished words based on incomplete words. For English, at least, I’m guessing Apple decided that multiple candidates for a given input are too confusing. In general, the trend for heavy English text input on mobile devices seems to be towards small QWERTY keyboards, despite the facility some people have with T9. I’m wondering how many people are put off by the multiple-candidate aspect of T9, and if that’s why Apple omitted that aspect, or if it’s simply that not enough English-speakers are accustomed to dealing with multiple candidates.
Japanese input on the iPhone is different. It is aggressive about suggesting complete words and phrases. It does show multiple options (which is necessary in Japanese, and which Japanese users are accustomed to). In fact, it suggests kanji-converted phrases based on incomplete, incorrect kana input. Here’s an example based on the above:
Here, I typed k-o-k-u-a-a-i (note the intentional typo), which appears as こくああい. It shows a bunch of candidates, including the corrected and converted 国際, a logical alternative 国内, and some much longer ones, like 国際通貨基金, the International Monetary Fund. Since it has more candidates than it has room to display, it shows a little → which takes you to an expanded candidates screen. Just for grins, I will accept 国際通貨基金 as my preferred candidate. Here’s another neat predictive trick: immediately after I select that candidate, it shows sentence-particle candidates like に, が, etc.
Let’s follow that arrow and see what other options it shows:
I’m going to select を as my candidate. It immediately shows some verbs as candidates:
Here, 参ります is a verb I used previously, but 見 and 食べ are just common verbs; I’m guessing they’ve been weighted by the input function as likely for use in text messages (the phrase 国際通貨基金を食べ is somewhat unlikely in real life, unless you are Godzilla).
The iPhone also has an interesting kana-input mode, which uses an あかさたな grid with pie menus under each letter for the rest of the vowel-row. It looks like this:
To enter an -a character, just tap it:
To enter a character from a different vowel-line, slide your finger in the appropriate direction on the pie-menu that appears and release:
You can also get at characters from a different vowel-line using that hooked arrow, which iterates through them. I haven’t figured out what that forward arrow is for. It’s usually disabled, and only enabled momentarily after tapping in a new character. Tapping it doesn’t seem to have any effect.
This method offers the same error-forgiveness and predictive power as Japanese via QWERTY. I don’t find it faster than QWERTY, though; perhaps that’s just because I’m not used to it.
One thing I haven’t found is a way to edit the text-expansion dictionary directly. This would be very handy. I’m sure there are a few more tricks in store.
Also, a fun trick you can use on your Mac as well as on an iPhone to get at special symbols: enter ゆーろ to get €. Same with ぽんど、やじるし、ゆうびん, etc.
Apparently the mysterious forward-arrow breaks you out of iterating through the options under one key, as explained here. Normally if you press あ あ, this would iterate through that vowel-line and produce い. But if you actually want to produce ああ, you would type あ → あ (Thanks, Manako).
The day after the iPhone 3G was released, I got one. So did Gwen. It’s very nice. I feel like I’ve entered the future. It’s not fair to compare it to any other cellphone I’ve ever used: the difference is almost as stark as the one between the Mac I’m typing this on and a vintage 1983 DOS computer. I played with a friend’s Palm phone recently, and that was perhaps on the order of Windows 3.1 by comparison. Others have spilled gallons of electrons writing about this thing, so I’ll just offer a few random observations.
Out of the box, it is the source of enough wonder and delight to keep you going for quite a while, but the big deal now is that there’s an official path for independent developers to put software on it, which multiplies its value. The fact that these apps will be able to tie into location data, the camera, the web, etc, suggests any number of interesting possibilities. More than any other gadget I’ve played with in a long time, the iPhone seems full of promise and potentialâ€”and not just through software. Having a nice screen, good interface, reasonably powerful processor, and interesting ancillary functions suggests all kinds of hardware hookups to me. Two that I would really like to see:
One of the glaring problems everyone mentions with the iPhone is the lack of cut-and-paste. This is a problem, but another one that sticks out for me is the lack of a keystroke expander. There’s already predictive text input built in, so this wouldn’t be a new feature: there just needs to be a front end to the predictive-text library so that users can set up explicit associations between phrases and triggers. If any developers out there are listening, I’ve got my credit card ready.
Here’s a little interface quirk with the iPhone: One of the few physical controls on the device is a volume rocker switch. When viewing Youtube videos (which are always presented in landscape view), the rocker is on the bottom, with down-volume to the right, up-volume to the left. Check out this screenshot of what happens when you change volume using the rocker switch:
The volume HUD appears, showing a volume “thermometer” on the bottom. Here’s what’s quirky: as you press the left rocker, the thermometer advances towards the right, and vice versa. This is counter-intuitive. The obvious way to avoid this would be for Youtube videos to be presented 180° rotated from their current position (that is, with the rocker on top), but for whatever reason, they only appear in one orientation. This is an extremely minor issue, but it stands out when the interface generally shows great attention to detail and emphasis on natural interaction.
Apparently it comes as news to nobody that Apple announced the second-generation iPhone yesterday. This is interesting.
I’ve got plenty of friends who were aware of the rumored announcement for weeks before it came. And not just pathetic geeks who spend all their spare time huddled over Apple rumor sites; these are regular people who use technology but aren’t obsessed with it. One such friend referred to her own phone as a “Fisher-Price Phone,” which cracks me up. A few hours after the announcement, another friend dropped me a line asking “so are you going to buy an iPhone now?”
I’m guessing most of these people heard the rumors that a new iPhone was imminent from their nerdier friends. It’s not unusual that nerds would know the rumors, or that they’d discuss them with less nerdy friends, but it is interesting that so many people would have heard them, been interested enough to actually file them away mentally, and brought them up in conversation unprompted; that a rumor about an announcement to be made at a developers conference would just become part of the zeitgeist.
Incidentally, yes, I am going to buy an iPhone now. T-Mobile’s service has been going down the crapper lately. I’m conflicted (to put it mildly) about doing business with AT&T, but in this case I’ll compromise my principles for teh shiny.
I recently upgraded my Mac to Leopard, whose marquee feature is Time Machine, a nice backup mechanism.
I already had a NAS box, which I originally got primarily as a backup target. It’s got a half-terabyte hard drive in it, and it supports AFP, so it seems like a logical target for Time Machine backups. In the betas of Leopard, it was possible to use a hard drive attached to an Airport Extreme as a Time Machine target. This was disabled in the shipping version, but there’s a simple hack to re-enable it, which I applied; as it happens, this also made it possible to use Time Machine with my NAS box.
One critical difference between my NAS box and a hard drive hanging off an Airport Extreme is the disk format. Time Machine requires an HFS+ disk; my box is using something else. Time Machine actually deals with this cleverly by creating a disk-image file on the target drive, but that’s also the root of the problem: mounting this disk image over the network (even GigE) gets slower and slower as the file gets bigger. I had set up a very stripped-down backup profile (home directory only, no media files), but still, after a couple of weeks, it had gotten to 42 GB and took forever to mount. Eventually it took so long that Time Machine would stop waiting for it and give up.
So until I get a Time Capsule or something, I’m using my previous backup app, Synk. Even after that, it might be worth using Synk to back up my media files, which don’t need quite the obsessive hourly backups that Time Machine offers.
I am typing this post from my spanking new, and yet very old (in computer terms) Datadesk 101e keyboard. This keyboard is so old it has an ADB port instead of USB, so I need to use an adaptor to hook it up to my Mac.
I love it.
I used Datadesk keyboards for years, but when I bought my current computer, my old one was looking especially crusty, and I felt like it was time to enter the modern era. I’d read good things about the Matias Tactile Pro, and so I decided to get one of them. I was never entirely happy with it. Some combinations of keys and modifier keys were simply dead, making some of my preferred MS Word shortcuts impossible. Matias even addresses this issue, saying in short, “all keyboards have this problem.” (I never had that problem with the 101e.)
After a few years of service, my Matias keyboard was starting to misbehave, and it was looking appallingly crusty. So I decided to replace it with the keyboard I really wanted all along, another 101e.
Since no online retailer carries these keyboards anymore, I called Datadesk directly, and spoke with someone who’s apparently in a position of responsibility there. We had a long and interesting (if you’re a Mac nerd) conversation about the history of Apple computers. He tried to talk me out of ordering the 101e, since it doesn’t have USB. I told him I had an adaptor. He laughed, and found there were still about a dozen new-old stock 101es on hand. So he sold me one.
He also told me that the people at Datadesk have been kicking around the idea of updating the 101e for the modern age, but aren’t sure whether to update the electronics to USB and give it slightly updated cosmetics without changing the plastics (which he said would be pretty easy), or to undertake a more extensive physical makeover (which would be a bigger commitment). I think either one would be a viable option.
I’m a keyboard snob. I like keys that have a long stroke and solid action. Not many keyboards these days offer that. And frankly, I’m surprised that more people aren’t keyboard snobs. Until we get direct neural hookups, keyboards are going to remain the primary text-input device for many of us. We tap on them thousands of times a day, and even a tiny improvement, multiplied out over thousands of repetitions, adds up to a pretty big one. It’s a mystery to me that well-engineered aftermarket computer mice are as popular as they are, but not keyboards.
Although most keyboards sold today are cost-engineered disposable crap with lousy feel, there is clearly a market for keyboards with quality engineering. The Matias, despite my problems with it, is much better than most. There’s also the even more retro PC Keyboard, and the intimidating Das Keyboard.
Compared to the Tactile Pro, the 101e is much quieter, though still louder than most modern keyboards. It weighs much more: it stays where you set it on your desk. It’s bigger in every dimension. I don’t mind the fact that it takes up a little more desk real-estate, but it would be nice if the total height were a little lower; a rounded front edge on the space bar would also make it more comfortable to use. But I’m very happy with it. When you push down on a key, it goes straight down. With the Matias, sometimes the keys felt like they were trying to veer off to one side.
If you’re a snob about keyboards and don’t mind using a Griffin iMate, get one of the 11 remaining 101es. Or perhaps let Datadesk know that you’d be interested in getting an updated version of the 101e.
Numbers don’t lie (even if statistics do). My best score at keybr.com was about 48 WPM with my old keyboard; with the new one, it’s 64 WPM. And I don’t even touch-type.
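Back-of-the-envelope, that’s about a one-third speedup. The arithmetic, as a quick shell snippet (the numbers are just my scores above):

```shell
# Percentage improvement going from 48 WPM to 64 WPM
old=48; new=64
echo "$(( (new - old) * 100 / old ))% faster"   # prints "33% faster"
```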
According to a fellow old-school keyboardista, although there are other USB-ADB adaptors out there, they can cause problems, so you really want to use the Griffin iMate.
Gruber and Benjamin discuss old keyboards on an episode of The Talk Show, and make sidelong references to the 101e, though they don’t mention it by name.
NPR recently did a story on a kindred keyboard, the Unicomp, which carries on the old IBM Model M design.
The interesting thing about the MBA (heh) is that it is intended as an “outrigger” computer. While it could be barely self-sufficient, the idea seems to be that anyone owning one would have a bigger computer somewhere else. That’s a reasonable assumption and the outrigger market is a reasonable one to serve. But if that was Apple’s starting point, they’ve made some weird choices.
There is an emerging trend of cheap and cheerful devices that aren’t practical as fully functioning standalone computers, but are fine for web-surfing, media playback, and lightweight work. Things like the Nokia N810 or the Asus Eee. Apple seems to be borrowing the outrigger aspect of these devices without picking up on their other features (low-power CPU, small screen, limited keyboard, etc.) that make them less than workhorses, but easier to schlepp around and longer-running. The MBA is a more or less full-power serious work machine and fashion statement that isn’t quite self-sufficient but doesn’t quite embrace its second-computer status either.
It’s been widely speculated that Apple would, eventually, introduce something that would fit somewhere between a laptop and the iPhone. Like a tablet. It may be that the iPhone is Apple’s tablet, but the choices behind the MBA leave room at the low end of the market for something else. Some people are already filling that void by installing OS X on the Asus Eee. I don’t think the MBA is going to be it for a lot of people.
Rather than buying a new computer, I’m updating my old one: I installed Leopard yesterday.
Normally when I install a major upgrade, I do a “clean install”: reconstructing my old environment by manually importing old files and recreating preferences is admittedly laborious, but it gives me a chance to re-examine what’s on my hard drive and jettison stuff I never use. I cloned my boot drive to an external drive, and selected the erase-and-install option in the Leopard installer. After that finished, it offered to import my old setup from the external drive. For some reason, I chose this option, and regretted it: it faithfully imported every bit of cruft from my old system, some of which caused Leopard to lock up. Apart from that, I have to admit it did a sterling job; every jot and tittle was in place. It would be nice if I had more control over what got imported and what did not.
Tried again with the clean install, followed by manual copying of specific folders and files. I had a little trouble importing my old Mail folders, and discovered that I had to export my Address Book data (using Address Book running on a different computer) before I could import it into Leopard. And then I discovered one of those annoyances only a geek could love. For whatever reason, my short user name was now `adamjrice`. It had always been `adamrice` in the past, and this change was, of course, unacceptable. The path to my `$HOME` directory had changed to match. One new and appreciated feature in Leopard is that it’s actually easy to change this: right-click on your username in the Accounts prefpane sidebar, and the “advanced options” dialog lets you fix it. Nice. However, Leopard makes this change by creating a new `$HOME` directory populated with defaults, not by moving the old one, and it instantly, silently migrates you to the new directory. This causes weird and unwanted results. My advice: if you are going the clean-install route, check to make sure you are happy with your short user name before you do any customization. Fix it if need be, and log out and back in.
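For anyone repeating this, a quick Terminal check along these lines (my own ad-hoc sketch, nothing Leopard-specific) will confirm that the short name and home directory agree before you start customizing:

```shell
# Compare the short user name against the home directory path.
# A mismatch like short name "adamjrice" vs. /Users/adamrice is the
# symptom to catch before doing any customization.
short_name=$(id -un)
echo "Short user name: $short_name"
echo "Home directory:  $HOME"

case "$HOME" in
  */"$short_name") echo "OK: they match" ;;
  *)               echo "Mismatch: fix the short name first" ;;
esac
```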
Other than these breaking-in pains, so far I’m happy. My computer is noticeably faster (not just subjectively: apps open faster, and Second Life, a poky pig, ran at about 2x the framerate, making it almost tolerable), although this may have as much to do with blowing out some crufty haxies as anything else. Network throughput likewise seems to be faster, but I haven’t measured this.
QuickLook is probably worth the price of admission all by itself, especially if you can get plugins for the files you use the most. Last night, Gwen was trawling through a directory full of EPSs with meaningless names. Even though she’s still running Tiger, I mounted her drive, installed a QuickLook plugin for EPS, and was able to browse most (not all) of those files with previews in a couple of minutes. Big win. Coverflow in the Finder, which seems like a frill, is useful in the same way.
As others have mentioned, Spotlight has gone from sucking to not-sucking. I reiterate that fact simply because the transformation is so stark.
So far, I’m calling this a success.