Adam Rice

My life and the world around me


Smartphones, image processing, and spectator sports

I’ve done a couple of translations recently that focus on wireless communications, and specifically mention providing plentiful bandwidth to crowds of people at stadiums. Stadiums? That’s weirdly specific. Why stadiums in particular?

My hunch is that this is an oblique reference to the 2020 Tokyo Olympics. OK, I can get that. 50,000 people all using their smartphones in a stadium will place incredible demands on bandwidth.

Smartphones are already astonishingly capable. They shoot HD video. Today’s iPhone has something like 85 times the processing power of the original. I can only wonder what they’ll be like by the time the Tokyo Olympics roll around.

So what would a stadium full of people with advanced smartphones be doing? Probably recording the action on the field. And with all that bandwidth that will supposedly be coming online by then, perhaps they’ll be live-streaming it to a service like Periscope. It’s not hard to imagine that YouTube will have a lot of live-streaming by then as well.

This by itself could pull the rug out from underneath traditional broadcasters. NBC has already paid $1.45 billion for the rights to broadcast the 2020 Olympics in the USA. But I think it could be much, much worse for them.

In addition to more powerful smartphones, we’ve also seen amazing image-processing techniques, including the ability to remove obstructions and reflections from images, to correct for image shakiness, and even to smooth out hyperlapse videos. YouTube will stabilize videos you upload if you ask nicely, and it’s very effective. And that’s all happening already.

So I’m wondering what could be done in five more years with ten or so smartphones distributed around a stadium, recording the action, and streaming video back to a central server. The server could generate a 3D representation of the scene, use the videos to texture-map the 3D structure, and let the viewer put their viewpoint anywhere they wanted. Some additional back-end intelligence could move the viewpoint so that it follows the ball, swings around obstructing players, etc.
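
To make that a little more concrete, here’s a minimal sketch of the geometry that would sit at the core of such a server, using Python and OpenCV: take two frames of the same play shot from different seats, match features between them, recover the relative pose of the two cameras, and triangulate a sparse 3D point cloud. The filenames and the rough camera intrinsics are placeholder assumptions, and a real system would calibrate each phone and fuse far more than two views.

```python
import cv2
import numpy as np

# Two frames of the same play, shot from different seats (hypothetical filenames).
img1 = cv2.imread("phone_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("phone_b.jpg", cv2.IMREAD_GRAYSCALE)

# Find and match features between the two views.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Guess at the camera intrinsics: focal length ~0.9x the image width,
# principal point at the center. A real system would calibrate each phone.
h, w = img1.shape
K = np.array([[0.9 * w, 0, w / 2],
              [0, 0.9 * w, h / 2],
              [0, 0, 1]])

# Recover the relative pose of the two phones and triangulate a sparse cloud.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (points4d[:3] / points4d[3]).T  # N x 3 points, up to an unknown scale

print(f"Triangulated {len(cloud)} 3D points from {len(matches)} matches")
```

Turning dozens of those sparse clouds into a live, textured model that a viewer can fly around is obviously a much bigger job, but the building blocks aren’t exotic.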

So this could be vastly more valuable than NBC’s crap story-inventing coverage. It might be done live or nearly live. It would be done by people using cheap personal technology and public infrastructure. The people feeding video to the server might not even be aware that their video is contributing to the synthesized overall scene (read your terms of service carefully!).

If that happened, the only thing you’d be missing would be the color commentary and the tape delay. Smartphones could kill coverage of sporting events.

Of course, the Olympics and other spectator sports are big businesses and won’t go down without a fight. At the London Olympics, a special squad of “brand police” [had] the power to force pubs to take down signs advertising “watch the games on our TV,” to sticker over the brand-names of products at games venues where those products were made by companies other than the games’ sponsors, to send takedown notices to YouTube and Facebook if attendees at the games have the audacity to post their personal images for their friends to see, and more. What’s more, these rules are not merely civil laws, but criminal ones, so violating the sanctity of an Olympic sponsor could end up with prison time for Londoners.

Japan could do much the same in 2020. But if these videos were being served by a company that doesn’t do business in Japan, the show could go on. More extreme measures could be taken: blocking access to certain IP addresses, deep-packet inspection, and so on. Smartphones could use VPNs in return. It could get interesting.

Old-school information management

Applied Secretarial Practice

I recently picked up the book “Applied Secretarial Practice,” published in 1934 by the Gregg Publishing Company (the same Gregg of Gregg shorthand). It’s fascinating in so many ways—even the little ways that language has changed. Many compound words were still separate, e.g. “business man.” The verb “to emphasize” seemingly did not exist; the idea is always expressed as “to secure emphasis.” And the masculine is rigorously used as the generic third-person pronoun, even when referring to secretaries, who were universally women at that time.

There’s a whole chapter on office equipment, most of which is barely recognizable today, of course. The dial telephone was a fairly recent innovation at that time, and its use is explained in the book.

But what really strikes me is that, out of 44 chapters, 8 are on filing. You wouldn’t think that filing would be such a big deal (well, I wouldn’t have thought it). You would be wrong. What with cross-references, pull cards, rules for alphabetizing (or “alphabeting” in this book) in every ambiguous situation, different methods of collation, transfer filing, and so on, there’s clearly a lot to it.

It got me thinking about how, even though I have pretty rigorous file nomenclature and folder hierarchies on my computer, I’m not organizing my files with anything like the level of meticulous care that secretaries back then practiced as a matter of course. For the most part, if I want to find something on my computer (or anywhere on the Internet), I can search for it.

And that reminded me of a post by Mark Pilgrim from years ago, Million dollar markup (linking to the Wayback Machine post, because the author has completely erased his web presence). His general point was that when you control your own information, you can use “million dollar markup” (essentially metadata and structure) to make that information easier to search or manipulate; a company like Google has “million dollar search” to deal with messy, disorganized data that is outside their control. Back in 1934, people had no choice but to apply million-dollar markup to their information if they wanted to have any hope of finding it. The amount of overhead in making a piece of information retrievable in the future, and retrieving it, is eye-opening.

Consider that to send out a single piece of business correspondence, a secretary would take dictation from her boss, type up the letter (with a carbon), perhaps have her boss sign it, send the original down to the mailroom, and file the copy (along with any correspondence that letter was responding to). It makes me wonder what would have been considered a reasonable level of productivity in 1934. I’ve already sent 17 pieces of e-mail today. And written this blog post. And done no extra work to ensure that any of it will be retrievable in the future, beyond making sure that I have good backups.

Economics of software and website subscriptions

It’s a truism that people won’t pay for online media, except for porn. That’s a little unfair. I’m one of many people who have long had a pro account on flickr, which costs $25/year. Despite flickr’s ups and downs, I’ve always been happy to pay that. It also set the bar for what I think of as a reasonable amount to pay for a digital subscription: I give them $25, they host all photos that I want to upload, at full resolution. Back when people still used Tribe.net, they offered “gold star” accounts for $6/month, which removed the ads and gave you access to a few minor perks, but mostly it was a way to support the website. The value-for-money equation there wasn’t quite as good as with flickr, in my opinion, but I did have a gold-star account for a while.

Looking around at the online services I use, I see there are a few that are offering some variation on premium accounts. Instapaper offers subscriptions at $12/year, or about half of my flickr benchmark. The value-for-money equation there isn’t great—the only benefit I would get is the ability to search saved articles—but it’s a service I use constantly, and it’s worth supporting. Pinboard (which has a modest fee just to join in the first place) is a bookmarking service that offers an upgraded account for $25/year; here, the benefit is in archiving copies of web pages that you bookmark. I can see how this would be extremely valuable for some people, but it’s not a high priority for me. I use a grocery-list app on my phone called AnyList that offers a premium account for $10/year; again, the free app is good enough for me, and the benefits of subscribing don’t seem all that relevant.

In terms of value for money, none of these feel like great deals to me, perhaps because the free versions are as good as they are, or because the premium versions don’t offer that much more, or some combination of the two. But I use and appreciate all these services, and maybe that’s reason enough that I should subscribe.

At the other end of the scale, there’s Adobe, which has created quite a lot of resentment by converting its Creative Suite to a subscription model, for the low, low price of $50/month. This offends designers on a primal level. It’s like carpenters being required to rent their hammers and saws. The thing is that $50/month is a good deal compared to their old packaged product pricing, assuming that you would upgrade roughly every two years. The problem is that the economic incentives are completely upside down.

Once upon a time, QuarkXPress was the only game in town for page layout, and then Adobe InDesign came along and ate their lunch. Quark thought they had no competition, and the product stagnated. Now Adobe Creative Cloud is pretty much the only game in town for vector drawing, photo manipulation, and page layout.

With packaged software, the software company needs to offer updates that are meaningful improvements in order to get people to keep buying them. Quark was slow about doing that, which is a big part of the reason that people jumped ship. With the subscription model, Adobe uses the subscription as a ransom: when customers stop subscribing, they lose the ability to even access their existing files. Between the ransom effect and the lack of meaningful competition, Adobe has no short-term incentive to keep improving their product. In the long term, a stagnant product and unhappy customers will eventually encourage new market entrants, but big American companies are not noted for their long-term perspective.

I think that’s the real difference here, both psychologically and economically: I can choose to subscribe to those smaller services, or choose not to. They all have free versions that are pretty good, and if any of them wound up disappearing, they all have alternatives I could move to. With Adobe, there are no alternatives, and once you’re in, the cost of leaving is very high.

Word processors and file formats

I’ve always been interested in file formats from the perspective of long-term access to information. These have been interesting times.

To much gnashing of teeth, Apple recently rolled out an update to its iWork suite—Pages, Numbers, and Keynote, which are its alternatives to the MS Office trinity of Word, Excel, and PowerPoint. The update on the Mac side seems to have been driven by the web and iPad versions, not only in the features (or lack thereof) but in the new file format, which is completely unrelated to the old one. The new version can import files from the old one, but it’s definitely an import process, and complex documents will break in the new apps.

The file format for all the new iWork apps, Pages included, is based on Google’s protocol buffers. The documentation for protocol buffers states:

However, protocol buffers are not always a better solution than XML – for instance, protocol buffers would not be a good way to model a text-based document with markup (e.g. HTML), since you cannot easily interleave structure with text. In addition, XML is human-readable and human-editable; protocol buffers, at least in their native format, are not. XML is also – to some extent – self-describing. A protocol buffer is only meaningful if you have the message definition (the .proto file).

Guess what we have here. Like I said, this has been driven by the iPad and web versions. Apple is assuming that you’re going to want to sync to iCloud, and they chose a file format optimized for that use case, rather than for, say, compatibility or human-readability. My use case is totally different. I’ve had clients demand that I not store their work in the cloud.

What’s interesting is that this bears some philosophical similarities to the Word file format, whose awfulness is the stuff of legend. Awful, but perhaps not awful for the sake of being awful. From Joel Spolsky:

The first thing to understand is that the binary file formats were designed with very different design goals than, say, HTML.

They were designed to be fast on very old computers.

They were designed to use libraries.

They were not designed with interoperability in mind.

New computers are not old, obviously, but running a full-featured word processor in a JavaScript interpreter inside your web browser is the next best thing; transferring your data over a wireless network is probably the modern equivalent of a slow hard drive in terms of speed.

There is a perfectly good public file format for documents out there: Rich Text Format, or RTF. But curiously, Apple’s RTF parser doesn’t do as good a job with complex documents as its Word parser—if you create a complex document in Word and save it as both .rtf and .doc, Pages or Preview will show the .doc version with better fidelity. Which makes a bit of a joke out of having a “standard” file format. Since I care about file formats and future-proofing, I saved my work in RTF for a while, until I figured out that it wasn’t as well supported.

What about something more basic than RTF? Plain text is, well, too plain: I need to insert commentary, tables, that sort of thing. Writing HTML by hand is too much of a PITA, although it should have excellent future-proofing.

What about Markdown? I like Markdown a lot. I’m actually typing in it right now. It doesn’t take long before it becomes second nature. Having been messing around with HTML for a long time, I prefer the idea of putting the structure of my document into the text rather than the appearance.

But Markdown by itself isn’t good enough for paying work. It has been extended in various ways to allow for footnotes, commentary, tables, etc. I respect the effort to implement all the features that a well-rounded word processor might support through plain, human-readable text, but at some point it just gets to be too much trouble. Markdown has two main benefits: it is highly portable and fast to type—actually faster than messing around with formatting features in a word processor. These extensions are still highly portable, but they are slow to type—slower than invoking the equivalent functions in a typical WYSIWYG word processor. The extensions are also more limited: the table markup doesn’t accommodate some of the insane tables that I need to deal with, and doesn’t include any mechanism for specifying column widths. Footnotes don’t let me specify whether they’re footnotes or endnotes (indeed, Markdown is really oriented toward flowed onscreen documents, where the distinction between footnotes and endnotes is meaningless, rather than paged documents).

CriticMarkup, the extension to Markdown that allows commentary, starts looking a little ungainly. There’s a bigger philosophical problem with it, though. I could imagine using Markdown internally for my own work and exporting to Word format (that’s easy enough thanks to Pandoc), but in order to use CriticMarkup, I’d need to convince my clients to get on board, and I don’t think that’s going to happen.
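
For what it’s worth, the Markdown-to-Word leg really is painless. Here’s a sketch of driving pandoc from a Python script, with hypothetical filenames (pandoc has to be installed and on your PATH):

```python
import subprocess

# Markdown draft -> Word document for the client (hypothetical filenames).
subprocess.run(["pandoc", "draft.md", "-o", "draft.docx"], check=True)

# And back again, to see how much survives the round trip.
subprocess.run(["pandoc", "draft.docx", "-o", "roundtrip.md"], check=True)
```

What I can’t script is getting clients to send their comments back as CriticMarkup instead of tracked changes.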

I can imagine a word processor that used some kind of super-markdown as a file format, let the user type in Markdown when convenient, but added WYSIWYG tools for those parts of a document that are too much trouble to type by hand. But I’m not holding my breath. Maybe I should learn LaTeX.

PDFs, transparencies, and PowerPoint

This post is for anyone who gets frustrated trying to place vector art in a PowerPoint deck.

Gwen had a project to produce a set of ppt templates, using vector art provided by the client. When we copied from Illustrator and pasted into PowerPoint, the art looked fine, but saving and reopening the file showed that the vector art had been rasterized—badly.

We tried a few variations on this. Saving as PDF and placing the PDF had the same result. Saving as EMF and placing that did keep it as vector artwork, but the graphic wound up being altered in the process.

Other graphics created in Illustrator could be pasted or placed just fine, so there had to be something about this particular graphic. Although it was relatively simple, it included a couple of potential gotchas: it had one element with transparency set on it, and another with a compound path.

It was pretty easy to release the compound path and reconstruct around it—a big O with the center knocked out to expose the background. I’m pretty sure that wasn’t the problem, but it wound up helping anyhow, as I’ll discuss.

Dealing with the transparency was a little more of an issue: a transparent chevron floated over a couple of different solid colors, including the big O. To fix this, I used Pathfinder’s Divide tool to segment that chevron into separate pieces for each color it was floating over, and then set solid colors for each segment rather than relying on changing the opacity. Experimentation showed that the transparency was what triggered the rasterization.

Reproducing this process showed some artifacts in PowerPoint if the compound path was still present, so the compound path wasn’t the problem, but it was a problem. Admittedly, this was only feasible because the image was simple, and the transparent element only covered three solid color fields, with no gradients or pattern fills—it would still be possible with those complications, but it would take a lot longer to approximate the original appearance.

Update: If I were better at Illustrator, I would have realized that the “Flatten Transparency…” command does exactly this, in one step. That would be the way to go.

This experiment was performed using two Macs, with PowerPoint 2011 and both the CS4 and Creative Cloud versions of Illustrator.

Phone report

Gwen and I decided to upgrade to the new iPhone 5, and along with that, I decided to switch carriers to Verizon. We’d previously been with AT&T, and Verizon was the one service that neither one of us had ever tried.

AT&T has notoriously bad service in San Francisco and New York from what I understand, but I had never had any trouble with them in Austin—except when there’s a big event in town that brings an influx of tens of thousands of visitors (and they’ve actually gotten pretty good about dealing with that). They do have lousy service out in the sticks—when I was riding the Southern Tier, I went a couple of days at a time without a signal. Verizon has better coverage in remote areas, including the site where Flipside is held, and now that I’m on the LLC, it will be more important for people to be able to reach me easily out there.

But so far, Verizon in Austin is not so great. I had no signal at all when I was inside Breed & Co on 29th St the other day. And Gwen had no data signal at Central Market on 38th St. And sound quality on voice calls seems to be worse than AT&T’s (this could be the phone itself, but I suspect it’s a voice codec issue). Usually, when I am getting a signal, it’s with LTE data, which is very fast. So there’s that.

And while I always felt that AT&T regarded me as an adversary, Verizon seems to regard me as a mark, which is even more galling than the poorer coverage. Immediately after signing up, I started getting promotional text-message spam from them. Apparently this can be disabled if you do the electronic equivalent of going into a sub-basement and shoving aside a filing cabinet marked “beware of the panther.” We also have those ARPU-enhancing “to leave a callback number…” messages tacked onto our outgoing voicemail greetings; some research showed that there are ways to disable this that vary depending on what state you live in (!), but none of them have worked for me so far. I’ve put in a help request. And every time I log into their website (mostly to put in help requests to deal with other annoying aspects of their service), they pop up some damn promotion that’s irrelevant to me, like “get another line!”

Out of all the mobile carriers, the only one that I liked dealing with was T-Mobile—but they’ve got the poorest coverage in Austin (I had to walk 2 blocks away from Gwen’s old place to get a signal), or anywhere else for that matter. As a friend who worked in the mobile-phone industry for years put it: “They all suck.”

No complaints about the phones. I haven’t really tried out some of the new hardware features, like Bluetooth 4.0. The processor is much faster. The screen is noticeably better than on the iPhone 4, in addition to being bigger. People bitch about Apple’s Maps app. In Austin, I haven’t had any trouble with it, and in any case, Maps+ is available to give you that Google Maps feeling (in Iceland, I found that neither Apple Maps nor Google Maps had a level of granularity down to the street address—the best they could do was find the street).

Building the Flipside Ticket Exchange for 2012

I’m on the admin team for the Flipside Ticket Exchange, better known as Bob’s List, also known as !Bob’s List, also known as Not Not Bob’s List since !Bob moved on to better things. This year it is running on a WordPress install. I configured that setup, and am writing down what I did so I don’t have to try to remember it.

The universal design critic

In just the past half day, a lot has been said about Steve Jobs. I’m not sure I have anything unique to add, but I’ve been using Macs continuously since the first one I owned, which was one of the original 128K models, so I can’t let his passing go without comment.

Many of the people praising Steve Jobs have focused on the way that he and Apple have provided them with the tools to do their job, the way they have demystified technology and made it elegant and fun. And I agree with all that. But Steve Jobs and Apple have had a more subtle and deeper effect on us than that.

One of Jobs’ greatest talents was as a critic, particularly of design. He didn’t design Apple’s hardware or software, but he had strong, detailed opinions on all of it, which he would forcefully deliver when anything failed to live up to his very high expectations. So it’s no surprise that Apple has delivered consistently well-designed products, but they’ve also delivered design-oriented products. The very first Mac had multiple fonts and typographic controls, and could mix pictures with text. Even the screen resolution of 72 dpi was chosen to parallel the point-size system.

We take these sorts of things for granted today. They would have happened eventually, but they happened when they did because of Steve Jobs and Apple.

Today, we know what a font is, and many of us have opinions on which ones are better than others. We look more critically at industrial design and engineering. There are even movies and shorts about fonts and industrial design. By putting exemplars of good design into the marketplace and making them accessible to regular people, and by giving his competition a higher mark to aim for, Steve Jobs has transmitted some small part of his critical acuity and insistence on quality to the rest of us.

When Jobs resigned as CEO about 6 weeks ago, John Gruber wrote:

The company is a fractal design. Simplicity, elegance, beauty, cleverness, humility. Directness. Truth. Zoom out enough and you can see that the same things that define Apple’s products apply to Apple as a whole. The company itself is Apple-like…Jobs’s greatest creation isn’t any Apple product. It is Apple itself.

Zoom out farther.

Getting the message

New technology creates new social phenomena, etiquette problems being one of them. Caller ID is not a new technology, but at some point in the past few years, its ubiquity—especially with cellphones, which have better text displays than landline phones—has created one of these etiquette problems.

Traditionally (where by “traditionally,” I mean “ten years ago”), when Alice calls Bob and gets Bob’s voicemail, Alice leaves a message at least saying “it’s Alice, call me back.” But over the last few years, we’ve seen a different approach. Charlie calls Bob, gets Bob’s voicemail, and just hangs up. Charlie knows that Bob has caller ID and will be able to see that Charlie called—Charlie figures that’s all the information Bob needs to return the call.

Bob may have the same approach as Charlie, in which case this is fine. But Bob may figure that if Charlie had anything that needed a response, then Charlie would have left a message. Bob doesn’t return the call and eventually hears again from Charlie, who indignantly asks “why didn’t you call me back?” There’s a mismatch in expectations. Neither one is right or wrong, necessarily, but the mismatch can create friction.

I’m reminded of the distinction between ask culture and guess culture, although in this context, it might be more accurate to say it’s a difference between tell culture and guess culture.

Or perhaps it’s just a matter of etiquette that we as a society haven’t quite sorted out yet. I was talking about this at dinner with some friends who are all around my age—we all agreed that people should leave messages. There might be an age component to this.

iPhone, Google Apps for Your Domain, and Google My Maps

I’m documenting this publicly just in case anyone ever runs across the same problem I’ve been having.

I occasionally plot out routes on Google Maps and save them under My Maps. Curiously, there is no convenient way to get My Maps onto my iPhone. This leads to the slightly ridiculous situation where I would need to print out maps to bring with me.

Google Earth for the iPhone purportedly will access My Maps, but I couldn’t get that to work, because I sign into Google through Google Apps for Your Domain; there’s a web-based sign-in inside Google Earth, and when I attempted to sign in through the GAfYD interface (launched through the Google Search app, which otherwise works great), it would throw an error.

I have discovered that if I log into Google Earth through the non-GAfYD interface, using my full e-mail address and the usual password, it works.

From what I understand, there’s only a limited ability to show personal maps in the Maps app on the iPhone, and this can only be done via some hackery. Google Earth seems like the best option for accessing personal maps.
