Smartphones, image processing, and spectator sports

I’ve done a couple of translations recently that focus on wireless communications, and specifically mention providing plentiful bandwidth to crowds of people at stadiums. Stadiums? That’s weirdly specific. Why stadiums in particular?

My hunch is that this is an oblique reference to the 2020 Tokyo Olympics. OK, I can get that. 50,000 people all using their smartphones in a stadium will place incredible demands on bandwidth.

Smartphones are already astonishingly capable. They shoot HD video. Today’s iPhone has something like 85 times the processing power of the original. I can only wonder what they’ll be like by the time the Tokyo Olympics roll around.

So what would a stadium full of people with advanced smartphones be doing? Probably recording the action on the field. And with all that bandwidth that will supposedly be coming online by then, perhaps they’ll be live-streaming it to a service like Periscope. It’s not hard to imagine that YouTube will have a lot of live-streaming by then as well.

This by itself could pull the rug out from underneath traditional broadcasters. NBC has already paid $1.45 billion for the rights to broadcast the 2020 Olympics in the USA. But I think it could be much, much worse for them.

In addition to more powerful smartphones, we’ve also seen amazing image-processing techniques: removing obstructions and reflections from images, correcting for camera shake, and even smoothing out hyperlapse videos. YouTube will stabilize videos you upload if you ask nicely, and it’s very effective. And that’s all happening already.
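As a toy illustration of the stabilization idea (not YouTube’s actual algorithm, which handles rotation, rolling shutter, and much more): if camera shake were nothing but whole-pixel translation, you could estimate each frame’s offset by phase correlation and roll it back into alignment. A minimal sketch, assuming grayscale frames and wrap-around integer shifts:

```python
import numpy as np

def estimate_correction(ref, frame):
    """Phase correlation: find the shift that re-aligns frame with ref."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-12            # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = list(np.unravel_index(np.argmax(corr), corr.shape))
    for i, n in enumerate(corr.shape):        # wrap large indices to signed shifts
        if peak[i] > n // 2:
            peak[i] -= n
    return tuple(peak)

def stabilize(frames):
    """Align every frame to the first one (integer, wrap-around shifts only)."""
    ref = frames[0]
    out = [ref]
    for f in frames[1:]:
        out.append(np.roll(f, estimate_correction(ref, f), axis=(0, 1)))
    return out
```

Real stabilizers estimate sub-pixel, rotational, and perspective motion and smooth the camera path over time, but the estimate-then-compensate loop is the same shape.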

So I’m wondering what could be done in five more years with ten or so smartphones distributed around a stadium, recording the action and streaming video back to a central server. The server could generate a 3D representation of the scene, use the videos to texture-map the 3D structure, and let viewers place their viewpoint anywhere they like. Some additional back-end intelligence could move the viewpoint so that it follows the ball, swings around obstructing players, and so on.
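The geometric core of that server-side reconstruction is multi-view triangulation: given the same point seen by two cameras with known poses, recover its 3D position. Here is a minimal linear (DLT) sketch, assuming calibrated 3×4 projection matrices and already-matched image points; the genuinely hard parts (calibrating ad-hoc phone cameras, matching points across views, texturing) are waved away:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: least-squares 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) image coordinates of the same scene point in each view.
    """
    # Each observation u = (P[0]@X)/(P[2]@X) yields one homogeneous equation.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous solution
    X = Vt[-1]
    return X[:3] / X[3]           # de-homogenize
```

With matches from many phones you would stack more rows into `A`; a real system would also need robust outlier rejection, since crowd-sourced streams would be full of mismatches and jitter.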

So this could be vastly more valuable than NBC’s crap story-inventing coverage. It might be done live or nearly live. It would be done by people using cheap personal technology and public infrastructure. The people feeding video to the server might not even be aware that their video is contributing to the synthesized overall scene (read your terms of service carefully!).

If that happened, the only thing you’d be missing would be the color commentary and the tape delay. Smartphones could kill coverage of sporting events.

Of course, the Olympics and other spectator sports are big businesses and won’t go down without a fight. At the London Olympics, a special squad of “brand police” had the power to force pubs to take down signs advertising “watch the games on our TV,” to sticker over the brand names of products at games venues when those products were made by companies other than the games’ sponsors, to send takedown notices to YouTube and Facebook if attendees at the games had the audacity to post their personal images for their friends to see, and more. What’s more, these rules were not merely civil laws but criminal ones, so violating the sanctity of an Olympic sponsor could end with prison time for Londoners.

Japan could do much the same in 2020. But if these videos were being served by a company that doesn’t do business in Japan, the show could go on. More extreme measures could follow: blocking certain IP addresses, deep-packet inspection, and so on. Smartphones could answer with VPNs. It could get interesting.
