Adam Rice

My life and the world around me

Tag: crowdsourcing


Brent Simmons writes about gamification, saying

you could look at this trend and say, “As software gets simpler, it gets dumbed-down — even toddlers can use iPads. Users are now on the mental level of children, and we should design accordingly. What do children like? Games.”

I’ve been thinking about gamification a little for a while now, and I think it’s actually more sinister than that. Look at a website like Stack Overflow. They’ve set it up with a treadmill of meaningless rewards to keep you engaged in the site, asking and answering questions. Beyond the increased ad impressions (which is cynical enough; that’s the sole point of a game like Farmville, which has no rewards that I recognize as such), your labor makes the site more valuable: a good “answer site” like Ask Metafilter (which is a cool community, not an exploitative business play) gets very high Google rankings. Stack Exchange clearly wants to cash in on that action: strong Google rankings for their own site lead to more pageviews, and the circle of life continues. For your efforts you get a gold star. A virtual gold star. But they’ve figured out that points and achievements activate some hindbrain reward center, and they cynically play off of it.

In my own vocation of translation, there’s been an increasing trend toward uncompensated crowdsourcing (another hot-button word) as an alternative to professional work, and I fully expect to see gamification tactics applied to that as well before long.

Google Crowdsourcing Machine Translation

Screenshot of Google’s translation-crowdsourcing interface

I clicked through a link from a gadget site to a machine-translated press release for a new car-stereo head unit. I noticed that when my cursor hovered over a block of text, one of those floating mock-windows that are so popular in Web 2.0 appeared. It permits readers to enter their own translation for that sentence or chunk of text.

This is interesting, and something I hadn’t noticed before. It raises all kinds of questions. Most obviously, how do they vet these reader-submitted translations?

But it’s also fascinating as a machine-translation paradigm. There are two general approaches to MT. One is basically lexical and grammatical analysis and substitution: diagramming sentences, dictionary lookup, and so on. The other is “corpus-based”: keeping a huge body of phrase pairs, where one member of a pair can be substituted for the other. And there is a hybrid of the two, which uses the corpus-based approach but adds some smarts that let a given phrase serve as a pattern for novel phrases not found in the corpus (this is also pretty much how computer-assisted translation, or CAT, works).

I wonder how these crowdsourced submissions work back into the MT backend—whether they’re used strictly in a corpus-based translation layer, or whether they get extrapolated into patterns. I’m skeptical that they’re getting a significant number of submissions through this system, but if they did, the range of writing styles, language ability, and so on feeding into the system would seem to make it incredibly complicated. Perhaps a huge improvement over older MT systems…but perhaps a huge clusterfuck of unharmonized spammy nonsense.
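To make the corpus-based-versus-pattern distinction concrete, here is a toy sketch in Python. The phrase pairs and the `{x}` template syntax are invented for illustration (not how Google’s system actually works): exact corpus hits are straight substitutions, and a pattern entry stands in for the kind of extrapolation a hybrid system might do.

```python
# Toy corpus of exact phrase pairs (invented English→Spanish examples).
corpus = {
    "the unit supports bluetooth": "el equipo es compatible con bluetooth",
    "press the power button": "pulse el botón de encendido",
}

# Hypothetical pattern entries: a template with one variable slot,
# illustrating how a phrase can generalize to novel input.
patterns = [
    ("press the {x} button", "pulse el botón de {x}"),
]

def translate(phrase):
    """Return a translation via exact lookup, then pattern match, else None."""
    phrase = phrase.lower().strip()
    # 1. Exact corpus hit: straight substitution.
    if phrase in corpus:
        return corpus[phrase]
    # 2. Pattern fallback: match the fixed parts, carry the slot through.
    for src, tgt in patterns:
        prefix, _, suffix = src.partition("{x}")
        if phrase.startswith(prefix) and phrase.endswith(suffix):
            slot = phrase[len(prefix):len(phrase) - len(suffix)]
            return tgt.replace("{x}", slot)
    return None  # no coverage; a real system would back off to word-level MT
```

The slot here is carried through untranslated, which is exactly the kind of shortcut that makes real hybrid systems so much harder than the toy suggests.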

© 2017 Adam Rice
