A lightweight documentation system

Background

Not long after I took on my current role at AAR LLC, I inherited the task of producing the “binders” that the organization prints up every event cycle–basically, operations manuals for the event.

There was a fair amount of overlap between these binders, and I recognized that keeping that overlapping content in sync would become a problem. I studied documentation technologies and techniques, and learned that indeed, this is considered a Bad Thing, and that “single sourcing” is the solution–this would require that the binders be refactored into their constituent chapters, which could be edited individually, and compiled into complete documents on demand.

The standard technology for this is DITA, but that involves a lot of overhead. It would be hard for me to maintain by myself, and impossible to hand off to anyone else. What I’ve come up with instead is still a bit of a work in progress, and it does have a tech hurdle to overcome–it involves using the command line–but it should be a lot more approachable.

The following may seem like a lot of work. It’s predicated on the idea that it will pay off by solving a few problems:

  • You are maintaining several big documents that have overlapping content
  • You want to be able to publish those documents in more than one format (web, print, ebook)
  • You want to be able to update your materials easily, quickly, and accurately.

The following is Mac-oriented because that’s what I know.

Installation

Install Homebrew

Homebrew is a “package manager” for macOS. If you’ve never used the command-line shell before, or have never installed shell programs, this is your first step. Think of it as an App Store for shell programs. It makes installing and updating other shell apps much easier.

To install, open the Terminal app and paste in

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Important: Don’t paste in shell commands you find on the Internet unless you know what you’re doing or you really trust me. But this is exactly what the nice folks at Homebrew will tell you to do.

Install Pandoc

Pandoc is a swiss-army knife tool for converting text documents from one form to another. In the Terminal app, paste in

brew install pandoc

Homebrew will chew on that for a while and finish.

Install GPP

GPP is a “generic preprocessor,” which means that it substitutes one thing for another in your files. In the Terminal app, paste in

brew install gpp

Again, Homebrew will chew on that for a while and finish.

Learning

Learn some shell commands

You’ll at least need to learn the cd and ls commands.

This looks like a pretty good introductory text.
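
A minimal Terminal session, just to show what those two commands look like (the folder names here are placeholders for your own):

cd ~/Documents/binders    # change into the project folder
ls                        # list the files and folders it contains
cd sources                # move down into the sources folder
cd ..                     # move back up one level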

Learn Markdown

Markdown was created by John Gruber to be a lightweight markup language–a way to write human-readable text that can be converted to HTML, the language of web pages. If you don’t already know the rudiments of HTML, the key thing to remember about it is that it describes the structure of a document, not its appearance. So you don’t say “I want this line to be in 18-pt Helvetica Bold,” you say “I want this line to be a top-level heading.” How it looks can be decided later.

Since then, others have taken that idea and run with it. The author of Pandoc, John MacFarlane, built Pandoc to understand an expanded Markdown syntax that adds a bunch of useful features, such as tables, definition lists, etc. The most basic elements of Markdown are really easy to learn; it has a few less-intuitive expressions, but even those are pretty easy to master, and there are cheat-sheets all over the Internet.
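
A few lines of Markdown, just to show the flavor (the content is made up):

# Radio channel assignments

Channels are assigned by *department*, not by individual.

- Channel 1: Gate
- Channel 2: Leads
- Channel 3: **Emergencies only**

The # marks a top-level heading, single asterisks mark emphasis, double asterisks mark strong emphasis, and the hyphens produce a bulleted list.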

Markdown is plain text, which means you can write it in anything that lets you produce text, but if you do your writing in MS Word (aside: please don’t), you need to make sure to save as a .txt file, not a .doc or .docx file. There are a number of editors specifically designed for Markdown that will show a pane of rendered text side-by-side with what you’re typing; there’s even a perfectly competent online editor called Dillinger that you can use.

I’ve gotten to the point where I do all my writing in Markdown, except when required to use some other format for my work. There are a lot of interesting writing apps that cater to it, writing in it is faster, and the files are smaller and more portable.

Organization

Refactor files and mark them up

Getting your files set up correctly is going to be more work than any other part of this. You’ll need to identify areas of overlap, break those out into standalone documents, decide on the best version of those (assuming they’re out of sync), and break up the rest of the monolithic documents into logical chunks as well. I refer to the broken-up documents as “component files.”

Give your files long, descriptive names. For redundancy, I also identify the parent documents in braces right in the filename, eg radio_channel_assignments_{leads}_{gate}.md. Using underscores instead of spaces makes things a little easier when working in the shell. Using md for the dot-extension lets some programs know that this is a Markdown file, but you could also use txt.

Then you’re going to mark these up in Markdown. If your files already have formatting in MS Word or equivalent, you’re going to lose all that, and you’ll need to make some editorial decisions about how you want to represent the old formatting (remember: structure, not appearance). Again, this will be a fair bit of work, but you’ll only need to do it once, and it will pay off.

Organize all these component files in a single directory. I call mine sources.

Create Variables

This is optional, but if you know that there are certain bits of information that will change regularly, especially bits that appear repeatedly throughout your documents, save yourself the trouble of finding and replacing them. Instead, go through your component files and insert placeholders. Use nomenclature that will be obvious to anyone looking at it, like THE_YEAR or FLAVOR_OF_THE_MONTH. You don’t need to use all caps, but that does make the placeholders more obvious. You cannot use spaces, so use underscores, hyphens, or camelCasing.

Now, create a document called variables.txt. Its contents should be something like this:

#define THE_YEAR 2018
#define FLAVOR_OF_THE_MONTH durian
…

And so on. Each of these lines is a command that GPP will interpret, replacing every occurrence of the first term with the second. This lets you make all those predictable changes in one place. Save this in your sources directory.

You can get into stylistic problems if you begin a sentence with a placeholder that gets substituted with an uncapitalized replacement. There may be a good solution, but I haven’t figured it out. You should be able to write around this in your component docs.
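
To illustrate (the content is made up), a line in a component file might read:

The gates open at noon on the first day of the THE_YEAR event.

and, because variables.txt is included at the top of each BOM (see the next section), the compiled output will read:

The gates open at noon on the first day of the 2018 event.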

Create BOMs

In order to rebuild your original monolithic documents from these pieces, you’ll want to create what I call a bill of materials (BOM) for each target document. This defines what the constituent component files are, and when you run GPP, the BOM tells GPP to assemble its output file from those component files.

I like to keep each BOM in a separate directory that’s at the same level as my sources directory (this also gives me a separate directory to contain my output files), so my directory tree looks like this:

My Project
     gate
        gate-bom.txt
     leads
        leads-bom.txt
     sources
        variables.txt
        radio_channel_assignments_{leads}_{gate}.md
        …

The contents of each BOM file will look something like this:

#include ../sources/variables.txt
#include ../sources/radio_channel_assignments_{leads}_{gate}.md
…

Because the BOM file is nested in a directory adjacent to the sources directory, you need to “surface” and then “dive down” into the adjacent directory. The leading ../ is what lets you surface, and the sources/ is what lets you dive down into a different directory.
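
To make the single-sourcing payoff concrete, here is a sketch of two BOMs that share one component (all filenames here are made up):

#include ../sources/variables.txt
#include ../sources/gate_shift_procedures_{gate}.md
#include ../sources/radio_channel_assignments_{leads}_{gate}.md

#include ../sources/variables.txt
#include ../sources/leads_contact_tree_{leads}.md
#include ../sources/radio_channel_assignments_{leads}_{gate}.md

The first might be gate-bom.txt and the second leads-bom.txt; both pull in the same radio channel assignments, so that file only ever has to be edited in one place.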

Compilation & conversion

So you’ve got your files refactored and marked up, you’ve got your variables set up, you’ve got your BOMs laid out. Now you want to get back what you had before, and for that it’s time for the command line.

Open the Terminal app, type cd followed by a space, drag the folder containing the BOM file you want to compile into the Terminal window (this will insert the correct path), and hit “return”. Use the ls command to confirm that the only file you can see is the BOM file you want to compile.

Now it’s time to run the following command:

gpp source-bom.txt -o source.md

This says “tell GPP to read in the file source-bom.txt, run all the commands in it, and create an output file based on it called source.md”. Make whatever filename substitutions are appropriate. The output file will be in the same directory as the BOM file. This will be a big Markdown file that is assembled from all the component files in the BOM, with all the variable substitutions performed.

Now that you have a Markdown file, the world is your oyster. Some content-management systems can interpret Markdown directly. WordPress requires the Jetpack plugin, but that’s easily installed. So depending on how you’ll be using that document, you may already be partly done.

If you want to convert it to straight HTML, or to an MS Word doc, or anything else, now it’s time to run Pandoc. Again, in the Terminal app, type this in:

pandoc source.md -s -o source.html

This says “tell Pandoc to read in the file source.md and create a standalone (-s) output file called source.html”. If you leave out the -s, Pandoc produces an HTML fragment without the surrounding <html> and <head> elements. Pandoc figures out what kind of output file you want from the dot-extension, and can also produce MS Word files and a host of others. It uses its own template files as the basis for its output files, but you can create your own templates and direct Pandoc to use those instead.
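
Putting the two steps together, a complete build for one document looks something like this (filenames are illustrative; substitute your own BOM and output names):

cd ~/Documents/binders/leads        # move into the folder holding the BOM
gpp leads-bom.txt -o leads.md       # assemble the components and substitute the variables
pandoc leads.md -s -o leads.html    # standalone HTML for the web
pandoc leads.md -o leads.docx       # an MS Word copy, if someone insists on one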

I do my print documents in InDesign, and Pandoc can produce “icml” files that InDesign can flow into a template. Getting the styles set up in that template takes some trial and error, but again, once you’ve got it the way you like it, you don’t need to mess with it again.
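
The command is the same shape as before; assuming your compiled file is named leads.md, something like this produces an ICML file ready to place into InDesign:

pandoc leads.md -s -t icml -o leads.icml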

Shortcomings and prospects

The one thing this approach lacks is any kind of version control. In my case, I create a directory for every year, and make a copy of the source directory and the rest inside the year directory. This doesn’t give me true version control–I rely on Time Machine for that–but it does let me keep major revisions separate. Presumably using Git for my sources would give me more granular version control.

Depending on what your output formats are going to be, placed images can be a bother. I haven’t really solved this one to my satisfaction yet. You may want to use a PDF-formatted image for the print edition and a PNG-formatted image for the web; Pandoc does let you set conditionals in your documents, but I haven’t played with that yet.

In fact, I haven’t really scratched the surface of everything that I could be doing with GPP and Pandoc, but what I’ve documented here gives me a lot of power. I’ve also recently learned of a different preprocessor called Panda, which subsumes GPP and can also turn specially formatted text into graphical diagrams using other shell programs, such as Graphviz. I’m interested in exploring that.

The walled gardens of shit

Over a century ago, King Gillette pioneered the razors and blades business model. The DMCA led to a new twist on this: companies have been trying to force you to buy their blades in particular by slapping microchips on them–even when those things don’t really have any need of a microchip–because that makes it illegal to reverse engineer.

This gave us the Keurig coffee machine, which has been successful, but has been deservedly criticized–even by its inventor–for its wastefulness. Keurig attempted to add DRM to their pods, although that backfired.

Catering to the herd mentality of the investor class (“It’ll be like Amazon, but for X!” “It’ll be like Facebook, but for X!” “It’ll be like Uber, but for X!”), this has led to…

The Juicero, a massively over-engineered $400 (marked down from $700) gadget that squeezed $8 DRM-laden bags of fruit pulp into juice. It flopped.

Then the Teaforia, a $400 gadget (marked down from $1000) that makes tea from DRM-laden pods that cost $1 each or more. It flopped.

Now this thing, a spice dispenser that uses DRM-laden spice packets that cost about $5 a pop (spices obviously vary in price, and it’s not clear how much comes in one of their packets, but I just bought 4 tbsp of cinnamon for $0.35).

These Keurig imitators represent an intersection of at least two bad trends: the Internet of Shit, where stuff that has no need of ensmartening is gratuitously connected to the Internet–a logical consequence of sticking unnecessary DRM-enabling chips on things, with those chips getting cheaper and more powerful–and the walled gardens of yore, like AOL–which companies like Facebook and Google have been attempting to reconstruct on top of the Internet ever since. So now we’ve got walled gardens of shit, filling up with their own waste products. Happily, the market seems to be rejecting these.

Big-number cheat sheet and BetterTouchTool

BetterTouchTool is one of my favorite Mac utilities. A real sleeper: originally it just let you create new trackpad gestures (or remap existing ones), and that was useful enough on its own, but it’s been beefed up with more and more interesting features. One feature I just discovered is that it can display a floating window with any HTML you want. This is a perfect way to show my Big Number Cheat Sheet, which is handy for checking your work when dealing with, well, big Japanese numbers.

To use this, open up BTT, add a new triggering event (it can be triggered by a key command, a text string, a trackpad gesture, whatever), and add the action Utility Actions > Show Floating Web View/HTML menu. Give it a name, set it to a width of 500 and a height of 750, and paste the following in directly. Be sure to enable “show window buttons” and/or “close when clicking outside” or the window won’t go away.

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <title> </title>
    <style> 
        body {
        background-color: #fff;
        font-family: helvetica;
        font-size: 14px;
        line-height: 18px;
        }
        table {
        border-collapse: collapse;
        }
        tr, td, th {
        border: none;
        }
        tr {
        border-bottom: 1px solid #ddd;
        }
        table tr td:nth-child(1), table tr th:nth-child(1) {
        width: 7em;
        padding: 0.5em;
        text-align: right;
        }
        table tr td:nth-child(2), table tr th:nth-child(2) {
        width: 12em;
        padding: 0.5em;
        text-align: left;
        }
        table tr td:nth-child(3), table tr th:nth-child(3) {
        padding: 0.5em;
        text-align: left;
        }
        tr:hover {
        color: #ddd;
        background-color: #333;
        }
    </style>
</head>
<body>
<h1>
    Big number cheatsheet 
</h1>
<table>
    <tr>
        <th> 和 </th>
        <th> English </th>
        <th> Number </th>
    </tr>
    <tr>
        <td> 一万 </td>
        <td> ten thousand </td>
        <td> 10,000 </td>
    </tr>
    <tr>
        <td> 十万 </td>
        <td> one hundred thousand </td>
        <td> 100,000 </td>
    </tr>
    <tr>
        <td> 百万 </td>
        <td> one million </td>
        <td> 1,000,000 </td>
    </tr>
    <tr>
        <td> 千万 </td>
        <td> ten million </td>
        <td> 10,000,000 </td>
    </tr>
    <tr>
        <td> 一億 </td>
        <td> one hundred million </td>
        <td> 100,000,000 </td>
    </tr>
    <tr>
        <td> 十億 </td>
        <td> one billion </td>
        <td> 1,000,000,000 </td>
    </tr>
    <tr>
        <td> 百億 </td>
        <td> ten billion </td>
        <td> 10,000,000,000 </td>
    </tr>
    <tr>
        <td> 千億 </td>
        <td> one hundred billion </td>
        <td> 100,000,000,000 </td>
    </tr>
    <tr>
        <td> 一兆 </td>
        <td> one trillion </td>
        <td> 1,000,000,000,000 </td>
    </tr>
    <tr>
        <td> 十兆 </td>
        <td> ten trillion </td>
        <td> 10,000,000,000,000 </td>
    </tr>
    <tr>
        <td> 百兆 </td>
        <td> one hundred trillion </td>
        <td> 100,000,000,000,000 </td>
    </tr>
    <tr>
        <td> 千兆 </td>
        <td> one quadrillion </td>
        <td> 1,000,000,000,000,000 </td>
    </tr>
    <tr>
        <td> 一京 </td>
        <td> ten quadrillion </td>
        <td> 10,000,000,000,000,000 </td>
    </tr>
</table>
</body>
</html>

Oregon-Washington trip 2017

Hiking the Tom McCall Preserve

Gwen and I spent a couple of weeks in Oregon and Washington at the end of 2017. Following are some random highlights:

Portland OR

  • Japanese gardens. Someone suggested we go with a guide. There was a guide starting a tour right after we got there, but we quickly discovered that we’d rather take in the gardens on our own. Going in the winter turned out to be for the best, as the gardens are incredibly popular and crowded during the warmer months. We were almost able to pretend we were alone there in spots, which is more what they’re about.
  • Bollywood Theater. Casual Indian restaurant. Really good.
  • Paxton Gate. Shop that specializes in skeletons, mounted animals, etc. We already have a bat from them.
  • Powell’s Books. Covers a city block.
  • Bread and Ink Cafe. Nothing really unusual about it, just solid hot food on a cold day, and our waiter bore an uncanny resemblance to the character Mike Ehrmantraut from Breaking Bad.
  • Sweedeedee. While staying at our AirB&B, we wound up chatting with a neighbor as he was walking his dog and we were heading out to breakfast. He recommended this place for a “real Portland experience.” Mission accomplished. They didn’t tell me the name of the pig that provided my bacon, but it was straight out of Portlandia.
  • Tin Shed. Neighborhood joint near where we were staying.
  • Peculiarium. A ridiculous wunderkammer. Good for a brief diversion and getting a photo on Krampus’ lap.
  • Noble Rot. Fancy. I had the burger, which was the most humble thing on the menu. It was damned good.
  • I think Gwen found three different gluten-free bakeries in Portland, which is not all that surprising.
  • We wanted to visit Multnomah Falls, but it was inaccessible due to a fire back in September that left the soil unstable. We drove on without much of a plan and entirely by accident wound up at Tom McCall Preserve, which had no facilities to identify it as a park, but had a good hiking trail and an amazing view of the Columbia River gorge. We saw a road-construction crew pull over, jump out, and start taking pictures while we were there, which I thought was interesting–I figured they already would have seen everything. There was also a model and a photographer doing a photoshoot there.

McMinnville OR

  • Evergreen Aviation & Space Museum. While in Portland, I overheard that the Spruce Goose was in a museum not far away, and convinced Gwen we had to go. It was out of our way, and not a cheap museum to visit, but worth it. The docents are all ex Air Force and will bend your ear for as long as you’ll let them. The Spruce Goose itself is unbelievable in the most literal way: you look at it and you can’t believe it’s real. Your mind rejects it. They let you walk into the cargo area, which is surprisingly small. The museum also has an SR-71, which is surprisingly long and seems like alien technology, airplanes (or reproductions) from the beginning of flight to present, rockets (including a complete Atlas rocket), demounted jet and piston engines and rocket motors, a Mercury capsule, and a Gemini capsule. You can get right up next to the Mercury capsule and look into it. I found it remarkably affecting–looking at it up close, I could see it was just a tin can, and I thought about the men who voluntarily climbed into that tin can on top of a missile, and the aspirations and pride of a nation that was invested in that tin can.

Astoria

I can’t say much for Astoria. The one thing that had attracted us to the town was the Museum of Whimsy, which we found out just a few days before we arrived was closing for the season (insert sad trombone sound).

We had dinner at the Buoy Beer Company, where we had fried oysters, among other things. The place is touristy, like the rest of the town, and is distinguished by some glass floor panels giving a view of sea lions.

We did visit the Astoria Column, which was interesting in itself, but more interesting for the view of the surrounding area it affords, which is amazing. People buy balsawood toy airplanes at the gift shop below and launch them from the top, which is fun but ridiculously moopy.

Port Angeles

We visited Port Angeles not so much for the town itself but for its proximity to the Olympic National Forest/Olympic National Park. We did manage to get in an 8-mile hike along the Spruce Railroad trail, which was beautiful, but that day ended with surprisingly heavy snowfall, so the next day we hunkered down and caught up on House of Cards.

Seattle

We had a micro-apartment AirB&B in the Capitol Hill neighborhood. Like, as small as the apartment I had when I was in Japan, but with much worse space utilization. The listing didn’t exactly lie, but it showed views that we think were only visible from the rooftop deck. The unit had no kitchen, although there was a communal kitchen on the ground level for the 12 or so units in the building.

We didn’t have a lot of time to take in Seattle, and part of that time was dedicated to getting together with Gwen’s cousins (which was enjoyable, but not a recommendation for the general public). One place we happened across was Ada’s Technical Books and Cafe. As I said to Gwen, it’s either a bad thing or a good thing that we don’t have a place like it in Austin. We both could have spent all day browsing there.

One of the high points was visiting the Seattle Art Museum, which was showing a massive Andrew Wyeth retrospective.

We had dinner one night at Blueacre Seafood, which was spendy but good.

I took some pictures, too

900 miles

At the end of July, I had a routine doctor’s visit. Got on the scale. Clocked in at 170 lb. I hadn’t weighed that much since 1991. So I got back on my bike.

I still remember when I was six years old and my father took the training wheels off my bike and convinced me to ride it. I was terrified. He ran alongside me as I rode around the block. (My little sister, in contrast, took her training wheels off by herself, leaned her bike against our father’s truck, climbed aboard, and rode off.)

After that I got the hang of it, and bikes became an important part of my life. I started going on long-distance rides when I was 13. I did a little bike touring in high school, and I competed in some triathlons and bike races starting right after I graduated high school.

I didn’t have a bike during the time I lived in Japan, and when I was living in Chicago for a couple of years after that, I had my road bike but didn’t use it much (thus the 170 lb).

After I moved back to Austin in 1992, I got back into riding, and it was a great time to be a road cyclist in Austin—there were a bunch of then-pointless and unused roads that were like a playground for cyclists—360, Southwest Parkway, Bee Cave, and so on. I hardened up and could motor all day. On one occasion, I rode the 165 miles to a friend’s place in Houston in 9 hours flat. Some months later, I did it again, 20 minutes faster.

In 2000, a lot of stuff in my life changed, and I found myself cycling less and less, but in 2010, I started riding regularly again as I prepared for my Southern Tier ride, which I completed in October that year.

But that wrecked me—my upper body was emaciated when I finished. I remember at the end of the ride struggling to lift my 30-lb bike over my head. I decided I needed some kind of a whole-body workout. I signed up for a bootcamp class, and stuck with it until it petered out several years later. I never found a replacement that interested me, so I was back to a relatively inactive lifestyle (thus the 170 lb).

As of today, I’ve logged 900 miles since that doctor’s visit. I’ve clawed back a fair amount of lost fitness, and lost the weight I wanted to lose. But I’ve got a long way to go before I’ve got the level of fitness I had when I was younger—if in fact it’s possible to attain that again. It would have been better all around if I had stayed more active.

Life on Mars

Take a look at the lawman, beating up the wrong guy

Oh man, look at those cavemen go

It’s a freaky show

Is there life on Mars?

Now watch this and have a good cry.

The next fight

As left-leaning people hunker down for the Trumpocalypse, we naturally think about the 2020 election. I don’t think Trump is going to serve the duration of his first term—I think he’s going to hate being president and will resign partway through—but I could be wrong. So let’s suppose that the Democratic nominee will be running against Trump. What will that look like? It will look bad for the Democrats.

On the Trump side:

  • Trump feels completely unconstrained by normal rules of political behavior or ethics, as evidenced by his refusal to release his tax returns, his refusal to divest from his businesses, and his nepotistic appointments.
  • Trump has had Roger Stone and Paul Manafort as campaign consultants (the two have been business partners). Roger Stone was literally part of Nixon’s dirty tricks team. Paul Manafort has helped burnish the reputation of dictators around the world, and is one of Trump’s connections to Russia.
  • Trump did not run on policy, he ran on personality, and to the extent that he offered policy ideas, he was staking out some ground that would normally belong to the Democrats anyhow.
  • Trump’s supporters will put up with the basest behavior from their candidate, sometimes denying the evidence of their lying eyes, or saying it’s not so bad. Trump himself thinks he can get away with murder.

On the Democrats’ side:

  • Many Democratic voters were understandably upset with Hillary Clinton when the hardball tactics her campaign used against Bernie Sanders in the 2016 election were brought to light (apparently with Russian help). It’s not clear whether this alienated enough potential HRC voters to swing the election, but considering how close the election was, it’s plausible.
  • Democratic voters expect some kind of Democratic-looking policies from their candidates.

Where this leaves us:

  • We can expect Trump to make use of every power, legal and illegal, at his disposal when running against the Democratic nominee, and he’ll have people on tap with relevant experience.
  • It is demonstrably impossible for Trump to alienate his supporters, but we’ve seen it’s easy for Democrats to alienate theirs.
  • Democratic candidates will fight personality with policy, using at least some policies that Trump has already staked out as his own.
  • Democratic voters will expect their candidates to behave like the primary campaign is the Spring Cotillion, but will need someone prepared to play hardball in the general election. The two probably cannot be reconciled.

Adventures in car-shopping

About a month ago, Gwen was rear-ended in a hit-and-run. She was fine. The car wasn’t.

Our car is a 2002 model, and the damage was extensive enough that our insurer doesn’t consider it to be worth repairing. They’ll let us keep the car and give us a payout that is considerably less than blue book for the car in its pre-crunched condition, or scrap it and give us the salvage value in addition to that payout. It’s been mechanically well-maintained, is still perfectly drivable, and is repairable. We can probably sell the car–even in its current condition–for more than salvage value.

The options we are considering are

  1. Continue to drive the car as-is.
  2. Fix the car and continue to drive the car.
  3. Sell the car and apply the money toward the downpayment on a new or new-to-us car.

So we recently made a list of all the cars that we think might serve our needs and wants, and yesterday we went for test drives. A lot of test drives.

We hit five dealerships and drove a total of nine different cars.

Dealerships

Dealerships are funny. Each one had a completely different vibe.

Honda

We started at Honda, and the dealer we met with there had been in car sales for a long time, was friendly and talkative and generally pretty simpatico. Some of that was probably an act, or a knack for finding something in each person to relate to. He spent a good few minutes with us before we went out on a test drive, asking what we were interested in, and also spent a lot of time selling us on the dealership, which I thought was a little weird. Once we did get in the car, he had an established routine that he clearly liked to follow. The dealership was physically huge and had an elaborate system of storing car keys in a vault. The place had the aura of a well-oiled machine.

Volkswagen

The experience could not have been more different than at Honda. Not in a bad way. While the Honda dealership was sophisticated and heavy on procedure, the VW dealer asked us what kind of car we wanted to drive, pulled it up, and gave us the keys. He didn’t ride along with us, didn’t even make a copy of our licenses. We could have driven off to Mexico. The dealer answered our questions but didn’t put on any kind of hard sell.

Mazda

We had a very young and green dealer who only learned how to drive a stick after he started selling cars, about six months ago. He was clearly following a script and had a hard time getting off of it, even when we told him what didn’t apply to us. He was the only guy who said “I need to talk to my manager,” and kept us waiting kind of a long time while he did that.

Ford

The only woman we dealt with in our shopping adventure. It became apparent pretty quickly that Ford didn’t have anything that would work for us, but she humored us anyhow. She also had the most classic patter of salesman bullshit I’ve ever heard–reciting stuff that was maybe not exactly false, but mischaracterized so dramatically that it might as well be, and pitched in a way that showed she assumed the customer was an idiot who couldn’t see through any of it.

Subaru

Despite being owned by the same company that owned the Honda dealership, this one seemed casual in terms of their internal procedures. At one point we asked to test-drive a certain model, and the dealer helping us came out with a handful of keys to try. When I made a comment about “not buying anything today,” the dealer got defensive about not applying any pressure. Which he didn’t.

Cars

The only category that really interests me is the compact wagon, and it’s almost nonexistent in the USA, where the SUV has almost completely eclipsed it. So we wound up looking at other things that were compact, seemed wagon-ish, and got good mileage.

Honda HRV

This model is new to the USA, and one I hadn’t heard of until my sister mentioned that she was considering getting one. It’s built on the Fit platform, and is a micro SUV. I am extremely reluctant to buy anything that might be mistaken for an SUV, but this was pretty benign. Gets good mileage, very civilized to drive, nice interior. The second-most cargo space of anything we drove. Gwen’s favorite of the cars we drove.

Honda Fit

Surprisingly good for such a small car, but doesn’t have enough cargo space for us. I was especially impressed by the linear acceleration with the CVT.

VW Golf Wagon

I’m intrigued by the diesel, but it costs more than the gas version, and because of the amount we drive, it would take us 12 years to break even with its increased fuel efficiency. This has the most cargo space of anything we looked at, struck me as the most luxurious in terms of ride and cabin appointments, and the most solid build quality. The diesel version gets the best mileage of anything we looked at. My favorite by a long stretch, but also the most expensive thing we looked at by a fair margin.

Mazda3

Fun to drive and seems like a good value but we want more cargo space. If they made a wagon version of this, it would be a contender.

Ford C-Max

Gwen’s feet couldn’t touch the floor when she had the driver’s seat adjusted so she could actually drive it. We didn’t make it past that.

Ford Focus

Chintzy, not enough cargo space. Good looking from the outside though.

Ford Focus ST

A hoot to drive. Powerful engine, firm ride. This is a real sleeper of a sports car and I like that. Poor mileage. Like its plain Focus sibling, not enough cargo space and still chintzy. Hilariously, it pipes a pre-recorded “throaty engine growl” into the cabin when accelerating.

Subaru Crosstrek

Tried both the manual and CVT version: the CVT is actually better, which is not something I expected to discover. Compared to the other cars we drove, this felt slightly dated, and the drive quality was uncouth without being fun.

Smartphones, image processing, and spectator sports

I’ve done a couple of translations recently that focus on wireless communications, and specifically mention providing plentiful bandwidth to crowds of people at stadiums. Stadiums? That’s weirdly specific. Why stadiums in particular?

My hunch is that this is an oblique reference to the 2020 Tokyo Olympics. OK, I can get that. 50,000 people all using their smartphones in a stadium will place incredible demands on bandwidth.

Smartphones are already astonishingly capable. They shoot HD video. Today’s iPhone has something like 85 times the processing power of the original. I can only wonder what they’ll be like by the time the Tokyo Olympics roll around.

So what would a stadium full of people with advanced smartphones be doing? Probably recording the action on the field. And with all that bandwidth that will supposedly be coming online by then, perhaps they’ll be live-streaming it to a service like Periscope. It’s not hard to imagine that Youtube will have a lot of live-streaming by then as well.

This by itself could pull the rug out from underneath traditional broadcasters. NBC has already paid $1.45 billion for the rights to broadcast the 2020 Olympics in the USA. But I think it could be much, much worse for them.

In addition to more powerful smartphones, we’ve also seen amazing image-processing techniques, including the ability to remove obstructions and reflections from images, to correct for image shakiness, and even to smooth out hyperlapse videos. Youtube will stabilize videos you upload if you ask nicely, and it’s very effective. And that’s all happening already.

So I’m wondering what could be done in five more years with ten or so smartphones distributed around a stadium, recording the action, and streaming video back to a central server. The server could generate a 3D representation of the scene, use the videos to texture-map the 3D structure, and let the viewer put their viewpoint anywhere they wanted. Some additional back-end intelligence could move the viewpoint so that it follows the ball, swings around obstructing players, etc.

So this could be vastly more valuable than NBC’s crap story-inventing coverage. It might be done live or nearly live. It would be done by people using cheap personal technology and public infrastructure. The people feeding video to the server might not even be aware that their video is contributing to the synthesized overall scene (read your terms of service carefully!).

If that happened, the only thing you’d be missing would be the color commentary and the tape delay. Smartphones could kill coverage of sporting events.

Of course, the Olympics and other spectator sports are big businesses and won’t go down without a fight. At the London Olympics, a special squad of “brand police” [had] the power to force pubs to take down signs advertising “watch the games on our TV,” to sticker over the brand-names of products at games venues where those products were made by companies other than the games’ sponsors, to send takedown notices to YouTube and Facebook if attendees at the games have the audacity to post their personal images for their friends to see, and more. What’s more, these rules are not merely civil laws, but criminal ones, so violating the sanctity of an Olympic sponsor could end up with prison time for Londoners. Japan could do much the same in 2020. But if these videos were being served by a company that doesn’t do business in Japan, the show could go on. More extreme measures could be taken to block access to certain IP addresses, deep-packet inspection, etc. Smartphones could use VPNs in return. It could get interesting.

Flipside essays

For Burning Flipside 2015, the organization had some tickets left over after the normal ticket distribution. We decided to sell these in what I call a “bonus round,” and that anyone who wanted one of these tickets needed to demonstrate some commitment. Our normal ticket-distribution process is kind of a pain in the ass. Without including some hoops to jump through, access to a ticket in the bonus round would be easier than in the normal distribution, and would give the appearance of rewarding flakiness. So for the batch of tickets that I sold in the bonus round, I required that requesters “write me an essay about what you hope to get out of the experience. If you have been to Flipside, you can write about what you hope to get out of this year that you haven’t experienced before, or write about an experience you had that was particularly meaningful to you.”

Following are the essays that I have permission to share, anonymized when requested.

Old-school information management

Applied Secretarial Practice

I recently picked up the book “Applied Secretarial Practice,” published in 1934 by the Gregg Publishing Company (the same Gregg of Gregg shorthand). It’s fascinating in so many ways—even the little ways that language has changed. Many compound words were still separate, e.g. “business man.” The verb “to emphasize” seemingly did not exist, and is always expressed as “to secure emphasis.” And the masculine is used as the generic third-person pronoun rigorously, even when referring to secretaries, who were universally women at that time.

There’s a whole chapter on office equipment, most of which is barely recognizable today, of course. The dial telephone was a fairly recent innovation at that time, and its use is explained in the book.

But what really strikes me is that, out of 44 chapters, 8 are on filing. You wouldn’t think that filing would be such a big deal (well, I wouldn’t have thought it). You would be wrong. What with cross-references, pull cards, rules for alphabetizing (or “alphabeting” in this book) in every ambiguous situation, different methods of collation, transfer filing, etc, clearly, there’s a lot to it.

It got me thinking about how, even though I have pretty rigorous file nomenclature and folder hierarchies on my computer, I’m not organizing my files with anything like the level of meticulous care that secretaries back then practiced as a matter of course. For the most part, if I want to find something on my computer (or anywhere on the Internet), I can search for it.

And that reminded me of a post by Mark Pilgrim from years ago, Million dollar markup (linking to the Wayback Machine post, because the author has completely erased his web presence). His general point was that when you control your own information, you can use “million dollar markup” (essentially metadata and structure) to make that information easier to search or manipulate; a company like Google has “million dollar search” to deal with messy, disorganized data that is outside their control. Back in 1934, people had no choice but to apply million-dollar markup to their information if they wanted to have any hope of finding it. The amount of overhead in making a piece of information retrievable in the future, and retrieving it, is eye-opening.

Consider that to send out a single piece of business correspondence, a secretary would take dictation from her boss, type up the letter (with a carbon), perhaps have her boss sign it, send the original down to the mailroom, and file the copy (along with any correspondence that letter was responding to). It makes me wonder what would have been considered a reasonable level of productivity in 1934. I’ve already sent 17 pieces of e-mail today. And written this blog post. And done no extra work to ensure that any of it will be retrievable in the future, beyond making sure that I have good backups.

Colors

Translating a document that involves optics, I ran into what I immediately recognized as ROYGBIV in Japanese:

紫、青、水色、緑、黄、オレンジ、赤

(actually that’s VIBGYOR, but the point is the same)

I had never really stopped to consider how ROYGBIV might be expressed in Japanese, but it’s an interesting question, because the Japanese word that is ordinarily rendered in English as “blue”, 青 ao, can mean blue or green. “Vegetables” in Japanese can be 青物; a green light is an 青信号. And here it’s being pushed further down the spectrum, away from green, to stand for “indigo.”

The color that holds blue’s place in the above list is 水色, “water-colored.”

And that makes me wonder, what is the color “indigo” anyhow? How is it different from blue? Why do we have two color words for what’s basically the same thing? Apparently I’m in good company—according to Wikipedia, Asimov said “It is customary to list indigo as a color lying between blue and violet, but it has never seemed to me that indigo is worth the dignity of being considered a separate color. To my eyes it seems merely deep blue.” Wikipedia has quite a bit more to say about the color indigo: “According to Gary Waldman, ‘A careful reading of Newton’s work indicates that the color he called indigo, we would normally call blue; his blue is then what we would name blue-green or cyan.'”

Curiouser and curiouser. It seems as if 青 would be a good fit for Newton’s usage of blue, and 紺 would correspond better to indigo, but it’s also interesting to observe how the meaning of simple color words has apparently shifted in English. And 水色 is a pretty good fit for what Newton meant by “blue.”

The Wikipedia article is also interesting in that it explains why there are seven colors in ROYGBIV in the first place, when our modern color models are based on three primary colors (RGB or CMY) with secondaries and tertiaries in between: it was an arbitrary decision to force the colors to correspond with the seven notes of the Western musical scale.

Showing lots of options in Contact Form 7

Contact Form 7 is a widely used WordPress plugin that offers a fair amount of flexibility, but I needed it to do something it didn’t want to do: I wanted to display a form with a lot of options and make it look good. Normally one would do this using radio buttons or a dropdown menu. Because scrolling through long dropdown menus is especially annoying, I opted for radio buttons. But then I wound up with a long blob of radio buttons. I wanted something with some structure: columns and headings.

CSS to the rescue. The following is not perfect (there are some limitations on selectors that prevented me from getting as fancy as I wanted), but I consider it a big improvement.

First, use the “Generate tag” option to create a radio button. Set an ID for the radio-button set. In this example, it is recipientlist. Enable the “Wrap each item with <label> tag?” option. This CSS is dependent on the tagging that option produces. You could rewrite this to work the other way, but it’s friendlier to turn that on anyhow.

Second, create your list of options. You probably want to do this in a text editor, not the CF7 editing window. Organize your options into sections, and insert new lines with section headings where appropriate. Begin each section heading with the same word–in the example shown below, I’m using “Area”. If you’re using piped options, it doesn’t matter what comes after the pipe for these section headings. Paste your completed list into the “Choices” field.
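
For example, the Choices list might look like this (the names are made up; what matters is that each heading line begins with “Area”):

Area North
Alice Example
Bob Example
Area South
Carol Example
Dan Example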

Third, gank the following CSS:

<style>#recipientlist {
display: block;
-webkit-columns: 3 150px;
-moz-columns: 3 150px;
columns: 3 200px;
}
#recipientlist .wpcf7-list-item {
display: block;
margin: .25em 0;
}
.wpcf7-list-item input[value^=Area] {
display: none;
}
.wpcf7-list-item input[value^=Area] + .wpcf7-list-item-label {
font-weight: bold;
}</style>

and paste that right into the CF7 “Form” field, right at the top. Change #recipientlist to whatever ID you are using for your button set. Change Area to whatever you are using as the lead-in text for your section headings. Do not introduce any extra line breaks into this, as WordPress will add tags that you don’t want. Do not try inserting this into your theme’s styles.css document or via other style-injecting mechanisms, as parts of this will get overridden and you will be sad.

Your radio-button tag should look something like this:

[radio recipient id:recipientlist use_label_element *lots of options follow*…]

Copy that code and paste it wherever you think it belongs in your form. The style element should be at the top though.

You may want to play around with the number and width of columns, depending on your theme and the content you’re presenting.

Here’s a before and after.

Before: [screenshot]

After: [screenshot]

This isn’t perfect–there’s no way to control where the column breaks occur, or to show the sections as blocks of equal height. And it’s not semantically correct HTML. But it’s an improvement.

[Nerd mode=on]
There are four CSS rules here. The first forces the set of buttons to display as a block and breaks it into columns. The second forces each button onto its own line and sets some margins. The third hides the radio button itself for any item whose value begins with the section-identifying word “Area”. The fourth bolds the label text for those section headings.
[Nerd mode=off]

Four years

Four years ago today, I was smack in the middle of an adventure: a transcontinental bike ride.

When I finished that ride, my body was wasted: I had lost at least 15 pounds. You could see my ribs through my back. I decided it was time to find a more whole-body workout. I started doing a boot-camp workout with Gwen. I wouldn’t say that I enjoyed it, exactly, but it was definitely good for me. After a few months, I had finally resolved some weak spots left over from my broken pelvis, and had built up core strength that I’d really never had before. I was pretty regular about it for the next 3½ years or so, going three times a week, occasionally taking a month off when life got crazy. Boot camp completely displaced cycling for me. I didn’t do any serious riding after I got home from my big ride, only commuting around town.

A couple of months ago, that boot camp class ceased to exist as such when the trainer started a gym; he offers something similar at the gym, but I realized I don’t want to go to a gym. I was also missing riding. Today I went out with a friend for my first ride in four years. My neck’s a little stiff, and I got tired earlier than I should have, but it was good to get out there.

I still need to do some kind of whole-body workout. But I need to keep riding my bike.

Burner-anxiety dream

You know those school-anxiety dreams where you show up in class, and discover there’s a test you weren’t expecting, or work-anxiety dreams that are similar? I had a burner-anxiety dream last night. I almost never remember my dreams, and this one seems funny enough to bear writing down.

In my dream, Gwen and I had gone to Burning Man as a last-minute thing (which should have tipped me off that I was dreaming). We were in a theme camp that had a TV playing a videotape…which is a little weird, but hey, it’s Burning Man—what isn’t weird? I was putting a bandana on my head, and realized I had left my belt pouch behind.

Then I realized I hadn’t brought even one change of clothes.

Then I realized I hadn’t brought any water.

Then I realized I hadn’t brought any food.

Then I realized we hadn’t had our tickets checked at Gate (another tip-off that I was dreaming), and I was pretty sure we didn’t have those either.

Gwen and I got into a bit of an argument over whether we should try to go back to Reno to provision, or try to skate by as sparkle ponies, or just give up and go home. Since I was pretty sure we didn’t have our tickets, I was doubtful that we’d be able to get back in.

Then I woke up. It felt real while I was dreaming it.

Economics of software and website subscriptions

It’s a truism that people won’t pay for online media, except for porn. That’s a little unfair. I’m one of many people who has long had a pro account on flickr, which costs $25/year. Despite flickr’s ups and downs, I’ve always been happy to pay that. It also set the bar for what I think of as a reasonable amount to pay for a digital subscription: I give them $25, they host all photos that I want to upload, at full resolution. Back when people still used Tribe.net, they offered “gold star” accounts for $6/month, which removed the ads and gave you access to a few minor perks, but mostly it was a way to support the website. The value-for-money equation there wasn’t quite as good as with flickr, in my opinion, but I did have a gold-star account for a while.

Looking around at the online services I use, I see there are a few that are offering some variation on premium accounts. Instapaper offers subscriptions at $12/year, or about half of my flickr benchmark. The value for money equation there isn’t great—the only benefit I would get is the ability to search saved articles—but it’s a service I use constantly, and it’s worth supporting. Pinboard (which has a modest fee just to join in the first place) is a bookmarking service that offers an upgraded account for $25/year; here, the benefit is in archiving copies of web pages that you bookmark. I can see how this would be extremely valuable for some people, but it’s not a high priority for me. I use a grocery-list app on my phone called Anylist that offers a premium account for $10/year; again, the free app is good enough for me, and benefits of subscribing don’t seem all that relevant.

In terms of value for money, none of these feel like great deals to me. Perhaps because the free versions are as good as they are, or perhaps because the premium versions don’t offer that much more, or some combination of the two. But I use and appreciate all these services, and maybe that’s reason enough that I should subscribe.

At the other end of the scale, there’s Adobe, which has created quite a lot of resentment by converting its Creative Suite to a subscription model, for the low, low price of $50/month. This offends designers on a primal level. It’s like carpenters being required to rent their hammers and saws. The thing is that $50/month is a good deal compared to their old packaged product pricing, assuming that you would upgrade roughly every two years. The problem is that the economic incentives are completely upside down.

Once upon a time, Quark XPress was the only game in town for page layout, and then Adobe InDesign came along and ate their lunch. Quark thought they had no competition, and the product stagnated. Now Adobe Creative Cloud is pretty much the only game in town for vector drawing, photo manipulation, and page layout.

With packaged software, the software company needs to offer updates that are meaningful improvements in order to get people to keep buying them. Quark was slow about doing that, which is a big part of the reason that people jumped ship. With the subscription model, Adobe uses the subscription as a ransom: when customers stop subscribing, they lose the ability to even access their existing files. Between the ransom effect and the lack of meaningful competition, Adobe has no short-term incentive to keep improving their product. In the long term, a stagnant product and unhappy customers will eventually encourage new market entrants, but big American companies are not noted for their long-term perspective.

I think that’s the real difference here, both psychologically and economically: I can choose to subscribe to those smaller services, or choose not to. They all have free versions that are pretty good, and if any of them wound up disappearing, they all have alternatives I could move to. With Adobe, there are no alternatives, and once you’re in, the cost of leaving is very high.

Good reads, 2013

The following are some of the best stories, articles, essays, blog posts, etc, that I read during 2013. They weren’t necessarily written in 2013. I’m including them in roughly the order I encountered them.

Circle of Useful Knowledge

Gwen’s parents brought her a book from a library sale in their small town, The Circle of Useful Knowledge, published in 1888. It’s filled with bizarre recipes for cocktails mixed in 10-gallon quantities, tips on animal husbandry, etc.

I’m posting extracts from it in a separate blog, titled Circle of Useful Knowledge. I’m going to try to post a couple of entries a day. Enjoy.

Word processors and file formats

I’ve always been interested in file formats from the perspective of long-term access to information. These have been interesting times.

To much gnashing of teeth, Apple recently rolled out an update to its iWork suite—Pages, Numbers, and Keynote, which are its alternatives to the MS Office trinity of Word, Excel, and Powerpoint. The update on the Mac side seems to have been driven by the web and iPad versions. Not only in the features (or lack thereof), but in the new file format, which is completely unrelated to the old one. The new version can import the files from the old one, but it’s definitely an importation process, and complex documents will break in the new apps.

The file format for all the new iWork apps, Pages included, is based on Google’s protocol buffers. The documentation for protocol buffers states

However, protocol buffers are not always a better solution than XML – for instance, protocol buffers would not be a good way to model a text-based document with markup (e.g. HTML), since you cannot easily interleave structure with text. In addition, XML is human-readable and human-editable; protocol buffers, at least in their native format, are not. XML is also – to some extent – self-describing. A protocol buffer is only meaningful if you have the message definition (the .proto file).

Guess what we have here. Like I said, this has been driven by the iPad and web versions. Apple is assuming that you’re going to want to sync to iCloud, and they chose a file format optimized for that use case, rather than for, say, compatibility or human-readability. My use case is totally different. I’ve had clients demand that I not store their work in the cloud.

What’s interesting is that this bears some philosophical similarities to the Word file format, whose awfulness is the stuff of legend. Awful, but perhaps not awful for the sake of being awful. From Joel Spolsky:

The first thing to understand is that the binary file formats were designed with very different design goals than, say, HTML.

They were designed to be fast on very old computers.
…
They were designed to use libraries.
…
They were not designed with interoperability in mind.

New computers are not old, obviously, but running a full-featured word processor in a Javascript interpreter inside your web browser is the next best thing; transferring your data over a wireless network is probably the modern equivalent of a slow hard drive in terms of speed.

There is a perfectly good public file format for documents out there, Rich Text Format or RTF. But curiously, Apple’s RTF parser doesn’t do as good a job with complex documents as its Word parser—if you create a complex document in Word and save it as both .rtf and .doc, Pages or Preview will show the .doc version with better fidelity. Which makes a bit of a joke out of having a “standard” file format. Since I care about file formats and future-proofing, I saved my work in RTF for a while. Until I figured out that it wasn’t as well supported.

What about something more basic than RTF? Plain text is, well, too plain: I need to insert commentary, tables, that sort of thing. Writing HTML by hand is too much of a PITA, although it should have excellent future-proofing.

What about Markdown? I like Markdown a lot. I’m actually typing in it right now. It doesn’t take long before it becomes second nature. Having been messing around with HTML for a long time, I prefer the idea of putting the structure of my document into the text rather than the appearance.

But Markdown by itself isn’t good enough for paying work. It has been extended in various ways to allow for footnotes, commentary, tables, etc. I respect the effort to implement all the features that a well-rounded word processor might support through plain, human-readable text, but at some point it just gets to be too much trouble. Markdown has two main benefits: it is highly portable and fast to type—actually faster than messing around with formatting features in a word processor. These extensions are still highly portable, but they are slow to type—slower than invoking the equivalent functions in a typical WYSIWYG word processor. The extensions are also more limited: the table markup doesn’t accommodate some of the insane tables that I need to deal with, and doesn’t include any mechanism for specifying column widths. Footnotes don’t let me specify whether they’re footnotes or endnotes (indeed, Markdown is really oriented toward flowed onscreen documents, where the distinction between footnotes and endnotes is meaningless, rather than paged documents). CriticMarkup, the extension to Markdown that allows commentary, starts looking a little ungainly. There’s a bigger philosophical problem with it though. I could imagine using Markdown internally for my own work and exporting to Word format (that’s easy enough thanks to Pandoc), but in order to use CriticMarkup, I’d need to convince my clients to get on board, and I don’t think that’s going to happen.
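
For what it’s worth, here is roughly what two of those extensions look like in Pandoc’s Markdown–a footnote and a simple pipe table (the content is made up):

The channel assignments are final.[^1]

[^1]: Final until somebody changes them.

| Channel | Assignment |
|--------:|------------|
| 1       | Gate       |
| 2       | Leads      |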

I can imagine a word processor that used some kind of super-markdown as a file format, let the user type in Markdown when convenient, but added WYSIWYG tools for those parts of a document that are too much trouble to type by hand. But I’m not holding my breath. Maybe I should learn LaTeX.

Bike-share systems and the poor

This morning there was a story on NPR about bike sharing, specifically how it doesn’t do a good job of serving the poor. There are basically three reasons for this:

  1. The bike stations are not located in areas most useful to poor people;
  2. You need a debit card or credit card to use the system;
  3. Bike-share programs are expensive.

The story got me thinking about all the ways it’s expensive to be poor, and they’re certainly illustrated in this example.

To get a debit card, you need a bank account. To get a bank account, you usually need to scrape together $100 for an opening balance. This is not a huge hurdle to overcome, but if you never have $100 left at the end of your pay period, it’s going to take planning, and if life throws you a curveball before you’ve got that $100 saved up, you’re back to square one.

I looked at the prices for bike-share programs. Chicago’s Divvy has two price structures: yearly memberships and day rates. $70/year or $7/day, plus usage: in both cases you get 30-minute trips for free, but if you’ve got a longer bike trip than that, you get dinged $1.50 or $2.00 per 30 minutes. Austin’s nascent bike-share system has a similar breakdown, but is slightly more expensive.

So if you’re poor, the annual plans are probably out just because of the upfront costs, even though on a per-day basis, they’re a much better deal. If anything, you’re on the daily plan (Austin also has a weekly plan), although again, this presupposes you’ve got a bank account.

What about getting your own bike? You can get a beater bike on Craigslist. There are bikes listed there right now in the $20–50 range, so if you’re poor, the break-even point for rent vs own comes quickly—within one pay period. If you could afford the daily bike rental, you could afford to buy a bike. If you’re going to use a bike for commuting to and from work, it would be a no-brainer. It would also be a no-brainer for someone with more discretionary income who wants to commute by bike.

So given that anybody with even marginal math skills could figure out that ownership beats rental for routine, day-to-day bike usage, what’s the use-case for rental? It’s for when you’re out of your routine. Non-routine uses are hard to predict—it seems redundant to point that out. That makes the best placement of bike stations problematic.

Another obvious use case is tourism, and from what I’ve seen in Chicago and San Antonio, the placement of bike stations clearly targets tourists.

I don’t think it would be a bad idea for bike-sharing systems to be more accessible to the poor, but as long as those systems are run by private companies trying to turn a profit, it’s going to be difficult to balance that equation. Organizations like the Yellow Bike Project can do more to improve bike mobility for the poor right now, by providing them with their own bikes, teaching them how to maintain bikes, and giving them access to shop space.