Plenty of nerdy couples read in bed at night. Really nerdy couples might even exchange comments on their respective books. But in a sci-fi writer’s house, such nerdiness achieves whole new dimensions:
“This is a flawed discussion of quantum computing,” I announced to the dark room.
“It’s bedtime,” my Laddie mumbled from the pillow beside me, drifting off in the glow of his own Kindle.
“But they described D-Wave’s systems as quantum computing. It’s really quantum annealing.”
I rolled over and scowled at the offending paragraph. How dare he not indulge my late-night indignation when he was the one who’d recommended the book! A family member who works in Silicon Valley had given him a copy of After On, by Rob Reid, as a (presumably) semi-satirical glimpse of startup culture. The book follows a scrappy trio of entrepreneurs, whose acquisition by a social media juggernaut embroils them in the emergence of a sentient artificial intelligence (AI).
Like some über-modern spin on the epistolary novel, character scenes are interspersed with hilarious Amazon product reviews, blog articles by pop culture critics and conspiracy theorists, and excerpts of intentionally bombastic speculative fiction authored by one of the characters. All these fragments eventually tie into the main storyline, creating a fun game for readers like me who need a puzzle element in their fiction. In fact, I enjoyed these asides more than the core they support. I suspect Reid did, too, since he seemed to put a lot more effort into the sidebars than the story.
Throughout the book, I struggled to differentiate between poor craft and deliberate metafictional technique. Much like his fictional AI, Reid can manage small-scale scenarios well enough, but when the scope expands, as it does by the end, with nefarious government schemes and plucky ploys that teeter on the edge of implausibility, things spiral out of control. So did his ambition outstrip his ability, or was he intentionally structuring his narrative to parallel the AI's machinations?
In another example, the book "tells" a lot about its characters up front, baldly describing their personalities and histories. It's not my preferred way of getting to know a cast, but acceptable from an omniscient narrator. However, other characters' details, which the narrator must surely know just as well, are withheld in a presumed effort to generate suspense. Is this authorial inconsistency, or a nod to how the AI selectively manipulates facts?
Then there’s the overall lack of character development. We’re told up front who people are, and they mostly act the way we expect, like chess pieces moving in their prescribed patterns. Is this two-dimensionality simple laziness on Reid’s part, or intended to reflect how the AI sees people as stageable playthings? There’s a fine line between eccentric genius and remorseless hack, and I’m not sure on which side this book falls.
If the former, awareness of it (dare I say sentience?) breeds an off-putting smugness that pervades the book. Colorful descriptions slip from witty to overwrought. The cast is mostly hip archetypes who subject one another to lengthy lectures for the reader's benefit, on topics ranging from quantum computing (see above) to the origins of modern terrorism. I read tons of non-fiction; I don't need an in-line amateur TED Talk from a character parroting Wikipedia. Such indulgences bog down what little plot there is, like bloatware in what could have been a slick, lightweight-but-fast operating system. In app parlance that After On's characters would surely appreciate, it spoils the user experience.
Although it does possess a few quirky charms, After On reads like a 500-page Medium op-ed by a paranoid entrepreneur overdosing on Red Bull and TechCrunch: a fictionally framed forecast of what could happen when social media, big data, and AI collide…
That’s exactly what one character attempts to do with his rotten dramatic scenarios! Mediocrity or meta? After On presents a twisted literary Turing Test (which could itself be some deliberate meta madness—gahh)! And test parameters dictate that if we can’t tell the difference, we have to assume it’s legitimate.