As a kid I was the sort of nerd who got serious about quiz bowl. My senior year of high school I was on a team that advanced to the state playoffs. In college, at a Big Ten university, I was on a team that traveled the Midwest playing other teams of fast-twitch buzzer-mashers.
Whereas some players had deep recall of Russian novels or the periodic table, I tended to skate by on loose-ends trivia: pop culture, sports, the occasional lucky stab at U.S. history. By the time I was old enough to drink, I was a solid bar-trivia player. In a weekly pub game, I once nailed down a win by correctly naming the capital of Uganda (Kampala) on the final question. A different night, a new teammate and I simultaneously blurted the answer “apogee” to a question about the moon’s orbit. Smitten, I asked her out, and we dated for the rest of the summer.
Like I say: nerd.
That was years ago, though, largely before Google even existed, long before we all toted around wireless internet supercomputers that fit in our jeans. These days any worthwhile bar trivia night strives to be at least partially Google-proof because huge swaths of the world’s loose knowledge have been rounded up and cataloged by the most complex network of machines ever devised. The instant recall of facts, formerly a marker of elite intelligence or at least the image of it, has become an affectation. You want to know the capital of Uganda? Two keywords in a search bar is all you need to get the answer faster than you could even ask the question. Quick recall is now a parlor trick, like grabbing a live fly out of midair, or uncapping a beer bottle with a folded dollar bill. An intelligence predicated on stockpiling facts is outmoded, naïve. Look what happened in the past 20 years to card catalogs, road atlases and Rolodexes. The databanking that got you through multiple-choice tests no longer secures your relevance. Just ask a phone book.
But these are heady days to examine the way you think, if you’re willing: Neuroscience and the rise of artificial intelligence (more on that later) have given us new insights into the interplay between the mind and the brain, two interlocked (but sometimes competing) parts of ourselves. For those of us who long conflated a facile memory with actual smarts, though, analyzing our own thought habits is about as enticing as counting carbs or auditing credit card bills. Some routines are so entrenched that drilling into them requires a confrontation with the ego. Especially if you’re the sort who considers yourself a good thinker. Which is most people, likely, in part because they give so little thought to the matter. If you weren’t good at thinking, well, wouldn’t that catch up with you? Surely, yes, of course, ergo, there’s no need to think about the matter further. Though if you did, being such a good thinker, would you not, assuredly, come up with a way to improve your thinking even further?
In his new book Winning the Brain Game: Fixing the 7 Fatal Flaws of Thinking, Matthew E. May sets out a convincing case that no one much likes to examine the ways he or she thinks, in part because we’re so conditioned to cheap rewards for quick answers that we scarcely bother to do much real thinking at all. May explains that he’s the sort of guy who’s hired by companies large and small to stump workers and executives with brain teasers. This sounds like great work if you can get it, and the way he writes about these sessions—breezily, almost like a street magician recalling audiences he stumped—suggests he genuinely has hacked into something fundamental, and likely true, about being a person in the 21st century: We have access to so much external knowledge, we’ve forgotten how to ask ourselves decent questions. School rewards answers—fast ones. Work rewards productivity, predicated usually on finding paths of least resistance.
May’s enduring thesis, and one that’s hard to debate, is that we’ve been conditioned by a lifetime of, essentially, trivia contests to mistake the regurgitation of facts for thinking. Rather, he argues, the rote recall of information—or the obligatory regurgitation of possible solutions at top speed—takes place somewhere outside the analytical mind, constituting a reaction less intellectual, more glandular in nature. “Our brains are amazing pattern machines: making, recognizing, and acting on patterns developed from our experience and grooved over time,” he writes. “Following those grooves makes us ever so efficient as we go about our day. The challenge is this: if left to its own devices, the brain locks in on patterns, and it’s difficult to escape the gravitational pull of embedded memory in order to see things in an altogether new light.”
This strikes me as likely true. Those of us who went through American schools have been conditioned to rely on those patterned responses, the fast responses, for decades. Looking back, the best quiz bowl players always buzzed in before the proctor finished reading the question.
***
In May’s day job, he prods groups, whatever the project, to reach for what he calls “elegant” solutions. Those, by and large, are the simplest, cheapest, least-intrusive, most-effective changes you can make to a system. Lesser solutions, he finds, tend to trade quality for speed. He insists that many of the reasons we fail to find elegant solutions are self-inflicted. We overthink a problem, or jump to conclusions, or decide after a few minutes of mumbly debate that we’ve come up with a solid B-minus answer, and we’re ready to move on to the next emergency. A less charitable author might describe those pitfalls as themselves lazy, but realistically they’re the shortcuts any person uses to navigate the zillion gnatlike tasks that drain our attention. You make these mistakes and compromises because your brain has evolved over eons to value functional near-facts over perfectly crystalline truths. And often, “good enough” is so-called for a reason—duct tape and Taco Bell are revered for the same one.
He offers a version of a brain teaser he uses in sessions with clients; the team he sketches in the book happened to be Los Angeles Police Department bomb technicians, the sort of group whose members regard themselves as unflappable thinkers and decision-makers. Here’s the scenario he poses: You run a fancy health club that stocks its shower stalls with expensive shampoo, in big bottles that would retail for $50 at a salon. Unshockingly, these big bottles often go the way of a hotel bathrobe: Members take them home at a distressing rate, costing you plenty. What solution can the bomb techs devise that will be unobtrusive, cheap or free, and protect your inventory?
Yes, sure, you could switch to travel bottles, or force guests to check the shampoo out, but these will complicate operations at your otherwise immaculate and successful health club, so think harder. He says the employees at the real-life club that faced this problem figured out an unobtrusive, simple solution that cost no money. It is one any bright child could devise. And yet the bomb techs didn’t arrive at it in their few minutes talking it over (and neither did I as I read the book). In a health club where people are stashing a big ol’ bottle of fancy shampoo in their gym bags on their way out, merely uncapping the bottles, it turns out, is one heck of a deterrent.
When groups tackle this problem, he writes, he sees all seven of the typical categories of thinking mistakes he lays out in the book. To summarize them as a holistic piece of advice for how to think smarter: Be more deliberate. Ask many questions before deciding on an answer. Do not accept a sloppy solution because it is easy. Do not talk yourself out of great ideas. Do not reject solutions because someone else came up with them.
All of this sounds perfectly agreeable when laid out in those terms. No one thinks of herself as a sloppy thinker, but then, such is the tautology; a careful thinker would already know the pitfalls in her own process. Even then, history is littered with terrible ideas that lasted a very long time. As Carl Sagan wrote in his book Cosmos of Ptolemy, the second-century Greek astronomer of Alexandria, “his Earth-centered universe held sway for 1,500 years, a reminder that intellectual capacity is no guarantee against being dead wrong.”
It’s freeing to realize you’re probably, profoundly, deeply wrong about something you believe very much. Freeing, because it gives you permission to think intently about what exactly that might be. We’re all victims of our hard-wiring, you see, and May revels in citing studies in neuroscience and behavioral psychology to point to our flaws, as well as our ability to overcome them. “The brain is passive hardware, absorbing experience, and the mind is active software, directing our attention,” he writes. “But not just any software—it’s intelligent software, capable of rewiring the hardware. I could not have said that with confidence a few decades ago, but modern science is a wonderful thing.” This is, in a nutshell, the value of bothering to bother. The more you force yourself to think slowly, the more likely your brain becomes to engage that gear.
***
To engage your slow thinking, May builds his book largely as he sets up his seminars: around sinister Mensa-style riddles that make you aware of how inflexible you’ve let your brain become. Most are incredibly simple, which is what makes them so humbling. The favorite here is the classic Monty Hall Problem, a puzzle distilled from the game show Let’s Make a Deal. In a book called Winning the Brain Game, it feels like a required stop.
The old game show climaxed with a logic puzzle folded into a game of chance. You, the contestant, were offered the choice of three doors. Behind one was a fabulous prize—say, a car. Behind the other two were booby prizes—in the classic arrangement, goats. Whichever door you chose, the host, Monty Hall, would pause before revealing it. Then he’d open one of the remaining two doors to show you a goat and ask: Do you want to stick with your original door, or switch?
Strangely, this innocuous question, raised many times over the years but most famously in a 1990 Parade magazine column, creates genuine havoc. May takes glee in recounting the fallout from the solution offered by columnist Marilyn vos Savant—that one should always switch doors. Professional mathematicians at the time wrote in to upbraid her for numerical illiteracy, insisting it was a 50/50 proposition. Even after vos Savant was vindicated and previously incensed Ph.D.s wrote in with mea culpas, the spat echoed for years. When The New York Times revisited the logic problem in 2008, for instance, the paper built an online game for readers to play for goats and cars, to keep score over many tries. And sure enough, you click on enough doors, you learn to switch.
The reason could scarcely be simpler. When you choose one door, you leave two doors for Monty. At least one of those doors must, by definition, hide a goat, and at the turn, he’ll always show you a goat—but then, you already knew he’d always have a goat to show. There’s a two-in-three chance that you didn’t pick the car when you chose your door. When he offers to trade his remaining closed door for yours, he’s effectively giving you both of the doors you passed on with your original choice.
Two for one. A two-thirds chance of winning. By switching doors, you double your probability of winning the car, from one-in-three to two-in-three. And still this strikes many people as counterintuitive. When you hold onto that first door, it somehow seems more likely to hold a car. The decision to stay, May writes, is easy, and lets you rest without scrutinizing the actual odds.
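If the arithmetic still feels slippery, you can do what the Times did and simply play the game many times over. Here is a minimal simulation sketch in Python—my illustration, not May’s or the newspaper’s—that pits the stay strategy against the switch strategy over 100,000 rounds:

    import random

    def play_round(switch):
        """One round of the game; returns True if the contestant wins the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)    # the car hides behind one of three doors
        pick = random.choice(doors)   # the contestant picks a door at random
        # Monty opens a door that is neither the contestant's pick nor the car
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # switching means taking the lone door that is neither picked nor opened
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    trials = 100_000
    stay = sum(play_round(switch=False) for _ in range(trials)) / trials
    swap = sum(play_round(switch=True) for _ in range(trials)) / trials
    print(f"stay wins about {stay:.1%} of the time")    # roughly 33%
    print(f"switch wins about {swap:.1%} of the time")  # roughly 67%

Run it and the stay strategy settles near one-in-three while switching settles near two-in-three—the same lesson the Times’ readers learned one goat at a time.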
Persi Diaconis, a Harvard University statistics professor, told Times reporter John Tierney in a 1991 story about the fracas: “Our brains are just not wired to do probability problems very well, so I’m not surprised there were mistakes.” Such a simple little trap is the Monty Hall Problem, and yet its very name was coined in a 1976 paper in the journal The American Statistician. This tiny puzzle is taken very seriously. Your intellectual capacity is no protection against being wrong.
***
At some point in the near future, robots will handle a lot of the rote chores (and even deep intellectual efforts) that sap us on a given day. But even now, artificial intelligence (AI) researchers are grappling with the ways a computer intelligence built to perform a specific job might hack that task, in a nearly human fashion, by rearranging its priorities to derive the largest reward under its programming. In a paper titled “Concrete Problems in AI Safety,” published in June, a team of AI researchers, including three from Google, forecast many of the pitfalls and workarounds that an AI bot (their hypothetical is a housecleaning robot) would devise to satisfy its assignments. Oddly, several of them sound like what any teacher or boss would have to consider if working with a petulant or nervous teenager. How do you keep a robot from breaking things or getting in people’s way as it rushes to finish a job? How do you keep it from asking too many questions?
The most human concern, to me, is how we keep it from gaming the reward system. “For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up,” the researchers write. “Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward.” It’s a complex question, one that examines much of what we take for granted as a basic social contract. Taken literally, though, it points to the problem of fixation, of setting monomaniacal goals. A cleaning robot that believes, say, that its use of bleach is a good measure of how much work it has done might simply bleach everything it encounters. “In the economics literature,” the AI researchers write, “this is known as Goodhart’s law: ‘When a metric is used as a target, it ceases to be a good metric.’” The stated goal, in other words, is rarely the actual goal.
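To make the fixation concrete, here is a tiny toy sketch—mine, not the researchers’—of a cleaning robot graded on a proxy (how many messes it reports seeing) rather than on the thing we actually want (a clean room):

    # Toy illustration (not from the paper): the reward tracks a proxy metric --
    # messes the robot reports seeing -- instead of how clean the room really is.

    def proxy_reward(messes_reported):
        """Full reward whenever the robot reports zero messes."""
        return 1.0 if messes_reported == 0 else 0.0

    def true_score(room):
        """What we actually care about: the fraction of the room that is clean."""
        return room.count("clean") / len(room)

    room = ["mess", "mess", "clean", "clean", "clean"]

    # Honest policy: look, clean everything you find, then report zero messes.
    honest_room = ["clean" for _ in room]
    print("honest:", proxy_reward(0), true_score(honest_room))  # 1.0 1.0

    # Hacked policy: keep the sensors off. Zero messes reported, full reward, dirty room.
    print("hacked:", proxy_reward(0), true_score(room))         # 1.0 0.6

Both policies collect the same reward; only one of them leaves you a clean room. That is Goodhart’s law in a dozen lines.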
Yet we all set goals, and May’s business is to help us figure out how to reach them. At times his framework betrays how accustomed he is to working for big corporate clients who no doubt respond best when employees and middle managers are told to ignore all limits on the way to greatness. He enlists as an example a 60-something potato farmer named Cliff Young who, in 1983, entered an ultramarathon in Australia—a 542-mile run from Sydney to Melbourne. Shabbily attired, unsponsored and untrained, he nonetheless managed to beat a field of professional runners by 10 hours over five days. Why? Well, he apparently had become ludicrously fit by scampering around his farm chasing livestock over the years. But to May’s point, he simply had no idea the conventions of the sport held that runners should sleep six hours a night during the race. May writes: “In fact, his naïveté in all likelihood enabled him to win in the manner he did—because he didn’t know it ‘couldn’t be done,’ he was empowered to do it.”
That’s an amazing example, overlooking though it does the many, many, many things considered impossible because in fact they are, firmly, impossible. More inspiring to me, and probably to schlubs everywhere, is the embrace of our natural limits. You free up a lot of mental and emotional bandwidth to do great things when you stop chastising yourself for not being the Cliff Young in this analogy. Yeah, you might wind up running through the nights for the better part of a week and become a folk hero straight from the farm. But more often, you’re going to be trying to figure out how not to make an arithmetic error or obvious typo in an email to a client when you’re in the 10th hour of your workday, wondering whether you should cook dinner or just say to hell with it and stop at said Taco Bell on the way home. We all bump up against our limits in different ways, and as it turns out, many of them are real.
Inevitably though, the simpler the problem you face, the more likely you are to get it right, and a small, correct thought can be infinitely more valuable than a large, incorrect one, even an incorrect one off by just a few degrees. The lesson I took from May’s analysis: Shrink your problems to a size that allows you to think clearly about them. Do this by first asking very good questions. Then, as you build to an answer, be aware of the pitfalls your brain invariably will stumble into as a clumsy instrument of human apprehension. No thought forms in a vacuum; most are derived from the leftover crumbs of old thoughts.
I experienced this recently when driving to a wedding shower in a suburb of Chicago I’d never visited. I turned onto the street of the home I was driving to, saw about 10 cars parked around a driveway and the adjoining street, and thought, This must be the place. It was inane of me to leap to that conclusion without so much as glancing at the house numbers. During a long day of travel, in an unfamiliar setting, I reached for an answer that would be comfortingly simple. But in part because I had May on my mind, I was fully prepared to notice why I was messing up, and to call myself on it.
Knowing when and why our brains take shortcuts (and why we let them) allows us to catch ourselves (our brains?) in the act. And it also hones our intuition around when we are, as May terms it, “downgrading” or “satisficing”—essentially, convincing ourselves to tap out early—or just staying in our usual rut.
It’s comforting to know that human intelligence, like the artificial intelligences we’re bringing into the world, is capable of being hacked. Most of what May proposes falls under the heading of habits to cultivate. One trick, though, sits right at hand for any stressful occasion. It begins with seeing oneself impartially, a tendency May traces to Adam Smith’s concept of an “impartial and well-informed spectator.” In our best moments, most of us hope to be that spectator for ourselves, and one way to accomplish that is to treat ourselves as a spectator. May cites a University of Michigan study that found people who psyched themselves up for a speech in the second person or by their own name (You got this; Sam totally has this) did better and felt less anxiety than people who used the first person (I got this). In a sense we are our best selves when we leave ourselves momentarily, look back in, and reassure everyone that, having done all we can, it’s going to be fine, so long as we take our time.
This article originally appeared in the November 2016 issue of SUCCESS magazine.