Throwing Darts at the Ballot by William Poundstone

This year, like every election year, Presidential contenders are getting 24/7 media attention. Downticket candidates, not so much. I did a survey asking Americans to name as many of their elected representatives as they could. The average person got only six.

Why should we care? Well, obviously, a voter who does not know the name of a mayor or state assemblyperson is unlikely to know much else about her, such as the issues she ran on and any accomplishments, failures, or criminal convictions that would bear on a bid for re-election. There is a connection between voter knowledge and the quality of voter decisions.

The very nadir of voter ignorance may be judicial races. The media don’t cover them. Many voters encounter the names of judicial candidates, for the first and last time, as their stylus hovers over them on the ballot. Judicial races are usually nonpartisan, so there isn’t even a party affiliation to fall back on. “You know the most frightening thing about judicial elections?” asked political consultant Parke Skelton. “Eighty percent of the people actually pick someone.”

Consultants have to be mavens of voter ignorance. A recent Los Angeles Times piece notes the profusion of “sexy titles” on ballots. Judicial candidates submit a three-word description to appear on the ballot next to their name. Since most voters are completely ignorant of the candidates, the race is mainly about which three-word titles they like best. A “sexy title” is one that pushes the right emotional buttons, like “Violent crimes prosecutor.”

“The sexier the title, the better your chance of getting elected,” political consultant David Gould told the Times. He once brainstormed for a candidate by asking people “what do you hate the most?” The most popular answer was “people who hurt children,” so his candidate self-identified as a “Child molestation prosecutor.” He won.

Hold on a second—doesn’t the candidate have to fit the description? That’s the thing. Candidates are stretching the truth for the sake of a sexy title. Anyone who comes within 20 feet of involvement in a gang case is apt to turn up on the ballot as a “Gang murder prosecutor.”

“My prediction for next election,” Los Angeles County Judge Randolph M. Hammock said, “will be an ‘ISIS/terrorist prosecutor.’”

In my book Head in the Cloud I write of another factor: “foreign-sounding” names. In 1992 the well-respected California judge Abraham Aponte Khan lost to a virtually unknown challenger who had been rated “unqualified” by the Los Angeles County Bar Association. The challenger’s name was Patrick Murphy. He apparently won because the name Murphy sounded more “American” than Khan. The all-American Judge Murphy later resigned over allegations of money laundering and chronic absenteeism.

In 2006 Judge Dzintra Janavs, rated “exceptionally well qualified” by the Bar Association, lost to Lynn Diane Olson, who ran a Hermosa Beach, Calif., bagel store.

The sad fact is that voters choose names almost at random. This turns the ballot into a very effective psychological experiment in hidden bias. It doesn’t do such a good job of electing competent judges.

The Border Fence Principle by William Poundstone

Americans who support a border fence are less likely to know where the border is, exactly. They are more likely to give a wrong answer to the question, “What country is New Mexico in?”

Just so we’re all on the same page: New Mexico is in the U.S. of A. It has been a state for the whole lifetime of virtually everyone now drawing breath; its votes have counted in every presidential election since its 1912 admission; Breaking Bad was shot there.

But about 9 percent of adult Americans can’t answer this question. The most common “wrong” answer is of course Mexico. 

My upcoming book, Head in the Cloud, began with a question we're all asking ourselves: Is there any point in knowing facts, now that facts are so easy to look up?

You’ve got a mobile device in your pocket. Pull up the Wikipedia page on New Mexico. It has all sorts of facts that even the best-educated are unlikely to know. WiFi is a great equalizer. At the very least, our omnipresent cloud has reduced the need for the sort of rote memorization that was once a routine part of education. The important thing, many educators and tech pundits will tell you, is to acquire the skills needed to use today’s (and tomorrow’s) digital tools effectively.

An alternate view, more traditional and elitist, is that there is a canon of knowledge the well-educated person needs to know. This is the thesis of the cultural literacy movement and of Common Core standards. Yet it’s a hard sell in our diverse society. Who decides what facts matter?

In Head in the Cloud, I take a different approach, one grounded in data analytics. I look at correlations between knowledge and behavior, politics, and life outcomes. These correlations are often surprising, and there is much evidence that map knowledge matters.

In 2014 Russian troops entered Ukraine’s Crimean peninsula. Americans were debating what, if anything, to do about it. Three political scientists, Kyle Dropp, Joshua D. Kertzer, and Thomas Zeitzoff, ran a survey asking Americans to locate Ukraine on a world map.

The researchers found that the further a person’s guess was from Ukraine’s actual location, the more likely that person was to support a U.S. military intervention there.

I ran a survey of assorted general knowledge questions, including finding a state or nation on a map. The survey also asked an opinion question: “There has been talk of building a border fence to prevent illegal immigration. On a scale of 0 to 10, how do you feel about this idea?”

The more factual questions a person answered correctly, the less likely that person was to favor a border fence. The correlation was impressively strong, even when holding educational level and age constant. It’s not just that the fence supporters were less educated. They knew less than others of the same educational level and age who did not support a border fence.
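I can't reproduce the survey data here, but "holding education constant" amounts to a partial correlation, which is easy to illustrate on synthetic data built so that knowledge (not just schooling) predicts fence opposition. The coefficients below are made up for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins (NOT the survey data): education drives knowledge,
# and knowledge independently lowers fence support.
education = rng.normal(size=n)
knowledge = 0.6 * education + rng.normal(size=n)
fence_support = -0.5 * knowledge + 0.1 * education + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

pc = partial_corr(knowledge, fence_support, education)
print(round(pc, 2))  # negative: the link survives controlling for education
```

The raw correlation of knowledge and fence support is partly a schooling effect; the residualized version shows the part that isn't.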

Those who couldn’t find places on a map were more likely to want a border fence. And here’s another question that was strongly connected to border fence support.

Scientists believe that early humans hunted dinosaurs such as Stegosaurus and Tyrannosaurus. True or false?

Those who said true wanted the border fence; those who said false didn’t.

Why are better-informed people more likely to oppose a border fence? I suspect the answer is that, whatever their feelings on illegal immigration, they know more facts that lead them to doubt the fence’s practicality. They know that the border, which might look “small” on a map, is actually very long and would cost a lot of money to build and maintain. They know that long tunnels have been built under the U.S.-Mexico border. A fence wouldn’t stop that, nor would it prevent people from using ladders or rappelling gear to get over it—unless the fence was guarded 24/7 at enormous expense. They know that the Great Wall of China was built to keep out the Mongols, who broke through and conquered China.

It’s human nature to avoid or minimize information that challenges deeply held beliefs. Thus the border fence supporters do not, for the most part, use their mobile devices to Google reasons why the idea won’t work. They already have an emotional commitment to the idea, based on promises of a simple solution to a complicated problem.

Meanwhile those with more contextual knowledge—a wide store of facts in their heads—were more skeptical of the border fence the first time they heard of it. They never committed to the idea, even if they were immigration hawks by ideology.

In short, the problem is that the people who could most benefit by looking up facts don’t know that they need to look up facts. You can’t Google a point of view. 

Show Me State Legislates Literacy by William Poundstone

My upcoming book, Head in the Cloud, looks at how mobile devices are devaluing knowledge. Why should we fill our heads with facts, when facts are so easy to look up?

One small though amusing example is the culture war over grammar. Some people go nuts over misspellings on menus and mispronunciations in business meetings. Others—mainly those making these errors—don't see what the big deal is. Falling into the former category is Missouri State Representative Tracy McCreery. She noticed that an alarmingly large proportion of her colleagues use the word physical when meaning fiscal. McCreery therefore introduced House Resolution 1220, a tongue-in-cheek attempt to legislate literacy.

This is one illustration of why knowledge in the cloud isn't a substitute for knowledge in your head. It's easy enough to look up the meaning of a word. But you're not going to do that unless you already have reason to believe that you're using the word incorrectly. You need to know enough grammar and usage to know what you don't know—or alternatively, you need to know enough literate people to be corrected. Apparently many Missouri legislators don't. 

The Art of Staying Rich by William Poundstone

Investment advisors typically focus on how to get rich. A neglected question is how to stay rich.

In the familiar financial advice narrative, the investor saves and invests, ultimately achieving significant wealth—or at any rate a middle-class retirement "number." But how you draw down accumulated wealth can be as important as how you accumulated it, or even more so. That is the provocative thesis of Money magazine blogger Darrow Kirkpatrick in a recent post.

He looks at strategies for liquidating a portfolio over a 30-year period. That much may sound familiar. There have been countless studies on how much you can withdraw each year, without too much risk of running out of money. (Does the "4 percent rule" still work in today's low-return environment?) There have also been studies of tax efficiency. (Should you withdraw from a regular or Roth IRA first? Should you live off your savings a while in order to delay taking Social Security?)

These are important issues. Kirkpatrick looks at something else, something that is generally ignored and may be surprisingly important. He's asking whether a portfolio's stocks or bonds should be liquidated in a given year. (Hereafter I'll use "stocks" as shorthand for equities, embracing stock mutual funds and ETFs. "Bonds" will be shorthand for Treasuries, high-grade corporate bonds, money market accounts, and "cash"—and funds buying them.)

Kirkpatrick's study uses historical data from 1928 onward. He assumed you retired in year X with $1 million (in dollars of the time). Your million was initially split 50-50 between S&P 500 stocks and 10-year Treasuries. Then you followed strategy Y, or tried to, for 30 years, dying in year X+30 with a portfolio worth Z.

His calculations assume the "4 percent rule." In the first year of retirement, you withdraw 4 percent of your million, namely $40,000. Each subsequent year, you raise that by the official inflation rate. 
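The schedule is purely mechanical. A sketch, assuming a constant 2 percent inflation rate strictly for illustration (real CPI varies year to year):

```python
# The 4 percent rule's withdrawal schedule over a 30-year retirement.
start_portfolio = 1_000_000
inflation = 0.02  # assumed constant, for illustration only

withdrawals = [0.04 * start_portfolio * (1 + inflation) ** year for year in range(30)]
print(round(withdrawals[0]))   # 40000 in year one
print(round(withdrawals[-1]))  # the year-30 withdrawal, inflation-adjusted
```

Note the withdrawal amount never looks at how the portfolio has actually performed; that's what makes running out of money possible.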

Kirkpatrick isn't endorsing the 4 percent rule for today's retirees. He's just using it as a familiar benchmark. We already know that this "rule" would have performed well in the 20th century (since it was that historical record that motivated Bill Bengen to propose it, in 1994). 

How do you realize your 4 percent (or whatever) from a diversified portfolio of given tax status?

One strategy is "equal withdrawals." That means you liquidate investments in proportion to your holdings. Should you have a 50/50 portfolio (stocks/bonds) and require $40,000 to live on this year, then you would cash out $20,000 from your stock holdings and $20,000 from your bonds. This sounds sensible, and it's often assumed to be the strategy in many calculations.
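In code, the proportional split is one line. A minimal sketch of the example above:

```python
def equal_withdrawals(holdings, amount):
    """Liquidate holdings in proportion to their current values."""
    total = sum(holdings.values())
    return {asset: amount * value / total for asset, value in holdings.items()}

sale = equal_withdrawals({"stocks": 500_000, "bonds": 500_000}, 40_000)
print(sale)  # {'stocks': 20000.0, 'bonds': 20000.0}
```

With an unbalanced portfolio the split follows the holdings: a 75/25 mix yields a $30,000/$10,000 split of the same $40,000.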

Kirkpatrick compares the equal withdrawals strategy to five alternatives. One is a rebalancing strategy in which withdrawals are taken to (help) rebalance the portfolio to a fixed allocation. The value of rebalancing is generally acknowledged, so many real-world strategies approach the one Kirkpatrick tested. 

Kirkpatrick also back-tested three strategies that compare stock returns to bond returns. If stocks have outperformed bonds in the recent past, then you assume the opposite will be true in the coming year, and liquidate that year's income from stocks.

This is based on the idea that the pendulum always swings back. There is considerable evidence that when one asset class overperforms in a given time frame, it's likely to underperform in the next time frame. Kirkpatrick tested strategies using the last year's performance of stocks v. bonds, and also 3- and 7-year averages of these asset classes' returns.

Finally, he tried a strategy based on Robert Shiller's Cyclically Adjusted PE ratio (CAPE). The CAPE is the U.S. stock price divided by a ten-year moving average of inflation-adjusted earnings. It's a measure of whether stocks are cheap or expensive, relative to earnings. In Kirkpatrick's scheme you liquidate stocks in years when they're "overvalued" (when the Shiller PE is above its long-term median). Otherwise you liquidate bonds. 
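As I read Kirkpatrick's post, both the contrarian and CAPE rules reduce to one-line decisions. A sketch (the function names and sample values are my own framing, not his code):

```python
def source_by_momentum(stock_trailing_return, bond_trailing_return):
    """Contrarian rule: liquidate whichever asset class recently outperformed."""
    return "stocks" if stock_trailing_return > bond_trailing_return else "bonds"

def source_by_cape(current_cape, median_cape):
    """CAPE Median rule: sell stocks when the Shiller PE exceeds its long-run median."""
    return "stocks" if current_cape > median_cape else "bonds"

print(source_by_momentum(0.12, 0.03))  # stocks just outperformed -> sell stocks
print(source_by_cape(30.0, 16.0))      # stocks look expensive -> sell stocks
print(source_by_cape(12.0, 16.0))      # stocks look cheap -> sell bonds
```

The momentum variants differ only in the lookback window (1, 3, or 7 years) used to compute the trailing returns fed in.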

How is this different from the 1-, 3-, and 7-year strategies? First of all, it pays no attention to bonds at all. It doesn't matter whether interest rates are high or low; whether bonds have done great or lousy lately.

It also doesn't pay attention to whether stocks have done great or lousy. All that matters is where the Shiller PE ratio stands now, relative to history.

Kirkpatrick calculated two numbers for each of his tested strategies. One is the success rate: the percentage of years in which you could have retired for which you would have had enough money, following a given strategy (and the 4 percent rule), for 30 years of inflation-adjusted income. He also calculated the median portfolio value at the end of the 30-year period. Of course, if the strategy fails, the ending portfolio value is zero. 

The good news: All the strategies had a high success rate. They ranged from 79.3 percent to 91.4 percent. Again, this shouldn't come as a surprise, as we know the 4 percent rule would have worked well in the 20th century. 

But there was a huge difference in ending portfolio values. With the worst strategy it was $2.11 million. With the best strategy, it was more than three times that, $6.77 million.

You'll notice that all these values are much higher than the hypothetical million our investor started with. This is a paradox of the 4 percent rule (or any simple variation). You'll probably die richer than you've ever been (even adjusted for inflation). Yet there's still a worrisome possibility of ending up broke.

The best of Kirkpatrick's six strategies had the highest success rate (91.4 percent) and also the highest ending value ($6.77 million). It was the "CAPE Median" strategy, the one based on the Shiller PE.

Oddly enough, the second-best strategy by both criteria was "Equal Withdrawals"—you might say, the dumbest, most generic strategy. Its success rate (89.7 percent) was only a bit less than CAPE median, though the final portfolio value ($4.72 million) was 30 percent less.

This is another demonstration of the predictive power of Shiller's CAPE for stock returns—over the historical record, anyway. There are no guarantees going forward, but it's impressive.

Another observation is that portfolio drawdown strategies rival accumulation strategies in importance. Kirkpatrick's findings merit further study.

How to Pick Powerball Numbers by William Poundstone

A Powerball ticket costs $2 and has a 1 in 292,201,338 chance of winning the jackpot. The price and the odds are the same every week. What changes is the jackpot amount. For Wednesday’s drawing it’s $1.5 billion.

Do the math: The expected winning is $1.5 billion divided by 292,201,338, or $5.13. That's more than the $2 the ticket costs!

There is a catch. You might have to share the jackpot with someone else. Last week about 400 million Powerball tickets were bought. Notice that’s more than the 292 million possible number combinations. As it happened, no one picked the right combination. If we assume a similar number of tickets will be sold this week, then each set of numbers is selected by 1.37 ticket buyers on average. In other words, if you choose a random set of numbers, there’s a good chance you’ll be the only one picking that set. And even if there’s one other person with that set of numbers, that would cut your expectation in half, from $5.13 to about $2.57—which, you’ll notice, is still greater than the cost of a ticket.
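Here's that arithmetic, jackpot only (smaller prizes, taxes, and the lump-sum discount are all left out):

```python
jackpot = 1_500_000_000
combinations = 292_201_338
ticket_price = 2

ev_solo = jackpot / combinations
print(round(ev_solo, 2))   # 5.13 -- above the $2 ticket price

# If exactly one other player shares your numbers, the jackpot splits two ways.
ev_split = ev_solo / 2
print(round(ev_split, 2))  # 2.57 -- still above $2
```

A fuller model would treat the number of rival winners as random (roughly Poisson, given hundreds of millions of tickets), but even this crude version shows the two-way split still beats the ticket price.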

I haven’t even factored in the smaller prizes. They further increase a player’s expectation. In short it appears that, even with all the media hype and the hundreds of millions of other players, a Powerball ticket is currently a positive-expectation wager. That’s a rare thing, though it’s not the first time it’s happened, either.

Should you buy a ticket then? Let me first say that there is a simple strategy that increases your chances considerably. You pick unpopular numbers.

The fact is that some numbers are far more frequently chosen than others. For instance “lucky” 7 is a popular number. Should you include 7, you increase the chance that, should you win, you’ll be sharing a jackpot. Consequently smart players avoid 7.

You might think that “unlucky” 13 would be a savvy contrarian pick. Nope. Thirteen is a moderately popular choice, it turns out.

You’ll notice that the Powerball jackpot has gone uncollected for many weeks, despite the fact that there are currently more ticket buyers than available numbers. And there’s a good chance that, when the jackpot is won, it will be split by several people. This reflects the psychology of choosing numbers. Choices cluster on popular numbers.

There has been considerable research on which numbers are least popular (and thus best to pick). I’ll give a chart adapted from my book Rock Paper Scissors.

These are unpopular numbers—those relatively unlikely to lead to a shared jackpot. All you do is select your six numbers from this set at random. A low-tech way to do that is to write the numbers on index cards, shuffle them, and draw numbers.

Note that the “Powerball” pick has to be between 1 and 26. There are only three candidate numbers for that: 10, 20, and 29. Make sure to choose that at random too.
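For anyone who prefers code to index cards, the same random draw takes a few lines. The number pools below are placeholders for illustration, not the chart's actual picks:

```python
import random

# Placeholder pools standing in for the chart's unpopular numbers.
white_pool = [32, 34, 40, 43, 45, 46, 48, 51, 53, 56, 58, 59, 61, 64, 66, 68]
powerball_pool = [10, 20]  # placeholder subset of the 1-26 Powerball range

def pick_ticket(whites, powerballs):
    """Draw five white balls and one Powerball uniformly from the given pools."""
    return sorted(random.sample(whites, 5)), random.choice(powerballs)

numbers, powerball = pick_ticket(white_pool, powerball_pool)
print(numbers, powerball)
```

The point of the uniform draw is to avoid reintroducing your own psychological biases while staying inside the unpopular set.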

Now here’s why you shouldn’t quit your day job. Those who have read my book Fortune’s Formula will know about something called the Kelly criterion. By Kelly standards, Powerball remains a sucker’s bet—positive expectation or not.

Remember, the chance of winning at Powerball is 1 in 292,201,338. Nothing in the above system changes that.
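A back-of-the-envelope Kelly calculation makes the point. This sketch considers the jackpot only; taxes, the lump-sum discount, and shared prizes would shrink the answer further:

```python
# Kelly criterion: optimal bet fraction f* = (b*p - q) / b,
# where p = win probability, q = 1 - p, b = net odds received.
p = 1 / 292_201_338                          # chance of hitting the jackpot
jackpot = 1_500_000_000
ticket_price = 2
b = (jackpot - ticket_price) / ticket_price  # net odds per dollar staked

kelly_fraction = (b * p - (1 - p)) / b
print(kelly_fraction)                  # about two billionths of your bankroll

bankroll = 100_000
print(kelly_fraction * bankroll)       # a fraction of a cent -- less than one ticket
```

For any realistic bankroll, Kelly says to stake far less than the price of a single ticket, which is Kelly's way of saying "don't play."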

With two drawings a week, you would have to play a ticket in every single drawing for about 2.8 million years, on average, to win your first jackpot. The policy of buying lottery tickets is a certain financial drain that almost certainly never pays off while you’re drawing breath. 

Anagram Movie Reviews, F.A.Q. by William Poundstone

My Twitter feed (@WPoundstone) is devoted almost entirely to anagram movie reviews. In case that's not self-explanatory, I rearrange the letters of a movie's title to get a short commentary on the movie. 


Let me answer a few questions I get from readers. First—

Why anagram movie reviews?

I joined Twitter in April 2009. I wasn't a big social media person, but the 140-character limit struck me as an interesting challenge. In the early years Twitter users were trying to figure out what to do with it (I guess they still are). My idea was anagram movie reviews. It fits in 140 characters, and there's a constant stream of new material.

My friend Larry Hussar and I had been playing this game long before Twitter. We would occasionally send each other movie title anagrams by e-mail. The premise wasn't original with us. We'd seen an article on it somewhere (in Games magazine?). The one I remember was THE TOWERING INFERNO = NOT WORTH FIRE ENGINE. That's definitely a classic to live up to.
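Verifying a candidate anagram is mechanical, even if finding a funny one isn't. A quick check of that classic:

```python
from collections import Counter

def is_anagram(a, b):
    """True when two phrases use exactly the same letters, ignoring case and spacing."""
    letters = lambda s: Counter(ch for ch in s.upper() if ch.isalpha())
    return letters(a) == letters(b)

print(is_anagram("THE TOWERING INFERNO", "NOT WORTH FIRE ENGINE"))  # True
```

Counting letters rather than sorting them makes the check readable, though either works.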

I figured, Larry and I were already wasting time with this, so why not share the results with others? 

How do you come up with those anagrams?

I cheat. Meaning, I use a website, One Across Anagram Search. You type in a word or phrase and it gives you every possible anagram. Sometimes hundreds—pick any you like. I am more an editor than a creator of anagrams.

OK, it's not quite that simple. Sometimes you have to play around, try an actor's name or slang that's not in the website's dictionary. Sometimes there are so many potential anagrams that you have to filter them by a word or two you think you want to use (TOM and ASININE for MISSION IMPOSSIBLE—ROGUE NATION).

Larry does anagrams by hand, on paper—and even occasionally in his head. I lack that kind of patience. 

Which movies work best?

Well, it's easier to be funny when the movie is really bad. I do prioritize films where there's already critical blood in the water. (Rotten Tomatoes is useful in this regard.)

Long titles help, obviously. Some titles just happen to have an optimal combination of letters, allowing an unexpected profusion of anagrams. Examples were A MILLION WAYS TO DIE IN THE WEST (= SEE WILY INDIAN SHOOT LAME TWIT; HEY, WIT WAS ALL TOO MID-NINETIES, etc.) and STRAIGHT OUTTA COMPTON (TOO-IMPORTANT THUG CAST; I'M ATOP CUTTHROAT TONGS, etc.)

Why don't you use hashtags (#AnagramMovieReviews)?

In the early days of Twitter, I worried that hashtags might be confusing to new users. Though that's no longer the case, I still feel that 98 percent of hashtag usage is a typographic cry for attention, like using all caps or Comic Sans. (And I already use all-caps for movie titles and anagrams.) I don't imagine that many people want to search for anagram movie reviews, and those who do know they don't need a hashtag to do so. 

What's "Does the Dog Die (dot com)"?

It's an entirely earnest website intended to tell parents whether a movie shows the death of a pet—in case their kids would be upset. I've found that the site also reviews violent grown-up movies from the bizarre perspective of someone who only cares whether an animal dies. I occasionally post DTDD reviews as a parallel example of film-reviewing-under-absurd-constraint.

What don't you like about movies today?

Easy question. I don't like sequels with numbers in the title. Sometimes I can work it in but it always seems forced. 

On the other hand, I have no problem with the "pretentious" use of Roman numerals in sequel titles, as the I's are simply folded into the letter set.

What's your favorite anagram review?

I suppose that the anagrams that please me most are not necessarily the ones that please my Twitter followers. I like this one, one of my shortest:

Too-Beautiful Data by William Poundstone

Those interested in the "hot hand" and sports streakiness should check out an easy-to-use, addictive data visualization site that generates charts and heat maps of NBA players' scoring.

Data visualization, an indispensable tool of science and business, is a double-edged sword. The human eye and brain are good at spotting patterns and trends in noisy data. They are often better at this than algorithms are. That's the premise of CAPTCHAs; it's why people keep seeing "alien artifacts" in rover images of Mars. The problem is, the human perceptual system is too good at pattern-finding. Sometimes perceived patterns are only mirages. (Below center right, a "perfectly round sphere" recently imaged on Mars.)
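The mirage is easy to manufacture. A simulated 50 percent shooter, with no hot hand by construction, still puts up eye-catching runs:

```python
import random

random.seed(7)  # fixed seed so the demonstration is repeatable

def longest_make_streak(shots):
    """Length of the longest run of consecutive makes."""
    best = run = 0
    for made in shots:
        run = run + 1 if made else 0
        best = max(best, run)
    return best

# A coin-flip "shooter": every shot is an independent 50/50.
season = [random.random() < 0.5 for _ in range(500)]
print(longest_make_streak(season))  # long, hot-looking runs appear anyway
```

Over 500 independent 50/50 shots, the longest make-streak typically lands around eight or nine, which looks an awful lot like a hot hand on a heat map.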

This phenomenon is good for bookies. When sports fans think they can predict winners with a little data visualization (but really can't) that boosts business and profits. But in science it can lead to bad conclusions, and in business to bad decisions. Maybe data visualization tools ought to come with a warning like those on rear view mirrors: PATTERNS AND TRENDS MAY BE LESS SIGNIFICANT THAN THEY APPEAR.

The Cat That Beat the Stock Market by William Poundstone

Stock-picking cat Orlando

We've all heard the mild joke that a monkey, throwing darts at the financial pages, can pick stocks as well as a professional. In no small part it was this mental image that motivated the index fund industry. Lately a new claim is current: A cat's stock picks can beat the S&P 500 index.

There have been a couple of stock-picking cats. Last year the Observer invited three financial pros to compete against a ginger cat named Orlando and a group of British students. The cat's random picks (made by having the cat throw a toy mouse at a marked grid of stocks) outperformed the portfolios of the pros and the kids. 

Bob, another stock-picking cat, watching a human friend on TV

In the U.S., the best-known stock-picking cat is the recently deceased Bob, owned by PIMCO bond guru William H. Gross. Wrote Gross: "I often asked her [Bob was female] about her recommendations for pet food stocks, and she frequently responded—one meow for 'no,' two meows for a 'you bet.'"

It will be apparent that cats are replacing monkeys as the favored metaphor for "random" stock picking. (Cat people are free to find this demeaning.) In any case, hedge fund manager David Harding, of Winton Capital, recently weighed in on the matter on CNBC. He didn't mention cats, but he described a system of picking 50 stocks "at random" and weighting them equally. "We tested the idea and [it] immediately did better than the S&P 500."

Harding says he's raised $1 billion to invest in this random-50-stocks scheme, and it's been outperforming the market.

Is it possible that random picks (a "cat") can consistently outperform a broad market index like the S&P 500? It is. But let me first point out that the Observer contest doesn't mean much. It tracked the three portfolios over a year. Obviously there's a strong element of luck in that time frame. Suppose you and I competed at the roulette table. After a night of gambling one of us will do better than the other, but it would be wrong to conclude that the winner has superior skill. Roulette is dumb luck, and in the short term, the stock market is very close to that. 

Note that the Observer folks stacked the deck in favor of getting a click-friendly "man bites dog" headline. There was a two-thirds chance that either the cat or the school kids would beat the pros.

The S&P index tracks the 500 largest American stocks. Its purpose is to gauge the market performance of the companies most popular with investors and most important to the American economy. The S&P index was never intended to set a cap on what returns investors may expect in a close-to-efficient market. 

The S&P index is market weighted. It does not, in other words, track the worth of a portfolio invested equally in all 500 included stocks. Instead it assumes the portfolio is weighted according to market capitalization. Apple, for instance, has recently been over 4 percent of the index (not 1/500, or 0.2 percent). 

This means that every S&P index fund is strongly overweighted in Apple (and other big, popular companies like Exxon Mobil, Microsoft, and IBM). What's wrong with that? As Harding explains: 

"If you have the same expected returns from assets you should put the same weights on them to optimize the portfolio. So if you choose stocks at random and combine them, you will always beat S&P 500, or in 99.99 percent of cases."

Anyone who truly believes that all 500 S&P companies have equally good expected returns—as an efficient market theory diehard might—would want to invest equally in all. You should put 1/500 of your portfolio in each. By overweighting a few popular companies, you take on extra risk without getting anything in return.
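The contrast is easy to see in miniature. A toy sketch with made-up market caps (the real index also applies float adjustments, which I'm ignoring):

```python
# Hypothetical market caps in $billions -- illustration only, not real data.
market_caps = {"AAPL": 700, "XOM": 400, "MSFT": 380, "SmallCo": 20}

total = sum(market_caps.values())
cap_weights = {ticker: cap / total for ticker, cap in market_caps.items()}
equal_weight = 1 / len(market_caps)

print(round(cap_weights["AAPL"], 3))  # the biggest name dominates a cap-weighted index
print(equal_weight)                   # 0.25 -- every stock counts the same
```

Under equal expected returns, concentrating nearly half the portfolio in one name adds idiosyncratic risk with no offsetting reward, which is the heart of the equal-weighting argument.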

Now maybe you don't believe the market is all efficient, all the time. I don't, either. Unfortunately, there is little reason to believe that popular stocks merit their popularity. The evidence is that big and popular companies do worse than small and obscure ones.

The most popular stock of the moment is likely to be overbid. It won't always be most popular. Apple is such a market darling right now that it's hard to believe that posterity will look back and marvel at how undervalued it was in 2014. That means Apple's long-term return expectations are likely to be less than average for S&P 500 stocks.

For that reason, a balanced portfolio of all 500 S&P stocks, each comprising 1/500 of total assets, might be expected to have slightly better return and slightly lower volatility than the official S&P 500 index or funds tracking it.

You can do better than that. It is well established that stocks of small companies have outperformed those of large companies over long periods. A "random" basket of stocks, not restricted to the 500 largest, thus might be expected to outperform the S&P index—assuming the small cap bias persists in the future.

For that reason I find Harding's claim easy to believe. Investing in 50 random stocks is marvelously simple. The one thing I don't get: why would sophisticated investors pay hedge fund fees (assuming they are) for a system they could duplicate themselves? (Then again, Harding's fund may have a few twists he's not talking about on CNBC.)