The Signal and the Noise
Rating: 4.1 (131)
Nate Silver
NEW YORK TIMES BESTSELLER • The groundbreaking exploration of probability and uncertainty that explains how to make better predictions in a world drowning in data, from the nation’s foremost political forecaster—updated with insights into the pandemic, journalism today, and polling.

One of The Wall Street Journal’s Ten Best Works of Nonfiction of the Year

“Could turn out to be one of the more momentous books of the decade.”—The New York Times Book Review

Most predictions fail, often at great cost to society, because experts and laypeople mistake more confident predictions for more accurate ones. But overconfidence is often the reason for failure. If our appreciation of uncertainty improves, our predictions can get better too. This is the “prediction paradox”: The more humility we have about our ability to make predictions, the more successful we can be in planning for the future.

Drawing on his own groundbreaking work in sports and politics, Nate Silver examines the world of prediction, investigating how to seek truth from data. In The Signal and the Noise, Silver visits innovative forecasters in a range of areas, from hurricanes to baseball to global pandemics, from the poker table to the stock market, from Capitol Hill to the NBA. He discovers that what the most accurate ones have in common is a superior command of probability—as well as a healthy dose of humility.

With everything from the global economy to the fight against disease hanging on the quality of our predictions, Nate Silver’s insights are an essential read.
More Details:
Author: Nate Silver
Pages: 576
Publisher: Penguin
Published Date: 2015-02-03
ISBN: 0143125087, 9780143125082
Community Reviews
"As a guy who makes his living off of stats and predictions of complex systems, I knew that Silver’s “The Signal and the Noise” had to be on my list. It’s a popular treatment of complex topics like chaos theory, bayesian stats, risk vs. uncertainty, and the dangers of forecasting outside of your sample. Silver puts his argument in a historical context and uses a host of modern examples to make his points, including the recent housing crash, Moneyball, Deep Blue vs. Kasparov, and counter-terrorism. He also throws in earthquake prediction and epidemiology.<br/><br/>All in all, an entertaining book but not one with any mind-blowing insights for anyone who has been doing stats for a while.<br/><br/>A few interesting excerpts below:<br/><br/>################<br/><br/>If there is one thing that defines Americans—one thing that makes us exceptional—it is our belief in Cassius’s idea that we are in control of our own fates. Our country was founded at the dawn of the Industrial Revolution by religious rebels who had seen that the free flow of ideas had helped to spread not just their religious beliefs, but also those of science and commerce. Most of our strengths and weaknesses as a nation—our ingenuity and our industriousness, our arrogance and our impatience—stem from our unshakable belief in the idea that we choose our own course.<br/><br/>There are entire disciplines in which predictions have been failing, often at great cost to society. Consider something like biomedical research. In 2005, an Athens-raised medical researcher named John P. Ioannidis published a controversial paper titled “Why Most Published Research Findings Are False.”39 The paper studied positive findings documented in peer-reviewed journals: descriptions of successful predictions of medical hypotheses carried out in laboratory experiments. It concluded that most of these findings were likely to fail when applied in the real world. Bayer Laboratories recently confirmed Ioannidis’s hypothesis. 
They could not replicate about two-thirds of the positive findings claimed in medical journals when they attempted the experiments themselves.

But this book is emphatically against the nihilistic viewpoint that there is no objective truth. It asserts, rather, that a belief in the objective truth—and a commitment to pursuing it—is the first prerequisite of making better predictions. The forecaster’s next commitment is to realize that she perceives it imperfectly.

The most calamitous failures of prediction usually have a lot in common. We focus on those signals that tell a story about the world as we would like it to be, not how it really is. We ignore the risks that are hardest to measure, even when they pose the greatest threats to our well-being. We make approximations and assumptions about the world that are much cruder than we realize. We abhor uncertainty, even when it is an irreducible part of the problem we are trying to solve.

Nobody saw it coming. When you can’t state your innocence, proclaim your ignorance: this is often the first line of defense when there is a failed forecast. But Sharma’s statement was a lie, in the grand congressional tradition of “I did not have sexual relations with that woman” and “I have never used steroids.”

Human beings have an extraordinary capacity to ignore risks that threaten their livelihood, as though this will make them go away.

Uncertainty, on the other hand, is risk that is hard to measure. You might have some vague awareness of the demons lurking out there. You might even be acutely concerned about them. But you have no real idea how many of them there are or when they might strike. Your back-of-the-envelope estimate might be off by a factor of 100 or by a factor of 1,000; there is no good way to know. This is uncertainty. 
Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt.

An American home has not, historically speaking, been a lucrative investment. In fact, according to an index developed by Robert Shiller and his colleague Karl Case, the market price of an American home has barely increased at all over the long run. After adjusting for inflation, a $10,000 investment made in a home in 1896 would be worth just $10,600 in 1996. The rate of return had been less in a century than the stock market typically produces in a single year.

There is a technical term for this type of problem: the events these forecasters were considering were out of sample. When there is a major failure of prediction, this problem usually has its fingerprints all over the crime scene.

Tetlock’s conclusion was damning. The experts in his survey—regardless of their occupation, experience, or subfield—had done barely any better than random chance, and they had done worse than even rudimentary statistical methods at predicting future political events. They were grossly overconfident and terrible at calculating probabilities: about 15 percent of events that they claimed had no chance of occurring in fact happened, while about 25 percent of those that they said were absolutely sure things in fact failed to occur. It didn’t matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgment was equally bad across the board.

On the basis of their responses to these questions, Tetlock was able to classify his experts along a spectrum between what he called hedgehogs and foxes. The reference to hedgehogs and foxes comes from the title of an Isaiah Berlin essay on the Russian novelist Leo Tolstoy—The Hedgehog and the Fox. Berlin had in turn borrowed his title from a passage attributed to the Greek poet Archilochus: “The fox knows many little things, but the hedgehog knows one big thing.”... 
Foxes, Tetlock found, are considerably better at forecasting than hedgehogs.

Academic experts like the ones that Tetlock studied can suffer from the same problem. In fact, a little knowledge may be a dangerous thing in the hands of a hedgehog with a Ph.D. One of Tetlock’s more remarkable findings is that, while foxes tend to get better at forecasting with experience, the opposite is true of hedgehogs: their performance tends to worsen as they pick up additional credentials. Tetlock believes the more facts hedgehogs have at their command, the more opportunities they have to permute and manipulate them in ways that confirm their biases.

“When the facts change, I change my mind,” the economist John Maynard Keynes famously said. “What do you do, sir?”

You are most likely to overfit a model when the data is limited and noisy and when your understanding of the fundamental relationships is poor; both circumstances apply in earthquake forecasting.

As the statistician George E. P. Box wrote, “All models are wrong, but some models are useful.”

Fisher’s interests were wide-ranging: he was one of the best biologists of his day and one of its better geneticists, but was an unabashed elitist who bemoaned the fact that the poorer classes were having more offspring than the intellectuals. (Fisher dutifully had eight children of his own.)

It is often possible to make a profit by being pretty good at prediction in fields where the competition succumbs to poor incentives, bad habits, or blind adherence to tradition—or because you have better data or technology than they do. It is much harder to be very good in fields where everyone else is getting the basics right—and you may be fooling yourself if you think you have much of an edge.

There is fairly unambiguous evidence, instead, that insiders make above-average returns. 
One disturbing example is that members of Congress, who often gain access to inside information about a company while they are lobbied and who also have some ability to influence the fate of companies through legislation, return a profit on their investments that beats market averages by 5 to 10 percent per year, a remarkable rate that would make even Bernie Madoff blush.

Efficient-market hypothesis is sometimes mistaken for an excuse for the excesses of Wall Street; whatever else those guys are doing, it seems to assert, at least they’re behaving rationally. A few proponents of the efficient-market hypothesis might interpret it in that way. But as the theory was originally drafted, it really makes just the opposite case: the stock market is fundamentally and profoundly unpredictable. When something is truly unpredictable, nobody from your hairdresser to the investment banker making $2 million per year is able to beat it consistently.

This book advises you to be wary of forecasters who say that the science is not very important to their jobs, or scientists who say that forecasting is not very important to their jobs! These activities are essentially and intimately related.

The dysfunctional state of the American political system is the best reason to be pessimistic about our country’s future. Our scientific and technological prowess is the best reason to be optimistic. We are an inventive people. The United States produces ridiculous numbers of patents, has many of the world’s best universities and research institutions, and our companies lead the market in fields ranging from pharmaceuticals to information technology. 
If I had a choice between a tournament of ideas and a political cage match, I know which fight I’d rather be engaging in—especially if I thought I had the right forecast.

The substantive variables fall into about a dozen major categories: growth (as measured by GDP and its components), jobs, inflation, interest rates, wages and income, consumer confidence, industrial production, sales and consumer spending, asset prices (like stocks and homes), commodity prices (like oil futures), and measures of fiscal policy and government spending. As you can see, this already gives you plenty to work with, so there is little need to resort to four hundred variables."
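The Case–Shiller figure quoted in the excerpts above is easy to verify: a real gain from $10,000 to $10,600 spread across a century compounds to well under a tenth of a percent per year, which is indeed far below a typical single year in the stock market. A minimal sketch (the 1896/1996 dollar amounts come from the excerpt; the rest is just compound-interest arithmetic):

```python
# Implied annualized real return on a home, per the Case-Shiller
# index figures quoted in the review: $10,000 in 1896 grows to an
# inflation-adjusted $10,600 by 1996.
start, end, years = 10_000, 10_600, 100

total_return = end / start - 1                 # real gain over the full century
annualized = (end / start) ** (1 / years) - 1  # compound annual growth rate

print(f"Total real return over {years} years: {total_return:.1%}")
print(f"Annualized real return: {annualized:.3%}")
```

The annualized rate comes out to roughly 0.06 percent per year, consistent with the book's point that the century-long return was smaller than a typical single-year stock market gain.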