Thinking
John Brockman
12 annotations
- There was Rumsfeld saying, "Oh, we don't need a big force. We don't need a big force. We can do this on the cheap," and there were other people—retrospectively, we can say they were wiser—who said, "Look, if you're going to do this at all, you want to go in there with such overpowering, such overwhelming numbers and force that you can really intimidate the population, and you can really maintain the peace and just get the population to sort of roll over, and that way actually less people get killed, less people get hurt. You want to come in with an overwhelming show of force." #7112 •
- If you don't want to have a riot, have four times more police there than you think you need. That's the way not to have a riot and nobody gets hurt, because people are not foolish enough to face those kinds of odds. But they don't think about that with regard to religion, and it's very sobering. #7123 •
- We found two things. One, it's very hard for political analysts to do appreciably better than chance when you move beyond about one year. Second, political analysts think they know a lot more about the future than they actually do. When they say they're 80 or 90 percent confident, they're often right only 60 or 70 percent of the time. #7119 •
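The calibration gap described in that annotation—stating 80 or 90 percent confidence while being right only 60 or 70 percent of the time—can be illustrated with a short sketch. The forecast records below are hypothetical, invented to mirror the pattern Tetlock reports; they are not his data.

```python
# Hypothetical forecast records: (stated confidence, whether the prediction came true).
# Illustrative numbers only -- not Tetlock's actual dataset.
forecasts = [
    (0.9, True), (0.9, False), (0.9, True), (0.9, False), (0.9, True),
    (0.8, True), (0.8, False), (0.8, True), (0.8, False), (0.8, True),
]

def calibration(records):
    """Group forecasts by stated confidence and return the actual hit rate per group.

    A well-calibrated forecaster's 90%-confidence claims come true ~90% of the time;
    an overconfident one's come true noticeably less often.
    """
    by_conf = {}
    for conf, correct in records:
        by_conf.setdefault(conf, []).append(correct)
    return {conf: sum(hits) / len(hits) for conf, hits in sorted(by_conf.items())}

for conf, hit_rate in calibration(forecasts).items():
    print(f"stated {conf:.0%} confidence -> actual accuracy {hit_rate:.0%}")
# stated 80% confidence -> actual accuracy 60%
# stated 90% confidence -> actual accuracy 60%
```

With these made-up numbers, both confidence bands land at 60 percent accuracy, reproducing the overconfidence pattern the quote describes.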
- So we found three basic things: many pundits were hard-pressed to do better than chance, were overconfident, and were reluctant to change their minds in response to new evidence. That combination doesn't exactly make for a flattering portrait of the punditocracy. #7117 •
- One of the reactions to my work on expert political judgment was that it was politically naïve; I was assuming that political analysts were in the business of making accurate predictions, whereas they're really in a different line of business altogether. They're in the business of flattering the prejudices of their base audience and entertaining their base audience, and accuracy is a side constraint. They don't want to be caught making an overt mistake, so they generally are pretty skillful in avoiding being caught by using vague verbiage to disguise their predictions. They don't say there's a .7 likelihood of a terrorist attack within this span of time. They don't say there's a 1.0 likelihood of recession by the third quarter of 2013. They don't make predictions like that. What they say is that if we go ahead with the administration's proposed tax increase, there could be a devastating recession in the next six months. "There could be." #7122 •
- Pundits have been able to insulate themselves from accountability for accuracy by relying on vague verbiage. They can often be wrong, but never in error. #7115 •
- Forecasters who were more modest about what could be accomplished predictably were actually generating more accurate predictions than forecasters who were more confident about what could be achieved. We called these theoretically confident forecasters "hedgehogs." We called these more modest, self-critical forecasters "foxes," drawing on Isaiah Berlin's famous essay "The Hedgehog and the Fox." #7113 •
- Let's go back to this fundamental question of, what are we capable of learning from history, and are we capable of learning anything from history that we weren't already ideologically predisposed to learn? As I mentioned before, history is not a good teacher, and we see what a capricious teacher history is in the reactions to Nate Silver in the 2012 election forecasting—he's either a genius or he's an idiot. And we need to have much more nuanced, well-calibrated reactions to episodes of this sort. #7109 •
- If an organization has been recently clobbered for making a false-positive prediction, that organization is going to make major efforts to make sure it doesn't make another false positive. They're going to be so sure that they might make a lot more false negatives in order to avoid that. #7121 •
- One of the things I've discovered in my work on assessing the accuracy of probability judgment is that there is much more eagerness in participating in these exercises among people who are younger and lower in status in organizations than there is among people who are older and higher in status in organizations. It doesn't require great psychological insight to understand this. You have a lot more to lose if you're senior and well established and your judgment is revealed to be far less well calibrated than that of people who are far junior to you. #7114 •
- Things that bring transparency to judgment are dangerous to your status. You can make a case for this happening in medicine, for example. Insofar as evidence-based medicine protocols become increasingly influential, doctors are going to rely more and more on the algorithms—otherwise they're not going to get their bills paid. If they're not following the algorithms, it's not going to be reimbursable. When the health-care system started to approach 20 to 25 percent of the GDP, very powerful economic actors started pushing back and demanding accountability for medical judgment. #7124 •
- Hedgehogs are more likely to embrace fast and frugal heuristics that are in the spirit of Blink. If you have a hedgehog-like framework, you're more likely to think that people who have mastered that framework should be able to diagnose situations quite quickly and reach conclusions quite confidently. Those things tend to co-vary with each other. #7110 •