Reflections on a Philosophy of Risk

By Bill Long on October 5, 2017

Imagine the following: You are a Division I football coach whose team is ranked #25 in the country, and you have a game against a top-10 opponent. You have a chance to blow the game wide open, winning by 20 or more points, but only a 70% chance of pulling it off; if the gamble doesn’t pan out, you lose the game (a 30% chance). If you played more conservatively, you would have a 99% chance of winning, though the margin of victory would be 3 points or fewer. The rest of the schedule has you going against teams weaker than yours. Which strategy do you pursue?

The answer depends on your philosophy of risk. Would you try to “go for broke” against a formidable foe, knowing that if you win you will turn many heads and vault high in the rankings, or would you play for the almost certain, though not overly impressive, victory?
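The trade-off can be made concrete as a small expected-value calculation. The sketch below (in Python; the utility numbers are purely assumed for illustration, only the 0.70 and 0.99 probabilities come from the scenario) shows that neither strategy dominates on its own. The answer flips depending on how much more you value a blowout than a squeaker, which is just another way of stating your philosophy of risk.

```python
def expected_utility(p_win: float, u_win: float, u_loss: float = 0.0) -> float:
    """Expected utility of a strategy that wins with probability p_win."""
    return p_win * u_win + (1 - p_win) * u_loss

# If every win counts the same (u_win = 1), the conservative line dominates:
print(f"{expected_utility(0.70, u_win=1.0):.2f}")  # 0.70 (aggressive)
print(f"{expected_utility(0.99, u_win=1.0):.2f}")  # 0.99 (conservative)

# But if a blowout over a top-10 team is worth twice a narrow win
# (a purely assumed ratio), the aggressive line pulls ahead:
print(f"{expected_utility(0.70, u_win=2.0):.2f}")  # 1.40 (aggressive)
```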

A scene similar to this unfolded last year in the latest iteration of the “human vs computer” challenge. The computer, Google DeepMind’s AlphaGo, faced off against one of the world’s strongest Go players, South Korea’s Lee Sedol. Twenty years ago, in 1997, you may recall, a computer (IBM’s Deep Blue) defeated world chess champion Garry Kasparov, but chess is simple compared to Go, the ancient Chinese board game. Go is played on a 19x19 grid of 361 intersections, and each side has an effectively unlimited supply of stones with which to surround the opponent’s stones or wall off sections of the board as territory. A Go game typically runs to hundreds of moves, and programming a computer to consider all the possibilities is more than daunting. Most experts felt that a computer victory over a top human Go player was still a decade away.

Yet the computer defeated Sedol 4-1. Commentators couldn’t figure out in real time why the computer made the moves it did, but in hindsight a pattern emerged, one confirmed by the engineers who built AlphaGo. The key insight was the computer’s philosophy of risk. In interviews after the fact, the team explained that even though the temptation to “bury” the opponent with the 70/30 risk calculus was appealing, AlphaGo was built to make the choices that gave it the 99% chance of winning, i.e., to just barely scrape out victories. The wins might not have been overly impressive, but they were victories. And that may be why the computer beat the human in 2016 rather than in 2026.
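That philosophy maps directly onto how a game-playing engine scores candidate moves. Here is a toy sketch of the difference, not DeepMind’s actual evaluation code; the Candidate class, its fields, and the numbers are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    move: str
    win_prob: float         # estimated probability of eventually winning
    expected_margin: float  # estimated margin of victory if the move works

def pick_by_win_prob(candidates: list[Candidate]) -> Candidate:
    """The 'scrape out the win' policy: maximize P(win), ignore margin."""
    return max(candidates, key=lambda c: c.win_prob)

def pick_by_margin(candidates: list[Candidate]) -> Candidate:
    """The 'bury the opponent' policy: maximize probability-weighted
    margin (losses scored as zero margin, for simplicity)."""
    return max(candidates, key=lambda c: c.win_prob * c.expected_margin)

# Two hypothetical moves mirroring the 70/30 vs. 99% choice above:
moves = [
    Candidate("aggressive invasion", win_prob=0.70, expected_margin=20.0),
    Candidate("safe territory play", win_prob=0.99, expected_margin=1.5),
]

print(pick_by_win_prob(moves).move)  # safe territory play
print(pick_by_margin(moves).move)    # aggressive invasion
```

For what it’s worth, DeepMind’s published description of AlphaGo has its value network estimating the probability of winning rather than the margin of victory, which fits the pattern the commentators saw: narrow wins that were never really in doubt.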

You must understand your own philosophy of risk before deciding how to address the risks Lance and Davon discussed above. That philosophy, together with the pressing nature of the facts, will help you decide which strategy to pursue.