Street Fighter Mathematics

The American Mathematical Society is celebrating April 2010 as Math Awareness Month, and this year's theme is Math and Sports. This was just the excuse I needed to write about the role of mathematics in what I consider an emerging sport: competitive video gaming. Video games haven't yet gained popular acceptance as a sport, but the particular type of video game I'll be focusing on does have a real-world analog that is a "sport" in the classical sense of the word. The "fighting game" genre, exemplified by Capcom's Street Fighter series, is the video game counterpart to mixed martial arts competitions. Despite the fantasy indulgences of video games, the mathematical concepts discussed here can be extended to real martial arts and a variety of other sports.

The basis for this discussion is a branch of mathematics referred to as Game Theory. Game Theory has a wide range of applications, from economics and politics to evolutionary biology. At the heart of Game Theory are games, which have several key components. First, a game needs one or more players. Second, there need to be choices available to these players. Finally, there needs to be a set of rules determining the pay-off for each player – such as whether a player wins or loses. The focus of Game Theory is on strategies, which are the guidelines a player uses for making choices in the game to obtain the most favorable outcome.

Ask any competitive player about the Game Theory of fighting games and they'll tell you that the genre essentially boils down to Rock, Paper, Scissors. In Rock, Paper, Scissors, each of the two players simultaneously makes a hand gesture for one of the three choices. Rock beats Scissors, Scissors beats Paper, and Paper beats Rock. The goal is to guess which gesture your opponent is going to throw and to play the gesture that beats it. Keep in mind that your opponent is going through the same line of reasoning, trying to guess which move you think they're going to play so that they can play the move that beats yours!

Mathematically, games like Rock, Paper, Scissors are represented by what’s called a pay-off matrix. For a 2 player game, the choices for player 1 might be represented by different rows in the matrix and the choices for player 2 would be the different columns. The cells of the matrix contain a pair of numbers representing the pay-off for each player. For Rock, Paper, Scissors, we can describe the pay-off by using a 1 if the player wins, a -1 if the player loses, and a 0 if the players tie. This type of game is what we call a Zero-sum game, because every win for one player corresponds to a loss for the other player. The pay-off matrix would look like this:

              P2: Rock    P2: Paper    P2: Scissors
P1: Rock       (0,0)       (-1,1)       (1,-1)
P1: Paper      (1,-1)       (0,0)       (-1,1)
P1: Scissors   (-1,1)       (1,-1)       (0,0)

For example, consider the case where player 1 throws Rock and player 2 throws Scissors. We look for the row associated with Rock for player 1 (row 1) and the column associated with Scissors for player 2 (column 3). The cell at row 1, column 3 contains the ordered pair (1,-1), which signifies that player 1 gets a win (+1) and player 2 gets a loss (-1). Furthermore, this game is symmetric because the options and pay-offs are the same for each player. Rock, Paper, Scissors is often used as a sort of "coin toss" because each player has equal odds of winning. If both players throw a random choice each round, player 1 should win about 1/3 of the time, player 2 should win about 1/3 of the time, and the remaining 1/3 of rounds should end in a tie. Rock, Paper, Scissors is a perfectly balanced game. The only way to win consistently is to anticipate your opponent's move in advance.
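To make this concrete, here's a minimal Python sketch of the pay-off matrix lookup along with a quick simulation of random play. The names used here (PAYOFFS, simulate) are purely illustrative, not from any library:

```python
# A sketch of the Rock, Paper, Scissors pay-off matrix as a Python dictionary,
# plus a quick simulation of two players throwing uniformly at random.
import random

MOVES = ["Rock", "Paper", "Scissors"]

# PAYOFFS[(p1_move, p2_move)] = (pay-off to player 1, pay-off to player 2)
PAYOFFS = {
    ("Rock", "Rock"): (0, 0),       ("Rock", "Paper"): (-1, 1),     ("Rock", "Scissors"): (1, -1),
    ("Paper", "Rock"): (1, -1),     ("Paper", "Paper"): (0, 0),     ("Paper", "Scissors"): (-1, 1),
    ("Scissors", "Rock"): (-1, 1),  ("Scissors", "Paper"): (1, -1), ("Scissors", "Scissors"): (0, 0),
}

def simulate(rounds=100_000):
    """Both players throw at random; count wins, losses, and ties."""
    results = {"P1 wins": 0, "P2 wins": 0, "Ties": 0}
    for _ in range(rounds):
        p1, p2 = random.choice(MOVES), random.choice(MOVES)
        score1, score2 = PAYOFFS[(p1, p2)]
        assert score1 + score2 == 0  # zero-sum: the pay-offs always cancel out
        if score1 > score2:
            results["P1 wins"] += 1
        elif score2 > score1:
            results["P2 wins"] += 1
        else:
            results["Ties"] += 1
    return results

print(PAYOFFS[("Rock", "Scissors")])  # (1, -1): player 1 wins, player 2 loses
print(simulate())                     # each outcome lands near 1/3 of the rounds
```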

Fighting games are much more complex than Rock, Paper, Scissors, due in part to the large number of choices available to each player. For a game like Street Fighter II, with an 8-way joystick and 6 buttons, there are 576 possible inputs at any given point in time: 9 joystick states (8 directions plus neutral) times 2^6 = 64 button combinations. This is far too many choices to list in a table, so for the moment let's focus on a simpler example. Consider a very simple fighting game with only 3 moves: an overhead attack, a high attack, and a low attack. These three attacks are analogous to the choices in Rock, Paper, Scissors. An overhead attack beats a low attack, a low attack beats a high attack, and a high attack beats an overhead attack. The pay-off matrix for this hypothetical fighting game might look like this:

              P2: Overhead   P2: High    P2: Low
P1: Overhead    (0,0)        (-10,10)    (10,-10)
P1: High        (10,-10)      (0,0)      (-10,10)
P1: Low         (-10,10)     (10,-10)     (0,0)

Fighting games normally determine the result of a match by starting each player with a set amount of health, which is reduced when that player is hit by the opponent. Perhaps each player starts with 100 health and each attack deducts 10 health from the other player. However, it's important not to think of these values as adding or subtracting health, but as the net benefit to the player. Dealing damage brings the opponent closer to losing and consequently brings the attacker that much closer to winning. You might think of this net benefit as a game of tug-of-war, where each move is a step in the direction of winning. Even the simple act of changing position in a fighting game can have benefits for one player or the other!
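As a rough sketch of this tug-of-war view, here is a simulation of the hypothetical 3-move game in which both players start at 100 health and the loser of each exchange takes 10 damage. The names and numbers are made up for illustration; this is not how any real fighting game engine is structured:

```python
# Both players pick moves at random; the losing side of each exchange takes damage.
import random

MOVES = ["Overhead", "High", "Low"]
BEATS = {"Overhead": "Low", "Low": "High", "High": "Overhead"}  # move -> move it defeats

def play_match(start_health=100, damage=10):
    health = {"P1": start_health, "P2": start_health}
    while health["P1"] > 0 and health["P2"] > 0:
        p1, p2 = random.choice(MOVES), random.choice(MOVES)
        if BEATS[p1] == p2:       # player 1's attack wins the exchange
            health["P2"] -= damage
        elif BEATS[p2] == p1:     # player 2's attack wins the exchange
            health["P1"] -= damage
        # otherwise both picked the same move and the exchange is a wash
    return "P1" if health["P2"] <= 0 else "P2"

wins = {"P1": 0, "P2": 0}
for _ in range(10_000):
    wins[play_match()] += 1
print(wins)  # with purely random play the matches split roughly 50/50
```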

Given that different characters have different strengths and weaknesses, another level of Rock, Paper, Scissors tends to emerge at the character selection level. For example, we might see a game where long range characters tend to have an advantage over short range characters, medium range characters tend to have an advantage over long range characters, and short range characters tend to have an advantage over medium range characters. However, the characters might not all be perfectly balanced with one another.

Indeed, it is often the case that some characters are better or worse than others when you look at all possible match-ups. In fighting games, these rankings are referred to as Tier Rankings. Consider this example for Street Fighter IV: each character plays every other character for 10 games, and the number of wins is recorded in the corresponding cell of the match-up table. The ranking is determined by how often a character wins across all possible opponents. In Street Fighter IV, Sagat is at an advantage against any given character, while Dan is at a disadvantage against any given character.
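As a sketch of how such a ranking can be computed, here is a small Python example. The characters and win counts below are invented purely for illustration; they are not actual Street Fighter IV match-up data:

```python
# Derive a tier ranking from a match-up table of wins out of 10 games.
MATCHUPS = {
    # MATCHUPS[attacker][defender] = wins for the attacker out of 10 games
    "A": {"B": 7, "C": 6, "D": 8},
    "B": {"A": 3, "C": 6, "D": 7},
    "C": {"A": 4, "B": 4, "D": 6},
    "D": {"A": 2, "B": 3, "C": 4},
}

def tier_ranking(matchups):
    """Rank characters by their total wins across all possible opponents."""
    totals = {name: sum(results.values()) for name, results in matchups.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for character, total_wins in tier_ranking(MATCHUPS):
    print(f"{character}: {total_wins} wins out of 30 games")
```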

In the examples so far, the rules of the game work the same way for both players – the pay-off matrices are symmetric. The ability to choose different characters in a fighting game moves us away from symmetric pay-offs and into asymmetric pay-offs. Let's consider a variation of our 3-move fighting game in which the pay-offs are a little different:

              P2: Overhead   P2: High    P2: Low
P1: Overhead    (-5,5)       (-10,10)    (10,-10)
P1: High        (10,-10)      (0,0)       (-5,5)
P1: Low         (-15,15)     (10,-10)     (5,-5)

Here, player 2 has a stronger overhead attack, but at the expense of a weaker low attack. A question a Game Theorist would ask in such a game is "what is the optimal strategy for each player under these conditions?" One way of looking at this is by applying the Minimax rule: each player is assumed to first minimize their potential losses and then maximize their potential gains. Let's take a look at the best- and worst-case scenarios for each player:

                  P2: Overhead   P2: High    P2: Low    Min Loss for P1   Max Win for P1
P1: Overhead        (-5,5)       (-10,10)    (10,-10)        -10                10
P1: High            (10,-10)      (0,0)       (-5,5)          -5                10
P1: Low             (-15,15)     (10,-10)     (5,-5)         -15                10
Min Loss for P2      -10          -10         -10
Max Win for P2        15           10           5

Player 2 stands to lose the same amount no matter what player 1 plays (-10), but stands to gain the most by playing overhead (+15). So it is to player 2's benefit to favor the overhead attack. Player 1 stands to win the same amount no matter what player 2 chooses (+10), but has the least to lose by going with the high attack (-5). Knowing that player 2 stands to benefit from using the overhead, player 1 can compensate by playing the high attack.
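Here is a short Python sketch that reproduces the "Min Loss" and "Max Win" figures above directly from the pay-off matrix (the variable names are illustrative):

```python
# Compute each player's worst and best case for every move in the asymmetric game.
MOVES = ["Overhead", "High", "Low"]

# PAYOFFS[i][j] = (pay-off to P1, pay-off to P2) when P1 plays MOVES[i] and P2 plays MOVES[j]
PAYOFFS = [
    [(-5, 5), (-10, 10), (10, -10)],   # P1: Overhead
    [(10, -10), (0, 0), (-5, 5)],      # P1: High
    [(-15, 15), (10, -10), (5, -5)],   # P1: Low
]

print("Player 1 (rows):")
for move, row in zip(MOVES, PAYOFFS):
    p1_payoffs = [cell[0] for cell in row]
    print(f"  {move:8s} min loss = {min(p1_payoffs):4d}, max win = {max(p1_payoffs):4d}")

print("Player 2 (columns):")
for j, move in enumerate(MOVES):
    p2_payoffs = [PAYOFFS[i][j][1] for i in range(len(MOVES))]
    print(f"  {move:8s} min loss = {min(p2_payoffs):4d}, max win = {max(p2_payoffs):4d}")
```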

However, the picture changes once we consider repeated play. If both players always adhered purely to their respective optimal strategies, player 1 would consistently win by playing high against player 2's overhead. If player 2 realizes that player 1 is predominantly playing high, it's to player 2's advantage to "mix it up" and play low once in a while to punish player 1 for using a pure strategy. In practice, what we often find in these games are mixed strategies, in which the players choose the different attacks with varying probabilities. Player 1 won't always play high and player 2 won't always play overhead, but they might tend to play these moves more often than the other choices.

The study of how players optimize their strategies is a fascinating area of mathematical research. A Nash Equilibrium is a pair of mixed strategies for player 1 and player 2 such that neither player stands to benefit from changing their strategy while the other keeps theirs the same. A closely related concept is the notion of an Evolutionarily Stable Strategy (ESS): once an ESS is adopted by the majority of a population, it cannot be invaded by any alternative strategy.

What impresses me most about playing fighting games is the way players tend to converge on the optimal strategies. Finding the optimal mixed strategies for a two-player bimatrix game (one with different pay-offs for the two players) is known to be a computationally hard problem. The most efficient method I know of for finding a Nash Equilibrium is the Lemke-Howson algorithm, which can take exponentially longer to carry out as the number of choices increases. Yet somehow "pro" players seem to naturally discover these strategies despite the huge number of choices in a typical fighting game. In many ways, this process is much like the emergence of evolutionarily stable strategies: the players with bad strategies lose, the players with good strategies win, and the winners gradually converge on an optimal strategy based on the behavior of the other players in the pool. Evolution seems to be quite proficient at solving optimization problems.
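For our small asymmetric game, the equilibrium mix can at least be approximated numerically. The sketch below uses fictitious play, a simple iterative scheme (not the Lemke-Howson algorithm), in which each player repeatedly best-responds to the empirical frequencies of the opponent's past moves. Because this particular game happens to be zero-sum, those frequencies converge toward the equilibrium mixed strategies:

```python
# Fictitious play on the asymmetric 3-move game above.
import numpy as np

MOVES = ["Overhead", "High", "Low"]

# A[i, j] is P1's pay-off; the game is zero-sum, so P2's pay-offs are -A.
A = np.array([[-5, -10, 10],
              [10,   0, -5],
              [-15, 10,  5]])
B = -A

def fictitious_play(A, B, iterations=100_000):
    counts1 = np.ones(3)  # how often P1 has "played" each move so far
    counts2 = np.ones(3)  # how often P2 has "played" each move so far
    for _ in range(iterations):
        # Each player best-responds to the opponent's empirical mix.
        best1 = np.argmax(A @ (counts2 / counts2.sum()))
        best2 = np.argmax((counts1 / counts1.sum()) @ B)
        counts1[best1] += 1
        counts2[best2] += 1
    return counts1 / counts1.sum(), counts2 / counts2.sum()

p1_mix, p2_mix = fictitious_play(A, B)
for move, f1, f2 in zip(MOVES, p1_mix, p2_mix):
    print(f"{move:8s} P1 plays {f1:.2f}, P2 plays {f2:.2f}")
```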

At this point I’ve probably raised more questions than I’ve answered, but I see this as a “good thing”. There’s a wealth of beautiful mathematics taking place in these games and just shedding some light on these problems is all I could hope for. Some of the problems discussed here are ones that will keep me searching for answers in years to come. The next time you find yourself in an arcade (one of the few that remain), take a moment to stop and watch a game of Street Fighter. I hope you’ll find it as awe-inspiring as I do.
