The Methodology Behind the SI Predictions

How to predict this year’s playoffs? An interesting question with many potential answers. We didn’t get an opportunity to explain how we came up with the numbers in the SI article itself, so we’ll do that here. Hopefully I can explain our methodology clearly even for those without a statistics background (and not too loosely for those who know their stats!). I should also mention that my colleague, Mikal Skuterud, deserves a lot of the credit for the analysis.

The first issue to sort out is what outcome you’re trying to explain. At some level, it’s very simple: who wins. Winning is a binary variable – either you win or you don’t. When trying to explain or predict a binary variable, probit regression analysis is incredibly useful. We’ve used it before, when looking at some interesting issues concerning playoff success last year. Some examples can be found here, here, here, and here. In these pieces, we used a probit regression. In a nutshell, a probit regression uses the data to assign a “score” to a matchup. This score is used in conjunction with the normal distribution to determine how much “luck” (the error term, which captures the effects of missing variables as well as pure random chance, and which is assumed to be drawn from the normal distribution) is required for a given team to win that series.
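To make the mechanics concrete, here is a minimal sketch of the probit idea. The predictor names and coefficient values are invented for illustration – they are not our estimated model – but the structure is the same: a linear “score” for the matchup, converted to a win probability by the standard normal CDF.

```python
# Hedged sketch of the probit mechanics. The predictors (point_diff,
# corsi_diff) and the coefficients are made up for illustration only.
from scipy.stats import norm

def win_probability(point_diff, corsi_diff, b_points=0.02, b_corsi=0.05):
    """P(win) = Phi(score): the team wins unless the "luck" draw
    (standard normal) is bad enough to overturn its score advantage."""
    score = b_points * point_diff + b_corsi * corsi_diff
    return norm.cdf(score)

# A team 8 points better in the standings with a 2.5-point Corsi edge:
p = win_probability(point_diff=8, corsi_diff=2.5)
```

An evenly matched series (all differences zero) gives a score of 0 and a win probability of exactly one half, which is the sanity check any such model should pass.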

While we have found this to be a useful approach for some questions in the past, it doesn’t make the most efficient use of the data available. Some series are very closely contested, and “puck luck” plays a large role, while others are quite lopsided, and the amount of “puck luck” would have to be extreme to change the outcome. The problem with a probit regression is that it lumps all series wins into the same category. We actually have some information on how lopsided a series is by looking at the number of games it went. It’s not perfect by any means, but it does contain information. As such, one can think of a team’s result from a series as having 8 possible outcomes: get swept, lose in 5, lose in 6, lose in 7, win in 7, win in 6, win in 5, and sweep. Note that these outcomes have been listed from worst to best. In other words, we can order the outcomes. When we can do this, one possible method of analysis is an ordered probit regression. It is similar to a probit, but allows us to exploit additional information about the closeness of a series (and the role of luck) in order to get better predictions.

An ordered probit still constructs a “score” for a series, as the probit regression does, but then uses that score to establish 8 regions of “luck” that would correspond to the 8 possible outcomes. From this, probabilities can be constructed for each of the 8 possible outcomes. The probability that a team wins, then, is simply the sum of the probabilities associated with the four winning outcomes (sweep, win in 5, win in 6, and win in 7).
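The cutpoint idea can be sketched in a few lines. The seven cutpoints below, which carve the “luck” axis into the 8 outcome regions, are invented for illustration (in practice they are estimated along with the score coefficients):

```python
# Hedged sketch of ordered-probit outcome probabilities. The cutpoint
# values are made up; only the structure matches the method described.
import numpy as np
from scipy.stats import norm

OUTCOMES = ["lose in 4", "lose in 5", "lose in 6", "lose in 7",
            "win in 7", "win in 6", "win in 5", "win in 4"]

def outcome_probs(score, cuts=(-1.5, -0.8, -0.3, 0.0, 0.3, 0.8, 1.5)):
    # P(outcome k) = Phi(c_k - score) - Phi(c_{k-1} - score), where the
    # outermost "cutpoints" are -inf and +inf.
    edges = np.concatenate(([-np.inf], cuts, [np.inf]))
    probs = norm.cdf(edges[1:] - score) - norm.cdf(edges[:-1] - score)
    return dict(zip(OUTCOMES, probs))

probs = outcome_probs(score=0.25)
# The win probability is the sum over the four winning outcomes.
p_win = sum(v for k, v in probs.items() if k.startswith("win"))
```

Note that the eight probabilities sum to one by construction, and a modest positive score produces exactly the pattern in our tables below: a win probability a bit above one half, with the probability mass concentrated on six- and seven-game series.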

Once the strategy for determining the probability that a given team wins a playoff series against a specific opponent has been established, the next order of business is to figure out what variables should be used to create the “score” in the ordered probit regression. When doing predictive analysis, you ideally want everything that contains information about how a team will perform in the playoffs. Clearly, how they did in the regular season (i.e. their points) has some value, but what else is there? We went and gathered as much information as we could on a whole mess of variables, with the idea that our estimation strategy would help us figure out what was important. We collected data on regular season points, points in the last half of the season, points in the last 10 games, Corsi, Score Adjusted Corsi, penalty kill, power play, save percentage, shooting percentage, and more.

One particularly interesting variable was created by Ian. It has been well-recognized in the analytics community that “puck luck” is a real thing, and that it can make the standings a poor representation of a team’s ability. One way that puck luck manifests itself is in the outcome of one-goal games – particularly overtime games and shootouts. These games are often decided by odd events, and occasionally a team gets the puck to bounce their way a disproportionate number of times. So, what Ian did was construct a variable that compared a team’s winning percentage in one-goal games to their winning percentage in other games. If a team is doing much better in one-goal games, then it is possible that their regular season record is predicated not on ability but on puck luck. More on this variable later.
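A sketch in the spirit of Ian’s variable is below. The exact construction (for instance, how overtime and shootout losses are counted) is our assumption here, not necessarily his formula:

```python
# Hedged sketch of a "puck luck" style variable: one-goal-game winning
# percentage minus winning percentage in all other games. The exact
# accounting is an assumption, not necessarily Ian's construction.
def luck_variable(one_goal_wins, one_goal_games, other_wins, other_games):
    # Large positive values suggest a record propped up by puck luck.
    return one_goal_wins / one_goal_games - other_wins / other_games

# Anaheim 2014-15 went 33-1-7 in one-goal games (41 of their 82 games),
# counting OT/shootout losses as losses, and won 18 of the other 41.
anaheim = luck_variable(33, 41, 18, 41)
```

A team that performs identically in close and non-close games scores zero; Anaheim’s gap of roughly 37 percentage points is exactly the kind of outlier this variable is built to flag.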

So, having collected all these historical data where we know the actual outcome of each series, now it’s time to plug them into our ordered probit regression and see what predictors are good at predicting the outcomes that actually happened, right? Unfortunately, it’s not quite that simple. Given that some of these variables were only available going back to the 2008 playoffs (in particular, we pulled the Score Adjusted Corsi from puckon.net, which only has that going back to the 2007-08 season), we were left with 105 observations on playoff series. With so many variables, and the fact that these variables are actually quite correlated with each other, using everything doesn’t actually yield anything with any statistical power.

Things are further complicated by the fact that, since 2008, playoff teams don’t really look all that different from each other in terms of these variables. We’ve entered an age of parity, and this is not good for statistical analysis. Regression analysis is based on seeing how differences in certain variables (team characteristics) lead to differences in outcomes (winning versus losing a series). If there aren’t many differences in team characteristics, then it gets hard to explain or predict the difference between who wins and who loses a series.

One solution to this problem is factor analysis. Factor analysis takes the variables that you have, and combines them into a single number, called a factor, in a way designed to make teams look as different as possible according to that factor. You would then use that factor in your regression analysis. You can run regressions using a single factor, or you can create multiple factors. The key is that the number of factors you create and use in the regression is less than the number of variables you began with.
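A minimal sketch of the dimension-reduction step is below, using principal components (a close relative of the factor analysis we ran) on standardized team variables. The data matrix here is a random placeholder, not our dataset:

```python
# Hedged sketch: reduce many correlated team variables to a few factors.
# Principal components stand in for factor analysis; data are random.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 10))  # 16 playoff teams x 10 team variables

def extract_factors(X, n_factors):
    # Standardize so no variable dominates by scale alone.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # The SVD picks the weighted combinations of variables along which
    # the teams differ the most.
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_factors].T

factors = extract_factors(X, n_factors=5)  # 5 factors replace 10 variables
```

The first factor captures the direction of greatest spread between teams, the second the greatest remaining spread, and so on, which is precisely what we want when parity has squeezed the raw variables together.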

So, our choices were to use factor analysis, with the number of factors to be determined, or to use a smaller set of variables in our regressions, that smaller set also to be determined. We wanted to use the best model possible, but which one would that be? What should the criterion be to discriminate between what is a good model and what is a bad model?

In our case, the “best” model is the one that has the most predictive power. This is not (necessarily) the same as the model that fits the data the best. When you run probit regressions, you can see how many of the series you would have got right if you had used that model to make your picks. Unfortunately, this is rather backwards looking, as the model is created using the data on who won. In other words, the model that fits the data the best is the one that has the most explanatory power, which is quite different from predictive power.

In order to establish predictive power, you need to see how well the model does in predicting the outcomes of series that weren’t used in the generation of the model. The way to do this is to run the regression using all the data you have except for one year. Then, use the resulting model to predict that year that wasn’t used, and compare your predictions to the actual results. This is known as “leave-one-out cross-validation.” So, we tried this with the factor analysis, using several different numbers of factors, as well as several different sets of variables. What we found was that the factor analysis with 5 factors had the most predictive power, predicting on average 10.7 correct series per year (so, out of 15). This was a fair bit better than looking at any single variable by itself, although the one that came closest was Ian’s luck variable. As it turns out, the luck variable was heavily weighted in the construction of the factors, so it turned out to be an important innovation! At some point, we’ll have to look into this more closely.
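The validation loop just described can be sketched as follows. The fitting and prediction steps are stand-ins for the real ordered-probit estimation; the leave-one-year-out bookkeeping is the point of the example:

```python
# Hedged sketch of leave-one-out cross-validation by year. fit_model and
# predict_winner are placeholders for the ordered-probit fit/prediction.
def cross_validate(series_by_year, fit_model, predict_winner):
    correct_per_year = []
    for held_out in series_by_year:
        # Fit on every year except the held-out one...
        train = [s for year, rows in series_by_year.items()
                 if year != held_out for s in rows]
        model = fit_model(train)
        # ...then score predictions on the year the model never saw.
        correct = sum(predict_winner(model, s) == s["winner"]
                      for s in series_by_year[held_out])
        correct_per_year.append(correct)
    return sum(correct_per_year) / len(correct_per_year)

# Toy data: two "years" of series, decided here purely by points.
data = {
    2013: [{"a_pts": 100, "b_pts": 90, "winner": "a"}],
    2014: [{"a_pts": 80, "b_pts": 95, "winner": "b"},
           {"a_pts": 99, "b_pts": 98, "winner": "a"}],
}
avg_correct = cross_validate(
    data,
    fit_model=lambda train: None,  # real step: estimate the ordered probit
    predict_winner=lambda model, s: "a" if s["a_pts"] > s["b_pts"] else "b",
)
```

The returned number is the average count of correctly called series per held-out year – the same “10.7 out of 15” metric we used to compare candidate models.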

Now that the model had been established, it was time to generate some results. First off, we used the model to generate probabilities of each of the 8 outcomes for each of the first round series. As mentioned before, the probability that a team wins is the sum of the probabilities associated with that team winning. The first round predicted outcomes are as follows:

Team     Opponent   Prob. Team Wins   Lose in 4   Lose in 5   Lose in 6   Lose in 7   Win in 7   Win in 6   Win in 5   Win in 4
Rangers  Pens       0.589             0.024       0.074       0.148       0.166       0.217      0.154      0.122      0.095
Isles    Caps       0.392             0.070       0.143       0.211       0.184       0.188      0.104      0.065      0.035
Habs     Sens       0.457             0.050       0.118       0.192       0.183       0.203      0.122      0.082      0.050
Wings    Bolts      0.363             0.081       0.155       0.218       0.183       0.180      0.096      0.058      0.030
Blues    Wild       0.503             0.039       0.102       0.178       0.179       0.210      0.134      0.095      0.063
‘Hawks   Preds      0.603             0.022       0.070       0.142       0.163       0.218      0.157      0.127      0.101
Ducks    Jets       0.311             0.104       0.178       0.229       0.178       0.163      0.081      0.046      0.021
Flames   Nucks      0.492             0.042       0.105       0.181       0.180       0.208      0.131      0.092      0.060


Note that this model predicts that the Senators will beat the Canadiens, but if you look at the single most likely outcome, it’s that the Habs win in 7. This is also true for the Flames/Canucks series: the Canucks are predicted to win, but the most likely outcome (of the 8) is that the Flames win in 7. At some level, this is telling us how close these series will be.

From here, we generated probabilities for all the possible second round matchups, then for all the possible third round matchups, and so on right to the Finals. Chaining these together gives the probability of each team winning the Stanley Cup. It is worth noting that the most likely Stanley Cup Final (as predicted by this model) is the Blackhawks versus the Lightning. If you were to fill out a bracket by taking the team most likely to win a series into the next round, this is what you would get as well.
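The round-by-round chaining works like this, shown here for a four-team mini-bracket (A vs B and C vs D, winners meeting in the final). The head-to-head series probabilities below are invented for the sketch, not our model’s output:

```python
# Hedged sketch of chaining series probabilities into championship odds.
# The head-to-head probabilities are invented for illustration.
p = {
    ("A", "B"): 0.60, ("C", "D"): 0.55,
    ("A", "C"): 0.50, ("A", "D"): 0.65,
    ("B", "C"): 0.45, ("B", "D"): 0.58,
}

def beats(x, y):
    """Probability that x wins a series against y."""
    return p[(x, y)] if (x, y) in p else 1.0 - p[(y, x)]

def champion_prob(team, opponent, other_pair):
    c, d = other_pair
    reach_final = beats(team, opponent)
    # Weight the final by who emerges from the other side of the bracket.
    win_final = beats(c, d) * beats(team, c) + beats(d, c) * beats(team, d)
    return reach_final * win_final

odds = {
    "A": champion_prob("A", "B", ("C", "D")),
    "B": champion_prob("B", "A", ("C", "D")),
    "C": champion_prob("C", "D", ("A", "B")),
    "D": champion_prob("D", "C", ("A", "B")),
}
```

The four championship probabilities sum to one, and the same weighted-sum-over-possible-opponents logic extends directly to the full 16-team playoff bracket.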

Finally, looking at the probabilities generated for the first round, we can see that there are going to be some tightly contested series. The single most likely outcome in every series is one that goes six or seven games, and no predicted winner is given much better than a two-in-three chance of advancing. We’ll check back after the first round to see how things are going.

14 Comments

  1. Dutch Hockeyfan
    April 15, 2015    

    Hello,

    If i'm reading the table correctly The Jets have an almost 70% chance of beating Anaheim.
    This would make them the ''highest favorite'' among all first-round matchup contenders.
    You dont adress this matchup specifically in your post but could you elaborate on why the Jets are favored so much?

    Pardon my english

    Gr,

    • Phil Curry
      April 15, 2015    

      Hi - thanks for the question. And you have no need to apologize for your English - it's excellent.

      You have hit upon something that I mentioned in the article, but probably didn't emphasize enough. The reason the Jets are the strongest favourite, according to our model's results, has to do with Ian's "luck" variable. This variable is based on the notion that the outcomes of one-goal games can be heavily influenced by things that aren't necessarily reflective of being a good team - for example, lucky bounces. Being good at shootouts isn't something that helps in the playoffs, either. What we found is that teams that did much better in one-goal games than in other games generally fared poorly in the playoffs - it had a lot of predictive power in our model. Anaheim went an unbelievable 33-1-7 in one-goal games this year, and so our model flagged them as a team ripe for an upset.

      I hope this answers your question.
      Phil

      • Dutch Hockeyfan
        April 15, 2015    

        It most certainly does answer my question. Gives you a whole different perspective on teams clinching tight games...
        I always imagined ''Puck luck'' to be a small factor in the NHL given the 82 game schedule and ive used the argument ''in a seven game series the best team will win'' However based on these measurements i would have to agree that ''luck'' might have a bigger effect on the outcome of games than i realised before...
        But i guess that's the price to pay if you want a league where anything can happen.

        • Phil Curry
          April 15, 2015    

          Yes, I agree it's fairly surprising - you would think 82 games would be enough that "luck" wouldn't play much of a role. But, the NHL these days has a lot of parity, and when teams are equally matched, you never know what will happen!

  2. Jack L
    April 15, 2015    

    You mention twice in the SI article that the Pens should have traded for Kessel, but w/ no 1st rounder this yr (Perron trade), no "can't-miss" prospects, and no cap room this yr or next, how would they have pulled off that trade?

    • Phil Curry
      April 15, 2015    

      Yeah, Ian somehow got fixated on that idea and seems to be having difficulty letting it go. I'm not really sure how he thought that could happen. I'll see if I can get him to give you (and the rest of us!) an answer. He's a shifty lawyer, though - not always easy to pin down!

      Phil

      • Ian Cooper
        April 23, 2015    

        I'm definitely being a little tongue in cheek by fixating on this, and either Phil (Curry not Kessel) has a bad short term memory or didn't read the piece too closely. ; ) But at the time I imagined the Pens giving up one of their young D (Maatta being the most obvious since he was out for the season) and presumably some first round picks beyond this year's draft. And while capgeek was no longer available to give precise information, based on what I read elsewhere it looked like dumping a 35 year old Rob Scuderi would have gotten them awfully close on cap space. Of course to be fair I didn't imagine their D would be further decimated. Nevertheless, I remain committed to the idea that Kessel would be a fantastic Penguin; whereas, he'll just languish and be abused in Toronto well into his 30s if he stays here. So maybe that's an offseason move. You can read the full piece below if you'd like...

        https://www.bsports.com/statsinsights/nhl/penguins-need-phil-kessel#.VThhkWRViko

  3. Henry
    April 15, 2015

    Phil, we talked a while back about your study of the contract year. I see you continue to do great work. I'm wondering why you didn't also include second half CF%, scoring chance for%, as well as post trade deadline CF% etc. I believe these sub-periods would provide additional explanatory power over the full season or last 10 games.

    • Phil Curry
      April 15, 2015    

      Hi Henry,

      It's good to hear from you again, and thanks for the kind words. Honestly, I don't have a great answer to that question. We should have done that, as I suspect that you're totally right. Ah well, we'll have to remember that for next year! Hopefully we haven't lost too much predictive power with that omission, but I have no doubt it would have helped.

      Phil

  4. Shawn Martin
    May 28, 2015    

    Given how the puck luck variable is determined, it is conceivable that it contains, at least for some teams, things other than puck luck. For instance, teams that have strong starts get the lead but then get fatigued or too "comfortable" and allow late goals, resulting in 1-goal wins. I think some of these may not occur as much in the playoffs since the cup is in sight and teams bring their A-game, which may be the case with the Ducks. I didn't watch them play very much in the season so I don't know for sure, but I think their puck luck variable is correlated to some "bad habits" which they have corrected. It might be interesting to calculate puck luck using the same method, but only for games where the final goal is the game winning goal (if that data is easily compiled), and maybe Anaheim's probability will increase. Anyways, I guess we'll see if your predictions are right on Saturday. I hope they are. Go Hawks

    • Phil Curry
      May 28, 2015    

      The "luck" variable is definitely capturing a bunch of things. Exactly what is worth more investigation, but the Ducks are definitely different from other teams over the past 7 post-seasons who won a lot of one-goal games. It has been a heck of a series, though. I look forward to game 7 for sure!

  5. Scott Kalina
    June 5, 2015    

    Enjoyed you guys on NHL Radio this morning. Does your methodology include strength of schedule? In the SCF, for instance, the Blackhawks regular season record & stats is based on playing all the teams in the Central 5 times, the weakest being COL with 90 pts. However, TB played a total of 10 games against TOR & BUF. It would seem CHI should get some type of bonus for playing STL, NSH, MIN, WPG multiple times.

    • Ian Cooper
      June 6, 2015    

      Thanks for listening Scott! Strength of schedule does indirectly find its way into our model by way of the Simple Rating System (SRS) used on hockey-reference.com (http://www.hockey-reference.com/leagues/NHL_2015.html) which was one of the variables we included.

      That measure takes a team's goal differential and either gives it a bump or a haircut based on the goal differential of each of its opponents. The math is basically a massive system of equations, but the basic idea is if you beat the Leafs by 3 goals, and on average they lose by 2.5, your SRS would be 0.5 even though your goal differential might look far more impressive. Because every team's SRS is dependent on that of every other team, the math gets more complicated than that, but that's the basic idea. So, for example, Tampa had a goal differential of +51 (0.62 per game), which looks a lot better than Chicago's +40 (0.49 per game), but in fact the teams were a lot closer because of strength of schedule, giving Chicago an SRS of 0.51 vs. Tampa's 0.57. Tampa still gets the edge there, but you're correct - the margin shrinks once you take into account strength of schedule.
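      The system Ian describes can be sketched for a toy three-team schedule. The games below are invented; the point is the structure: each team's rating equals its average goal margin plus the average rating of its opponents, solved simultaneously.

```python
# Hedged sketch of the SRS idea: rating = avg goal margin + avg opponent
# rating, solved as a linear system. The three-team schedule is invented.
import numpy as np

teams = ["X", "Y", "Z"]
games = [("X", "Y", 3), ("Y", "Z", 1), ("Z", "X", -2), ("X", "Z", 1)]
idx = {t: i for i, t in enumerate(teams)}
n = len(teams)

margins = np.zeros(n)   # total goal margin per team
played = np.zeros(n)    # games played per team
opp = np.zeros((n, n))  # how often each pair met

for home, away, m in games:  # m = home team's goal margin
    i, j = idx[home], idx[away]
    margins[i] += m; margins[j] -= m
    played[i] += 1; played[j] += 1
    opp[i, j] += 1; opp[j, i] += 1

# r_i - avg(opponent ratings) = avg margin  =>  (I - opp/played) r = b
A = np.eye(n) - opp / played[:, None]
b = margins / played
# The system is only determined up to a constant shift, so take the
# minimum-norm solution, which centers the ratings at zero.
ratings = np.linalg.lstsq(A, b, rcond=None)[0]
```

In this toy league, X's raw average margin of +2 per game gets trimmed because it was earned partly against weak opponents – the same bump-or-haircut adjustment applied to Tampa's and Chicago's goal differentials above.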

      Anything can happen in a playoff series, but we think odds are very high that this one's going to be close!

      • Scott Kalina
        June 6, 2015    

        Thank you!

