<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Phil Curry</title>
	<atom:link href="http://www.depthockeyanalytics.com/author/pcurry/feed/" rel="self" type="application/rss+xml" />
	<link>http://www.depthockeyanalytics.com</link>
	<description></description>
	<lastBuildDate>Mon, 29 May 2017 13:37:00 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.0.38</generator>
	<item>
		<title>The Methodology Behind the SI Predictions</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/the-methodology-behind-the-si-predictions/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/the-methodology-behind-the-si-predictions/#comments</comments>
		<pubDate>Tue, 14 Apr 2015 19:24:37 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=575</guid>
		<description><![CDATA[How to predict this year’s playoffs? An interesting question with many potential answers. We didn’t get an opportunity to explain how we came up with the numbers that we did in the SI article itself, so we’ll have to do that here. Hopefully I can explain our methodology clearly even for those without a statistics [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>How to predict this year’s playoffs? An interesting question with many potential answers. We didn’t get an opportunity to explain how we came up with the numbers we arrived at in the SI article itself, so we’ll have to do that here. Hopefully I can explain our methodology clearly even for those without a statistics background (and not too loosely for those who know their stats!). I should also mention that my colleague, Mikal Skuterud, really deserves a lot of credit for the analysis.</p>
<p>The first issue to sort out is what outcome you’re trying to explain. At some level, it’s very simple: who wins. Winning is a binary variable – either you win or you don’t. When trying to explain or predict a binary variable, probit regression analysis is incredibly useful. We’ve used it before, when looking at some interesting issues concerning playoff success last year. Some examples can be found <a href="http://www.depthockeyanalytics.com/uncategorized/the-star-possession-is-34-of-the-playoffs/">here</a>, <a href="http://www.depthockeyanalytics.com/uncategorized/defense-wins-championships-well-not-really/">here</a>, <a href="http://www.depthockeyanalytics.com/uncategorized/momentum-matters/">here</a>, and <a href="http://www.depthockeyanalytics.com/uncategorized/additional-thoughts-on-special-teams-and-playoff-success/">here</a>. In these pieces, we used a probit regression. In a nutshell, a probit regression uses the data to assign a “score” to a matchup. This score is used in conjunction with the normal distribution to essentially determine how much “luck” (the error term, which includes the effects of missing variables as well as just random chance, and which is assumed to be drawn from the normal distribution) is required for a given team to win that series.</p>
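<p>To sketch the mechanics (with made-up coefficients, purely for illustration — these are not our fitted values), a probit assigns the matchup a score and runs it through the standard normal CDF to get a win probability:</p>

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def probit_win_probability(team_stats, opp_stats, coefs, intercept=0.0):
    """Score the matchup as intercept + sum(coef * (team stat - opponent stat)),
    then map the score to a win probability with the normal CDF.
    The coefficients are hypothetical, for illustration only."""
    score = intercept + sum(c * (t - o)
                            for c, t, o in zip(coefs, team_stats, opp_stats))
    return normal_cdf(score)

# Hypothetical predictors: regular-season points and score-adjusted Corsi
p = probit_win_probability(team_stats=[100, 52.0], opp_stats=[96, 50.5],
                           coefs=[0.03, 0.10])
```

A bigger score means less “luck” is needed to win; a score of zero gives an even 50/50 series.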
<p>While we have found this to be a useful approach for some questions in the past, it’s actually not making as efficient use of the data as possible. Some series are very closely contested, and “puck luck” plays a large role, while others are quite lopsided, and the amount of “puck luck” would have to be extreme to change the outcome. The problem with a probit regression is that it lumps all series wins into the same category. We actually have some information on how lopsided a series is by looking at the number of games it went. It’s not perfect by any means, but it does contain information. As such, one can think of a team’s results from a series as having 8 possible outcomes: get swept, lose in 5, lose in 6, lose in 7, win in 7, win in 6, win in 5, and sweep. Note that the order in which these outcomes have been written runs from worst to best. In other words, we can order the outcomes. When we can do this, one possible method of analysis is an <em>ordered probit regression</em>. It is similar to a probit, but allows us to exploit additional information about the closeness of a series (and the role of luck) in order to get better predictions.</p>
<p>An ordered probit still constructs a “score” for a series, as the probit regression does, but then uses that score to establish 8 regions of “luck” that would correspond to the 8 possible outcomes. From this, probabilities can be constructed for each of the 8 possible outcomes. The probability that a team wins, then, is simply the sum of the probabilities associated with the four winning outcomes (sweep, win in 5, win in 6, and win in 7).</p>
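<p>The mechanics can be sketched as follows (the score and cut points below are made up for illustration; they are not our estimates):</p>

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ordered_probit_probs(score, cuts):
    """P(outcome k) = Phi(cut_k - score) - Phi(cut_{k-1} - score):
    the normal 'luck' term lands in one of len(cuts)+1 ordered regions."""
    bounds = [float("-inf")] + list(cuts) + [float("inf")]
    return [normal_cdf(hi - score) - normal_cdf(lo - score)
            for lo, hi in zip(bounds, bounds[1:])]

# Seven hypothetical cut points give 8 outcomes, ordered worst to best:
# swept, lose in 5, lose in 6, lose in 7, win in 7, win in 6, win in 5, sweep
cuts = [-1.6, -0.9, -0.35, 0.0, 0.45, 0.95, 1.55]
probs = ordered_probit_probs(score=0.2, cuts=cuts)
win_prob = sum(probs[4:])  # sum over the four winning outcomes
```

Note that the eight probabilities always sum to one, and the win probability is just the mass above the middle cut point.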
<p>Once the strategy for determining the probabilities that a given team wins a playoff series against a specific opponent has been established, the next order of business is to figure out what variables should be used to create this “score” used in the ordered probit regression. When doing predictive analysis, you ideally want everything that contains information about how a team will perform in the playoffs. Clearly, how they did in the regular season (i.e. their points) has some value, but what else is there? We went and gathered as much information as we could on a whole mess of variables, with the idea that our estimation strategy would help us figure out what was important. We collected data on regular season points, points in the last half of the season, points in the last 10 games, Corsi, Score Adjusted Corsi, penalty kill, power play, save percentage, shooting percentage, and more. One particularly interesting variable was created by Ian. It has been well-recognized in the analytics community that “<a href="http://www.hockeyabstract.com/luck">puck luck</a>” is a real thing, and that it can make the <a href="http://www.depthockeyanalytics.com/uncategorized/why-the-team-with-the-most-points-isnt-always-the-best/">standings a poor representation of a team’s ability</a>. One way that puck luck manifests itself is in the outcome of one-goal games – particularly overtime games and shootouts. These games are often determined by odd events, and occasionally a team gets the puck to bounce their way a <a href="http://www.depthockeyanalytics.com/uncategorized/ducks-luck/">disproportionate number of times</a>. So, what Ian did was construct a variable that compared a team’s winning percentage in one-goal games to their winning percentage in other games. If they were doing much better in one-goal games, then it is possible that their regular season record is predicated not on ability but on puck luck. More on this variable later.</p>
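<p>The exact construction of Ian’s variable isn’t spelled out here, but one plausible version (an assumption on our part, purely for illustration) takes the difference between the two winning percentages:</p>

```python
def luck_variable(games):
    """One plausible construction (an assumption, not necessarily Ian's
    exact formula): winning percentage in one-goal games minus winning
    percentage in all other games. A large positive gap suggests a record
    propped up by puck luck."""
    close = [won for margin, won in games if abs(margin) == 1]
    rest = [won for margin, won in games if abs(margin) != 1]
    pct = lambda results: sum(results) / len(results) if results else 0.0
    return pct(close) - pct(rest)

# Toy season: (goal margin from the team's perspective, did the team win)
toy_season = [(1, True), (1, True), (-1, False), (1, True),
              (3, True), (-2, False), (-4, False)]
gap = luck_variable(toy_season)
```

In this toy season the team wins 75% of its one-goal games but only a third of the rest, so the variable flags a puck-luck-inflated record.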
<p>So, having collected all these historical data where we know the actual outcome of each series, now it’s time to plug them into our ordered probit regression and see what predictors are good at predicting the outcomes that actually happened, right? Unfortunately, it’s not quite that simple. Given that some of these variables were only available going back to the 2008 playoffs (in particular, we pulled the Score Adjusted Corsi from <a href="http://www.puckon.net/">puckon.net</a>, which only has that going back to the 2007-08 season), we were left with 105 observations on playoff series. With so many variables, and the fact that these variables are actually quite correlated with each other, using everything doesn’t actually yield anything with any statistical power.</p>
<p>Things are further complicated by the fact that, since 2008, playoff teams don’t really look all that different from each other in terms of these variables. We’ve entered an <a href="http://www.depthockeyanalytics.com/uncategorized/has-nhl-salary-cap-created-competitive-balance/">age of parity</a>, and this is not good for statistical analysis. Regression analysis is based on seeing how differences in certain variables (team characteristics) lead to differences in outcomes (winning versus losing a series). If there aren’t many differences in team characteristics, then it gets hard to explain or predict the difference between who wins and who loses a series.</p>
<p>One solution to this problem is <em>factor analysis</em>. Factor analysis takes the variables that you have, and combines them into a single number, called a factor, in a way designed to make teams look as different as possible according to that factor. You would then use that factor in your regression analysis. You can run regressions using a single factor, or you can create multiple factors. The key is that the number of factors you create and use in the regression is less than the number of variables you began with.</p>
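<p>A full factor analysis is best left to a statistics package, but the core idea — collapsing several correlated team statistics into one score that spreads teams apart — can be illustrated with its close cousin, the first principal component. The sketch below (dependency-free, and not the model we actually estimated) computes it by power iteration:</p>

```python
def first_factor(rows):
    """Collapse several team statistics (one row per team) into one score
    per team, using the leading eigenvector of the correlation matrix,
    found by power iteration. A stand-in for proper factor analysis."""
    n, k = len(rows), len(rows[0])
    # Standardize each column so the stats are comparable
    means = [sum(r[j] for r in rows) / n for j in range(k)]
    sds = [max((sum((r[j] - means[j]) ** 2 for r in rows) / n) ** 0.5, 1e-12)
           for j in range(k)]
    z = [[(r[j] - means[j]) / sds[j] for j in range(k)] for r in rows]
    # Correlation matrix of the standardized columns
    cov = [[sum(z[i][a] * z[i][b] for i in range(n)) / n for b in range(k)]
           for a in range(k)]
    # Power iteration for the leading eigenvector (the factor loadings)
    v = [1.0] * k
    for _ in range(200):
        w = [sum(cov[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    # Each team's factor score: its standardized stats projected on the loadings
    return [sum(z[i][j] * v[j] for j in range(k)) for i in range(n)]

# Toy data: three hypothetical team stats (points, Corsi, save %) that move together
teams = [[96, 51.2, 0.915], [100, 52.8, 0.918],
         [88, 48.9, 0.908], [104, 54.0, 0.921]]
scores = first_factor(teams)
```

Because the three toy columns are strongly correlated, the single score preserves the quality ordering of the teams while using all three stats at once.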
<p>So, our choices were to use factor analysis, with the number of factors to be determined, or to use a smaller set of variables in our regressions, that smaller set also to be determined. We wanted to use the best model possible, but which one would that be? What should the criterion be to discriminate between what is a good model and what is a bad model?</p>
<p>In our case, the “best” model is the one that has the most predictive power. This is not (necessarily) the same as the model that fits the data the best. When you run probit regressions, you can see how many of the series you would have got right if you had used that model to make your picks. Unfortunately, this is rather backward-looking, as the model is created using the data on who won. In other words, the model that fits the data the best is the one that has the most <em>explanatory</em> power, which is quite different from predictive power.</p>
<p>In order to establish predictive power, you need to see how well the model does in predicting the outcomes of series that weren’t used in the generation of the model. The way to do this is to run the regression using all the data you have except for one year. Then, use the resulting model to predict that year that wasn’t used, and compare your predictions to the actual results. This is known as “leave-one-out cross-validation.” So, we tried this with the factor analysis, using several different numbers of factors, as well as several different sets of variables. What we found was that the factor analysis with 5 factors had the most predictive power, predicting on average 10.7 correct series per year (so, out of 15). This was a fair bit better than looking at any single variable by itself, although the one that came closest was Ian’s luck variable. As it turns out, the luck variable was heavily weighted in the construction of the factors, so it turned out to be an important innovation! At some point, we’ll have to look into this more closely.</p>
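<p>The cross-validation loop itself is simple. The sketch below shows its structure with a deliberately naive stand-in model (pick the team with more points); in the real analysis the fit and predict steps are the ordered probit described above:</p>

```python
def leave_one_year_out(series, fit, predict):
    """Leave-one-out cross-validation at the year level: for each playoff
    year, fit on every other year, predict the held-out year's series,
    and count correct picks. 'fit' and 'predict' are placeholders for
    the real ordered-probit machinery."""
    correct_per_year = {}
    for year in sorted({s["year"] for s in series}):
        train = [s for s in series if s["year"] != year]
        held_out = [s for s in series if s["year"] == year]
        model = fit(train)
        correct_per_year[year] = sum(predict(model, s) == s["winner"]
                                     for s in held_out)
    return correct_per_year

# Toy stand-in model: ignore the training data, pick the team with more points.
fit = lambda train: None
predict = lambda model, s: "home" if s["home_pts"] >= s["away_pts"] else "away"
toy = [
    {"year": 2013, "home_pts": 100, "away_pts": 90, "winner": "home"},
    {"year": 2013, "home_pts": 95,  "away_pts": 99, "winner": "home"},
    {"year": 2014, "home_pts": 88,  "away_pts": 92, "winner": "away"},
]
results = leave_one_year_out(toy, fit, predict)
```

Averaging the per-year counts gives the “correct series per year” figure used to compare candidate models.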
<p>Now that the model had been established, it was time to generate some results. First off, we used the model to generate probabilities of each of the 8 outcomes for each of the first round series. As mentioned before, the probability that a team wins is the sum of the probabilities associated with that team winning. The first round predicted outcomes are as follows:</p>
<table>
<tbody>
<tr>
<td width="705">
<table width="808">
<tbody>
<tr>
<td width="87">Team</td>
<td width="72">Opponent</td>
<td width="72">Prob. Team Wins</td>
<td width="72">Lose in 4</td>
<td width="72">Lose in 5</td>
<td width="72">Lose in 6</td>
<td width="72">Lose in 7</td>
<td width="72">Win in 7</td>
<td width="72">Win in 6</td>
<td width="72">Win in 5</td>
<td width="72">Win in 4</td>
</tr>
<tr>
<td width="87">Rangers</td>
<td width="72">Pens</td>
<td width="72">0.589</td>
<td width="72">0.024</td>
<td width="72">0.074</td>
<td width="72">0.148</td>
<td width="72">0.166</td>
<td width="72">0.217</td>
<td width="72">0.154</td>
<td width="72">0.122</td>
<td width="72">0.095</td>
</tr>
<tr>
<td width="87">Isles</td>
<td width="72">Caps</td>
<td width="72">0.392</td>
<td width="72">0.070</td>
<td width="72">0.143</td>
<td width="72">0.211</td>
<td width="72">0.184</td>
<td width="72">0.188</td>
<td width="72">0.104</td>
<td width="72">0.065</td>
<td width="72">0.035</td>
</tr>
<tr>
<td width="87">Habs</td>
<td width="72">Sens</td>
<td width="72">0.457</td>
<td width="72">0.050</td>
<td width="72">0.118</td>
<td width="72">0.192</td>
<td width="72">0.183</td>
<td width="72">0.203</td>
<td width="72">0.122</td>
<td width="72">0.082</td>
<td width="72">0.050</td>
</tr>
<tr>
<td width="87">Wings</td>
<td width="72">Bolts</td>
<td width="72">0.363</td>
<td width="72">0.081</td>
<td width="72">0.155</td>
<td width="72">0.218</td>
<td width="72">0.183</td>
<td width="72">0.180</td>
<td width="72">0.096</td>
<td width="72">0.058</td>
<td width="72">0.030</td>
</tr>
<tr>
<td width="87">Blues</td>
<td width="72">Wild</td>
<td width="72">0.503</td>
<td width="72">0.039</td>
<td width="72">0.102</td>
<td width="72">0.178</td>
<td width="72">0.179</td>
<td width="72">0.210</td>
<td width="72">0.134</td>
<td width="72">0.095</td>
<td width="72">0.063</td>
</tr>
<tr>
<td width="87">‘Hawks</td>
<td width="72">Preds</td>
<td width="72">0.603</td>
<td width="72">0.022</td>
<td width="72">0.070</td>
<td width="72">0.142</td>
<td width="72">0.163</td>
<td width="72">0.218</td>
<td width="72">0.157</td>
<td width="72">0.127</td>
<td width="72">0.101</td>
</tr>
<tr>
<td width="87">Ducks</td>
<td width="72">Jets</td>
<td width="72">0.311</td>
<td width="72">0.104</td>
<td width="72">0.178</td>
<td width="72">0.229</td>
<td width="72">0.178</td>
<td width="72">0.163</td>
<td width="72">0.081</td>
<td width="72">0.046</td>
<td width="72">0.021</td>
</tr>
<tr>
<td width="87">Flames</td>
<td width="72">Nucks</td>
<td width="72">0.492</td>
<td width="72">0.042</td>
<td width="72">0.105</td>
<td width="72">0.181</td>
<td width="72">0.180</td>
<td width="72">0.208</td>
<td width="72">0.131</td>
<td width="72">0.092</td>
<td width="72">0.060</td>
</tr>
</tbody>
</table>
</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p>Note that this model predicts that the Senators will beat the Canadiens, but if you look at the single most likely outcome, it’s that the Habs win in 7. This is also true for the Flames/Canucks series: the Canucks are predicted to win, but the most likely outcome (of the 8) is that the Flames win in 7. At some level, this is telling us how close these series will be.</p>
<p>From here, we generated probabilities for each possible second-round matchup and for who would win it, then did the same for each possible third-round matchup, and so on right through to the Finals. Using this method, we generated probabilities for each team to win the Stanley Cup. Again, it is worth noting that the most likely Stanley Cup Final (as predicted by this model) is the Blackhawks versus the Lightning. If you were to fill out a bracket by taking the team most likely to win each series into the next round, this is what you would get as well.</p>
<p>Finally, looking at the probabilities generated for the first round, we can see that there are going to be some tightly contested series. For every series, the single most likely outcome has it going at least 6 games, and the predicted winners generally have less than a 60% chance of winning. We’ll check back after the first round to see how things are going.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/the-methodology-behind-the-si-predictions/feed/</wfw:commentRss>
		<slash:comments>15</slash:comments>
		</item>
		<item>
		<title>NHL has done good job creating parity, analytics suggest</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/nhl-has-done-good-job-creating-parity-analytics-suggest/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/nhl-has-done-good-job-creating-parity-analytics-suggest/#comments</comments>
		<pubDate>Thu, 02 Apr 2015 16:13:29 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=559</guid>
		<description><![CDATA[There have been several articles recently suggesting that changes need to be made to the NHL’s draft lottery. The problem, apparently, is that tanking has gotten so out of control that the league needs to further try to deter it. I wrote earlier about how the draft lottery has mitigated against tanking. Tanking is something [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>There have been several articles recently suggesting that <a href="http://www.thestar.com/sports/breakaway_blog/2015/03/tanking-for-mcdavid-time-for-nhl-to-have-a-playoff-for-draft-order-or-end-the-draft-altogether.html">changes need to be made to the NHL’s draft lottery</a>. The problem, apparently, is that tanking has gotten so out of control that the league needs to further try to deter it.</p>
<p>I wrote earlier about how the <a href="http://www.thestar.com/sports/hockey/2015/02/12/nhls-draft-lottery-system-has-helped-fight-tanking.html">draft lottery has mitigated tanking</a>. Tanking is something the league wants to discourage. There is greater interest in games between two evenly-matched (and good!) teams than in lop-sided affairs. Given that the lottery proves it is possible to deter tanking, perhaps the league should do more?</p>
<p>An important consideration, however, is the main purpose of the reverse order draft – to help bad teams get better. Just as the league doesn’t want the gap between good and bad teams to be too large, it also wants there to be different good teams and bad teams each year. It’s not good for business to have the same teams making the playoffs and winning the Stanley Cup each year.</p>
<p>So, when it comes to the draft, the league has to walk a fine line between deterring tanking, and helping bad teams become better. Anything done to discourage tanking, such as widening the scope of the draft lottery, comes at a cost of bad teams generally having a harder time improving.</p>
<p>The idea that all teams should spend equal time at the top of the league and at the bottom is a concept known as across-season parity. And, just as when I wrote about the <a href="http://www.thestar.com/sports/hockey/2014/11/06/has_nhl_salary_cap_created_competitive_balance.html">NHL’s within-season parity</a>, there’s a well-established measure for it.</p>
<p>This measure is based on economists’ measure for concentration in an industry, and it’s constructed using each firm’s market share. The more equally distributed the firms’ market shares are, the lower the score of this index. In the context of across-season parity, instead of looking at market share, we look at how often a team wins the Stanley Cup – a kind of “championship share”. If Stanley Cups are evenly distributed across teams, then this measure will be lower than when they are concentrated among a few teams. A period in which only one team won the Cup, a “dynasty” so to speak, would yield the highest score.</p>
<p>We can also look at how often teams make the playoffs, or their “playoffs share”. As above, this measure tells us how playoff berths are spread across teams – with higher scores meaning that it is the same teams that make the playoffs more often. (For those interested in finding out exactly how this measure is constructed, go to <a href="http://www.depthockeyanalytics.com/">www.depthockeyanalytics.com</a> for more detail.)</p>
<p>Comparing the NHL’s current level of across-season parity to the other North American major sports leagues, we can see that the NHL is actually doing quite well. Championships and playoff berths are more equally spread across teams in the NHL than in any of the other major leagues, although the NBA comes close in playoff berths.</p>
<p>When we compare the NHL to previous eras, however, things don’t look quite so rosy. While Stanley Cups are more evenly spread across teams (the current era is far less dynastic than previous ones), the same is not true for playoff berths. When it comes to making the playoffs, the current era has a greater divide between “have” franchises (those that make the playoffs regularly) and “have-nots” (those that can regularly book April tee times) than ever before.</p>
<p>Why might this be? It is possible that the draft lottery has had some effect, but it seems unlikely to be that important given how minor the impact is on a team’s draft position. Perhaps more plausible is that the salary cap has magnified the difference between well-run and poorly-run teams – increasing the return to front office talent.</p>
<p>So what does this mean for the draft lottery? First, it should be noted that changes are already planned. After next season, the lottery will be used for the first three picks instead of just for the first overall. Second, there is reason to think that this year is not typical. This year is seen to be a particularly deep draft, and so being in the bottom few of the league guarantees a team an exceptional player, even if the team doesn’t win the lottery. Buffalo, assuming they finish last, for example, will come away with no worse a player than Jack Eichel, viewed by many as a generational talent even if he isn’t Connor McDavid. Indeed, <a href="http://www.thehockeynews.com/blog/nhls-new-draft-lottery-rules-will-encourage-tanking-heres-why/">some people had foreseen this year’s race to the bottom even before the season began.</a> The question we should be asking, then, is whether tanking really is a serious systemic problem, and are we willing to pay the price to fix it? Because the solution doesn’t come free.</p>
<p>&nbsp;</p>
<table width="491">
<tbody>
<tr>
<td width="223">League/Era</td>
<td width="116">Playoff Parity</td>
<td width="152">Championship Parity</td>
</tr>
<tr>
<td width="223">Original Six NHL (1942-67)</td>
<td width="116">1.133</td>
<td width="152">1.971</td>
</tr>
<tr>
<td width="223">Post-Expansion NHL (1979-91)</td>
<td width="116">1.074</td>
<td width="152">3.667</td>
</tr>
<tr>
<td width="223">Modern NHL (2000-2014)</td>
<td width="116">1.191</td>
<td width="152">1.462</td>
</tr>
<tr>
<td width="223">Modern NBA (2004-2014)</td>
<td width="116">1.193</td>
<td width="152">2.400</td>
</tr>
<tr>
<td width="223">Modern MLB (1998-2014)</td>
<td width="116">1.451</td>
<td width="152">2.529</td>
</tr>
<tr>
<td width="223">Modern NFL (2002-2014)</td>
<td width="116">1.374</td>
<td width="152">1.769</td>
</tr>
</tbody>
</table>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/nhl-has-done-good-job-creating-parity-analytics-suggest/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Spreading the Wealth: Measuring Across-Season Parity</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/spreading-the-wealth-measuring-across-season-parity/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/spreading-the-wealth-measuring-across-season-parity/#comments</comments>
		<pubDate>Thu, 02 Apr 2015 14:37:21 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=561</guid>
		<description><![CDATA[This week’s article in The Star looked at “across-season parity”. In a nutshell, across-season parity refers to all teams spending equal time being playoff teams, and winning an equal number of Stanley Cups. A lack of across-season parity occurs when the same teams make the playoffs each year, and when the same team wins the [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>This week’s article in The Star looked at “across-season parity”. In a nutshell, across-season parity refers to all teams spending equal time being playoff teams, and winning an equal number of Stanley Cups. A lack of across-season parity occurs when the same teams make the playoffs each year, and when the same team wins the Stanley Cup over and over again (a dynasty).</p>
<p>How can this be measured?</p>
<p>As it turns out, there is a well-established way of measuring across-season parity that sports economists use. It is based on the measure economists use for the competitiveness of an industry, called the Herfindahl-Hirschman Index (HHI). The HHI works as follows. It takes each firm’s market share, squares it, and then adds them all up. If all firms have equal market share, then the HHI yields a measure of 1/N, where N is the number of firms. As the industry becomes concentrated in a single firm, the measure moves closer to 1. Thus, a higher HHI means greater concentration.</p>
<p>Consider the following examples. Suppose there is an industry with 6 firms. Equal distribution of market share would mean each firm serves 1/6 of the market. Squaring 1/6 yields 1/36, and adding up over the six firms gives 6/36 = 1/6. Now suppose that a single firm makes all the sales (even though the other 5 were still in business somehow). In that case, the share of the dominant firm is 1, while the share of the other 5 is 0. Squaring each yields 1 for the dominant firm and 0 for the rest, and the sum is 1. The following table depicts these scenarios along with two others.</p>
<p>&nbsp;</p>
<table>
<tbody>
<tr>
<td width="88"></td>
<td width="88">Firm 1</td>
<td width="88">Firm 2</td>
<td width="88">Firm 3</td>
<td width="88">Firm 4</td>
<td width="88">Firm 5</td>
<td width="88">Firm 6</td>
<td width="88">HHI</td>
</tr>
<tr>
<td width="88">Market Share – case 1</td>
<td width="88">1</td>
<td width="88">0</td>
<td width="88">0</td>
<td width="88">0</td>
<td width="88">0</td>
<td width="88">0</td>
<td width="88">1</td>
</tr>
<tr>
<td width="88">Market Share – case 2</td>
<td width="88">2/3</td>
<td width="88">1/12</td>
<td width="88">1/12</td>
<td width="88">1/12</td>
<td width="88">1/24</td>
<td width="88">1/24</td>
<td width="88">0.47</td>
</tr>
<tr>
<td width="88">Market Share – case 3</td>
<td width="88">1/4</td>
<td width="88">1/4</td>
<td width="88">1/4</td>
<td width="88">1/12</td>
<td width="88">1/24</td>
<td width="88">1/24</td>
<td width="88">0.20</td>
</tr>
<tr>
<td width="88">Market Share – case 4</td>
<td width="88">1/6</td>
<td width="88">1/6</td>
<td width="88">1/6</td>
<td width="88">1/6</td>
<td width="88">1/6</td>
<td width="88">1/6</td>
<td width="88">1/6</td>
</tr>
</tbody>
</table>
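<p>The HHI column above can be reproduced directly from the definition, using the shares exactly as given in the table:</p>

```python
from fractions import Fraction as F

def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares."""
    return sum(s * s for s in shares)

# Market shares for the four cases, exactly as in the table above
cases = {
    1: [F(1), 0, 0, 0, 0, 0],
    2: [F(2, 3), F(1, 12), F(1, 12), F(1, 12), F(1, 24), F(1, 24)],
    3: [F(1, 4), F(1, 4), F(1, 4), F(1, 12), F(1, 24), F(1, 24)],
    4: [F(1, 6)] * 6,
}
hhis = {k: float(hhi(v)) for k, v in cases.items()}
```

Case 1 gives exactly 1, case 4 gives exactly 1/6, and the two intermediate cases round to the 0.47 and 0.20 shown in the table.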
<p>Now let’s consider how this can be applied to across-season parity. First, let’s consider Stanley Cups. Let’s suppose that there were 6 teams in the league, and that the league operated at this number of teams for 24 years. If there were perfect parity, then each team would have won 4 Stanley Cups, or 1/6 of the total possible. This corresponds to the 4<sup>th</sup> case in the table above, and the HHI of Stanley Cups would be 1/6. If one team won all 24 (nice try, Habs fans, the Canadiens were never that dominant), then we would be in case 1 above. The second case would entail one team that won 16 Cups, three teams that won 2 each, and two teams that won 1 Cup each. The third case describes a scenario in which three teams won 6 Cups each, another team that won 2, and two teams that won 1 Cup.</p>
<p>Things are a little more complicated when it comes to playoff berths. Let’s suppose that 4 teams made the playoffs each year during this time period. In that case, 96 playoff berths would have been allocated over the 24-year period. With perfect parity, each team would have made the playoffs 16 times, which is the same as saying they received 1/6 of the total number of playoff berths. A complete lack of parity, however, would have had the same 4 teams making the playoffs each year, meaning that four teams would have received 1/4 of the total number of playoff berths, and two teams would have had zero.</p>
<p>There is one additional complication to deal with. You will note that the measure for perfect parity depends on the number of teams. If there had been 12 teams in my example above, then perfect parity would have had each team winning 2 Cups, or having a 1/12 share, over the 24 years, yielding an HHI of 1/12.</p>
<p>In fact, it’s a little more complicated than that. Suppose there had been 48 teams over that 24-year period. Given that it’s impossible to win 1/2 of a Stanley Cup, perfect parity would have had 24 teams winning the Stanley Cup once, which yields an HHI of 1/24. This means that when comparing eras with differing numbers of teams and comprising differing numbers of years, adjustments have to be made to account for the fact that perfect parity would correspond to different numbers.</p>
<p>The adjustment I made, in this case, was to take the observed measure of parity (as given by the HHI) and to divide it by the number corresponding to perfect parity for that particular case. So, for the examples above, case 4 corresponds to perfect parity, which is a measure of 1/6. If I were to observe that case, I would give it a score of 1 (1/6 divided by 1/6). The case of the dominant team, case 1, would get a score of 6 (1 divided by 1/6), and the two intermediate cases would get scores of 2.81 and 1.19, respectively.</p>
<p>Finally, the time period needs to be set such that the number of teams in the league is constant, and the time period should be reasonably long. Perfect parity does not mean an equal allocation of Stanley Cups (or playoff berths) if some of the teams were not in existence for some of the time period.</p>
<p>In the NHL, there are only three eras where the number of teams has stayed constant for more than a decade: the Original Six era (25 seasons from 1942-67), the first Post-Expansion Era with 21 teams (12 seasons from 1979-91), and the Modern Era, with 30 teams (13 seasons from 2000 to present). These were the eras used for the analysis. The breakdown for Stanley Cups and playoff berths over these time periods is given in the tables below. First, let us consider the Original Six era.</p>
<p>&nbsp;</p>
<table width="259">
<tbody>
<tr>
<td width="94">Original 6 - 25 years</td>
<td width="62">Playoffs</td>
<td width="103">Stanley Cups</td>
</tr>
<tr>
<td width="94">Montreal</td>
<td width="62">24</td>
<td width="103">10</td>
</tr>
<tr>
<td width="94">Toronto</td>
<td width="62">21</td>
<td width="103">9</td>
</tr>
<tr>
<td width="94">Detroit</td>
<td width="62">22</td>
<td width="103">5</td>
</tr>
<tr>
<td width="94">Boston</td>
<td width="62">14</td>
<td width="103">0</td>
</tr>
<tr>
<td width="94">Chicago</td>
<td width="62">12</td>
<td width="103">1</td>
</tr>
<tr>
<td width="94">NY Rangers</td>
<td width="62">7</td>
<td width="103">0</td>
</tr>
<tr>
<td width="94"></td>
<td width="62">100</td>
<td width="103">25</td>
</tr>
</tbody>
</table>
<p>Note that this era comprises 6 teams and 25 years, so it is actually quite close to the example used above, just with one extra year. This means that perfect parity would entail 5 of the teams winning 4 Cups, and one team winning 5. This would yield an HHI of 0.168, so very close to 1/6. The HHI of the actual distribution, however, is 0.3312. Dividing this number by the HHI for the case of perfect parity yields the number 1.971 that was reported in The Star article.</p>
<p>In terms of playoff berths, perfect parity would have 4 teams going to the playoffs 17 times, and 2 teams going 16, for an HHI of 0.1668. The HHI of the observed distribution is 0.189, which when divided by 0.1668, gives us the number of 1.133 that was in The Star.</p>
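<p>For the numerically inclined, both figures can be reproduced from the table above:</p>

```python
def hhi_from_counts(counts):
    """HHI computed from raw counts: convert to shares, square, and sum."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# Original Six era (6 teams, 25 seasons), per the table above:
# Montreal, Toronto, Detroit, Boston, Chicago, NY Rangers
cups = [10, 9, 5, 0, 1, 0]
berths = [24, 21, 22, 14, 12, 7]

# Perfect-parity benchmarks: 25 Cups and 100 berths split as evenly as possible
cups_ideal = hhi_from_counts([5, 4, 4, 4, 4, 4])
berths_ideal = hhi_from_counts([17, 17, 17, 17, 16, 16])

championship_parity = hhi_from_counts(cups) / cups_ideal      # ~1.971
playoff_parity = hhi_from_counts(berths) / berths_ideal       # ~1.133
```

Dividing by the even-split benchmark is what lets us compare eras with different numbers of teams and seasons.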
<p>Next, let us consider the 12 years where there were 21 teams in the NHL.</p>
<p>&nbsp;</p>
<table width="319">
<tbody>
<tr>
<td width="147">Post-Expansion (21 teams, 12 years)</td>
<td width="68">Playoffs</td>
<td width="104">Stanley Cups</td>
</tr>
<tr>
<td width="147">Philadelphia</td>
<td width="68">10</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Edmonton</td>
<td width="68">12</td>
<td width="104">5</td>
</tr>
<tr>
<td width="147">Washington</td>
<td width="68">9</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Winnipeg</td>
<td width="68">8</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Calgary</td>
<td width="68">12</td>
<td width="104">1</td>
</tr>
<tr>
<td width="147">Montreal</td>
<td width="68">12</td>
<td width="104">1</td>
</tr>
<tr>
<td width="147">Quebec</td>
<td width="68">7</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Buffalo</td>
<td width="68">10</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">NY Islanders</td>
<td width="68">10</td>
<td width="104">4</td>
</tr>
<tr>
<td width="147">St Louis</td>
<td width="68">11</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Chicago</td>
<td width="68">12</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Boston</td>
<td width="68">12</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Los Angeles</td>
<td width="68">10</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Hartford</td>
<td width="68">7</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Detroit</td>
<td width="68">6</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">NY Rangers</td>
<td width="68">11</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Minnesota</td>
<td width="68">10</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Vancouver</td>
<td width="68">8</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">New Jersey</td>
<td width="68">3</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147">Pittsburgh</td>
<td width="68">5</td>
<td width="104">1</td>
</tr>
<tr>
<td width="147">Toronto</td>
<td width="68">7</td>
<td width="104">0</td>
</tr>
<tr>
<td width="147"></td>
<td width="68">192</td>
<td width="104">12</td>
</tr>
</tbody>
</table>
<p>In this case, we have more teams than years of observations, so perfect parity in terms of Stanley Cups would mean that 12 different teams won the Cup once. This would yield an HHI of 1/12. The distribution above, however, yields an HHI of 0.306, which gives us an adjusted score of 3.667.</p>
<p>For playoff berths, perfect parity would have 18 teams making the playoffs 9 times, and 3 teams making it 10 times. This yields an HHI of 0.048. The actual distribution of playoff berths, however, gives us an HHI of 0.051, which leads to an adjusted score of 1.074.</p>
<p>Finally, let us look at the 13 seasons in which there have been the current 30 teams.</p>
<p>&nbsp;</p>
<table width="319">
<tbody>
<tr>
<td width="140">Modern Era (30 teams, 13 years)</td>
<td width="68">Playoffs</td>
<td width="111">Stanley Cups</td>
</tr>
<tr>
<td width="140">Philadelphia</td>
<td width="68">11</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Edmonton</td>
<td width="68">3</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Washington</td>
<td width="68">8</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Winnipeg</td>
<td width="68">1</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Calgary</td>
<td width="68">5</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Montreal</td>
<td width="68">9</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Colorado</td>
<td width="68">8</td>
<td width="111">1</td>
</tr>
<tr>
<td width="140">Buffalo</td>
<td width="68">5</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">NY Islanders</td>
<td width="68">5</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">St Louis</td>
<td width="68">8</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Chicago</td>
<td width="68">7</td>
<td width="111">2</td>
</tr>
<tr>
<td width="140">Boston</td>
<td width="68">10</td>
<td width="111">1</td>
</tr>
<tr>
<td width="140">Los Angeles</td>
<td width="68">7</td>
<td width="111">2</td>
</tr>
<tr>
<td width="140">Carolina</td>
<td width="68">4</td>
<td width="111">1</td>
</tr>
<tr>
<td width="140">Detroit</td>
<td width="68">13</td>
<td width="111">2</td>
</tr>
<tr>
<td width="140">NY Rangers</td>
<td width="68">8</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Minnesota</td>
<td width="68">5</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Vancouver</td>
<td width="68">10</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">New Jersey</td>
<td width="68">10</td>
<td width="111">1</td>
</tr>
<tr>
<td width="140">Pittsburgh</td>
<td width="68">9</td>
<td width="111">1</td>
</tr>
<tr>
<td width="140">Toronto</td>
<td width="68">5</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">San Jose</td>
<td width="68">12</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Ottawa</td>
<td width="68">10</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Tampa Bay</td>
<td width="68">6</td>
<td width="111">1</td>
</tr>
<tr>
<td width="140">Anaheim</td>
<td width="68">8</td>
<td width="111">1</td>
</tr>
<tr>
<td width="140">Dallas</td>
<td width="68">7</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Arizona</td>
<td width="68">4</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Nashville</td>
<td width="68">7</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Columbus</td>
<td width="68">2</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140">Florida</td>
<td width="68">1</td>
<td width="111">0</td>
</tr>
<tr>
<td width="140"></td>
<td width="68">208</td>
<td width="111">13</td>
</tr>
</tbody>
</table>
<p>Again, there are more teams than years, so perfect parity would have 13 teams winning the Stanley Cup exactly once. This would yield an HHI of 1/13. The actual distribution of Stanley Cups gives us an HHI of 0.112. Dividing by 1/13 gives us the score of 1.462 reported in The Star.</p>
<p>Finally, looking at playoff berths, if they were distributed equally, then 28 teams would have made the playoffs 7 times, while 2 teams would have made it 6 times. This corresponds to an HHI of 0.033. The actual distribution of playoff berths, however, has an HHI of 0.040, giving us the adjusted score of 1.191.</p>
<p>Given these measures of across-season parity, it is certainly worthwhile to ask what is going on, especially with playoff berths. As mentioned in The Star article, one possible explanation is that in the salary cap era, teams with incompetent front offices cannot buy their way out of their mistakes, so bad teams take longer to recover. The fact that some teams have been slow to adopt analytics could also make for a widening gap between good and bad teams.</p>
<p>While I (or any economist, really) would generally not be opposed to punishing incompetence, the fact of the matter is that the GMs whose bad decisions condemn teams to mediocrity are rarely the ones in charge of the rebuild. The people who really suffer when rebuilds become harder are the fans. As a Leafs fan, I speak from experience on this.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/spreading-the-wealth-measuring-across-season-parity/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Puck possession helps predict playoff chances, analytics suggest</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/puck-possession-helps-predict-playoff-chances-analytics-suggest/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/puck-possession-helps-predict-playoff-chances-analytics-suggest/#comments</comments>
		<pubDate>Thu, 05 Mar 2015 17:28:18 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=540</guid>
		<description><![CDATA[Springtime means different things in different NHL cities. In some cities, fans can begin to get comfortable with their shiny new trade deadline acquisitions – players intended to help them on their way to Lord Stanley’s coveted mug. In other cities, fans brace themselves for the inevitable end to the 18-wheeler’s freefall, and hope that [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Springtime means different things in different NHL cities. In some cities, fans can begin to get comfortable with their shiny new trade deadline acquisitions – players intended to help them on their way to Lord Stanley’s coveted mug. In other cities, fans brace themselves for the inevitable end to the 18-wheeler’s freefall, and hope that Connor McDavid might emerge from the wreckage of the season.</p>
<p>In other cities still, spring means the excitement of a playoff race, with uncertain outcomes but with at least some hope of making the league’s second season.  For those teams on the outside looking in, the question is: how can we tell whether that hope is reasonable, or the fanciful dreaming of a pretender?</p>
<p>In the last 6 full seasons (so 2007-08 to last year, but not counting 2012-13), there have been 10 teams who made the playoffs even though they did not hold at least a share of a playoff berth after 60 games.</p>
<p>What, if anything, do these late season comebacks (and corresponding collapses for teams that lost their tickets to the big dance) have in common?</p>
<p>Typically, the comeback teams were better possession teams than the collapsing teams. The average Score Adjusted Corsi, or SAC, of the comeback teams was 50.2, while the average SAC of the collapsing teams was 47.7. Indeed, only 2 of the comeback teams caught teams that were better in terms of SAC.</p>
<p>This should not be too surprising. As many studies have demonstrated, possession metrics aren’t perfect, but they do pretty well at predicting a team’s future record. Moreover, in this case, we’re talking about teams that have roughly the same number of points over the first 60 games.</p>
<p>If a poor possession team has generated the same record as a good possession team over the first 60 games, who seems more likely to be able to step up their play down the stretch? A team that’s built their record on goaltending and shooting probably doesn’t have much room to go up in those categories, and it’s very rare for a team to suddenly start generating more shots. A good possession team, meanwhile, must not have had as good goaltending or shooting (or else they would have had a better record), and so is more likely to have room to improve their play in those areas. (There are exceptions for sure. For example, a team could have a great goalie who is only pretty good in the first part of the season and off the charts good down the stretch.)</p>
<p>Looking at this year’s situation, there don’t appear to be any prime comeback candidates. In the East, after 60 games the number 8 spot was occupied by Boston, a pretty solid possession team with a 52.3% SAC. Florida, Ottawa, and Philadelphia aren’t terrible (at 50.3, 50.1, and 49.4 respectively), but there isn’t much to suggest that they can catch a Boston team that is just now getting healthy.</p>
<p>In the West, while there may appear to be more of a race, chances are it won’t pan out to be much of one. The two wild card teams after 60 games were Winnipeg and Minnesota, both very strong possession teams. The chasers right now are Calgary, one of the league’s worst possession teams with a SAC of 44.4%, and San Jose, whose SAC of 51.1 is not bad, but considerably lower than their 54.6 of a year ago. The two teams that were just ahead of the wildcards were Los Angeles and Vancouver, both of whom could be catchable based on the point difference between them and Calgary and San Jose. The Kings, however, own the league’s 2<sup>nd</sup> best SAC (at 54.4%) and appear to be turning things around. That leaves the Canucks, whose SAC of 49.4% is less than San Jose’s, but they also had a 5 point cushion after 60 games.</p>
<p>Of course, this isn’t to say that it’s impossible for some team to come from outside to claim a playoff berth, just that it’s unlikely. But it sure would make things more interesting if some team were to pull it off.</p>
<p>&nbsp;</p>
<table width="407">
<tbody>
<tr>
<td width="130">Comeback Team</td>
<td width="65">Points Out of Last Playoff Position</td>
<td width="65">Comeback Team SAC</td>
<td width="82">Teams Caught</td>
<td width="65">Caught Team SAC</td>
</tr>
<tr>
<td>2007-08 Colorado</td>
<td>3</td>
<td>47.8</td>
<td>Vancouver</td>
<td>47.6</td>
</tr>
<tr>
<td>2007-08 Washington</td>
<td>3</td>
<td>55.2</td>
<td>Buffalo</td>
<td>49.5</td>
</tr>
<tr>
<td>2008-09 Anaheim</td>
<td>4</td>
<td>51</td>
<td>Dallas</td>
<td>49.4</td>
</tr>
<tr>
<td>2008-09 St. Louis</td>
<td>5</td>
<td>47.8</td>
<td>Edmonton</td>
<td>47.7</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>Minnesota</td>
<td>47.7</td>
</tr>
<tr>
<td>2008-09 Carolina</td>
<td>3</td>
<td>52.3</td>
<td>Buffalo</td>
<td>47.9</td>
</tr>
<tr>
<td>2008-09 Pittsburgh</td>
<td>4</td>
<td>46.6</td>
<td>Florida</td>
<td>47.2</td>
</tr>
<tr>
<td>2010-11 Buffalo</td>
<td>1</td>
<td>50.8</td>
<td>Carolina</td>
<td>48.2</td>
</tr>
<tr>
<td>2011-12 Los Angeles</td>
<td>1</td>
<td>53.2</td>
<td>Calgary</td>
<td>46.9</td>
</tr>
<tr>
<td>2011-12 Washington</td>
<td>2</td>
<td>47.7</td>
<td>Toronto</td>
<td>49</td>
</tr>
<tr>
<td>2013-14 Columbus</td>
<td>1</td>
<td>49.8</td>
<td>Toronto</td>
<td>43.6</td>
</tr>
<tr>
<td colspan="2">Average Score Adjusted Corsi</td>
<td>50.22</td>
<td></td>
<td>47.7</td>
</tr>
</tbody>
</table>
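<p>As a sanity check, the averages in the table's bottom row can be reproduced directly from its entries (a quick sketch; team labels omitted):</p>

```python
# SAC values from the table above: 10 comeback teams, 11 caught teams
# (St. Louis caught both Edmonton and Minnesota, hence the extra row).
comeback_sac = [47.8, 55.2, 51, 47.8, 52.3, 46.6, 50.8, 53.2, 47.7, 49.8]
caught_sac = [47.6, 49.5, 49.4, 47.7, 47.7, 47.9, 47.2, 48.2, 46.9, 49, 43.6]

print(round(sum(comeback_sac) / len(comeback_sac), 2))  # 50.22
print(round(sum(caught_sac) / len(caught_sac), 2))      # 47.7
```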
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/puck-possession-helps-predict-playoff-chances-analytics-suggest/feed/</wfw:commentRss>
		<slash:comments>4</slash:comments>
		</item>
		<item>
		<title>NHL’s draft lottery seems to help fight tanking</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/nhls-draft-lottery-seems-to-help-fight-tanking/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/nhls-draft-lottery-seems-to-help-fight-tanking/#comments</comments>
		<pubDate>Fri, 13 Feb 2015 15:26:34 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=529</guid>
		<description><![CDATA[The season is approaching the three quarter mark, and two races are in full flight: the race to secure a playoff seed, and the race to secure a high draft pick. For fans of those teams clearly out of the former, losses are cheered perhaps more than wins as tanking is seen as the best [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>The season is approaching the three quarter mark, and two races are in full flight: the race to secure a playoff seed, and the race to secure a high draft pick. For fans of those teams clearly out of the former, losses are cheered perhaps more than wins as tanking is seen as the best route for the future.</p>
<p>Tanking, or intentionally trying to lose games, is something the league is naturally very concerned about. As such, the NHL followed the NBA’s lead and implemented a draft lottery starting in 1995. The idea behind a draft lottery is that teams will have less incentive to tank if they are not guaranteed the first overall pick by being the worst team in the league that year. The question is, has it worked?</p>
<p>Given that tanking entails trying to amass as few points as possible, there should be a relationship between the incentive to tank and how many points that last place team accumulates in a given season. Specifically, if the lottery has reduced the incentive to tank, last place teams should be accumulating more points, on average, since the introduction of the lottery.</p>
<p>Unfortunately, there are several other factors that also affect how many points the last team has. First, and perhaps most obvious, teams haven’t played the same number of games every season. This discrepancy is easily adjusted for by considering the number of points per game the last place team collects. So, for example, based on the current 82-game season, the Florida Panthers’ 36 points in 48 games during the 2012-13 season would be equivalent to 61.5 points.</p>
<p>Related, but not so easy to account for, is the fact that the average number of points handed out in a game is no longer constant, thanks to the extra point awarded for overtime and shootout losses. To deal with this, we considered the last place team’s points per game divided by the average number of points awarded per team per game that season. Prior to the so-called “Bettman point”, this was exactly one (2 points per game, split between 2 teams). Since the Bettman point was introduced in 1999-2000, this figure has bounced around a bit, but after the introduction of the salary cap in 2005 it has generally been close to 1.1.</p>
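<p>The two adjustments just described can be sketched in a few lines of Python (using the Florida example from above and the approximate post-cap league average of 1.1 points per team per game):</p>

```python
def pace_82(points, games):
    """Scale a team's points to an 82-game pace."""
    return points / games * 82

def deflated_ppg(points, games, avg_pts_per_game=1.0):
    """Points per game, deflated by the league-average points per team
    per game to strip out the inflation from the Bettman point."""
    return (points / games) / avg_pts_per_game

# Florida's 36 points in 48 games in 2012-13:
print(pace_82(36, 48))                       # 61.5 points over 82 games
print(round(deflated_ppg(36, 48, 1.1), 3))   # 0.682 deflated points per game
```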
<p>Finally, there are a couple of factors that can influence how good (or bad) the last place team is in a given season. As we discussed a couple of months ago, the salary cap (and accompanying salary floor) has promoted <a href="http://www.thestar.com/sports/hockey/2014/11/06/has_nhl_salary_cap_created_competitive_balance.html">parity</a>, which means that we should expect last place teams to be better (all else being equal) since the cap’s introduction. Moreover, during the league’s gradual expansion to 30 teams, franchises in the early years of their existence didn’t have much time to accumulate talent and so should naturally be expected to be worse.</p>
<p>The salary cap has existed only since the lottery was put in place, and most expansion took place before the lottery was implemented, so these are both factors that could make the lottery look like it’s having a bigger effect than it actually is.</p>
<p>So after taking all these factors into consideration, what effect has the lottery system had? What we found is that today’s teams are 5 to 6 points better over an 82 game season, on average, than they would have been without the lottery system in place. That is, if we were to consider a hypothetical world in which the NHL operated as it currently does with a salary cap, the Bettman point, no recent expansion teams, but without a lottery system for the draft, we would generally expect the team that came in last place to average 55 to 56 points instead of the 61 points that they currently get. Thus the lottery has increased the last place team’s points by about 10%.</p>
<p>As of writing, the Buffalo Sabres are on pace for just over 52 points. Whether the Sabres are intentionally tanking or just epically bad is anyone’s guess. But what can be said for sure is that even though much in the game of hockey is changing, tanking can still be done the old fashioned way.</p>
<table width="270">
<tbody>
<tr>
<td colspan="4" width="270">Pre-Lottery Era</td>
</tr>
<tr>
<td>Year</td>
<td>Last Place Team</td>
<td>Games Played</td>
<td>Points</td>
</tr>
<tr>
<td>1979/80</td>
<td>Winnipeg</td>
<td>80</td>
<td>51</td>
</tr>
<tr>
<td>1980/81</td>
<td>Winnipeg</td>
<td>80</td>
<td>32</td>
</tr>
<tr>
<td>1981/82</td>
<td>Colorado</td>
<td>80</td>
<td>49</td>
</tr>
<tr>
<td>1982/83</td>
<td>Pittsburgh</td>
<td>80</td>
<td>45</td>
</tr>
<tr>
<td>1983/84</td>
<td>Pittsburgh</td>
<td>80</td>
<td>38</td>
</tr>
<tr>
<td>1984/85</td>
<td>Toronto</td>
<td>80</td>
<td>48</td>
</tr>
<tr>
<td>1985/86</td>
<td>Detroit</td>
<td>80</td>
<td>40</td>
</tr>
<tr>
<td>1986/87</td>
<td>Buffalo</td>
<td>80</td>
<td>64</td>
</tr>
<tr>
<td>1987/88</td>
<td>Minnesota</td>
<td>80</td>
<td>51</td>
</tr>
<tr>
<td>1988/89</td>
<td>Quebec</td>
<td>80</td>
<td>61</td>
</tr>
<tr>
<td>1989/90</td>
<td>Quebec</td>
<td>80</td>
<td>31</td>
</tr>
<tr>
<td>1990/91</td>
<td>Quebec</td>
<td>80</td>
<td>46</td>
</tr>
<tr>
<td>1991/92</td>
<td>San Jose</td>
<td>80</td>
<td>39</td>
</tr>
<tr>
<td>1992/93</td>
<td>Ottawa</td>
<td>84</td>
<td>24</td>
</tr>
<tr>
<td>1993/94</td>
<td>Ottawa</td>
<td>84</td>
<td>37</td>
</tr>
</tbody>
</table>
<table width="282">
<tbody>
<tr>
<td colspan="4" width="282">Pre-Salary Cap Lottery Era</td>
</tr>
<tr>
<td>Year</td>
<td>Last Place Team</td>
<td>Games Played</td>
<td>Points</td>
</tr>
<tr>
<td>1994/95</td>
<td>Ottawa</td>
<td>48</td>
<td>23</td>
</tr>
<tr>
<td>1995/96</td>
<td>Ottawa</td>
<td>82</td>
<td>41</td>
</tr>
<tr>
<td>1996/97</td>
<td>Boston</td>
<td>82</td>
<td>61</td>
</tr>
<tr>
<td>1997/98</td>
<td>Tampa Bay</td>
<td>82</td>
<td>44</td>
</tr>
<tr>
<td>1998/99</td>
<td>Tampa Bay</td>
<td>82</td>
<td>47</td>
</tr>
<tr>
<td>1999/00</td>
<td>Atlanta</td>
<td>82</td>
<td>39</td>
</tr>
<tr>
<td>2000/01</td>
<td>NY Islanders</td>
<td>82</td>
<td>52</td>
</tr>
<tr>
<td>2001/02</td>
<td>Atlanta</td>
<td>82</td>
<td>54</td>
</tr>
<tr>
<td>2002/03</td>
<td>Carolina</td>
<td>82</td>
<td>61</td>
</tr>
<tr>
<td>2003/04</td>
<td>Pittsburgh</td>
<td>82</td>
<td>58</td>
</tr>
</tbody>
</table>
<table width="280">
<tbody>
<tr>
<td colspan="4" width="280">Salary Cap and Lottery Era</td>
</tr>
<tr>
<td>Year</td>
<td>Last Place Team</td>
<td>Games Played</td>
<td>Points</td>
</tr>
<tr>
<td>2005/06</td>
<td>St. Louis</td>
<td>82</td>
<td>57</td>
</tr>
<tr>
<td>2006/07</td>
<td>Philadelphia</td>
<td>82</td>
<td>56</td>
</tr>
<tr>
<td>2007/08</td>
<td>Tampa Bay</td>
<td>82</td>
<td>71</td>
</tr>
<tr>
<td>2008/09</td>
<td>NY Islanders</td>
<td>82</td>
<td>61</td>
</tr>
<tr>
<td>2009/10</td>
<td>Edmonton</td>
<td>82</td>
<td>62</td>
</tr>
<tr>
<td>2010/11</td>
<td>Edmonton</td>
<td>82</td>
<td>62</td>
</tr>
<tr>
<td>2011/12</td>
<td>Columbus</td>
<td>82</td>
<td>65</td>
</tr>
<tr>
<td>2012/13</td>
<td>Florida</td>
<td>48</td>
<td>36</td>
</tr>
<tr>
<td>2013/14</td>
<td>Buffalo</td>
<td>82</td>
<td>52</td>
</tr>
</tbody>
</table>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/nhls-draft-lottery-seems-to-help-fight-tanking/feed/</wfw:commentRss>
		<slash:comments>6</slash:comments>
		</item>
		<item>
		<title>Dishonour for Connor: How the Lottery Has Affected Tanking</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/dishonour-for-connor-how-the-lottery-has-affected-tanking/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/dishonour-for-connor-how-the-lottery-has-affected-tanking/#comments</comments>
		<pubDate>Thu, 12 Feb 2015 16:14:14 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=532</guid>
		<description><![CDATA[This week’s article in The Star looked at the effect that the draft lottery has had on the incentive for a team to intentionally try to be bad in order to get the top pick in the draft. The draft was instituted in 1963 as a way of sorting out how to bring amateur players [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>This week’s article in The Star looked at the effect that the draft lottery has had on the incentive for a team to intentionally try to be bad in order to get the top pick in the draft.</p>
<p>The draft was instituted in 1963 as a way of sorting out how to bring amateur players not currently in the league to NHL teams. This was an era before free agency, meaning that teams owned the rights to players in the league essentially in perpetuity (although these rights could be traded), which had the effect of seriously suppressing salaries. By having a draft (which was an innovation developed by the NFL in 1936), the teams could assign the rights to players outside the league without getting into bidding wars.</p>
<p>It is worth mentioning that the league did in fact have some means in place for assigning the rights to these players. Each team kept a “negotiation list” of young players, and other teams were prevented from dealing with these players. Bobby Hull was infamously put on Chicago’s negotiation list at age 11. However, these negotiation lists were limited in the number of players a team could have, and so a draft was deemed necessary to assign the rights to the rest. Indeed, for the first several years of the draft, many top picks never even made it to the NHL, and those that did were often not star players. For this reason, in the statistical analysis below, I considered data starting in 1980, when the “NHL Amateur Draft” was renamed the “NHL Entry Draft” and became the only way for players under 20 to get into the league (with few exceptions).</p>
<p>The next step, then, was to determine the order in which teams would pick. The NHL again followed the NFL’s lead and went with a reverse order draft in which the worst teams that year would pick first. By doing this, the league would promote parity, which is good for the overall health of the league.</p>
<p>The problem with a reverse order draft, however, is that it can create perverse incentives. Instead of trying to win games, teams might now have an incentive to lose them in order to get a good draft pick, which could make them better off in the long run. This is naturally something that the league would be concerned about, as fans presumably are not as interested in watching a game in which one team is not actually trying to win.</p>
<p>So, in 1995, the NHL adopted a draft lottery, which had first been employed by the NBA in 1985. Starting in 1995, all 14 non-playoff teams would be entered into a lottery in which the probability of winning was greater for teams lower in the standings. However, the winner of the lottery could only move up 4 spaces, meaning that only the bottom 5 teams had a chance to pick 1<sup>st</sup> overall. This was changed for the 2013 draft, when the limit on how many places a team could move up was eliminated.</p>
<p>The idea behind the draft lottery is that the benefits to being bad (a high draft pick) would be reduced, while leaving the costs the same (reduced attendance and other sorts of revenue), but still there would be an overall improvement in league parity as worse teams would be getting (on average) better players.</p>
<p>So, given this history, the question at hand was to determine what effect (if any) the draft lottery has had on teams’ incentives to tank for a high draft pick.</p>
<p>The first issue that must be addressed in order to answer this question is how to measure the effect. There are many indicators of the incentive to tank, but in this case I decided to keep it simple and look at the points of the last place team. As mentioned in the article, this is not as simple as looking at how last place teams performed before the lottery and comparing that to how they performed after. Accounting for differences in games played is simple enough, but the fact that expansion teams are generally terrible (and expansion for the most part occurred before the introduction of the lottery), that a salary cap was introduced in 2005, and that the extra “loser point” arrived in 1999 makes things more complicated.</p>
<p>The Bettman point has had the effect of increasing the total amount of points allocated in a season, which means that last place teams accumulate more points even if they aren’t any better. This is essentially points inflation. Dealing with inflation is relatively simple. When economists deal with inflation in money, it’s accounted for by considering how much money is required to buy a standard bundle of goods. If that bundle costs $100 in one year and $110 the next, then there has been 10% inflation.</p>
<p>For points, I considered how many points are needed to be average. Before the Bettman point, a point per game (82 points) was average. After the Bettman point, the average increased to 86-87 points in a given 82 game season, and then increased again to 91-92 points after the introduction of the salary cap. The following table shows what the average number of points was each season:</p>
<table width="206">
<tbody>
<tr>
<td width="72">Year</td>
<td width="134">Average Number of Points</td>
</tr>
<tr>
<td width="72">2000</td>
<td width="134">86.07143</td>
</tr>
<tr>
<td width="72">2001</td>
<td width="134">86.06667</td>
</tr>
<tr>
<td width="72">2002</td>
<td width="134">86.03333</td>
</tr>
<tr>
<td width="72">2003</td>
<td width="134">87.16667</td>
</tr>
<tr>
<td width="72">2004</td>
<td width="134">86.83333</td>
</tr>
<tr>
<td width="72">2006</td>
<td width="134">91.3667</td>
</tr>
<tr>
<td width="72">2007</td>
<td width="134">91.3667</td>
</tr>
<tr>
<td width="72">2008</td>
<td width="134">91.0667</td>
</tr>
<tr>
<td width="72">2009</td>
<td width="134">91.4</td>
</tr>
<tr>
<td width="72">2010</td>
<td width="134">92.03333</td>
</tr>
<tr>
<td width="72">2011</td>
<td width="134">91.9</td>
</tr>
<tr>
<td width="72">2012</td>
<td width="134">92</td>
</tr>
<tr>
<td width="72">2013</td>
<td width="134">53.4 (48-game season)</td>
</tr>
<tr>
<td width="72">2014</td>
<td width="134">92.23333</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p>Thus, the variable of interest is the number of points the last place team amassed in a given season divided by the average number of points across teams that year. Before the Bettman point, this is exactly equal to points per game, and after the Bettman point, it is points per game expressed in pre-1999 points, so to speak (just as money measures are often expressed in 1999 dollars, for example). So, in the 2010-11 season Edmonton finished last with 62 points in 82 games, for 0.756 points per game. However, the league average number of points was 91.9, so I considered Edmonton as having earned 62/91.9 = 0.675 points per game (in pre-1999 points).</p>
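<p>In code, the Edmonton example works out as follows (using the 2011 row of the table above):</p>

```python
season_points = 62         # Edmonton's 2010-11 total
games = 82
league_avg_points = 91.9   # league-average season points in 2011

print(round(season_points / games, 3))              # 0.756 raw points per game
print(round(season_points / league_avg_points, 3))  # 0.675 in "pre-1999 points"
```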
<p>With regards to the draft lottery, its introduction is a discrete event. There are years (pre-1995) where it was not in effect, and years (1995 and after) where it was (and still is). The way to account for discrete events is through a dummy variable. Dummy variables take a value of 1 when something is true (in this case, when the lottery is in effect) and zero when it is not. Note that the lottery did change in 2013, but there are only two observations on the new system. For the moment, I just considered whether there was a lottery or not, but it would be worth examining the effect of the new lottery when more data are available. The salary cap (and floor) is also a discrete event, and so a dummy variable was used for its presence.</p>
<p>Finally, there are the expansion teams to account for. Given that teams are expected to be bad earlier in their existence, one might consider using the age of the franchise as a variable. However, the effect of being new does decay relatively quickly, and there really isn’t an advantage to being a particularly old franchise (ask Leafs fans), so using age isn’t appropriate. Again, the solution is to use a dummy variable. Here, however, there are more options. One could use a dummy variable for franchises in their first year, another dummy variable for franchises in their second year, and so on. However, there are only so many opportunities for the last place team to be in their first year of existence, so I opted to create dummy variables for the first 3 years of a franchise’s existence and the second 3 years (years 4 to 6).</p>
<p>Given all this, the linear regression model that was considered is:</p>
<p><em>Pts % = α + β<sub>1</sub>Lottery + β<sub>2</sub>SalaryCap + β<sub>3</sub>Expansion1-3 + β<sub>4</sub>Expansion4-6 + ε</em></p>
<p>where <em>Pts %</em> represents the last place team’s points percentage accounting for the inflation of the Bettman point, and <em>Lottery,</em> <em>SalaryCap</em>, <em>Expansion1-3</em>, and <em>Expansion4-6</em> are dummy variables for the respective events. As mentioned above, the data comprise all seasons from 1979-80 to 2013-14 (excluding the cancelled 2004-05 season), which yields 15 years with no lottery, no extra point, and no salary cap (but considerable expansion); 5 years with the lottery but no Bettman point or salary cap (and some expansion); 5 years with the lottery and the Bettman point but no salary cap (and some expansion); and 9 years with the lottery, the Bettman point, and the salary cap (and no expansion).</p>
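<p>As a sketch of how such a regression can be set up (illustrative only: the six observations below are made-up numbers, not the actual 34-season dataset), using NumPy's ordinary least squares:</p>

```python
import numpy as np

# Design matrix: one row per season's last place team, with columns for
# the constant, Lottery, SalaryCap, Expansion1-3, and Expansion4-6
# dummies (0/1 indicators, as described above).
X = np.column_stack([
    np.ones(6),
    [0, 0, 1, 1, 1, 1],   # lottery in effect that season
    [0, 0, 0, 0, 1, 1],   # salary cap in effect that season
    [1, 0, 0, 0, 0, 0],   # last place team in years 1-3 of existence
    [0, 0, 1, 0, 0, 0],   # last place team in years 4-6 of existence
])
y = np.array([0.48, 0.59, 0.55, 0.61, 0.66, 0.64])  # made-up Pts % values

# Ordinary least squares; beta holds the estimates of [alpha, B1, B2, B3, B4]
beta, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)
print(beta)
```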
<p>The results of the regression are given below:</p>
<p><a href="http://www.depthockeyanalytics.com/wp-content/uploads/2015/02/Regression-Tanking.png"><img class="alignnone  wp-image-535" src="http://www.depthockeyanalytics.com/wp-content/uploads/2015/02/Regression-Tanking-300x139.png" alt="Regression Tanking" width="451" height="215" /></a></p>
<p>First off, it is worth noting that the R-squared, which is a measure of how well these factors explain the data, is 0.4605. This is quite high for anything hockey-related. With regard to the coefficients, we see a constant of 0.5946539. This tells us what we should expect the last-place team to do in points per game when there is no lottery, no salary cap, no Bettman point, and the team is not an expansion team. It works out to 48.76 points over 82 games.</p>
<p>The coefficient on the lottery is 0.0613404, but it falls short of significance at the 95% level. The lack of significance is likely due in part to the limited number of observations and the relatively large number of variables (for the number of observations) in the regression. Perhaps similar regressions in the future (after we have more observations) will yield significant results, but we will have to wait and see.</p>
<p>As for the salary cap, its coefficient in the regression above has the expected positive sign, but the effect is small and not statistically significant at any reasonable level. The coefficient on the dummy for an expansion team in its first three years is strong and highly significant. Prior to the Bettman point, an expansion team in its first three years coming in last would be expected to be 13.74 points worse than an established team. The coefficient on the dummy for an expansion team in years 4 to 6 is less significant and smaller in magnitude: 7.04 points fewer than an established team (prior to the Bettman point).</p>
<p>So, what does this mean for the current era, where we have no expansion teams, the salary cap and the Bettman point? With the lottery, we should expect the adjusted points per game of last place teams to be given by <em>Pts % = α + β<sub>1</sub>Lottery + β<sub>2</sub>SalaryCap = 0.6640667</em>. Accounting for the inflation of the Bettman point, this is between 60.47 and 61.25 points, depending on the year. As mentioned in the article, last place teams have averaged 61 points since 2005. If the salary cap were not in place, we should expect teams to produce (adjusted) points at a pace of <em>Pts % = α + β<sub>2</sub>SalaryCap = 0.6027263</em> points per game. Adjusting for the Bettman point inflation, this yields 54.88 to 55.59 points over 82 games, which is a difference of about 5.5 points.</p>
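<p>Those fitted values can be reproduced directly from the reported coefficients. (The salary-cap coefficient of 0.0080724 used here is backed out from the fitted values quoted above; the year-by-year Bettman-point inflation factors are not reproduced.)</p>

```python
# Constant and lottery coefficient from the regression; the salary-cap
# coefficient is implied by the fitted values quoted in the text.
alpha, b_lottery, b_cap = 0.5946539, 0.0613404, 0.0080724

with_lottery = alpha + b_lottery + b_cap  # current era: lottery + salary cap
no_lottery = alpha + b_cap                # counterfactual: salary cap only

print(round(with_lottery, 7))  # 0.6640667, as quoted
print(round(no_lottery, 7))    # 0.6027263, as quoted

# Raw lottery effect over 82 games, before re-inflating for the Bettman point:
print(round((with_lottery - no_lottery) * 82, 2))  # 5.03
```

After applying the year-specific Bettman-point inflation, that raw gap grows to the difference of about 5.5 points quoted in the text.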
<p>There is certainly more that can be considered when analyzing the incentives to tank. For example, if the first overall pick is the objective, then how hard a team tanks can depend on how many other teams are also going after that “prize”. It may also depend on how good the best player available is expected to be, and even on how good the next best players are. In addition (and related to the previous two points), some teams enter a season with the intention of tanking, while others decide only after seeing that their playoff hopes have been dashed.</p>
<p>All in all, however, it would seem that the lottery has had its desired effect: the worst teams of the lottery era are not as horrendously bad as those of the pre-lottery era, even if Buffalo is perhaps taking a shot at tanking old-school style.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/dishonour-for-connor-how-the-lottery-has-affected-tanking/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Sabres have best shot at Connor McDavid, analytics suggest</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/sabres-have-best-shot-at-connor-mcdavid-analytics-suggest/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/sabres-have-best-shot-at-connor-mcdavid-analytics-suggest/#comments</comments>
		<pubDate>Thu, 22 Jan 2015 17:33:20 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=519</guid>
		<description><![CDATA[Now that we’re officially more than halfway into the season we’re starting to get a clearer picture of which teams are contenders and which are pretenders. However, there is still a lot of hockey left to play, and there will be a fair bit of movement up and down the standings. So what should we [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Now that we’re officially more than halfway into the season we’re starting to get a clearer picture of which teams are contenders and which are pretenders.</p>
<p>However, there is still a lot of hockey left to play, and there will be a fair bit of movement up and down the standings. So what should we expect in the second half?</p>
<p>In order to answer this question, we gathered data (from <a href="http://puckon.net">puckon.net</a> and <a href="http://waronice.com">waronice.com</a>) on midseason points, various possession metrics, goal differential, and shooting and save percentages for every team after 41 games, starting with the 2007-08 season but excluding the lockout-shortened season of 2012-13. The goal was to see which combination of these variables was best at predicting teams’ second half point totals.</p>
<p>You’d certainly be excused if you thought that looking at a team’s first half points was the best way to predict its second half points. After 41 games, the standings must certainly reflect a team’s true ability, right?</p>
<p>As it turns out, second half performance is best predicted by two of the most commonly used variables in the analytics repertoire: Score Adjusted Corsi and PDO.</p>
<p>For those not familiar with these concepts, Corsi is another name for shot attempts (shots on goal, missed shots, and blocked shots). “Score Adjusted” Corsi (SAC) is a stat that, as the name suggests, adjusts Corsi to account for score effects. “Score effects” describes the tendency for teams that are trailing in a game to amp up the offense and attempt significantly more shots. This matters because if you look at Corsi without any adjustments, awful teams (who are playing from behind a lot) look better than they really are, and great teams (who are often defending a lead) don’t look as strong as they truly are.</p>
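<p>As an illustration of the idea only (the weights and shot counts below are hypothetical, not the adjustment puckon.net actually uses), a score adjustment reweights each score state’s shot attempts before computing the Corsi percentage:</p>

```python
# Shot attempts broken out by score state, with made-up numbers.
# Each entry: (corsi_for, corsi_against, weight applied to "for" attempts).
by_state = {
    "trailing": (320, 250, 0.95),  # trailing teams shoot more: discount
    "tied":     (400, 390, 1.00),
    "leading":  (230, 310, 1.05),  # leading teams shoot less: boost
}

def raw_corsi_pct(states):
    cf = sum(f for f, a, _ in states.values())
    ca = sum(a for f, a, _ in states.values())
    return 100 * cf / (cf + ca)

def score_adjusted_corsi_pct(states):
    # Attempts against come from the opponent, who is in the mirror-image
    # score state, so they get the mirror-image weight (2 - w here).
    cf = sum(f * w for f, a, w in states.values())
    ca = sum(a * (2 - w) for f, a, w in states.values())
    return 100 * cf / (cf + ca)

print(raw_corsi_pct(by_state))             # 50.0 with these numbers
print(score_adjusted_corsi_pct(by_state))  # slightly below 50
```

With these made-up numbers the team piled up attempts while trailing, so the adjustment nudges its possession share down, which is exactly the direction the text describes for teams that play from behind a lot.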
<p>PDO is simpler – it is just the sum of a team’s shooting percentage and save percentage. So, for example, if the Carolina Hurricanes have a team shooting percentage of 6.1% and a save percentage of 90.8%, their PDO is 96.9.</p>
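<p>A quick sketch of the computation, using the Carolina figures from the example above:</p>

```python
def pdo(shooting_pct, save_pct):
    """PDO is simply team shooting percentage plus team save percentage."""
    return shooting_pct + save_pct

# The Carolina example from the text: 6.1% shooting + 90.8% saves.
print(round(pdo(6.1, 90.8), 1))  # 96.9
```

Because every shot that goes in for one team is a goal against the other, shooting and save percentages are complementary across the league, so league-wide PDO averages out to about 100; teams far from 100 are usually expected to drift back toward it.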
<p>What was interesting, however, was that basing predictions on two variables did only marginally better than looking at just SAC alone. The fact that SAC is one of the best predictors of future success is a result found by almost everyone doing hockey analytics. In general, however, predictions can usually be improved by using more information. In this particular case, what we find is that second half points are very hard to predict, and that, to the extent we can predict them, SAC is really all you need.</p>
<p>Also interesting is the fact that, to the extent that a second variable can improve predictive power, PDO is the best choice. Note that, when considering PDO in isolation, it has very little predictive value. This is generally thought to be because a team’s shooting and save percentage at the midseason point are very noisy measures of its true abilities in that regard. However, it turns out that the information embedded in PDO, shrouded in noise as it may be, is different enough from the information embedded in SAC, that it is the best complement.</p>
<p>Regardless, the predictions generated by these variables still leave a lot of room for error. In recent years teams have produced as many as 67 points in the second half (the 2009-10 Washington Capitals) or as few as 21 (the 2010-11 Colorado Avalanche), and the model can only explain about 20% of that variation. However, these variables are still the best basis (out of what we looked at) for any predictions. So what do they predict?</p>
<p>First, our model predicts that the playoff teams will be the ones that held such a spot at the halfway mark. No surprise there, but what is interesting is the significant shuffling of seeding. Nashville should cool off from its torrid start (60 points in the first half) but is still predicted to clinch the best record in the league, just barely holding off Chicago. Buffalo, meanwhile, is predicted to get the best shot at Connor McDavid in the lottery, as Edmonton is expected to recover from its disastrous first half by posting a much improved second half record (better than 5 other teams, including Calgary and Toronto). Edmonton, along with Carolina, is projected to show the most improvement in the second half, with each bettering its first half point total by 16 points. Edmonton will, however, still end up 2<sup>nd</sup> last overall and Carolina 3<sup>rd</sup> last. The biggest drop-offs are for Anaheim (predicted to amass 12 fewer second half points than they did in the first half) and Montreal (11 fewer).</p>
<p>At the end of the day, however, a good takeaway from this is that second half performance is surprisingly difficult to predict. You might think you have a good handle on your team, but chances are things won’t play out as they did in the first half. Which is just as well – isn’t that why we watch?</p>
<p>&nbsp;</p>
<table width="626">
<tbody>
<tr>
<td width="88">Team</td>
<td width="72">First Half Points</td>
<td width="72">Predicted Second Half</td>
<td width="72">Predicted End of Season</td>
<td width="105">Team</td>
<td width="72">First Half Points</td>
<td width="72">Predicted Second Half</td>
<td width="72">Predicted End of Season</td>
</tr>
<tr>
<td width="88">NASHVILLE</td>
<td width="72">60</td>
<td width="72">49</td>
<td width="72">109</td>
<td width="105">PITTSBURGH</td>
<td width="72">56</td>
<td width="72">49</td>
<td width="72">105</td>
</tr>
<tr>
<td width="88">CHICAGO</td>
<td width="72">56</td>
<td width="72">52</td>
<td width="72">108</td>
<td width="105">TAMPA BAY</td>
<td width="72">54</td>
<td width="72">51</td>
<td width="72">105</td>
</tr>
<tr>
<td width="88">ANAHEIM</td>
<td width="72">58</td>
<td width="72">47</td>
<td width="72">105</td>
<td width="105">NY ISLANDERS</td>
<td width="72">55</td>
<td width="72">49</td>
<td width="72">104</td>
</tr>
<tr>
<td width="88">ST LOUIS</td>
<td width="72">53</td>
<td width="72">48</td>
<td width="72">101</td>
<td width="105">DETROIT</td>
<td width="72">53</td>
<td width="72">50</td>
<td width="72">103</td>
</tr>
<tr>
<td width="88">LOS ANGELES</td>
<td width="72">47</td>
<td width="72">50</td>
<td width="72">97</td>
<td width="105">WASHINGTON</td>
<td width="72">52</td>
<td width="72">48</td>
<td width="72">100</td>
</tr>
<tr>
<td width="88">SAN JOSE</td>
<td width="72">49</td>
<td width="72">48</td>
<td width="72">97</td>
<td width="105">MONTREAL</td>
<td width="72">55</td>
<td width="72">45</td>
<td width="72">100</td>
</tr>
<tr>
<td width="88">WINNIPEG</td>
<td width="72">47</td>
<td width="72">49</td>
<td width="72">96</td>
<td width="105">NY RANGERS</td>
<td width="72">52</td>
<td width="72">48</td>
<td width="72">100</td>
</tr>
<tr>
<td width="88">VANCOUVER</td>
<td width="72">49</td>
<td width="72">46</td>
<td width="72">95</td>
<td width="105">FLORIDA</td>
<td width="72">49</td>
<td width="72">47</td>
<td width="72">96</td>
</tr>
<tr>
<td width="88">DALLAS</td>
<td width="72">43</td>
<td width="72">47</td>
<td width="72">90</td>
<td width="105">BOSTON</td>
<td width="72">46</td>
<td width="72">48</td>
<td width="72">94</td>
</tr>
<tr>
<td width="88">MINNESOTA</td>
<td width="72">41</td>
<td width="72">47</td>
<td width="72">88</td>
<td width="105">OTTAWA</td>
<td width="72">42</td>
<td width="72">45</td>
<td width="72">87</td>
</tr>
<tr>
<td width="88">CALGARY</td>
<td width="72">45</td>
<td width="72">40</td>
<td width="72">85</td>
<td width="105">TORONTO</td>
<td width="72">45</td>
<td width="72">41</td>
<td width="72">86</td>
</tr>
<tr>
<td width="88">COLORADO</td>
<td width="72">42</td>
<td width="72">40</td>
<td width="72">82</td>
<td width="105">PHILADELPHIA</td>
<td width="72">39</td>
<td width="72">44</td>
<td width="72">83</td>
</tr>
<tr>
<td width="88">ARIZONA</td>
<td width="72">36</td>
<td width="72">43</td>
<td width="72">79</td>
<td width="105">COLUMBUS</td>
<td width="72">39</td>
<td width="72">41</td>
<td width="72">80</td>
</tr>
<tr>
<td width="88">EDMONTON</td>
<td width="72">27</td>
<td width="72">43</td>
<td width="72">70</td>
<td width="105">NEW JERSEY</td>
<td width="72">35</td>
<td width="72">44</td>
<td width="72">79</td>
</tr>
<tr>
<td width="105">CAROLINA</td>
<td width="72">30</td>
<td width="72">46</td>
<td width="72">76</td>
</tr>
<tr>
<td width="105">BUFFALO</td>
<td width="72">31</td>
<td width="72">31</td>
<td width="72">62</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/sabres-have-best-shot-at-connor-mcdavid-analytics-suggest/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Forecasting the Second Half</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/forecasting-the-second-half/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/forecasting-the-second-half/#comments</comments>
		<pubDate>Thu, 22 Jan 2015 15:01:57 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=521</guid>
		<description><![CDATA[This week’s article in The Star looked at what information was most useful for predicting how teams will perform in the second half of the season. In a previous article, I considered how a team’s performance in several variables after 25 games was predictive of making the playoffs. Here, the prediction was not a simple [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>This week’s article in The Star looked at what information was most useful for predicting how teams will perform in the second half of the season. In a <a href="http://www.depthockeyanalytics.com/uncategorized/shot-attempts-are-valuable-information/">previous article</a>, I considered how a team’s performance in several variables after 25 games was predictive of making the playoffs. Here, the prediction was not a simple yes or no about whether a team would make the playoffs, but rather how many points they would have at season’s end – so it addressed seeding as well.</p>
<p>To begin with, data was collected from <a href="http://www.puckon.net">puckon.net</a> and <a href="http://waronice.com">waronice.com</a> on various measures of team performance at the midseason point (i.e. after 41 games). Specifically, I looked at Points, Score Adjusted Corsi, Corsi Close, Score Adjusted Fenwick, Fenwick Close, Goal Differential, and PDO.</p>
<p>First, I considered how each one of those variables correlated with the number of points a team amasses in the second half of the season (the last 41 games). 41 games is not a particularly small number, so it certainly seems reasonable to think that how a team does in the first half should be a good predictor of how it does in the second. As it turns out, it isn’t that good a predictor. In fact, of the variables listed above, only PDO was worse at predicting second half points, and PDO was close to useless. The following table summarizes the R-squared (a measure of how well one variable correlates with another) between each of the variables and second half points:</p>
<table>
<tbody>
<tr>
<td width="352">Variable</td>
<td width="352">R-squared</td>
</tr>
<tr>
<td width="352">PDO</td>
<td width="352">0.0135</td>
</tr>
<tr>
<td width="352">Points after 41 games</td>
<td width="352">0.0876</td>
</tr>
<tr>
<td width="352">Goal Differential</td>
<td width="352">0.1037</td>
</tr>
<tr>
<td width="352">Fenwick Close</td>
<td width="352">0.1584</td>
</tr>
<tr>
<td width="352">Corsi Close</td>
<td width="352">0.1783</td>
</tr>
<tr>
<td width="352">Score Adjusted Fenwick</td>
<td width="352">0.1793</td>
</tr>
<tr>
<td width="352">Score Adjusted Corsi</td>
<td width="352">0.2031</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
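<p>R-squared values like those in the table come from one-variable regressions, where R-squared is just the squared correlation between the predictor and second half points. A minimal sketch with made-up numbers (real-data values are far lower, as the table shows; the point here is only the computation):</p>

```python
import numpy as np

def r_squared(x, y):
    """Squared Pearson correlation: the R-squared of a one-variable OLS fit."""
    return float(np.corrcoef(x, y)[0, 1] ** 2)

# Made-up midseason values and second-half point totals, for illustration only.
sac = np.array([54.2, 51.0, 48.3, 52.7, 46.9, 50.1])  # Score Adjusted Corsi %
second_half = np.array([49, 46, 40, 47, 38, 44])      # points in last 41 games

print(round(r_squared(sac, second_half), 3))  # about 0.966 with these numbers
```

Repeating this for each candidate variable against the same second half totals produces a ranking like the one in the table.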
<p>As is commonly noted around the analytics community, score adjusted measures of possession are superior to other measures in predictive value, and Corsi seems to be better than Fenwick. I should also note that there are different forms of these score adjusted measures: I used the data from puckon.net, which differ from the adjusted measures at waronice, which in turn differ from the adjusted measures proposed by Micah McCurdy over at HockeyGraphs.com. Which of those measures is best is not the point of this exercise, although it would be worth looking at.</p>
<p>The main point here was to examine how predictions could be improved by looking at more than one variable. In that regard, this is similar to the earlier article looking at how best to predict which teams would make the playoffs. There, I found that Points and Score Adjusted Corsi were the best predictors of who would make the playoffs, and the improvement in predictive power was substantial. In light of those results, the findings here were somewhat surprising.</p>
<p>First off, looking at two variables offers very little improvement in predictive value over looking at just one. Perhaps more surprisingly, the best variables to pair together are Score Adjusted Corsi (no surprise there) and PDO. That’s right – PDO, which offers very little information on its own, offers the most incremental information over Score Adjusted Corsi. (To see the results of all the regressions run, go <a href="https://docs.google.com/document/d/1D_d1_BKkEi-uW34xlR-AjVASFmC_B1NSJt4TvYG_jXU/edit?usp=sharing">here</a>.)</p>
<p>From an information standpoint, this means that the information contained in PDO has the least overlap with the information contained in Score Adjusted Corsi. Indeed, it offers almost entirely new information, even if it is very little. The fact that the other variables offer even less of an improvement over Score Adjusted Corsi tells us that the information they contain is almost entirely subsumed by the information SAC contains.</p>
<p>In one sense, there is something intuitive about this, in that games are won by scoring more goals than your opponent. Goals are created by generating shots and converting them. Score Adjusted Corsi is the best measure that we have for a team’s ability to generate more shots than its opponents, and PDO is a measure of a team’s ability to convert those shots at a higher rate than its opponents. As such, it makes some sense that these variables would work well together to predict second half performance. In particular, there doesn’t appear to be much correlation between a team’s ability to generate shots and its ability to finish, so the information contained in those two variables should be somewhat separate.</p>
<p>What is surprising, however, is that these variables hold more predictive value than looking at a team’s goal differential from the first half. They are also more predictive than looking at a team’s first half points – either in isolation or in conjunction with other variables.</p>
<p>There is no question, however, that PDO is a very noisy measure of a team’s ability to convert (or stop) shots. An implication of these findings is that it would be worthwhile to develop better measures of a team’s true ability in this regard.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/forecasting-the-second-half/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Shot Attempts Are Valuable Information</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/shot-attempts-are-valuable-information/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/shot-attempts-are-valuable-information/#comments</comments>
		<pubDate>Thu, 11 Dec 2014 17:45:17 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=502</guid>
		<description><![CDATA[This week’s article in The Star looked at the predictive value of information embodied in a team’s points and in their possession metrics. In particular it showed that these two stats contained different kinds of information, so that when used together, predictive power is increased. I should begin by mentioning that the first draft of [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>This week’s article in The Star looked at the predictive value of information embodied in a team’s points and in their possession metrics. In particular it showed that these two stats contained different kinds of information, so that when used together, predictive power is increased.</p>
<p>I should begin by mentioning that the first draft of the article was written with data at the 20 game mark. It ended up getting pushed back, and so it was rewritten to reflect data from the 25 game mark. There were some interesting observations from the original data, however, that I didn’t get a chance to update, so in this piece I will be referring mostly to results that are based on 20 game data.</p>
<p>The data on various measures of team performance were gathered from the last 5 non-lockout seasons (2008-09 to 2011-12 and 2013-14) and considered in isolation and in various combinations to see how well they explained which teams made the playoffs. As mentioned in The Star, <a href="http://www.puckon.net" target="_blank">www.puckon.net</a> was a valuable resource; other data used here but not in The Star piece were taken from <a href="http://war-on-ice.com/" target="_blank">war-on-ice.com</a>.</p>
<p>Looking at the effect that the number of points after 20 (or 25) games has on making the playoffs is fairly straightforward. When looking at possession, however, there are various forms to consider. There’s Corsi (all shot attempts) and Fenwick (only unblocked shot attempts), both of which examine shot attempts in 5-on-5 situations. They can be modified to consider shot attempts only in “close” situations – when the game is tied, or there is no more than a one-goal lead in the first or second period – or adjusted to reflect score effects (as mentioned in The Star article). These variables all differ in their predictive value.</p>
<p>Note that, since we’re looking at whether teams made the playoffs (a binary variable), probit regressions were used. With these regressions, there is no R-squared to tell you how much of the variation is being explained. They do generate, however, a pseudo R-squared, which can be useful for comparing models, although it does not have the same interpretation as an R-squared. Other useful measures for comparing models are the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). For both of these measures, smaller scores are better, while bigger is better for the pseudo R-squared. The following table shows these scores for a variety of models run with just a single explanatory variable.</p>
<table>
<tbody>
<tr>
<td width="233">Variable</td>
<td width="199">Pseudo R-Squared</td>
<td width="133">AIC</td>
<td width="140">BIC</td>
</tr>
<tr>
<td width="233">Points after 20 games</td>
<td width="199">0.2367</td>
<td width="133">193.8639</td>
<td width="140">200.2499</td>
</tr>
<tr>
<td width="233">Points after 25 games</td>
<td width="199">0.3191</td>
<td width="133">145.1337</td>
<td width="140">151.155</td>
</tr>
<tr>
<td width="233">Goal Differential (20 gms)</td>
<td width="199">0.2532</td>
<td width="133">189.7471</td>
<td width="140">196.1331</td>
</tr>
<tr>
<td width="233">Fenwick (20 gms)</td>
<td width="199">0.1342</td>
<td width="133">219.3607</td>
<td width="140">225.7467</td>
</tr>
<tr>
<td width="233">Corsi (20 gms)</td>
<td width="199">0.1254</td>
<td width="133">221.5341</td>
<td width="140">227.9201</td>
</tr>
<tr>
<td width="233">Fenwick Close (20 gms)</td>
<td width="199">0.1997</td>
<td width="133">169.874</td>
<td width="140">175.8953</td>
</tr>
<tr>
<td width="233">Corsi Close (20 gms)</td>
<td width="199">0.1904</td>
<td width="133">171.8077</td>
<td width="140">177.829</td>
</tr>
<tr>
<td width="233">Score Adj. Fenwick (20 gms)</td>
<td width="199">0.2437</td>
<td width="133">160.7652</td>
<td width="140">166.7865</td>
</tr>
<tr>
<td width="233">Score Adj. Corsi (20 gms)</td>
<td width="199">0.2428</td>
<td width="133">160.9597</td>
<td width="140">166.9809</td>
</tr>
<tr>
<td width="233">PDO (20 gms)</td>
<td width="199">0.0770</td>
<td width="133">233.5908</td>
<td width="140">239.9768</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
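<p>All three goodness-of-fit measures in the table are simple functions of the fitted model’s log-likelihood. A sketch of the formulas (using McFadden’s pseudo R-squared, the variant most software reports, though the article does not say which was used; the log-likelihoods below are hypothetical):</p>

```python
import math

def fit_measures(ll_model, ll_null, n_obs, n_params):
    """Goodness-of-fit measures computed from probit log-likelihoods.

    ll_model: log-likelihood of the fitted model
    ll_null:  log-likelihood of a constant-only (null) model
    """
    pseudo_r2 = 1 - ll_model / ll_null               # McFadden's pseudo R-squared
    aic = 2 * n_params - 2 * ll_model                # Akaike Information Criterion
    bic = n_params * math.log(n_obs) - 2 * ll_model  # Bayesian Information Criterion
    return pseudo_r2, aic, bic

# Hypothetical log-likelihoods for a 150-observation sample (5 seasons x 30 teams),
# with one explanatory variable plus a constant (n_params = 2).
r2, aic, bic = fit_measures(ll_model=-80.0, ll_null=-100.0, n_obs=150, n_params=2)
```

This also makes the comparison logic transparent: a model with a higher log-likelihood scores better on all three measures, while AIC and BIC additionally penalize extra parameters (BIC more heavily as the sample grows).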
<p>In terms of predictive value after 20 games, score adjusted measures contain more information than the other measures. Interestingly, however, the results are somewhat in contrast to a <a href="http://hockey-graphs.com/2014/11/13/adjusted-possession-measures/" target="_blank">recent study</a> done by Micah McCurdy over at Hockey-Graphs.com. He also found that score adjusted measures were best, but that close measures were worse than standard possession measures. Here, we are seeing that close measures are in fact better than standard measures. McCurdy also found that Corsi measures are better than Fenwick in all cases. Here, we are again seeing the opposite (although the difference is negligible when looking at score adjusted measures).</p>
<p>It should be noted that McCurdy was looking at the correlation of these measures to different outcomes – goal percentage and winning percentage, specifically. Perhaps there is something about the coarseness of the outcome here – making the playoffs or not – that is causing the differences. At any rate, it is worth further investigation.</p>
<p>Interestingly, when looking at just a single statistic after 20 games, goal differential seems to be the best. According to <a href="http://www.sportsnet.ca/hockey/nhl/30-thoughts-what-changes-will-the-sharks-make/" target="_blank">Elliotte Friedman</a>, this is Mike Babcock’s preferred statistic, so chalk up another point for the man who might be the most sought-after free agent this offseason.</p>
<p>When using more than one variable, however, it appears that the best combination is points and Score Adjusted Corsi. The following table gives the above measures of goodness of fit for the various models, all using data from the 20 game mark.</p>
<table>
<tbody>
<tr>
<td width="233">Variables</td>
<td width="199">Pseudo R-Squared</td>
<td width="133">AIC</td>
<td width="140">BIC</td>
</tr>
<tr>
<td width="233">Pts  + Goal Diff</td>
<td width="199">0.2645</td>
<td width="133">188.934</td>
<td width="140">198.5128</td>
</tr>
<tr>
<td width="233">Pts + Fenwick</td>
<td width="199">0.3120</td>
<td width="133">177.1281</td>
<td width="140">186.707</td>
</tr>
<tr>
<td width="233">Pts + Corsi</td>
<td width="199">0.3243</td>
<td width="133">174.0581</td>
<td width="140">183.637</td>
</tr>
<tr>
<td width="233">Pts + Fenwick Close</td>
<td width="199">0.3716</td>
<td width="133">136.2432</td>
<td width="140">145.2751</td>
</tr>
<tr>
<td width="233">Pts + Corsi Close</td>
<td width="199">0.3806</td>
<td width="133">134.3879</td>
<td width="140">143.4198</td>
</tr>
<tr>
<td width="233">Pts + SAF</td>
<td width="199">0.3954</td>
<td width="133">131.3251</td>
<td width="140">140.357</td>
</tr>
<tr>
<td width="233">Pts + SAC</td>
<td width="199">0.4049</td>
<td width="133">129.3522</td>
<td width="140">138.3842</td>
</tr>
<tr>
<td width="233">Pts + PDO</td>
<td width="199">0.2434</td>
<td width="133">194.1864</td>
<td width="140">203.7653</td>
</tr>
</tbody>
</table>
<p>&nbsp;</p>
<p>Note that, when using points as well as various possession measures, we see something slightly different from above. First, Corsi measures are now better than Fenwick measures, as found in the McCurdy study discussed above. However, close measures are still better than standard measures. Why Corsi is better when controlling for points, and Fenwick is better when not, is definitely worth further investigation. Given that the difference is blocked shots, the answer must have something to do with the predictive value of blocked shots versus the explanatory value (how blocked shots correlate with points after 20 games).</p>
<p>Goal differential actually adds very little information to points, because they are so highly correlated – they contain very similar information. Possession measures, however, don’t tell you very much about how a team has done so far, which is actually a good thing in this context. When looking at two variables, you would like to find the variables that contain <em>different kinds</em>  of information – information that is said to be <em>orthogonal</em> to each other. Variables that are good at explaining who <em>has</em> won in the past, such as goal differential, do not offer additional information over points. Ideally, you want information about things that teams did that <em>didn’t </em>lead to points, but are indicative of getting points in the future. Possession metrics contain such information.</p>
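<p>The orthogonality point can be made concrete with a stylized simulation (all numbers made up; goal differential is deliberately modeled as carrying the same underlying information as possession, and PDO as carrying noisy but different information):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two independent underlying abilities drive future points.
possession_skill = rng.normal(size=n)  # generating shots
finishing_skill = rng.normal(size=n)   # converting (and stopping) shots
future_pts = possession_skill + finishing_skill + rng.normal(size=n)

# Three observable midseason stats, as noisy proxies:
sac = possession_skill + 0.3 * rng.normal(size=n)        # possession proxy
goal_diff = possession_skill + 0.2 * rng.normal(size=n)  # mostly the SAME info as sac
pdo = finishing_skill + 1.5 * rng.normal(size=n)         # noisy, but NEW info

def r_squared(predictors, y):
    """In-sample R-squared of an OLS fit with a constant term."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

base = r_squared([sac], future_pts)
overlap = r_squared([sac, goal_diff], future_pts)  # adds little: same information
new_info = r_squared([sac, pdo], future_pts)       # adds more: different information
```

In this setup the second proxy of the same skill barely moves the R-squared, while the noisy proxy of a different skill improves it substantially — the pattern the regressions above display.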
<p>It is worth mentioning that this is exactly why exercises such as the one <a href="http://tangotiger.com/index.php/site/article/introducing-weighted-shots-differential-aka-tango">recently by Tangotiger</a> are getting it exactly wrong. It is the shots that a team took that didn’t go in that are useful to know about when making predictions – we already have information about the ones that did go in by looking at the standings.</p>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/shot-attempts-are-valuable-information/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Maple Leafs Likely to Miss Playoffs, Hockey Analytics Suggest</title>
		<link>http://www.depthockeyanalytics.com/uncategorized/maple-leafs-likely-to-miss-playoffs-hockey-analytics-suggest/</link>
		<comments>http://www.depthockeyanalytics.com/uncategorized/maple-leafs-likely-to-miss-playoffs-hockey-analytics-suggest/#comments</comments>
		<pubDate>Thu, 11 Dec 2014 17:45:05 +0000</pubDate>
		<dc:creator><![CDATA[Phil Curry]]></dc:creator>
				<category><![CDATA[Uncategorized]]></category>

		<guid isPermaLink="false">http://www.depthockeyanalytics.com/?p=497</guid>
		<description><![CDATA[At this time of year team records can be deceiving. Every year some teams get off to fast starts but then collapse, and others start slow but work their way into playoff position by season’s end. Last year saw the Phoenix (now Arizona) Coyotes come storming out of the gate to get 34 points in [&#8230;]]]></description>
				<content:encoded><![CDATA[<p style="text-align: justify;">At this time of year team records can be deceiving. Every year some teams get off to fast starts but then collapse, and others start slow but work their way into playoff position by season’s end.</p>
<p style="text-align: justify;">Last year saw the Phoenix (now Arizona) Coyotes come storming out of the gate to get 34 points in their first 25 games, tied for the 7th best start in the league, only to end up out of the playoffs with 89 points.</p>
<p style="text-align: justify;">In the other direction, Columbus and Philadelphia started last year with 21 and 24 points, respectively, and worked their way up to 93 and 94 points, and playoff berths.</p>
<p style="text-align: justify;">So what happened?</p>
<p style="text-align: justify;">One of the biggest contributions of the current analytics movement has been to emphasize that teams can go on hot runs over relatively large stretches while being largely outplayed, but that such success is generally not sustainable for a full season. Conversely, teams can have prolonged stretches where the points don’t come, even though they’ve generally been the better team, but chances are that before long the points will start to come for those teams.</p>
<p style="text-align: justify;">This isn’t to say that a team’s record after 25 games doesn’t matter. Far from it. But models that include both points and possession measures do significantly better at explaining who made the playoffs and who didn’t. Specifically, a model with both points and Score Adjusted Corsi fits the data much better than models with either variable alone, and better than any of the other two-variable models we looked at.</p>
<p style="text-align: justify;">For those not familiar with Score Adjusted Corsi, it is a possession metric that reflects the fact that teams generally do better in terms of possession when they’re trailing (or worse when they’re ahead) – a phenomenon called “score effects” (see IJay’s Nov. 14 column for more on that point). Thus, some teams’ possession metrics look better than they really are simply because they play from behind a lot, while others look worse because they have the lead a lot.</p>
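<p style="text-align: justify;">The adjustment can be sketched in code. The exact weighting scheme behind the Score Adjusted Corsi figures used here isn’t spelled out above, so the weights and helper function below are purely illustrative: the idea is simply that attempts taken by a trailing shooter are discounted and attempts taken by a leading shooter are boosted, counteracting score effects.</p>

```python
# Illustrative sketch of score adjustment. The score-state weights are
# hypothetical placeholders, not the ones behind the figures in this post:
# attempts by a trailing shooter are discounted and attempts by a leading
# shooter boosted, counteracting score effects.

ILLUSTRATIVE_WEIGHTS = {"leading": 1.1, "tied": 1.0, "trailing": 0.9}
OPPOSITE = {"leading": "trailing", "tied": "tied", "trailing": "leading"}

def score_adjusted_corsi(cf_by_state, ca_by_state, weights=ILLUSTRATIVE_WEIGHTS):
    """Score-adjusted Corsi-for percentage.

    cf_by_state / ca_by_state map the team's score state ("leading",
    "tied", "trailing") to shot attempts for / against in that state.
    Attempts-against are weighted by the opposite state, since the
    opponent is the shooter on those attempts.
    """
    cf = sum(weights[s] * n for s, n in cf_by_state.items())
    ca = sum(weights[OPPOSITE[s]] * n for s, n in ca_by_state.items())
    return 100.0 * cf / (cf + ca)

# A team that generates much of its offence while trailing sees its raw
# Corsi shaved down once the adjustment is applied:
team_cf = {"leading": 150, "tied": 400, "trailing": 550}
team_ca = {"leading": 200, "tied": 400, "trailing": 400}
raw = 100.0 * sum(team_cf.values()) / (sum(team_cf.values()) + sum(team_ca.values()))
adjusted = score_adjusted_corsi(team_cf, team_ca)
```

<p style="text-align: justify;">With these illustrative weights, the trailing-heavy team’s adjusted figure comes in below its raw one, which is exactly the correction score adjustment is meant to make.</p>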
<p style="text-align: justify;">From the model using both points and Score Adjusted Corsi, we found that on average, each additional point a team has at the 25 game mark increased their chance of making the playoffs by about 7.2 percentage points; each percentage point increase in their Score Adjusted Corsi increased their chance of making the playoffs by 8.1 percentage points.</p>
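<p style="text-align: justify;">Generating a probability like the ones in the table below from such a model is straightforward. Here is a minimal sketch of a logit-style model with hypothetical coefficients — the paragraph above reports only the average marginal effects (~7.2 and ~8.1 percentage points), not the underlying fitted values, so the numbers in the code are placeholders:</p>

```python
import math

# Sketch of a logit-style playoff model in the spirit of the one described
# above. The intercept and slopes are hypothetical placeholders, NOT the
# fitted values behind the table; the post reports only average marginal
# effects (~7.2 points per standings point, ~8.1 per Corsi point).
B0, B_PTS, B_CORSI = -30.0, 0.35, 0.40

def playoff_probability(points_after_25, score_adj_corsi):
    """P(make playoffs) under a logistic model of the two predictors."""
    z = B0 + B_PTS * points_after_25 + B_CORSI * score_adj_corsi
    return 1.0 / (1.0 + math.exp(-z))
```

<p style="text-align: justify;">Note that in a logistic model the effect of one extra point depends on where a team sits on the curve — a point matters more to a bubble team than to a lock — which is why the effects quoted above are averages.</p>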
<p style="text-align: justify;">So what does that mean for this year’s teams? We can generate a probability of making the post-season for each team this year based on data from past seasons. Keep in mind that the model captures only about 42% of the variation in playoff outcomes, and so its predictions should be taken with a grain of salt. For example, it doesn’t account for teams that have above-average shooting or goaltending, nor does it take into consideration the information contained in games played past the 25 game mark. It also doesn’t account for the recent change in playoff format, with divisions and wildcards. However, it does give insight as to which teams are likely candidates for collapses and which might yet climb up the standings.</p>
<p style="text-align: justify;">According to the model, the team most likely to collapse is Calgary, with their chance of making the playoffs being only 34.6%, even though they had 32 points after 25 games, good for 6th best in the West and 3rd in their division.</p>
<p style="text-align: justify;">In the East, Tampa Bay, Montreal and Detroit were sitting in the divisionally guaranteed playoff positions after 25 games, while Boston, Toronto and Florida were all tied with 29 points. Only two of those teams could make the playoffs as wildcards, however, as either the Rangers or the Capitals would get the nod by virtue of being 3rd in the Metropolitan. According to the model, Toronto would be the odds-on favourite to be on the outside looking in (again), while Washington would be favoured to get that 3rd divisional spot.</p>
<p style="text-align: justify;">With roughly two-thirds of the season left to play, things can (and almost certainly will) change between now and the end of the season, but already there are some indications that all is not as the standings suggest. We’ll check back at the midseason point to see how things are progressing.</p>
<p style="text-align: justify;">Data were taken from <a href="http://www.puckon.net" target="_blank">www.puckon.net</a>.</p>
<table>
<tbody>
<tr>
<td colspan="4"><strong>Western Conference</strong></td>
<td colspan="4"><strong>Eastern Conference</strong></td>
</tr>
<tr>
<td width="101"><strong>Team</strong></td>
<td width="80"><strong>Pts After 25 Gms</strong></td>
<td width="93"><strong>Score Adj. Corsi</strong></td>
<td width="75"><strong>Prob. of Playoffs</strong></td>
<td width="104"><strong>Team</strong></td>
<td width="79"><strong>Pts After 25 Gms</strong></td>
<td width="93"><strong>Score Adj. Corsi</strong></td>
<td width="81"><strong>Prob. of Playoffs</strong></td>
</tr>
<tr>
<td width="101">Nashville</td>
<td width="80">36</td>
<td width="93">53.6</td>
<td width="75">99.19%</td>
<td width="104">Pittsburgh</td>
<td width="79">36</td>
<td width="93">52.4</td>
<td width="81">98.45%</td>
</tr>
<tr>
<td width="101">Vancouver</td>
<td width="80">35</td>
<td width="93">50.1</td>
<td width="75">93.29%</td>
<td width="104">NY Islanders</td>
<td width="79">36</td>
<td width="93">53.6</td>
<td width="81">99.19%</td>
</tr>
<tr>
<td width="101">St Louis</td>
<td width="80">34</td>
<td width="93">51.3</td>
<td width="75">94.11%</td>
<td width="104">Tampa Bay</td>
<td width="79">36</td>
<td width="93">53.5</td>
<td width="81">99.14%</td>
</tr>
<tr>
<td width="101">Anaheim</td>
<td width="80">33</td>
<td width="93">50.7</td>
<td width="75">89.56%</td>
<td width="104">Montreal</td>
<td width="79">34</td>
<td width="93">49.3</td>
<td width="81">87.49%</td>
</tr>
<tr>
<td width="101">Chicago</td>
<td width="80">33</td>
<td width="93">57.2</td>
<td width="75">99.54%</td>
<td width="104">Detroit</td>
<td width="79">33</td>
<td width="93">52.6</td>
<td width="81">95.05%</td>
</tr>
<tr>
<td width="101">Calgary</td>
<td width="80">32</td>
<td width="93">43.6</td>
<td width="75">34.63%</td>
<td width="104">Boston</td>
<td width="79">29</td>
<td width="93">53.1</td>
<td width="81">84.68%</td>
</tr>
<tr>
<td width="101">Los Angeles</td>
<td width="80">31</td>
<td width="93">51.5</td>
<td width="75">85.47%</td>
<td width="104">Florida</td>
<td width="79">29</td>
<td width="93">51.6</td>
<td width="81">76.18%</td>
</tr>
<tr>
<td width="101">Minnesota</td>
<td width="80">29</td>
<td width="93">55.1</td>
<td width="75">92.46%</td>
<td width="104">Toronto</td>
<td width="79">29</td>
<td width="93">46.5</td>
<td width="81">36.57%</td>
</tr>
<tr>
<td width="101">Winnipeg</td>
<td width="80">28</td>
<td width="93">51.6</td>
<td width="75">70.18%</td>
<td width="104">NY Rangers</td>
<td width="79">26</td>
<td width="93">50</td>
<td width="81">43.36%</td>
</tr>
<tr>
<td width="101">San Jose</td>
<td width="80">26</td>
<td width="93">53</td>
<td width="75">67.50%</td>
<td width="104">Washington</td>
<td width="79">26</td>
<td width="93">52.4</td>
<td width="81">62.91%</td>
</tr>
<tr>
<td width="101">Arizona</td>
<td width="80">23</td>
<td width="93">47.2</td>
<td width="75">9.77%</td>
<td width="104">Ottawa</td>
<td width="79">25</td>
<td width="93">47.5</td>
<td width="81">19.29%</td>
</tr>
<tr>
<td width="101">Dallas</td>
<td width="80">23</td>
<td width="93">49.4</td>
<td width="75">20.06%</td>
<td width="104">New Jersey</td>
<td width="79">22</td>
<td width="93">49.7</td>
<td width="81">16.85%</td>
</tr>
<tr>
<td width="101">Colorado</td>
<td width="80">23</td>
<td width="93">44</td>
<td width="75">2.52%</td>
<td width="104">Philadelphia</td>
<td width="79">20</td>
<td width="93">47.6</td>
<td width="81">3.92%</td>
</tr>
<tr>
<td width="101">Edmonton</td>
<td width="80">16</td>
<td width="93">49.4</td>
<td width="75">1.71%</td>
<td width="104">Buffalo</td>
<td width="79">20</td>
<td width="93">36.2</td>
<td width="81">&lt;0.01%</td>
</tr>
<tr>
<td width="101"></td>
<td width="80"></td>
<td width="93"></td>
<td width="75"></td>
<td width="104">Carolina</td>
<td width="79">19</td>
<td width="93">50.7</td>
<td width="81">9.66%</td>
</tr>
<tr>
<td width="101"></td>
<td width="80"></td>
<td width="93"></td>
<td width="75"></td>
<td width="104">Columbus</td>
<td width="79">18</td>
<td width="93">45.8</td>
<td width="81">0.62%</td>
</tr>
</tbody>
</table>
]]></content:encoded>
			<wfw:commentRss>http://www.depthockeyanalytics.com/uncategorized/maple-leafs-likely-to-miss-playoffs-hockey-analytics-suggest/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
