
Exploring Potential Biases in Prospect Rankings

Photo credit: David Banks-USA TODAY Sports
Jeremie Spagnolo
5 years ago

Introduction

Drafting biases in the NHL have been covered for some time now, and understanding them is vital to anybody who intends to improve their drafting success. Removing drafting biases is a great way to find inefficiencies in prospect valuation, and thus to gain an edge on other teams, especially in a league with a salary cap. Less talked about, but implicit in drafting biases, are prospect ranking biases. The countless small judgments that make up the nebulous process of adjusting prospect rankings throughout their draft years based on new information are difficult to explain entirely, which leaves room for systematic miscalculations. In particular, I want to explore the biases which occur when adjusting prospects' rankings throughout their draft years. Two possible biases are defined below.
Recency bias: weighing a prospect's recent results more heavily than older results.
Semmelweis reflex: keeping a prospect at a given ranking despite new information that may warrant an adjustment.
My intuition was that players who experience large movement in ranking through their draft year are harder to properly rank than players whose rankings have remained stable. The idea is that it would be difficult to properly weigh a prospect's improvement or setbacks against prospects previously thought to be superior or worse. In this piece, I'm going to focus particularly on prospects who experienced a considerable drop (-4 to -30) or rise (+4 to +30) in rankings throughout their draft years (our own Timothy Liljegren, -22, is an example).
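To make the grouping concrete, here is a minimal sketch of how prospects could be bucketed by their ranking movement using the thresholds above. The function name and example deltas are my own illustration, not code from the original analysis.

```python
# Hypothetical sketch: bucket a prospect by the change between their first
# and final ranking of the draft year. Thresholds follow the article's
# definitions: +4 to +30 = Riser, -4 to -30 = Faller, -3 to +3 = Even.

def classify_movement(first_rank, final_rank):
    """Positive delta means the prospect rose (a lower final number)."""
    delta = first_rank - final_rank  # e.g. 30 -> 8 gives +22, a rise
    if 4 <= delta <= 30:
        return "Riser"
    if -30 <= delta <= -4:
        return "Faller"
    if -3 <= delta <= 3:
        return "Even"
    return "Extreme"  # movement beyond +/-30, outside the study's window

print(classify_movement(30, 8))   # rose 22 spots
print(classify_movement(10, 32))  # fell 22 spots, like Liljegren's -22
print(classify_movement(5, 6))    # essentially stable
```

The "Extreme" bucket is my own addition for movement outside the -30 to +30 window the article studies.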

The Data

Before beginning, let this be clear: the lack of publicly available prospect ranking data leaves this analysis bulging with bias. For one, we are looking at the rankings of only one company, Future Considerations, because as far as I know (thanks to Scott Wheeler for the tip), they are the only publication to post their historical prospect rankings. That means the rankings aren't anywhere close to the Bob McKenzie-grade consensus we're accustomed to. Any biases we find in FC's rankings will reflect only FC's biases, but the method could, in the future, be applied to a richer dataset. Moreover, we only have the 7 most recent draft classes, which is not only a minuscule sample but also does not give many of the prospects a fair shot at having made an NHL impact (in fact, I ignored the 2017 draft for this reason). With this in mind, understand this isn't intended to be an expansive analysis, but a potential pathway for future analyses once more historical prospect rankings become available.

The Process

To better calibrate expectations for each prospect, I created my own in-sample version of the well-established Draft Value Chart using Game Score Per Game (GSPG) and Point Shares Per Game (PSPG). To build these, I obtained the in-sample mean of each metric at each final ranking position, fit a smoothed line through those means (seen below), and then extracted the fitted value of each metric at each ranking position. Here's what that looks like:
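The fitting step above could be sketched roughly as follows. The article doesn't specify which smoother was used, so a simple polynomial fit stands in for it here, and the per-rank mean values are synthetic placeholders rather than the real data.

```python
# Sketch of the curve-fitting step: per-rank metric means -> smoothed
# expected value at every ranking position. All numbers are synthetic.
import numpy as np

ranks = np.arange(1, 31)

# Synthetic per-rank mean GSPG: value decays as rank worsens, plus noise.
rng = np.random.default_rng(0)
mean_gspg = 0.8 * np.exp(-0.05 * ranks) + rng.normal(0, 0.02, ranks.size)

# Fit a smoothed line through the per-rank means (a degree-3 polynomial
# here, as a stand-in smoother), then read the fitted value off at any
# ranking position to get that rank's "expected" GSPG.
coeffs = np.polyfit(ranks, mean_gspg, deg=3)
expected_gspg = np.polyval(coeffs, ranks)

print(round(float(expected_gspg[0]), 3))   # expected GSPG at rank 1
print(round(float(expected_gspg[-1]), 3))  # expected GSPG at rank 30
```

Any smoother with the same shape (lowess, a spline) would serve the same purpose; the key output is a lookup from final rank to expected metric value.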
Based on the groups of prospects we'd like to analyze – in this case, Risers (+4 to +30 spots), Fallers (-4 to -30 spots), and Evens (-3 to +3 spots) – we can compare the expected mean of each metric against the observed mean. This allows us to put the rankings of each prospect category into context. The results are posted below, and I break them down just after.
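The expected-versus-observed comparison amounts to the following sketch. The curve and group values are made up, and `expected_vs_observed` is a hypothetical helper, not part of the original analysis.

```python
# For each movement group, average the curve's expected metric at each
# prospect's final rank, then compare with the group's observed average.

def expected_vs_observed(prospects, curve):
    """prospects: list of (final_rank, observed_metric); curve: rank -> expected."""
    expected = sum(curve[rank] for rank, _ in prospects) / len(prospects)
    observed = sum(metric for _, metric in prospects) / len(prospects)
    return expected, observed

# Toy curve (final rank -> expected GSPG) and a toy "Risers" group.
curve = {1: 0.75, 5: 0.60, 10: 0.45, 20: 0.25}
risers = [(5, 0.70), (10, 0.55), (20, 0.30)]

exp_mean, obs_mean = expected_vs_observed(risers, curve)
print(round(obs_mean - exp_mean, 3))  # positive -> group outperformed its ranks
```

A positive gap means the group outperformed what its final rankings implied, i.e. it may have been ranked too low.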

The Results

Let's go through what each column means, and then what we could potentially extrapolate from the table as a whole.
Mean Final Rank: average final ranking
Mean Rank GSPG: average expected GSPG from the GSPG-Final Rank curve posted above
Mean GSPG: average observed Game Score Per Game
Mean Rank PSPG: average expected PSPG from the PSPG-Final Rank curve posted above
Mean PSPG: average observed Point Shares Per Game
Pct Played: percentage of prospects who have played at least one NHL game
The point of interest here is the difference between the expected GSPG/PSPG and the observed GSPG/PSPG. Looking at this, the Fallers seem to have been adequately evaluated, while the Risers seem not to have been ranked highly enough and the Evens seem to have been ranked too highly. Again, I should stress that these results should be taken lightly – the sample size is far too small, both at the ranking level and the prospect level. In fact, visualizing the bootstrap confidence interval of each group better illustrates this point.
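For the bootstrap confidence intervals mentioned above, a plain percentile bootstrap on a group's observed metric values would look something like this; the sample values are synthetic, and the exact bootstrap variant used in the article is an assumption on my part.

```python
# Percentile bootstrap CI for a group's mean observed metric:
# resample with replacement many times, take the mean of each resample,
# and read the interval off the sorted resample means.
import random

def bootstrap_ci(values, n_boot=10_000, alpha=0.05, seed=42):
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choices(values, k=len(values))) / len(values)
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

group_gspg = [0.31, 0.44, 0.12, 0.58, 0.27, 0.39]  # made-up observed values
lo, hi = bootstrap_ci(group_gspg)
print(round(lo, 3), round(hi, 3))
```

With only a handful of prospects per group, these intervals come out very wide, which is exactly the small-sample caveat the article raises.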

Conclusion

I believe that an increase in public prospect rankings – especially those which are gathered multiple times across a given draft year – would allow us to better measure our biases, whether it be recency bias, Semmelweis reflex, or the already established size and nationality biases.
With that said, here are some risers in this year's draft:
Name               | Rise | Final Ranking
Noah Dobson        | 52   | 10
Jesperi Kotkaniemi | 21   | 13
Evan Bouchard      | 10   | 8
Adam Boqvist       | 7    | 7
