There was a legendary statistician named Harvey Pollack who worked for the Philadelphia 76ers for years. He was decades ahead of his time, and it turns out that Pollack kept plus-minus data long before the NBA officially did. While it’s rumored that other teams like the Celtics and Lakers tracked plus-minus in the 80s — and oh, what I would do to see that — we have Pollack’s 76ers data as far back as 1974. He also started tracking it league-wide in 1994, three years before the NBA’s publicly available data begins.
Although Pollack never published any lineup data (which would allow for far deeper analysis than seasonal aggregates), there’s actually a lot we can glean just from having plus-minus data. It tells us how well a team played with a player on the court, how it played with him off the court, and thus the net “on/off” impact of that player. Even though we can’t access play-by-play data to adjust for teammate and opponent strength, there’s a pretty strong linear relationship between net on/off and RAPM:
As you can see, there are no anomalies in that data (just large errors), which is good for setting up a prediction model. A longtime poster on APBR and RealGM named Colts18 was the first person I’ve seen try to map raw plus-minus to RAPM. And after encountering all of the Pollack data (thanks to the great poster fpliii), I thought I’d give this a whirl. Instead of using a single-year set, I ran a regression on 2005-10 data with some hand-selected variables to predict RAPM using plus-minus as a base.
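A regression of this kind can be sketched in a few lines — this is a minimal illustration on synthetic stand-in data, not the author’s actual pipeline or variable set (the real features and coefficients are detailed further down):

```python
import numpy as np

# Synthetic stand-ins for per-player-season features (illustrative only;
# the real inputs were 2005-10 player seasons).
rng = np.random.default_rng(0)
n = 1391
net = rng.normal(0.0, 4.0, n)   # net on/off rating
bpm = rng.normal(0.0, 2.0, n)   # Box Plus-Minus
rapm = 0.2 * net + 0.25 * bpm + rng.normal(0.0, 1.0, n)  # fake target

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.ones(n), net, bpm])
coefs, *_ = np.linalg.lstsq(X, rapm, rcond=None)
pred = X @ coefs
mae = np.abs(pred - rapm).mean()
print("coefficients:", coefs)
print("MAE:", round(mae, 2))
```

With features that actually carry signal, the fitted coefficients and error stats fall out directly, which is all the model below really is.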
And the results were pretty good (details below). Combining on/off data with some box score data allows us to pretty accurately guesstimate a player’s RAPM. The more pedestrian the predicted RAPM, the more accurate the result: for high-performing players, almost all values are within three points of their real RAPM (none off by more than 4.0), and for moderately performing players, most are within 2.0. Not bad given the lack of play-by-play data.
We can think of this regression of plus-minus data (which is regressed onto regressed plus-minus data!) as an “augmented” plus-minus. Interestingly, because it’s using a blend of box score data and plus-minus data, the model is more stable than standard year-to-year RAPM (and certainly more stable than non-prior RAPM).
This means that for players with huge shifts, it will likely underestimate them in one year and then overestimate them in the following season. And this isn’t necessarily a bad thing, because while the metric will not give us the “true” RAPM value in such cases, it’s less subject to vagaries that might be caused by factors outside of the player’s control. Food for thought.
Of the 1,391 player seasons with at least 1,000 minutes that I used from 2005-10, the best augmented season (AuPM) belonged to LeBron’s 2009 campaign, at +8.9. No one besides LeBron was over +7.0, and only 1.3 percent of seasons were above +5.0. In other words, a typical top-5 season is somewhere between +4.5 and +5.0 using this metric.
Anyway, what does augmented plus-minus tell us from Pollack’s data? I’ve compiled all the results in a Google Doc alongside known RAPM to give a historical perspective of this kind of data from 1994-2013 (and back to 1977 for a select group of 76ers). Check it out for yourself — for my money, David Robinson looks like the king of plus-minus in the 90s, Karl Malone, Scottie Pippen and Mookie Blaylock look great, and Julius Erving takes a huge hit. I’ve also created an interactive visual with some notable players — it’s easy to compare players if you deselect everyone.
Finally, this kind of stuff is only possible because of the great work of statisticians and historians who have paved the way, and I find myself perpetually in awe of their work. In this case, using Pollack’s data like this is only possible because of pioneers like Wayne Winston, Steve Ilardi, Joe Sill and Daniel Myers, and Pollack himself.
The data set used was 2005-2010, using PI RAPM from Jeremias Engelmann and plus-minus from basketball-reference.com. Variables were hand-selected. I played with the relationship between a player’s on/off and his teammates’, and while many variants made minor improvements, the largest came from simply summing a player’s differences from the 1,000-minute teammates ahead of him. For instance, take the following group of teammates:
Player A = 2.0
Player B = 5.0
Player C = 6.0
Player A’s “summed above” value would be the difference between himself and B plus the difference between himself and C, or 7.0. For the box score, there’s already a composite (regression-based) metric that maps to RAPM, which is Box Plus-Minus (BPM). Adding it significantly improved accuracy. Finally, a team’s actual “on” value was important. The coefficients look like this:
AuPM = -0.0185 + 0.2064 * Net + 0.1113 * On + 0.2343 * BPM - 0.0209 * SumAbove - 0.0017 * (Net * SumAbove)
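As a sanity check, the “summed above” variable and the fitted equation can be written out in code — a minimal sketch (function names are mine, not part of the original model):

```python
def sum_above(player, teammates):
    """'Summed above': total gap between a player's on/off value and each
    1,000-minute teammate whose on/off is higher than his."""
    return sum(t - player for t in teammates if t > player)

def aupm(net, on, bpm, sum_above_val):
    """Augmented plus-minus using the coefficients quoted above."""
    return (-0.0185
            + 0.2064 * net
            + 0.1113 * on
            + 0.2343 * bpm
            - 0.0209 * sum_above_val
            - 0.0017 * (net * sum_above_val))

# Worked example from the text: Player A at 2.0 with teammates at 5.0 and 6.0.
print(sum_above(2.0, [5.0, 6.0]))  # 7.0
```

Note the interaction term at the end: a big “summed above” value slightly discounts a player’s raw net on/off, which is the model’s crude stand-in for teammate strength.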
R-Squared was 0.76. Mean Absolute Error (MAE) was 0.92. Max error was 4.5 with a standard deviation of 0.71. Errors were larger among larger values:
- For players at +3.5 or better, 40 percent of predicted RAPMs were within 1.0 of actual RAPM, 74 percent were within 2.0 and 96 percent were within 3.0.
- For players between +1.5 and +3.5, 57 percent were within 1.0 of actual RAPM, 89 percent were within 2.0 and 98 percent were within 3.0.
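Bucketed accuracy figures like those above come from a simple tolerance check between predictions and actuals. Here’s a tiny illustration with made-up numbers (not the real 2005-10 data):

```python
def within_pct(actual, predicted, tol):
    """Percent of predictions within `tol` points of the actual value."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) <= tol)
    return 100.0 * hits / len(actual)

# Made-up RAPM values and predictions for four hypothetical players.
actual    = [4.0, 3.6, 5.1, 3.9]
predicted = [3.2, 4.9, 5.0, 6.2]
print(within_pct(actual, predicted, 1.0))  # 50.0
```

Run that for each RAPM bucket at tolerances of 1.0, 2.0 and 3.0 and you get tables like the two bullets above.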