From the Vault: Observations from a season of Stat-Tracking

This post was originally published on April 13, 2011. It is a summary of findings after one year of stat-tracking basketball games in an attempt to extend the box score. SportVU now captures similar data.

If one spends enough time watching NBA games with a DVR, trends start to jump out. Unfortunately, there’s no way the human brain can accurately catalog all that information. Perhaps Data from Star Trek should be assigned to finding trends in basketball games. In the meantime, here are some statistical observations from roughly 23,000 possessions of stat-tracking in 2011:


  • 16% of all field goals came off of an Opportunity Created (OC).
  • 46% of 3-pointers came off of an OC.
  • The average player shot 40% on 3-point shots off of an OC.
  • The Spurs led the league in OCs (23.5 per 100 possessions), with the Hawks second at 23.4.
  • The Jazz needed the most help on defense — which means their opponents create the most opportunities (23.7 OC per 100).
    • The Jazz had the worst Defensive Rating in the sample by far (118.7).


  • The Lakers committed the fewest shooting fouls in the league (allowing 16.5 free throws per 100 possessions).
  • Someone takes an offensive foul every 88 possessions…or a little more than once per game.
  • Phoenix takes more offensive fouls than any other team – 2.2 per 100 possessions.


  • In guarded situations, the most successful teams are the best defensive teams: the four leaders in guarded field-goal percentage all rank in the top five in defensive rating (Milwaukee’s sample was too small to qualify):
    1. Miami (36.6%)
    2. Chicago (36.6%)
    3. Boston (36.9%)
    4. LA Lakers (37.7%)
  • The Lakers make the most defensive errors…but give up the fewest points per error (1.50 points/error), a credit to Andrew Bynum and Pau Gasol protecting the paint.
  • Every 172 possessions there is a forced turnover that isn’t counted as a steal (e.g., the ball slapped off a leg out of bounds). That means the NBA doesn’t track about 1,700 “steals” during the season.
  • Teams shoot the worst in unguarded situations against the Lakers (56.6% eFG%), which suggests that LA does well closing out on shooters and fighting through screens…or they’re just lucky.

Top “Healthy” Teams in NBA History

Who are the best teams in NBA history? We often answer this question by looking at a team’s entire body of work, lumping in the good, the bad and the injured. Most teams have key players miss games and some even trade for key players, changing the chemistry of a given lineup. So who were the best teams when all of the key actors were on stage?

Below I’ve indexed the top “healthy” teams — when all 25-minute-per-game players were in action for a game — since the shot clock was introduced (1955), ranked by SRS (adjusted margin of victory). Using these criteria, 51 teams have posted at least an 8.0 SRS when healthy. Just 29 teams have eclipsed the 9.0 mark. (10 of those teams failed to win a title, well in line with what the variability of a 7-game series predicts.) The best are below, playoffs included:

Top Healthy Teams
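SRS rewards both margin of victory and strength of schedule: a team's rating is its average point margin plus the average rating of the opponents it faced. A minimal sketch of that fixed-point calculation, using a made-up three-team schedule (teams and margins are hypothetical, purely for illustration):

```python
# Toy SRS (Simple Rating System): rating = avg. margin of victory (MOV)
# plus the average rating of opponents faced. Schedule is hypothetical.
games = [("A", "B", 10), ("A", "C", 6), ("B", "C", 4)]  # (winner, loser, margin)

teams = {"A", "B", "C"}
opps = {t: [] for t in teams}   # opponents faced by each team
for w, l, m in games:
    opps[w].append(l)
    opps[l].append(w)

mov = {}
for t in teams:
    margins = [m if w == t else -m for w, l, m in games if t in (w, l)]
    mov[t] = sum(margins) / len(margins)

srs = dict(mov)  # start from raw MOV and iterate to a fixed point
for _ in range(100):
    srs = {t: mov[t] + sum(srs[o] for o in opps[t]) / len(opps[t]) for t in teams}

print({t: round(r, 2) for t, r in sorted(srs.items())})
# → {'A': 5.33, 'B': -2.0, 'C': -3.33}
```

With a balanced schedule the iteration settles quickly; real SRS solves the same system of equations over a full season of games.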

Disclaimers: SRS, while a better predictor of results than win percentage, is not a de facto team-ranker. First, it’s subject to the usual variance seen in the NBA (detailed in Chapter 4 of Thinking Basketball), so it’s not a perfect representation of team strength. Second, some teams are more resilient in makeup — they are better equipped to handle a variety of opponents while still remaining efficient, boosting their odds of winning from series to series. Finally, SRS is a measure of within-season dominance, so it cannot allow for perfect comparisons across seasons. A 10 SRS in 1986 is probably more impressive than one in 1972.

With that said, it is by far the single best metric for evaluating the performance of a team against its competition. The teams listed above were manhandling opponents, which is why many went on to win a title.

While this year’s Warriors were the most dominant single-season team ever, their SRS is influenced by a league that was incredibly top-heavy. Four of the top-40 healthy teams ever played in 2016 (Golden State, San Antonio, Oklahoma City and Cleveland), which is either an unlikely coincidence, or a reflection of inflated numbers from a lopsided league.

The other top four seasons are from expansion eras, when teams could pick up an additional point or two by facing expansion squads a few times a year and padding their numbers with blowouts. All of those teams are in the conversation for “greatest ever,” but their statistical dominance here should be discounted slightly.

As mentioned, we see the usual suspects: Jordan’s first three-peat Bulls. Jordan’s second three-peat Bulls. Kareem’s Bucks and the early-’70s Lakers. This is all in line with in-depth analysis of the greatest teams ever.

So who are the most impressive teams of all time that you probably didn’t know about?

  1. 2014 Spurs. When healthy, they posted an amazing 11.8 SRS. That team is basketball’s Sistine Chapel and Gregg Popovich its Michelangelo.
  2. 2004 Pistons. Absolutely impregnable after the Rasheed Wallace trade in ways that reminded everyone it was time for a rule change.
  3. 2008-09 Lakers and Celtics. These teams were fantastic in an incredibly competitive league. The Celtics were +8.8 and +9.3 when healthy, and the Lakers +9.7 and +9.0 once Pau Gasol joined. Kevin Garnett’s injury robbed us of possibly the NBA’s greatest trilogy.
  4. 1996 Magic. Yes, they were worthy of a documentary.

Amazingly, of the top 40 healthy teams of all-time, seven are Pop’s Spurs teams. Five are Jordan’s Bulls. Four are Laker teams with Kobe Bryant.

Remember these teams the next time you construct an all-time list or look ahead to the 2016 season.

From the Vault: Exploring the Spacing Effect

This post was originally published on November 26, 2011. It examines a concept mentioned in my new book, Thinking Basketball.

One of the more dominant themes of this summer’s Online Hoops Summit of Nerdness was the “Spacing Effect” that good shooters provide for an offense. By being a threat to score from all over the floor, shooters pull out defenders who could otherwise help on penetration or flood the paint for defense and rebounding. For example, in the last post we combed through five years of raw on/off data — how well a team performed with a player in the lineup versus when he was on the bench — and some of the biggest impacts were made by great shooters.

Of the 21 players who added at least six points of efficiency to a 107 offense (teams averaging 107 points or more per 100 possessions without the player), seven are on the all-time top-100 list of 3-point percentage leaders (minimum 500 attempts). 17 of the 21 (81%) used the 3-point shot regularly, with only Brad Miller (2004), Shaquille O’Neal (2005), Kevin Garnett (2008) and Tyson Chandler (2008) operating primarily inside the arc. The average 3-point percentage from that group was a whopping 38.2%. (League average was 35.7% over that span.)

Below are the 21 player seasons, with their 3-point percentage:

Player        Year  Net Change  ORtg On  ORtg Off  Season 3P%
Josh Howard   2004    6.0       117.6    111.6     .303
Radmanovic    2008    8.6       119.5    110.9     .406
Williams      2008    6.1       116.0    109.9     .395
Nowitzki      2004    6.2       115.6    109.4     .341
Bryant        2008    6.5       115.4    108.9     .361
Lewis         2005    7.3       116.0    108.7     .400
Joe Johnson   2005    8.4       117.0    108.6     .478
Josh Howard   2007    6.5       114.9    108.4     .385
Allen         2005    7.0       115.2    108.2     .376
Marion        2007    8.6       116.8    108.2     .317
Radmanovic    2005   11.7       119.8    108.1     .389
Chandler      2008    6.9       114.5    107.6     n/a
O’Neal        2005    7.6       114.9    107.3     n/a
Christie      2004    6.7       114.0    107.3     .345
Finley        2005    6.8       114.0    107.2     .407
Posey         2006    6.2       113.4    107.2     .403
B. Miller     2004    7.5       114.6    107.1     n/a
Billups       2008    8.0       115.1    107.1     .401
Terry         2006    8.5       115.5    107.0     .411
Garnett       2008    8.0       115.0    107.0     n/a
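The "Net Change" column is simply on-court ORtg minus off-court ORtg. A quick sanity check on a few rows from the table, re-verifying that relationship:

```python
# Sanity check: Net Change = on-court ORtg - off-court ORtg.
# A few rows transcribed from the table above.
rows = [
    ("Josh Howard", 2004,  6.0, 117.6, 111.6),
    ("Radmanovic",  2005, 11.7, 119.8, 108.1),
    ("Garnett",     2008,  8.0, 115.0, 107.0),
]
for player, year, net, on_court, off_court in rows:
    assert abs((on_court - off_court) - net) < 0.05, (player, year)
print("all rows consistent")
```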

Also from that five-year chunk of data, there were 55 instances of players boasting an on/off of 9.0 or better on offense (minimum 1,000 minutes played). Again, this means their team’s offense scored at least nine more points per 100 possessions with them on the court that year. Only ten of those seasons saw a player attempt fewer than one 3-point shot per game. We see the same results: the other 45 (82% of the group) averaged 38.4% from behind the arc.

Of particular interest are the shooting specialists. Who we classify as one-dimensional shooters is somewhat subjective, but it’s a mighty coincidence that Vladimir Radmanovic appears on the above list twice, with two different teams. And that Peja Stojakovic does the same, in two different situations, in his two best 3-point shooting seasons (43.3% in 2004, 44.1% in 2008). And that Damon Jones seemed to help Miami so much in 2005 with a career-best 43.2% from downtown. And that Fred Hoiberg led the league in 3-point percentage in 2005 at a staggering 48.3% and boosted Minnesota’s offense while on the court.

Of course, making so many 3s is also part of the reason these players are helping so much, but perhaps not quite as much as one would think. In Hoiberg’s case, he attempted 4.1 3s every 36 minutes, which means the difference between 48.3% and league average was roughly 1.6 points per 36 minutes, or about 2.3 points/100 at Minnesota’s 2005 pace. Radmanovic launched 5.7 3s every 36 minutes in 2008, and if he converted at league average the Lakers would have scored about 1.8 fewer points in his games.
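The Hoiberg arithmetic above can be reproduced in a few lines. The attempt rate and percentages come from the text; Minnesota's 2005 pace (~90 possessions per 48 minutes) is an assumption on my part for the per-100 conversion:

```python
# Back-of-envelope version of the Hoiberg math. Attempt rate and 3P%
# are from the text; the pace of ~90 possessions/48 min is assumed.
attempts_per_36 = 4.1
hoiberg_3p, league_3p = 0.483, 0.357

extra_pts_per_36 = attempts_per_36 * 3 * (hoiberg_3p - league_3p)
poss_per_36 = 90 * 36 / 48            # assumed pace, scaled to 36 minutes
extra_pts_per_100 = extra_pts_per_36 / poss_per_36 * 100

print(round(extra_pts_per_36, 2), round(extra_pts_per_100, 2))
# prints: 1.55 2.3
```

That lines up with the "roughly 1.6 points per 36" and "about 2.3 points/100" figures in the text.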

So while greater accuracy translates directly to more points, something else is happening here indirectly. It’s possible these shooters are repeatedly the beneficiaries of coming in and out of the lineup with their team’s superstars. Although that seems unlikely, we can look at long-term adjusted plus-minus (APM) data and see the same pattern.

In Joe Ilardi’s 2003-2009 APM model, the best offensive players in the league are names we’d expect: Steve Nash, LeBron James, Kobe Bryant, Chris Paul and Dwyane Wade. It’s also littered with resident shooters, like Antawn Jamison (“stretch” power forward) at No. 7, Michael Redd (12th), Ray Allen (13th), Jason Terry (19th), Anthony Morrow (21st), Peja Stojakovic (22nd), Rashard Lewis (23rd), Danilo Gallinari (26th), Anthony Parker (40th), Mike Bibby (45th) and Sasha Vujacic (48th). Below is how the top-50 3-point shooters by percentage (minimum 500 attempts) scored in Ilardi’s APM study:

Player               3P%   Off APM
Jason Kapono         .454  -1.36
Steve Nash           .439   8.84
Anthony Parker       .424   2.55
Ben Gordon           .415   2.37
Raja Bell            .414  -1.22
Daniel Gibson        .412   1.40
Bobby Simmons        .410  -0.61
Brent Barry          .409   0.32
Matt Bonner          .409   1.00
Peja Stojakovic      .409   4.15
Bruce Bowen          .408  -4.99
Wally Szczerbiak     .406   1.08
Leandro Barbosa      .404   0.51
Kyle Korver          .404  -0.20
Eddie House          .403  -1.88
Mike Miller          .402   1.21
Chauncey Billups     .401   5.32
Matt Carroll         .400   0.39
Troy Murphy          .398   0.41
Roger Mason          .395  -0.58
Brian Cook           .394  -2.41
Danny Granger        .393   1.40
James Jones          .393   1.80
Ray Allen            .392   5.33
Steve Blake          .392  -0.08
Luther Head          .392  -1.21
Shane Battier        .391   0.33
Rashard Lewis        .390   3.91
Michael Finley       .389  -1.58
Kevin Martin         .389   1.10
Jameer Nelson        .389   0.14
Hedo Turkoglu        .389   1.89
Jason Terry          .387   4.41
Mo Williams          .386   1.39
Tyronn Lue           .384  -0.47
Jose Calderon        .383   0.71
Vladimir Radmanovic  .381   1.84
Michael Redd         .381   5.46
Kirk Hinrich         .380  -0.88
Mike Bibby           .379   2.31
Joe Johnson          .379   1.40
Dirk Nowitzki        .379   4.71
Mike James           .378  -0.80
Delonte West         .378  -0.39
Andrea Bargnani      .377  -1.24
Maurice Evans        .377   0.26
Mehmet Okur          .377  -0.48
Sasha Vujacic        .377   2.24
Manu Ginobili        .376   4.94
J.R. Smith           .376   1.98
Derek Fisher         .375  -1.60

The average Offensive APM in the entire study was -0.45. The average Offensive APM of the top-50 3-point shooters on the list is +1.08. 32 of the 50 were positive-impact players. The glaring outlier, Bruce Bowen, can be explained away quite nicely. We’re using the 3-point shot to approximate outside shooting ability (or the threat of outside shooting), and Bowen isn’t a very good outside shooter. Using available data, he took about one deep jumper a game from 2007 to 2009, converting at 38%. He shot 57.5% from the free throw line during that period, the worst of anyone on the list by nearly 8%.
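The +1.08 average can be recomputed directly from the Off APM column as printed above (the list as printed runs 51 entries):

```python
# Off APM values transcribed from the table above, in order.
off_apm = [
    -1.36, 8.84, 2.55, 2.37, -1.22, 1.40, -0.61, 0.32, 1.00, 4.15,
    -4.99, 1.08, 0.51, -0.20, -1.88, 1.21, 5.32, 0.39, 0.41, -0.58,
    -2.41, 1.40, 1.80, 5.33, -0.08, -1.21, 0.33, 3.91, -1.58, 1.10,
    0.14, 1.89, 4.41, 1.39, -0.47, 0.71, 1.84, 5.46, -0.88, 2.31,
    1.40, 4.71, -0.80, -0.39, -1.24, 0.26, -0.48, 2.24, 4.94, 1.98,
    -1.60,
]
mean = sum(off_apm) / len(off_apm)
print(round(mean, 2))  # prints 1.08
```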

We could further refine “good outside shooters” by looking at floor data on shooting from 16-23 feet if we wanted to. But despite the presence of someone like Bowen, 3-point shooting is a sufficient proxy for now to demonstrate the presence of the Spacing Effect.

Thinking Basketball Now Available on Amazon

Excited to announce that my new book, Thinking Basketball, is now available on Amazon in paperback.

The book is largely a culmination of the ideas on this blog over the years, using our own cognition to explore misconceptions about the NBA. It’s built on the concepts that have been presented on this blog (some of which I’ll try to re-upload this summer), as well as new research that was developed specifically for the book.

It would not exist without you, the reader, supporting this blog over the years and constantly participating to improve the ideas shared in this space. Thanks for reading and I hope you enjoy it!

Some core topics:

  • Averaging 50 points per game is rarely better than averaging 20
  • Why “Chokers” aren’t always chokers
  • How winning warps our memories, and thus our narratives about players and teams
  • The value of clutch play and closers
  • Building championship teams and the value of one-on-one play

Half-Court Math: Hack-a-Whoever, Isolation and Long 2’s

In my upcoming book, Thinking Basketball, I allude to certain instances where “low efficiency” isolation offense provides value for teams. Most of us compare a player’s efficiency to the overall team or league average, but that’s not quite how the math works, because the average half-court possession is worth less than the average overall possession.

In 2016, the typical NBA possession was worth about 1.06 points. That’s a sample that includes half-court possessions against a set defense, but also scoring attempts from:

  • transition
  • loose-ball fouls
  • intentional fouls
  • technical fouls

Transition is by far the largest subset of that group, accounting for 15% of possessions for teams, per Synergy Sports play-tracking estimations. Not surprisingly, transition chances, when the defense is not set, are worth far more than half-court chances. As are all of the free-throw shooting possessions that occur outside of the half-court offense.

Strip away those premium opportunities from transition and miscellaneous free throws and the 2016 league averaged 95 points per 100 half-court possessions. (All teams were between 7 and 14 points worse in the half-court than their overall efficiency.) Golden State, the best half-court offense in the league this year, tallied an offensive rating around 105, far off its overall number of 115 that analysts are used to seeing.

Transition vs Half Court Efficiency
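The strip-out above follows from a simple mixture formula: overall efficiency is a weighted blend of half-court possessions and "premium" ones (transition plus miscellaneous free throws). A sketch, where the overall efficiency is the league figure from the text but the premium share and premium efficiency are illustrative assumptions of mine, chosen to match the article's 95/100 half-court figure:

```python
# Overall ppp = (premium share * premium ppp) + (half-court share * half-court ppp)
# Solving for the half-court piece:
overall_ppp = 1.06     # 2016 league average, per the text
premium_share = 0.18   # assumed: transition (~15%) plus misc. free throws
premium_ppp = 1.56     # assumed; roughly implied by the article's numbers

halfcourt_ppp = (overall_ppp - premium_share * premium_ppp) / (1 - premium_share)
print(round(halfcourt_ppp * 100, 1))  # half-court offensive rating, per 100
```

Under those assumptions the half-court rating comes out to about 95 per 100, matching the league figure in the text.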

This has major implications for the math behind “Hack-A-Whoever.” If the defense is set, then, all things being equal, fouling someone who shoots over 50% from the free throw line is doing them a favor. One might think that a 53% free throw shooter (1.06 points per two-shot trip) is below league average on offense because of the overall offensive efficiency. But it’s actually well above league average against a set, half-court defense. (Other factors, like offensive rebounding and allowing the free-throw shooter’s team to set up on defense, complicate the equation.)

Said another way — fouling a 53% free throw shooter is similar to giving up a 53% 2-point attempt…which is woeful half-court defense.
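The expected-value comparison is just two free throws against the half-court baseline. A minimal sketch, using the 0.95 points-per-half-court-possession figure from the text:

```python
# Expected points from intentionally fouling vs. letting a set defense play on.
halfcourt_ppp = 0.95   # league-average half-court possession, per the text

def pts_from_two_free_throws(ft_pct):
    # Expected points from a two-shot trip, ignoring rebounds and and-ones.
    return 2 * ft_pct

breakeven_ft = halfcourt_ppp / 2   # FT% at which fouling is a wash
print(pts_from_two_free_throws(0.53), breakeven_ft)  # 1.06 vs 0.475
```

So against a set defense, fouling anyone above roughly a 47.5% free-throw shooter concedes more expected points than an average half-court possession.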

There could be other viable reasons to “Hack-A-Whoever,” such as breaking up an opponent’s rhythm or psychologically disrupting the fouled player. (These would be good strategic reasons to keep the rule, in my opinion.) But against a 50-60% foul shooter, coaches would still be making a short-term tradeoff, exchanging an inefficient defensive possession for other strategic gains.

This also has ramifications for isolation scorers and long 2-point shots. Isolation matchups that create around a point per possession in the half court — or “only” 50% true shooting — are indeed excellent possessions. If defenses don’t react accordingly, they will be burned by such efficiency in the half-court. As an example, San Antonio registered about 103 points per 100 half-court possessions this year, and combined it with a below-average transition attack to still finish with an offensive rating of 110, fourth-best in the league.

The same goes for the dreaded mid-range or long 2-pointer — giving these shots to excellent shooters from that range (around 50% conversion) is a subpar defensive strategy. And even a 35% 3-point shooter (1.05 points per shot) yields elite half-court offense.

So, when we talk about the Expected Value of certain strategies, mixing transition possessions together with half-court ones will warp the numbers. Sometimes, seemingly below-average efficiency is actually quite good.


How 2016 NBA Teams Differentiated Themselves on Offense

Dean Oliver’s Four Factors uses box score data to determine how teams are successful in key elemental areas. Instead of looking at box stats like turnovers and rebounding, what if we used different types of plays to determine a team’s offensive strengths? Synergy tracks a number of play types, but not all have a large impact on the game. Based on the 2016 data, the following were the most common play types this year:

  • 25% were pick-n-roll plays
  • 20% were spot-ups
  • 15% were in transition

Naturally, teams differentiate themselves from the pack based on the plays they run the most. The Lakers led the league in isolation plays, but their efficiency on those plays was below average, so they lost lots of ground on the average offense. The five categories from Synergy with the largest degree of differentiation were:*

  1. Pick-n-Roll (PnR)
  2. Spot Up
  3. Transition
  4. Post Up
  5. Off Screen

Below is a visual of how every team in the NBA this year fared in these five factors.

2016 Differentiation by Play Type

The y-axis represents the per-game differentiation based on efficiency of a given play type (relative to league average). For instance, if a team ran 820 post ups (10 per game) and averaged 0.10 points per play more than league average, they would generate an extra point per game.
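That per-game differentiation metric reduces to a one-line formula. A sketch, using the worked example from the text (the specific 1.07 and 0.97 points-per-play values are hypothetical inputs chosen to produce the +0.10 edge):

```python
# Per-game differentiation for one play type:
# plays run * (team ppp - league ppp), spread across an 82-game season.
def differentiation_per_game(plays, team_ppp, league_ppp, games=82):
    return plays * (team_ppp - league_ppp) / games

# The text's example: 820 post-ups at +0.10 points per play over average.
print(round(differentiation_per_game(820, 1.07, 0.97), 2))  # prints 1.0
```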

Not surprisingly, the most differentiating play type during the 2016 season was a Golden State spot-up shot. Of the 203 players with at least 100 spot-ups, Steph Curry was 2nd in efficiency at 1.49 points per play and splash brother Klay Thompson 15th at 1.18 points per play. (League average was 0.97 points per spot-up.) Let’s simplify the above visual and just focus on the final eight teams left in this year’s playoff field:

2016 Differentiation Final 8

Now it’s easier to see how the remaining teams stack up. The Warriors don’t really have a post-up game, but so what? They excel at everything else and created the most differentiation of any team in the league in three major categories (PnR, Spot Up and Off Screen). On the other hand, the Spurs were dominant in the post and excellent in their own right at spot-up plays, but they don’t do damage in transition. (San Antonio also led the league in “put backs” by a wide margin, generating over a point of separation in that category alone.) The East’s best team, Cleveland, was above-average at everything.

*Isolation plays would be the 6th major play type. However, no team in 2016 created a point of positive or negative differentiation from isolation plays, which accounted for 8% of all plays tracked during the season. 

Goodell’s Illogical and False Deflategate Statements

It turns out that Roger Goodell, Exponent and Ted Wells just aren’t very good at logic. Whether that’s due to severe defensiveness and a major confirmation bias or something else is irrelevant. I’m not going to go into legal details or CBA issues, but I will discuss the scientific and logical errors and inconsistencies from Goodell’s appeal ruling and the hearing itself in deflategate.

Falsehood No. 1 — Timing was accounted for in the statistical test

On pg 6 of his ruling, Goodell writes:

“In reaching this conclusion, I took into account Dean Snyder’s opinion that the Exponent analysis had ignored timing…however, both [Dr. Caligiuri and Dr. Steffey] explained how timing was, in fact, taken into account in both their experimental and statistical analysis.”

This is patently false. It is not an opinion of Dr. Snyder’s. It is a fact. And it is a fact agreed upon by Dr. Caligiuri and Dr. Steffey after much runaround and refusal to answer this question. In Dr. Caligiuri’s testimony on pg 361 of the hearing:

“So the reason you don’t see a timing effect that we concluded in the statistical analysis is because it’s being masked out by the variability in the data due to these other effects.”

And then later on pg 380:

Kessler: So the initial test you did to determine whether there was anything to study did not have a timing variable?

Caligiuri: Not specifically, no.

Steffey echoes this fact on pages 429 and 430:

Kessler: This one-structured model that you chose to present as your only structured model in this appendix and in the entire report, okay, has no timing variable in it, correct?

Steffey: There’s no term in there that says time effect.

Goodell is either misrepresenting the truth or he is very, very confused and was not able to understand this issue at the hearing. Either way, once and for all, timing is not accounted for in Exponent’s statistical analysis. It is a major confound, and it does change the results when timing is indeed accounted for.

(By the way, the Exponent scientists were attempting to claim that an ordering effect is the same thing as accounting for timing, but that is also wrong. First, an ordering effect can have different increments of time (as the Patriot and Colt balls do) and second, an ordering effect is independent of time, which is relevant in an instance where another variable, like wetness, would completely mitigate the presence of an ordering effect but not undo the effect of time.)

Falsehood No. 2 — Brady’s “extraordinary volume” of communication for ball prep

On pg 8 of his ruling, as part of discrediting Brady’s testimony, Goodell reasons:

“After having virtually no communications by cellphone for the entire regular season, on January 19, the day following the AFC Championship Game, Mr. Brady and Mr. Jastremski had four cellphone conversations, totaling more than 25 minutes, exchanged 12 text messages, and, at Mr. Brady’s direction, met in the ‘QB room,’ which Mr. Jastremski had never visited before…the need for such frequent communication beginning on January 19 is difficult to square with the fact that there apparently was no need to communicate by cellphone with Mr. Jastremski or to meet personally with him in the ‘QB room’ during the preceding twenty weeks.”

This is a serious mischaracterization of facts. Let’s ignore the basic fact that there wasn’t a media frenzy surrounding Jastremski’s domain in any of the previous 20 weeks. During the hearing, Brady explained that, for the Super Bowl, Jastremski needed to prepare approximately one hundred footballs, at least eight times his normal volume.

Furthermore, Brady testified that deflategate allegations surfaced on days when he was not at the stadium because of the Super Bowl break. Frankly, it would have been stranger if he didn’t call Jastremski. The hoopla over the visit to the QB room is also bizarre, since Brady said he simply didn’t want to look for him in the stadium. There is no justification for how Goodell ignores this evidence, even taking it further and writing on pg 9:

“The sharp contrast between the almost complete absence of communication through the AFC Championship Game and the extraordinary volume of communication during the three days following the AFC Championship Game undermines any suggestion that the communication addressed only preparation of footballs for the Super Bowl.”

Yet Brady testified, in front of Goodell, that they were discussing Super Bowl preparation (of 100 balls, not 12) and the issue of alleged tampering.

Logic Error No. 1 — It has never happened…but it has happened…but that doesn’t matter

On page 3 of his ruling, Goodell writes that:

“Mr. McNally’s unannounced removal of the footballs from the locker room was a substantial breach of protocol, one that Mr. Anderson had never before experienced. Other referees interviewed said…that [McNally] had not engaged in similar conduct in the games that they had worked at Gillette Stadium.”

So Goodell is saying that McNally grabbing the balls is a huge deal and, in fact, has never even happened before — which would make it impossible for this to have been a regular practice.

Thus, when analyzing text messages, Goodell ignores this information and believes that McNally’s references to “Deflator” (in May) and “needles” in October of 2014 are signs of a tampering scheme, but when trying to establish the severity of the situation he believes nothing like this has ever happened before.

Similarly, during the hearing (pg 307) Ted Wells admitted that he ignored the testimony of Rita Calendar and Paul Galanis — game day employees — who claimed that McNally took the balls to the field about half of the time without the officials. Wells doesn’t even think this issue is relevant, explaining that:

“I didn’t need to drill down and decide when he walked down the hall 50 percent of the time by himself or was this person right or that person right.”

Got all that? This has been happening since at least 2014, but this is the first time something like this has ever happened. And Wells thinks it doesn’t matter whether this ever happened before or not.

Logic Error No. 2 — Jastremski expects a 13 PSI ball despite a tampering scheme

On pg 278 of the hearing, Wells acknowledges that John Jastremski texted his significant other about the Jets game and said that he expected the footballs to be at 13 PSI. Amazingly, Wells believes he was telling the truth. Which creates yet another Wellsian logical impossibility.

How can Wells believe Jastremski expected the balls to be at 13 PSI for the Jets game and also believe that there was a scheme to deflate the balls below 12.5? It is a completely contradictory thought. (Similarly, the text Jastremski sent McNally was about the ref causing the balls to be 16 PSI in that game, not about why the balls weren’t properly deflated.)

This makes it logically impossible for there to have been a tampering scheme for that home game against the Jets. This either means that:

  1. There was no tampering scheme ever
  2. There was a tampering scheme, but only after October 2014
  3. The tampering was carried out inconsistently at home

The third explanation borders on preposterous, if only because the text still would have said something like “we should deflate every week from now on to avoid this!” The other two explanations make it impossible for the comments from May 2014 to be about deflating footballs. Yet Goodell follows suit and cites such messages as evidence of a tampering scheme (pg 10 of his ruling):

“Equally, if not more telling, is a text message earlier in 2014, in which Mr. McNally referred to himself as ‘the deflator.’”

Goodell, like Wells before him, omits that McNally claimed the reference was about weight loss, which may sound crazy until you consider that other people use the term for weight loss, including the NFL’s own network in 2009, and that McNally himself appears to make a weight-loss reference using the term “deflate” during the 2014 Patriots-Packers game in Green Bay. (McNally was watching the game on TV from his living room, and after seeing a shot of Jastremski on the broadcast, suddenly in a large, puffy jacket, texted him a message to “deflate and give someone that jacket.”)

Logic Error No. 3 — For the Colts, the Logo gauge matters. For the Patriots, it is impossible.

On pg 3 of Goodell’s ruling, he writes:

“Eleven of New England’s footballs were tested at halftime; all were below the prescribed air pressure range as measured on each of two gauges. Four of Indianapolis’s footballs were tested at halftime; all were within the prescribed air pressure range on at least one of the two gauges.”

First, this is bizarre, because it’s clear both sets of footballs lost pressure due to environmental factors. The Colts being “within the prescribed air pressure range” is simply due to their balls starting higher — Goodell knows it, you know it, every C-minus physics student in America knows it.
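The physics here is just Gay-Lussac's law: at fixed volume, absolute pressure scales with absolute temperature, so both teams' balls drop by a similar amount. A sketch using approximate temperatures discussed in the case (treat the exact locker-room and field temperatures as assumptions):

```python
# Pressure drop from temperature alone (Gay-Lussac's law at fixed volume).
# Gauge pressure (psig) must be converted to absolute pressure first.
def halftime_psig(pregame_psig, indoor_f=71.0, field_f=48.0, atm=14.7):
    t1 = (indoor_f - 32) * 5 / 9 + 273.15   # pregame temperature, Kelvin
    t2 = (field_f - 32) * 5 / 9 + 273.15    # on-field temperature, Kelvin
    return (pregame_psig + atm) * t2 / t1 - atm

patriots = halftime_psig(12.5)   # set at the low end of the legal range
colts = halftime_psig(13.0)      # set half a PSI higher
print(round(patriots, 2), round(colts, 2))  # prints 11.32 11.8
```

Under these assumed temperatures both sets of balls lose about 1.2 PSI; the Colts' balls stay closer to the legal range only because they started higher.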

But what’s more problematic, and yet another assault on common sense, is that Goodell later rules that Anderson had to have used the non-Logo gauge at halftime due to “unassailable” logic, yet here he references the Colts being “within the prescribed air pressure range” on a gauge he considers impossible to have been used.

Logic Error No. 4 — The balls were the same wetness

Wetness or moisture is a huge issue in the science. Yet here’s what Exponent scientist Dr. Caligiuri had to say about it as an alternative explanatory factor to tampering on pg 385 of the hearing:

“It is a possibility [that the Patriots’ balls could have been much wetter than the Colts’ balls because of the fact that the Patriots were on offense all the time with the balls], but there is no evidence that that occurred. The ball boys themselves said they tried to keep them as dry as possible.”

Brady’s attorney Jeffrey Kessler then asks him to confirm:

Kessler: Well, if you are on offense and you playing with the ball, can you keep it dry when it’s out there on the field?

Caligiuri: No

Kessler: Okay. So if the Patriots have those balls out there on the field, it’s plausible those balls were wetter, sir, right? You are under oath.

Caligiuri: Sure.

Kessler: Okay. And you didn’t test of that plausible assumption, right? Did you test for it?

Caligiuri: No…

Later Caligiuri states:

“We did not test for that because there was no basis to test for that.”

Yet, there is indisputable evidence that the Patriot balls were wetter. Namely, it was raining during the game, and the Patriots possessed the ball for essentially 17 consecutive minutes in real time, during the rain, to end the first half. Saying that there is no basis to test for that is a direct contradiction of the publicly available and undisputed information. Yet, on pages 383-384 of the hearing, Caligiuri says:

“Did we look at wetness as a variability…in the beginning, no we didn’t.”

Instead, he says they looked at “extremes.” This makes plenty of sense, except there are two giant problems. First, misting a football every 15 minutes with a hand spray and then immediately toweling it off is a nonsensical proxy for constant exposure to rain. Second, it does no good to create a range of possibilities and then not test the most likely possibility, namely that one set of footballs is on the wetter end of that range and the other is on the drier end.

Logic Error No. 5 — Evidence that inflation matters = evidence of a preference for deflation

Goodell has another breakdown in logic on pg 11, footnote 9:

“Even accepting Mr. Brady’s testimony that his focus with respect to game balls is on a ball’s ‘feel’ rather than its inflation level, there is ample evidence that the inflation level of the ball does matter to him.”

Yes, there is evidence that it matters if the ball is grossly overinflated. There’s no evidence that he wants it underinflated, or that reasonable inflation levels actually matter to him. None. It is a logical fallacy to think otherwise. It’s like saying “Mr. Brady complained about his food being too salty last night, therefore there is evidence that Mr. Brady really cares about having under-salted food.”

Logic Error No. 6 — Practical Significance

Finally, lost in all the discussion of statistical significance is the issue of practical significance. This is the area I really wish the NFLPA had attacked at the hearing, but they did not broach it at all. Ironically, it’s probably the easiest part of the science for a lay person to understand.

Let’s assume that we were 99.9999% certain that the Patriot balls were all 0.3 PSI below where they should have been at halftime based on temperature alone — right around the actual number we think they were, based on projections. That certainly does not mean that “tampering” is the only remaining explanation, and more importantly, tampering is not very likely if it offers no practical benefit.

What benefit would someone actually gain from a completely undetectable change in PSI? Remember, players have never even known there were PSI changes from temperature in the past.

In other words, even if there is statistical significance on data that incorporates measurement time (which there isn’t), what would that data be suggesting? That Brady can magically detect differences in footballs that others can’t (and yet despite this, does not care if balls on the road are not a few tenths below 12.5), or that some other factor, like wetness, wind, temperature difference, gauge variability, inaccurate memory, etc., is a more practical explanation?

For those who missed it, Exponent themselves discovered on the order of a few tenths of a PSI of difference between the Patriots’ actual halftime measurements and where they projected those measurements to be.

Bonus Logic Error — It had to be the Non-Logo gauge

I’m hesitant to discuss this Red Herring, because the difference between the Logo and Non-Logo gauge is negligible when comparing the Colt and Patriot measurements. And this makes total sense — shifting the Patriot balls down a few tenths should (and does) also shift the Colt balls down a few tenths. But let’s pause to appreciate the absurdity of this logic, and the doubling-down required to call it “unassailable.”

On pg 7, footnote 1, Goodell writes:

“I find unassailable the logic of the Wells Report and Mr. Wells’s testimony that the non-logo gauge was used because otherwise neither the Colt’s balls nor Patriots’ balls, when tested by Mr. Anderson prior to the game, would have measured consistently with the pressure at which each team had set their footballs prior to delivery to the game officials.”

Here’s what he’s referring to, echoed by Dr. Caligiuri on pg 364 of the hearing:

“Yes, he calculated, I rounded it up. 12.17, correct, okay. And then if you look at the Colts’ balls, if the same logo gauge was used, it’s reading 12.6, 12.7. We were told that the Patriots and the Colts were insistent that they delivered balls at 12 and a half and 13, which means, geez, looks like the logo gauge wasn’t used pre-game.”

OK, now let me assail it quickly — something that was already done at the hearing Goodell presided over. The Logo gauge is inaccurate (it reads too high), and the Non-Logo gauge is much closer to the “true” reading, as established by the batch of new gauges Exponent tested. Based on these two facts alone, Wells and Exponent concluded that it’s improbable the Logo gauge was used, because then the Colt and Patriot gauges would also have to be off by a similar amount, and that’s just, I mean, geez, that’s just insane.


Except for the pesky little problem that, according to the Wells Report, Exponent tested only one model, Wilson model CJ-01. A model they describe as being “similar” to the Non-Logo gauge! So their sample size for making these “unassailable” conclusions is really one.

But there’s more: Exponent discovered gauges can “drift,” or grow more inaccurate with use. It’s quite possible that the Patriot and Colt ballboys both had older gauges that had “drifted” to a similar degree. At the hearing, this was scoffed at because it would be coincidental that they were off by the same amount. Again, this doesn’t actually matter — it’s a Red Herring — but it demonstrates how poor these people are at basic logic. On pg 295, Wells said:

“Maybe lightning could strike and both the Colts and Patriots also had a gauge that just happened to be out of whack like the logo gauge. I rejected that.”

The Patriots claim to set balls at 12.6 PSI, but Anderson did not remember gauging them all at 12.6 in the pre-game. (He remembered 12.5, and had to re-inflate two balls that were under 12.5.) There are two likely explanations for this:

  1. The gloving procedure created some variability in the Patriot balls. This would make it more likely that the Logo gauge was used, based on Exponent’s logic.
  2. The Patriot gauge and Anderson’s pre-game gauge are off by about 0.15 PSI.

Either way, it’s impossible for the “lightning striking” concept to even apply (i.e., that the gauges were off by an identical amount). Using Wellsian logic — which means we ignore things like gloving or temperature changes from ball to ball — the very fact that the balls weren’t 12.6 as the Patriots say, but some were under 12.5 for Anderson, tells you that the two gauges are not identical. So there’s no need for “lightning to strike.”

Bonus Question: How closely did Roger Goodell read the Wells Report?

In his ruling, Goodell states that he relied on the factual and evidentiary findings of the Wells Report (pg 1) — but there are times during the appeal hearing when Goodell does not seem to know the basic facts of the case:

  • pg 49, he asks “John who?” when Brady is talking about John breaking the balls in. It’s possible this is Goodell’s way of confirming he is talking about John Jastremski, but given the context of Brady’s explanation, and Jastremski being one of a handful of central figures in the case, it’s bizarre that he has to ask who John is. Does he know about Jim and Dave too?
  • pg 61, in reference to the October Jets game, he says, “Just so I’m clear, the Jets game is in New York.” This is a huge detail to not understand as it relates to the 13 PSI text mentioned above.
  • pg 177, while Edward Snyder is discussing the halftime period, he interjects, “Just so I’m clear, you are saying it would take 4 minutes for 11 balls to be properly inflated? That’s your analysis or what analysis is that?” Here, Goodell reveals that he is completely unaware that the witnesses in the room at halftime provided those estimates to Wells, who relayed them to Exponent, and that those estimates are central to the scientific and statistical analysis in the case.
  • pg 180, in the discussion about “dry time” (vs moisture), Goodell asks in regards to moisture, “that’s a what-if, right?” How can the person ruling on the case, after reading a report that was designed to determine if environmental factors could explain the halftime measurements, ask if “rain” is a “what if” when it rained during the game?
  • pg 396, perhaps the clearest indication that Goodell either did not read or did not properly retain the information in the report is that he has no idea what the “gloving” issue is. This is the gloving referenced by Bill Belichick in his press conference and given an entire section by Exponent in their report.

Deflategate: Exponent’s Bias and the Master Error

With all of the publicized corrections to the science section of the Wells Report, I’ve been asked by more than one person whether Exponent, the author of said section, was simply incompetent, or whether they were biased. It’s a question that might have legal ramifications in the near future for Tom Brady.

As I’ll detail below, there is a body of evidence suggesting that Exponent’s report was not merely the result of bad science, but was conducted with a clear anti-Patriot bias. They repeatedly made errors, or examined only the possibilities that weakened the Patriots’ position, without ever making errors in the Patriots’ favor. The nature and frequency of these errors make it unlikely to be a coincidence. Furthermore, Exponent committed a major error in one of their key figures, an error that allowed them to report, incorrectly, an anti-Patriot conclusion back to Ted Wells. What exactly am I referring to?

Not accounting for time of halftime measurements

At a high level, the biggest methodological error Exponent commits is not properly accounting for the time differences of when the balls were measured at halftime. This leads to a nonsensical statistical test that they publish to establish “statistical significance.” The problem is, they knew about this factor. They too considered it a salient factor. They made multiple transient curves mapping how things change depending on when they were measured at halftime.

They didn’t stop there.

They dedicated an entire section (Table 13, page 58) to performing a mini-version of the analysis I present here, using periods of “average measurement time” to compare the difference between expected PSI and observed PSI at a given time.

Wells writes, on page 122:

“According to Exponent, the environmental conditions with the most significant impact on the halftime measurements were the temperature in the Officials Locker Room when the game balls were tested prior to the game and at halftime, the temperature on the field during the first half of the game, the amount of time elapsed between when the game balls were brought back to the Officials Locker Room at halftime and when they were tested, and whether the game balls were wet or dry when they were tested.”

So they thought a lot about the impact of the timing of halftime measurements. On page 57, in one of many mentions of this:

“A similar effect is seen in the game day simulation data; the average pressure rises as the average measurement time is increased.”

Again on page 62:

“Based on the transient curves explained above, one would expect that if the Patriots footballs were set to a consistent or relatively consistent starting pressure, the pressure would rise relatively consistently as they were tested later in the Locker Room Period.”

Yet they still published their p-values on page 11 and conducted analyses in the opening pages without considering time! This cannot be due to incompetence since they are keenly aware of and explicitly call out the importance of time on multiple occasions. On page 64, in their concluding statement, their second point cites these statistical tests as critical pieces of evidence supporting their conclusion. Unless different people prepared different parts of the report, this is evidence of a clear bias against the Patriots. But it’s also just the beginning.

Switching Fig. 26 to the extreme low temperature of 67 degrees

The transient curve used in Figure 24 to project Non-Logo gauge results uses a pre-game room temperature of 71 degrees. The HVAC on the day of the game was set between 71 and 74 degrees. But Exponent measured the temperature in the room where the balls were gauged by officials in the pre-game to range from 67-71 degrees. It was a good 30 degrees colder outside on the day Exponent measured, and there wasn’t the same game day activity where numerous people give off extra heat in the room.

When they project the Logo gauge results on the transient curve used in Fig. 26, they switch the pre-game temperature to 67 degrees, the extreme end of the plausible spectrum that produces the lowest Patriot reading. Their explanation for using 67 degrees is that it makes the Colt measurements align with the projections. This is a reasonable approach, given that the Colt balls “should” obey the laws of physics, but (a) it should not be the only scenario examined, and (b) they did not need to drop the pre-game temp all the way to 67 degrees to achieve it! Doing so only increased the appearance of guilt for the Patriots. The Colt readings are still viable and within Exponent’s “range” of what physics predicts even with a 69-degree pre-game temperature.

Misting the footballs to simulate rain

When accounting for water, as described on page 42 (footnote 36), footballs were sprayed every 15 minutes with a handheld spray bottle and then toweled off immediately. As has been demonstrated, this is a minimal attempt at simulating rain. This is critical to interpreting the results discussed below: Exponent’s wet curves between Figure 24 and Figure 26 show an additional effect of about 0.25 PSI due to wetness simply from running the simulation again. Yet, as we’ll see in a second, they cannot imagine how the Patriot footballs could be a few tenths below where temperature-only projections expected them to be.

Not calculating the actual PSI differences from expected

The mini experiment Exponent runs in Table 13 produces the following results: at the earliest plausible time (let’s use the 4:17 reading), Patriot averages should have been 11.54 PSI on the Non-Logo gauge. The actual Master-adjusted halftime average on the Non-Logo gauge was 11.09 PSI. So the Patriots are -0.45 PSI from expected. The Colts Non-Logo average was 12.29 according to Table 11 on pg 45. (This is because Exponent uses the “switch” option to correct for the anomalous 3rd Colt ball.) Therefore, the Patriot balls are about 0.4 PSI below the Colt balls relative to expected. Is that clear from Table 13?

Exponent Table 13

Not only is it unclear, Exponent never even publishes the differences. They fail to calculate or discuss perhaps the most specific and important detail of all of their experimentation, instead simply noting that the Colt readings are in-line with these simulations and the Patriot readings are not. This is not incompetence, it is a bias of omission. More importantly, are the Colt measurement times in Table 13 even plausible?
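The deviation arithmetic the report leaves implicit can be laid out explicitly. A minimal sketch in Python, using only the values cited above (Tables 11 and 13); the ~0.4 PSI relative gap rests on the report’s own observation that the Colt readings sat roughly on expectation:

```python
# Values cited above (Non-Logo gauge, Master-adjusted, in PSI).
pats_expected = 11.54   # Table 13 projection at the 4:17 average measurement time
pats_observed = 11.09   # actual Patriot halftime average
colts_observed = 12.29  # actual Colt halftime average (Table 11, "switch" option)

# How far below expectation were the Patriot balls?
pats_deviation = pats_observed - pats_expected
print(f"Patriots vs. expected: {pats_deviation:+.2f} PSI")  # prints -0.45 PSI

# Per the report, the Colt readings were roughly in line with the simulations,
# so relative to expected, the Patriot balls sit about 0.4 PSI below the Colt balls.
```

This is the differential Exponent never publishes, even though every input to it appears in their own tables.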

Assuming the Colt balls are measured before the Patriot balls

Exponent assumes, contrary to the evidence, that the Colt balls were gauged before the 11 Patriot balls were reinflated. This is yet another anti-Patriot “error” or instance where they refuse to examine other plausible scenarios. The repeated and consistent manner in which this happens is hard to chalk up to coincidental incompetence.

Wells does not explicitly state that the Colt balls were gauged before the Patriot balls were re-inflated. Exponent should have asked about this, and should have clearly stated it if they were provided such information. If not, they should have, “to be fair,” at least considered the possibility that the Colt balls were gauged later in the locker room period as an explanation for the few tenths of air pressure difference.

Burying the Logo and Non-Logo average PSI results

So, what happens if they were to explicitly note the PSI differences in their table and include Colt measurements at 11 or 12 minutes, the times at which they were likely gauged?

Table 13 Updated

An updated version of Exponent’s table 13, showing Non-Logo Gauge Master-Adjusted results with a 71-degree pre-game temperature. This table includes a later measurement time for the Colts as well as explicitly calling out the differences between the expected and observed halftime values.

Now, for example, it’s crystal clear that an approximate 4.5-minute measurement time for New England and an 11-minute measurement time for Indianapolis result in a 0.3 PSI difference on the Non-Logo gauge between the Patriot and Colt balls. This is similar to what has been observed in more detailed analyses.

Forget the inclusion of a later Colt measurement, though. Why doesn’t Exponent call out that differential, since it’s perhaps the single most salient data point in their entire report? Without any corrections, it would reveal differences of a few tenths of a PSI between the control (Colt) and Patriot Non-Logo readings. Would publishing that number have impacted people’s reactions to their conclusions?

What about the Logo gauge experiment in Table 14? The Patriot Master-adjusted Logo halftime average value was 11.21 PSI, hidden in the paragraph on the following page, meaning that their experiment again found Patriot balls 0.3-0.4 PSI below expected on the Logo gauge, with the pre-game temperature at 67 degrees.

Table 14 Updated

An updated version of Exponent’s table 14, showing Logo Gauge Master-Adjusted results with a 67-degree pre-game temperature. This table includes a later measurement time for the Colts as well as explicitly calling out the differences between the expected and observed halftime values.

Could water account for that small difference? Or a different temperature? Placing the pre-game temperature at something like 69 degrees will bring the Patriot balls about 0.1 PSI closer to expected. Again, this is something Exponent conveniently does not even consider, despite providing a plausible temperature range of 67-74 degrees and running misting tests that demonstrate an effect of wetness.

The Master Error — failing to use master projections for master results

And then there’s this enormous error.

In Figure 26 (a figure recycled again in Figure 30), Exponent used a Master-adjusted transient curve to demonstrate where the footballs are projected to be as they heat up at halftime. Only they fail to present an adjusted curve! Figure 26 is simply wrong.

The curve shows a dry starting halftime value of over 11.5 PSI for the expected Patriot values. But a Master-adjusted Patriot ball would actually be 12.17 PSI in the pre-game according to Exponent. A dry football is expected to be 11.20 PSI at 48 degrees if it were set at 12.17 PSI in a 67 degree environment in the pre-game, as Exponent is attempting to model. The graph is not master-adjusted, even though Exponent claims it is. It is a clear error and needs to be corrected.
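The 12.17-to-11.20 projection is straight Ideal Gas Law at constant volume: absolute pressure scales with absolute temperature. A minimal check, assuming sea-level atmospheric pressure of 14.7 psi to convert gauge readings to absolute (the exact constant Exponent used is not stated here):

```python
def f_to_kelvin(temp_f: float) -> float:
    """Convert degrees Fahrenheit to Kelvin."""
    return (temp_f - 32) * 5 / 9 + 273.15

def project_psi(psi_gauge: float, start_f: float, end_f: float, atm: float = 14.7) -> float:
    """Project a gauge pressure to a new temperature (Ideal Gas Law, fixed
    volume): absolute pressure scales with absolute temperature."""
    return (psi_gauge + atm) * f_to_kelvin(end_f) / f_to_kelvin(start_f) - atm

# A Master-adjusted 12.17 PSI pre-game reading set in a 67-degree room,
# projected to the 48-degree field temperature:
print(round(project_psi(12.17, 67, 48), 2))  # prints 11.2
```

The result matches the 11.20 PSI figure above, so the corrected starting value follows directly from physics; it is the published curve that is inconsistent.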

What happens when it is corrected?
Figure 26, corrected

The Logo scenario that Exponent presents to support its case suddenly contradicts it. It makes their primary conclusion on page 55 simply wrong:

“Based on the above conclusions, although the relative ‘explainability’ of the results from Game Day are dependent on which gauge was used by Walt Anderson prior to the game, given the most likely timing of events during halftime, the Patriots halftime measurements do not appear to be explained by the environmental factors tested, regardless of the gauge used.”

Correcting this huge error would fundamentally alter this conclusion.

Incorrectly claiming that the pre-game temperature is set to help the Patriots

They continue to write, on page 54, that

“it is important again to note that values for the pre-game and halftime locker room temperatures shown in Figure 27 put the Patriots transient curves at their lowest possible positions.”

But this is completely backwards — yet another anti-Patriot error. To generate the lowest starting transient curve within the HVAC parameters, the pre-game temperature would have to be 74 degrees, producing a starting halftime value of 10.86 PSI. 67 degrees actually produces the worst starting value for the Patriot differentials.
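The direction of this error can be verified with the same constant-volume Ideal Gas Law projection (a sketch assuming 14.7 psi atmospheric pressure and the 48-degree field temperature): the warmer the pre-game room, the larger the drop, so 74 degrees, not 67, yields the lowest expected halftime value.

```python
def f_to_kelvin(temp_f: float) -> float:
    return (temp_f - 32) * 5 / 9 + 273.15

def project_psi(psi_gauge: float, start_f: float, end_f: float, atm: float = 14.7) -> float:
    # Ideal Gas Law at fixed volume: absolute pressure scales with absolute temperature.
    return (psi_gauge + atm) * f_to_kelvin(end_f) / f_to_kelvin(start_f) - atm

# Expected halftime starting value for a 12.17 PSI pre-game reading, at the
# two ends of the HVAC-plausible pre-game room temperature range:
for room_f in (67, 74):
    print(f"{room_f} F pre-game -> {project_psi(12.17, room_f, 48):.2f} PSI expected")
# 67 F pre-game -> 11.20 PSI expected  (highest projection: worst case for the Patriots)
# 74 F pre-game -> 10.86 PSI expected  (lowest projection: best case for the Patriots)
```

A lower expected value shrinks the apparent Patriot shortfall, so 67 degrees is the choice least favorable to the Patriots, the opposite of what Exponent claims.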

Inability to conceive of wetness as the explainable natural factor

The icing on the cake is that the differences in the Colt and Patriot measurements are in all likelihood the difference in their exposure to rain. For the uninitiated, this can be clearly seen in the gradient of differences among the Patriot balls that suggests some Patriot balls were exposed to more rain, and in particular those balls on the final drive of the half.

Yet on page 55, when discussing wetness as a factor, they write:

“According to Paul, Weiss, [a majority of wet balls] were most likely not present on Game Day.”

How can they say that, given the factors around wetness? They mention nothing of the Patriot balls being used more, and being in play at the end of the half. This is yet another anti-Patriot oversight. Remember, they presented back-to-back graphics in which water made on the order of 0.2-0.4 PSI of difference from the “dry” condition, based on their own misting procedure. Despite the game being played in rain, Exponent concludes that results of the exact same magnitude cannot be explained by rain.


All told, the only time they seem to do something that isn’t anti-Patriot is when they create a row in Tables 13 and 14 for average measurement times that are improbably early in the locker room period. Otherwise, every misstep, omission and blatant error is decidedly anti-Patriot, and often committed in inexplicable fashion. In summary, Exponent demonstrates the following biases by:

  • Failing to account for halftime measurement times when publishing p-values, despite knowing that time of measurement is critical
  • Switching to an (unnecessarily) extremely low temperature projection for the Patriot Logo gauge
  • Misting footballs to simulate rain (and immediately toweling them off)
  • Not publishing the actual PSI differences between halftime measurements and expected measurements
  • Assuming the Colt balls are measured improbably early in the locker room period, and not considering later measurement times
  • Presenting Figure 26 and 30 with completely false transient curves, thereby altering their conclusions vis-a-vis the Logo gauge
  • Incorrectly claiming the pre-game indoor temperature of 67 degrees is a best-case for the Patriots
  • Not considering wetness as an explanation for the few tenths difference despite finding a few tenths difference from wetness