Half-Court Math: Hack-a-Whoever, Isolation and Long 2’s

In my upcoming book, Thinking Basketball, I allude to certain instances where “low efficiency” isolation offense provides value for teams. Most of us compare a player’s efficiency to the overall team or league average, but that’s not quite how the math works, because the average half-court possession is worth less than the average overall possession.

In 2016, the typical NBA possession was worth about 1.06 points. That’s a sample that includes half-court possessions against a set defense, but also scoring attempts from:

  • transition
  • loose-ball fouls
  • intentional fouls
  • technical fouls

Transition is by far the largest subset of that group, accounting for about 15% of team possessions, per Synergy Sports play-tracking estimates. Not surprisingly, transition chances, when the defense is not set, are worth far more than half-court chances, and so are the free-throw possessions that occur outside of the half-court offense.

Strip away those premium opportunities from transition and miscellaneous free throws and the 2016 league averaged 95 points per 100 half-court possessions. (All teams were between 7 and 14 points worse in the half-court than their overall efficiency.) Golden State, the best half-court offense in the league this year, tallied an offensive rating around 105, far off its overall number of 115 that analysts are used to seeing.

Transition vs Half Court Efficiency

This has major implications for the math behind “Hack-A-Whoever.” If the defense is set, then, all things being equal, fouling someone who shoots over 50% from the free throw line is doing them a favor. One might think that a 53% free throw shooter at the line (1.06 points per trip of two attempts) is below league average on offense because of the overall offensive efficiency. But it’s actually well above league average against a set, half-court defense. (Other factors, like offensive rebounding and allowing the free-throw shooter’s team to set up on defense, complicate the equation.)

Said another way — fouling a 53% free throw shooter is similar to giving up a 53% 2-point attempt…which is woeful half-court defense.
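To make the trade-off concrete, here is a minimal sketch of the arithmetic, using the 0.95 points-per-half-court-possession and 1.06 points-per-overall-possession figures cited above; the break-even free throw percentage is simply derived from them:

```python
# Expected points from intentionally fouling (two free throws) versus an
# average 2016 half-court possession (~0.95 points) or overall possession (~1.06).
HALF_COURT_AVG = 0.95   # league points per half-court possession, 2016
OVERALL_AVG = 1.06      # league points per possession overall, 2016

def points_per_trip(ft_pct):
    """Expected points from a two-shot trip to the line."""
    return 2 * ft_pct

for ft_pct in (0.45, 0.50, 0.53, 0.60):
    pts = points_per_trip(ft_pct)
    verdict = "above" if pts > HALF_COURT_AVG else "below"
    print(f"{ft_pct:.0%} FT shooter: {pts:.2f} points per trip ({verdict} half-court average)")

# Break-even against a set half-court defense
print(f"Break-even FT%: {HALF_COURT_AVG / 2:.1%}")  # roughly 47-48%
```

The break-even works out to roughly 47-48% from the line, which is why even a mediocre foul shooter comfortably clears the half-court bar.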

There could be other viable reasons to “Hack-A-Whoever,” such as breaking up an opponent’s rhythm or psychologically disrupting the fouled player. (These would be good strategic reasons to keep the rule, in my opinion.) But assuming he was a 50-60% foul shooter, coaches would still be making a short-term tradeoff, exchanging an inefficient defensive possession for other strategic gains.

This also has ramifications for isolation scorers and long 2-point shots. Isolation matchups that create around a point per possession in the half court — or “only” 50% true shooting — are indeed excellent possessions. If defenses don’t react accordingly, they will be burned by such efficiency in the half-court. As an example, San Antonio registered about 103 points per 100 half-court possessions this year, and combined it with a below-average transition attack to still finish with an offensive rating of 110, fourth-best in the league.

The same goes for the dreaded mid-range or long 2-pointer — giving these shots to excellent shooters from that range (around 50% conversion) is a subpar defensive strategy. And even a 35% 3-point shooter (1.05 points per shot) yields elite half-court offense.

So, when we talk about the Expected Value of certain strategies, mixing transition possessions together with half-court ones will warp the numbers. Sometimes, seemingly below-average efficiency is actually quite good.

 

How 2016 NBA Teams Differentiated Themselves on Offense

Dean Oliver’s Four Factors uses box score data to determine how teams are successful in key elemental areas. Instead of looking at box stats like turnovers and rebounding, what if we used different types of plays to determine a team’s offensive strengths? Synergy tracks a number of play types, but not all have a large impact on the game. Based on the 2016 data on nba.com, the following were the most common play types this year:

  • 25% were pick-n-roll plays
  • 20% were spot-ups
  • 15% were in transition

Naturally, teams differentiate themselves from the pack most on the plays they run most often; the Lakers led the league in isolation plays, but their efficiency on those plays was below average, so they lost a lot of ground to the average offense. The five categories from Synergy with the largest degree of differentiation were:*

  1. Pick-n-Roll (PnR)
  2. Spot Up
  3. Transition
  4. Post Up
  5. Off Screen

Below is a visual of how every team in the NBA this year fared in these five factors.

2016 Differentiation by Play Type

The y-axis represents the per-game differentiation based on efficiency of a given play type (relative to league average). For instance, if a team ran 820 post ups (10 per game) and averaged 0.10 points per play more than league average, they would generate an extra point per game.
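Here is the same per-game differentiation calculation as a short sketch. The 820 post-ups and 0.10 points-per-play edge are the illustrative numbers from the sentence above; the two points-per-play values are stand-ins rather than Synergy’s actual figures:

```python
# Per-game differentiation for a play type:
# (plays per game) x (team points per play - league points per play)
def differentiation_per_game(plays, games, team_ppp, league_ppp):
    plays_per_game = plays / games
    return plays_per_game * (team_ppp - league_ppp)

# The example from the text: 820 post-ups over an 82-game season,
# run at 0.10 points per play better than league average.
extra = differentiation_per_game(plays=820, games=82, team_ppp=1.05, league_ppp=0.95)
print(round(extra, 2))  # -> 1.0 extra point per game generated from post-ups
```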

Not surprisingly, the most differentiating play type during the 2016 season was a Golden State spot-up shot. Of the 203 players with at least 100 spot-ups, Steph Curry was 2nd in efficiency at 1.49 points per play and splash brother Klay Thompson 15th at 1.18 points per play. (League average was 0.97 points per spot-up.) Let’s simplify the above visual and just focus on the final eight teams left in this year’s playoff field:

2016 Differentiation Final 8

Now it’s easier to see how the remaining teams stack up. The Warriors don’t really have a post-up game, but so what? They excel at everything else and created the most differentiation of any team in the league in three major categories (PnR, Spot Up and Off Screen). On the other hand, the Spurs were dominant in the post and excellent in their own right at spot-up plays, but they don’t do damage in transition. (San Antonio also led the league in “put backs” by a large margin, generating over a point of separation in that category alone.) The East’s best team, Cleveland, was above-average at everything.

*Isolation plays would be the 6th major play type. However, no team in 2016 created a point of positive or negative differentiation from isolation plays, which accounted for 8% of all plays tracked during the season. 

Goodell’s Illogical and False Deflategate Statements

It turns out that Roger Goodell, Exponent and Ted Wells just aren’t very good at logic. Whether that’s due to severe defensiveness and a major confirmation bias or something else is irrelevant. I’m not going to go into legal details or CBA issues, but I will discuss the scientific and logical errors and inconsistencies from Goodell’s appeal ruling and the hearing itself in deflategate.

Falsehood No. 1 — Timing was accounted for in the statistical test

On pg 6 of his ruling, Goodell writes:

“In reaching this conclusion, I took into account Dean Snyder’s opinion that the Exponent analysis had ignored timing…however, both [Dr. Caligiuri and Dr. Steffey] explained how timing was, in fact, taken into account in both their experimental and statistical analysis.”

This is patently false. It is not an opinion of Dr. Snyder’s. It is a fact. And it is a fact agreed upon by Dr. Caligiuri and Dr. Steffey after much runaround and refusal to answer this question. In Dr. Caligiuri’s testimony on pg 361 of the hearing:

“So the reason you don’t see a timing effect that we concluded in the statistical analysis is because it’s being masked out by the variability in the data due to these other effects.”

And then later on pg 380:

Kessler: So the initial test you did to determine whether there was anything to study did not have a timing variable?

Caligiuri: Not specifically, no.

Steffey echoes this fact on page 429 and 430:

Kessler: This one-structured model that you chose to present as your only structured model in this appendix and in the entire report, okay, has no timing variable in it, correct?

Steffey: There’s no term in there that says time effect.

Goodell is either misrepresenting the truth or he is very, very confused and was not able to understand this issue at the hearing. Either way, once and for all, timing is not accounted for in Exponent’s statistical analysis. It is a major confound, and it does change the results when timing is indeed accounted for.

(By the way, the Exponent scientists were attempting to claim that an ordering effect is the same thing as accounting for timing, but that is also wrong. First, an ordering effect can have different increments of time (as the Patriot and Colt balls do) and second, an ordering effect is independent of time, which is relevant in an instance where another variable, like wetness, would completely mitigate the presence of an ordering effect but not undo the effect of time.)

Falsehood No. 2 — Brady’s “extraordinary volume” of communication for ball prep

On pg 8 of his ruling, as part of discrediting Brady’s testimony, Goodell reasons:

“After having virtually no communications by cellphone for the entire regular season, on January 19, the day following the AFC Championship Game, Mr. Brady and Mr. Jastremski had four cellphone conversations, totaling more than 25 minutes, exchanged 12 text messages, and, at Mr. Brady’s direction, met in the ‘QB room,’ which Mr. Jastremski had never visited before…the need for such frequent communication beginning on January 19 is difficult to square with the fact that there apparently was no need to communicate by cellphone with Mr. Jastremski or to meet personally with him in the ‘QB room’ during the preceding twenty weeks.”

This is a serious mischaracterization of facts. Let’s set aside the basic fact that there wasn’t a media frenzy surrounding Jastremski’s domain in any of the previous 20 weeks. During the hearing, Brady explained that, for the Super Bowl, Jastremski needed to prepare approximately one hundred footballs, at least eight times his normal volume.

Furthermore, Brady testified that deflategate allegations surfaced on days when he was not at the stadium because of the Super Bowl break. Frankly, it would have been stranger if he didn’t call Jastremski. The hoopla over the visit to the QB room is also bizarre, since Brady said he simply didn’t want to look for him in the stadium. There is no justification for how Goodell ignores this evidence, even taking it further and writing on pg 9:

“The sharp contrast between the almost complete absence of communication through the AFC Championship Game and the extraordinary volume of communication during the three days following the AFC Championship Game undermines any suggestion that the communication addressed only preparation of footballs for the Super Bowl…”

Yet Brady testified, in front of Goodell, that they were discussing Super Bowl preparation (of 100 balls, not 12) and the issue of alleged tampering.

Logic Error No. 1 — It has never happened…but it has happened…but that doesn’t matter

On page 3 of his ruling, Goodell writes that:

“Mr. McNally’s unannounced removal of the footballs from the locker room was a substantial breach of protocol, one that Mr. Anderson had never before experienced. Other referees interviewed said…that [McNally] had not engaged in similar conduct in the games that they had worked at Gillette Stadium.”

So Goodell is saying that McNally grabbing the balls is a huge deal and in fact, it has never even happened before! Which would then make it impossible for this to have been a regular practice.

Thus, when analyzing text messages, Goodell ignores this information and believes that McNally’s references to “Deflator” (in May) and “needles” in October of 2014 are signs of a tampering scheme, but when trying to establish the severity of the situation he believes nothing like this has ever happened before.

Similarly, during the hearing (pg 307) Ted Wells admitted that he ignored the testimony of Rita Calendar and Paul Galanis — game day employees — who claimed that McNally took the balls to the field about half of the time without the officials. Wells doesn’t even think this issue is relevant, explaining that:

“I didn’t need to drill down and decide when he walked down the hall 50 percent of the time by himself or was this person right or that person right.”

Got all that? This has been happening since at least 2014, but this is the first time something like this has ever happened. And Wells thinks it doesn’t matter whether this ever happened before or not.

Logic Error No. 2 — Jastremski expects a 13 PSI ball despite a tampering scheme

On pg 278 of the hearing, Wells acknowledges that John Jastremski texted his significant other about the Jets game and said that he expected the footballs to be 13 PSI. Amazingly, Wells believes he is telling the truth. Which creates yet another Wellsian logical impossibility.

How can Wells believe Jastremski expected the balls to be at 13 PSI for the Jets game and believe that there was a scheme to deflate the balls below 12.5? It is a completely contradictory thought. (This is similar to Jastremski’s text message that he sent to McNally about the ref causing the balls to be 16 PSI in that game, and not a message to McNally about why the balls weren’t properly deflated.)

This makes it logically impossible for there to have been a tampering scheme for that home game against the Jets. This means one of the following:

  1. There was no tampering scheme ever
  2. There was a tampering scheme, but only after October, 2014
  3. The tampering was carried out inconsistently at home

The third explanation borders on preposterous, namely because the text still would have said something like “we should deflate every week from now on to avoid this!” The other two explanations make it impossible for the comments from May, 2014 to be about deflating footballs. Yet Goodell follows suit and cites such messages as evidence of a tampering scheme (pg 10 of his ruling):

“Equally, if not more telling, is a text message earlier in 2014, in which Mr. McNally referred to himself as “the deflator.”

Goodell, like Wells before him, omits that McNally claimed the reference was about weight loss, which may sound crazy until you consider that other people use the term for weight loss, including the NFL’s own network in 2009, and that McNally himself appears to make a reference to weight loss using the term “deflate” during the Patriots-Packers game in Green Bay in 2014. (McNally was watching the game on TV from his living room and, after seeing a picture of Jastremski suddenly in a large, puffy jacket, texted him a message to “deflate and give someone that jacket.”)

Logic Error No. 3 — For the Colts, the Logo gauge matters. For the Patriots, it is impossible.

On pg 3 of Goodell’s ruling, he writes:

“Eleven of New England’s footballs were tested at halftime; all were below the prescribed air pressure range as measured on each of two gauges. Four of Indianapolis’s footballs were tested at halftime; all were within the prescribed air pressure range on at least one of the two gauges.”

First, this is bizarre, because it’s clear both sets of footballs lost pressure due to environmental factors. The Colts being “within the prescribed air pressure range” is simply due to their balls starting higher — Goodell knows it, you know it, every c-minus physics student in America knows it.

But what’s more problematic, and yet another assault on common sense, is that Goodell later rules that Anderson had to have used the non-logo gauge at halftime due to what he calls unassailable logic, but here he references the Colts being “within the prescribed air pressure range” on a gauge he considers impossible to have been used.

Logic Error No. 4 — The balls were the same wetness

Wetness or moisture is a huge issue in the science. Yet here’s what Exponent scientist Dr. Caligiuri had to say about it as an alternative explanatory factor to tampering on pg 385 of the hearing:

“It is a possibility [that the Patriots’ balls could have been much wetter than the Colts’ balls because of the fact that the Patriots were on offense all the time with the balls], but there is no evidence that that occurred. The ball boys themselves said they tried to keep them as dry as possible. “

Brady’s attorney Jeffrey Kessler then asks him to confirm:

Kessler: Well, if you are on offense and you playing with the ball, can you keep it dry when it’s out there on the field?

Caligiuri: No

Kessler: Okay. So if the Patriots have those balls out there on the field, it’s plausible those balls were wetter, sir, right? You are under oath.

Caligiuri: Sure.

Kessler: Okay. And you didn’t test of that plausible assumption, right? Did you test for it?

Caligiuri: No…

Later Caligiuri states:

“We did not test for that because there was no basis to test for that.”

Yet, there is indisputable evidence that the Patriot balls were wetter. Namely, it was raining during the game and the Patriots possessed the ball for essentially 17 consecutive minutes in real time, during the rain, to end the first half. Saying that there is no basis to test for that is a direct contradiction of the publicly available and undisputed information. Yet, on pages 383-384 of the hearing, Caligiuri says:

“Did we look at wetness as a variability…in the beginning, no we didn’t.”

Instead, he says they looked at “extremes.” This makes plenty of sense, except there are two giant problems. First, misting a football every 15 minutes with a hand spray and then immediately toweling it off is a nonsensical proxy for constant exposure to rain. Second, it does no good to create a range of possibilities and then not test the most likely possibility, namely that one set of footballs is on the wetter end of that range and the other is on the drier end.

Logic Error No. 5 — Evidence that inflation matters = evidence that deflation matters, or is preferred

Goodell has another breakdown in logic on pg 11, footnote 9:

“Even accepting Mr. Brady’s testimony that his focus with respect to game balls is on a ball’s ‘feel’ rather than its inflation level, there is ample evidence that the inflation level of the ball does matter to him.”

Yes, there is evidence that it matters if the ball is grossly overinflated. There’s no evidence that he wants it underinflated, or that reasonable inflation levels actually matter to him. None. It is a logical fallacy to think otherwise. It’s like saying “Mr. Brady complained about his food being too salty last night, therefore there is evidence that Mr. Brady really cares about having under-salted food.”

Logic Error No. 6 — Practical Significance

Finally, lost in all the discussion of statistical significance is the issue of practical significance. This is the area that I really wish the NFLPA had attacked at the hearing, but they did not broach it at all. Ironically, it’s probably the easiest part of the science for the lay person to understand.

Let’s assume that we were 99.9999% certain that the Patriot balls were all 0.3 PSI below where they should have been at halftime based on temperature alone — right around the actual number we think they are based on projections. That certainly does not mean that “tampering” is the only explanation, and more importantly, tampering is not a likely explanation if it confers no practical benefit.

What benefit would someone actually gain from a completely undetectable change in PSI? Remember, players have never even known there were PSI changes from temperature in the past.

In other words, even if there is statistical significance on data that incorporates measurement time (which there isn’t), what would that data be suggesting? That Brady can magically detect differences in footballs that others can’t (and yet despite this, does not care if balls on the road are not a few tenths below 12.5), or that some other factor, like wetness, wind, temperature difference, gauge variability, inaccurate memory, etc., is a more practical explanation?

For those who missed it, Exponent themselves discovered on the order of a few tenths of a PSI of difference between the Patriots’ actual halftime measurements and where they projected those measurements to be.

Bonus Logic Error — It had to be the Non-Logo gauge

I’m hesitant to discuss this Red Herring, because the difference between the Logo and Non-Logo gauge is negligible when comparing the Colt and Patriot measurements. And this makes total sense — shifting the Patriot balls down a few tenths should (and does) also shift the Colt balls down a few tenths. But let’s pause to appreciate the absurdity of this logic, and of then doubling down by calling it “unassailable.”

On pg 7, footnote 1, Goodell writes:

“I find unassailable the logic of the Wells Report and Mr. Wells’s testimony that the non-logo gauge was used because otherwise neither the Colt’s balls nor Patriots’ balls, when tested by Mr. Anderson prior to the game, would have measured consistently with the pressure at which each team had set their footballs prior to delivery to the game officials.”

Here’s what he’s referring to, echoed by Dr. Caligiuri on pg 364 of the hearing:

“Yes, he calculated, I rounded it up. 12.17, correct, okay. And then if you look at the Colts’ balls, if the same logo gauge was used, it’s reading 12.6, 12.7. We were told that the Patriots and the Colts were insistent that they delivered balls at 12 and a half and 13, which means, geez, looks like the logo gauge wasn’t used pre-game.”

OK, now let me assail it quickly — something that was already done at the hearing that Goodell presided over. The Logo gauge is inaccurate (reads too high) and the Non-Logo gauge is much closer to the “true” reading. Exponent tested a bunch of new gauges. Based on these two facts alone, Wells and Exponent have concluded that it’s improbable the Logo gauge was used, because then the Colt and Patriot gauges would also have to be off by a similar amount, and that’s just, I mean, geez, that’s just insane.

Right?

Except for the pesky little problem that according to the Wells Report, Exponent tested one model only, Wilson model CJ-01. A model they describe as being “similar” to the Non-Logo gauge! So their sample size to make these “unassailable” conclusions is really one.

But there’s more: Exponent discovered gauges can “drift,” or grow more inaccurate with use. It’s quite possible that the Patriot and Colt ballboys both have older gauges that have “drifted” to a similar degree. At the hearing, this was scoffed at because it would be coincidental that they were off by the same amount. Again, this doesn’t actually matter — it’s a Red Herring — but it demonstrates how poor these people are at basic logic. On pg 295, Wells said:

“Maybe lightning could strike and both the Colts and Patriots also had a gauge that just happened to be out of whack like the logo gauge. I rejected that.”

The Patriots claim to set balls at 12.6 PSI, but Anderson did not remember gauging them all at 12.6 in the pre-game. (He remembered 12.5, and had to re-inflate two balls that were under 12.5.) There are two likely explanations for this:

  1. The gloving procedure created some variability in the Patriot balls. This would make it more likely that the Logo gauge was used, based on Exponent’s logic.
  2. The Patriot gauge and Anderson’s pre-game gauge are off by about 0.15 PSI.

Either way, it’s impossible for the “lightning striking” concept to even apply (that the gauges were off by an identical amount). Using Wellsian logic — which means we ignore things like gloving or temperature changes from ball to ball — the very fact that the balls weren’t 12.6 as the Patriots say, but some were under 12.5 for Anderson, tells you that the two gauges are not identical. So there’s no need for “lightning to strike.”

Bonus Question: How closely did Roger Goodell read the Wells Report?

In his ruling, Goodell states that he relied on the factual and evidentiary findings (pg 1) of the Wells Report — but there are times during the appeal hearing when Goodell does not seem to know basic case facts:

  • pg 49, he asks “John who?” when Brady is talking about John breaking the balls in. It’s possible this is Goodell’s way of confirming he is talking about John Jastremski; however, given the context of Brady’s explanation and Jastremski being one of a handful of central figures in the case, it’s bizarre that he has to ask who John is. Does he know about Jim and Dave too?
  • pg 61, in reference to the October Jets game, he says, “Just so I’m clear, the Jets game is in New York.” This is a huge detail to not understand as it relates to the 13 PSI text mentioned above.
  • pg 177, while Edward Snyder is discussing the halftime period, he interjects “Just so I’m clear, you are saying it would take 4 minutes for 11 balls to be properly inflated? That’s your analysis or what analysis is that?” Here, Goodell reveals that he is completely unaware that the witnesses in the room at halftime provided those estimates to Wells, who relayed them to Exponent, and that those estimates are central to the scientific and statistical analysis in the case.
  • pg 180, in the discussion about “dry time” (vs moisture), Goodell asks in regards to moisture, “that’s a what-if, right?” How can the person ruling on the case, after reading a report that was designed to determine if environmental factors could explain the halftime measurements, ask if “rain” is a “what if” when it rained during the game?
  • pg 396, perhaps the clearest indication that Goodell either did not read or did not properly retain the information in the report is that he has no idea what the “gloving” issue is. This is the gloving referenced by Bill Belichick in his press conference and given an entire section by Exponent in their report.

Deflategate: Exponent’s Bias and the Master Error

With all of the publicized corrections to the science section of the Wells Report, I’ve been asked by more than one person whether Exponent, the author of said section, was simply incompetent, or whether they were biased. It’s a question that might have legal ramifications in the near future for Tom Brady.

As I’ll detail below, there is a body of evidence suggesting that Exponent’s report was not merely the result of bad science, but was conducted with a clear anti-Patriot bias. They repeatedly made errors, or only looked at possibilities, that weakened the Patriots’ position, without ever making errors in the Patriots’ favor. The nature and frequency of these errors make it unlikely to be a coincidence. Furthermore, Exponent committed a major error in one of their key figures, an error that allowed them to report, incorrectly, an anti-Patriot conclusion back to Ted Wells. What exactly am I referring to?

Not accounting for time of halftime measurements

At a high level, the biggest methodological error Exponent commits is not properly accounting for the time differences of when the balls were measured at halftime. This leads to a nonsensical statistical test that they publish to establish “statistical significance.” The problem is, they knew about this factor. They too considered it a salient factor. They made multiple transient curves mapping how things change depending on when they were measured at halftime.

They didn’t stop there.

They dedicated an entire section (Table 13, page 58) to performing a mini-version of the analysis I present here, using periods of “average measurement time” to compare the difference between expected PSI and observed PSI at a given time.

Wells writes, on page 122:

“According to Exponent, the environmental conditions with the most significant impact on the halftime measurements were the temperature in the Officials Locker Room when the game balls were tested prior to the game and at halftime, the temperature on the field during the first half of the game, the amount of time elapsed between when the game balls were brought back to the Officials Locker Room at halftime and when they were tested, and whether the game balls were wet or dry when they were tested. “

So they thought a lot about the impact of the timing of halftime measurements. On page 57, in one of many mentions of this:

“A similar effect is seen in the game day simulation data; the average pressure rises as the average measurement time is increased.”

Again on page 62:

“Based on the transient curves explained above, one would expect that if the Patriots footballs were set to a consistent or relatively consistent starting pressure, the pressure would rise relatively consistently as they were tested later in the Locker Room Period.”

Yet they still published their p-values on page 11 and conducted analyses in the opening pages without considering time! This cannot be due to incompetence since they are keenly aware of and explicitly call out the importance of time on multiple occasions. On page 64, in their concluding statement, their second point cites these statistical tests as critical pieces of evidence supporting their conclusion. Unless different people prepared different parts of the report, this is evidence of a clear bias against the Patriots. But it’s also just the beginning.

Switching Fig. 26 to the extreme low temperature of 67 degrees

The transient curve used in Figure 24 to project Non-Logo gauge results uses a pre-game room temperature of 71 degrees. The HVAC on the day of the game was set between 71 and 74 degrees. But Exponent measured the temperature in the room where the balls were gauged by officials in the pre-game to range from 67-71 degrees. It was a good 30 degrees colder outside on the day Exponent measured, and there wasn’t the same game day activity where numerous people give off extra heat in the room.

When they project the Logo gauge results on the transient curve used in Fig. 26, they switch the pre-game temperature to 67 degrees, the extreme end of the plausible spectrum that produces the lowest Patriot reading. Their explanation for using 67 degrees is so the Colt measurements align with the projections. This is a reasonable approach, given that the Colt balls “should” obey the laws of physics, but (a) it should not be the only scenario examined and (b) they did not need to drop the pre-game temp all the way to 67 degrees to achieve this! Doing so only increased the appearance of guilt for the Patriots. The Colt readings are still viable and within Exponent’s “range” of what is predicted by physics even with a 69 degree pre-game temperature.

Misting the footballs to simulate rain

When accounting for water, as described on page 42 (footnote 36), footballs were sprayed every 15 minutes with a hand held spray bottle and then toweled off immediately. As has been demonstrated, this is a minimal attempt at simulating rain. This is critical to interpreting the results (that will be discussed below and that reflect those presented here); Exponent’s wet curves between Figure 24 and Figure 26 show an additional effect of about 0.25 PSI due to wetness simply from running the simulation again. Yet, as we’ll see in a second, they cannot imagine how the Patriot footballs would be a few tenths below where they were expected based on temperature-only projections.

Not calculating the actual PSI differences from expected

The mini experiment Exponent runs in Table 13 produces the following results: at the earliest plausible time (let’s use the 4:17 reading), Patriot averages should have been 11.54 PSI on the Non-Logo gauge. The actual Master-adjusted halftime average on the Non-Logo gauge was 11.09 PSI. So the Patriots are -0.45 PSI from expected. The Colts Non-Logo average was 12.29 according to Table 11 on pg 45. (This is because Exponent uses the “switch” option to correct for the anomalous 3rd Colt ball.) Therefore, the Patriot balls are about 0.4 PSI below the Colt balls relative to expected. Is that clear from Table 13?

Exponent Table 13

Not only is it unclear, Exponent never even publishes the differences. They fail to calculate or discuss perhaps the most specific and important detail of all of their experimentation, instead simply noting that the Colt readings are in-line with these simulations and the Patriot readings are not. This is not incompetence, it is a bias of omission. More importantly, are the Colt measurement times in Table 13 even plausible?
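To spell out the arithmetic from the paragraph above, here is a short sketch of the difference-of-differences calculation. The Patriot expected and observed values and the Colt observed average come from the text; the Colt expected value is a placeholder, since Exponent never publishes it, chosen only so the relative gap lands near the ~0.4 PSI figure cited above:

```python
# Difference-of-differences: how far each team's halftime average sits from its
# own projected value at its measurement time, then compare the two teams.
patriots_expected = 11.54   # Non-Logo projection at the earliest plausible (4:17) time
patriots_observed = 11.09   # Master-adjusted Non-Logo halftime average
colts_observed = 12.29      # Non-Logo average per Table 11

# Placeholder: Exponent never states the Colts' expected value explicitly.
# ~12.34 is an assumed figure, roughly consistent with the text's ~0.4 PSI gap.
colts_expected = 12.34

patriots_diff = patriots_observed - patriots_expected   # about -0.45 PSI
colts_diff = colts_observed - colts_expected            # roughly flat
relative_gap = patriots_diff - colts_diff               # about -0.4 PSI

print(round(patriots_diff, 2), round(colts_diff, 2), round(relative_gap, 2))
```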

Assuming the Colt balls are measured before the Patriot balls

Exponent assumes, contrary to the evidence, that the Colt balls were gauged before the 11 Patriot balls were reinflated. This is yet another anti-Patriot “error” or instance where they refuse to examine other plausible scenarios. The repeated and consistent manner in which this happens is hard to chalk up to coincidental incompetence.

Wells does not explicitly state that the Colt balls were gauged before the Patriot balls were re-inflated. Exponent should have asked about this and should have clearly stated it if it were provided such information. If not, they should have, “to be fair,” at least considered the possibility that the Colt balls were gauged later in the locker room period as an explanation for the differences of a few tenths of air pressure.

Burying the Logo and Non-Logo average PSI results

So, what happens if they were to explicitly note the PSI differences in their table as well as including Colt measurements at 11 or 12 minutes, the times that they were likely to be gauged?

Table 13 Updated

An updated version of Exponent’s table 13, showing Non-Logo Gauge Master-Adjusted results with a 71-degree pre-game temperature. This table includes a later measurement time for the Colts as well as explicitly calling out the differences between the expected and observed halftime values.

Now, for example, it’s crystal clear that an approximate 4-and-a-half minute measure time for New England and 11-minute measure time for Indianapolis result in a difference of 0.3 PSI on the Non-Logo gauge between the Patriot and Colt balls. This is similar to what has been observed in more detailed analyses.

Forget the inclusion of a later Colt measurement though. Why doesn’t Exponent call out that differential since it’s perhaps the single most salient data point in their entire report? Without any corrections, it would reveal differences of a few tenths of PSI between the control (Colts) and Patriot Non-Logo readings. Would publishing that number have impacted people’s reactions to their conclusions?

What about the Logo gauge experiment in Table 14? The Patriot Master-adjusted Logo halftime average value was 11.21 PSI, hidden in the paragraph on the following page, meaning that their experiment again found Patriot balls 0.3-0.4 PSI below expected on the Logo gauge, with the pre-game temperature at 67 degrees.

Table 14 Updated

An updated version of Exponent’s table 14, showing Logo Gauge Master-Adjusted results with a 67-degree pre-game temperature. This table includes a later measurement time for the Colts as well as explicitly calling out the differences between the expected and observed halftime values.

Could water account for that small difference? Or a different temperature? Placing the pre-game temperature at something like 69 degrees will bring the Patriot balls about 0.1 PSI closer to expected. Again, this is something Exponent conveniently does not even consider, despite providing a plausible temperature range of 67-74 degrees and running misting tests that demonstrate an effect of wetness.

The Master Error — failing to use master projections for master results

And then there’s this enormous error.

In Figure 26 (a figure recycled again in Figure 30), Exponent used a Master-adjusted transient curve to demonstrate where the footballs are projected to be as they heat up at halftime. Only they fail to present an adjusted curve! Figure 26 is simply wrong.

The curve shows a dry starting halftime value of over 11.5 PSI for the expected Patriot values. But a Master-adjusted Patriot ball would actually be 12.17 PSI in the pre-game according to Exponent. A dry football is expected to be 11.20 PSI at 48 degrees if it were set at 12.17 PSI in a 67 degree environment in the pre-game, as Exponent is attempting to model. The graph is not master-adjusted, even though Exponent claims it is. It is a clear error and needs to be corrected.
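The correction itself is straightforward Ideal Gas Law arithmetic. Here is a minimal sketch, using the 14.636 psi atmospheric pressure used elsewhere in this analysis and absolute (Rankine) temperatures; the 12.17 PSI Master-adjusted pre-game value and the 67/48 degree temperatures are the ones described above:

```python
# Expected gauge pressure after a temperature change, per the Ideal Gas Law:
# absolute pressure (gauge + atmospheric) scales with absolute temperature.
P_ATM = 14.636  # psi, atmospheric pressure used in the Wells Report analysis

def expected_gauge_psi(start_gauge_psi, start_temp_f, end_temp_f, p_atm=P_ATM):
    start_abs = start_temp_f + 459.67   # Fahrenheit to Rankine
    end_abs = end_temp_f + 459.67
    return (start_gauge_psi + p_atm) * (end_abs / start_abs) - p_atm

# Master-adjusted Patriot pre-game value of 12.17 PSI, set in a 67 degree room,
# cooled to the 48 degree field temperature:
print(round(expected_gauge_psi(12.17, 67, 48), 2))  # ~11.20 PSI, not the 11.5+ shown in Figure 26
```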

What happens when it is corrected?
Corrected Figure 26 (Master-adjusted Logo gauge projection)

The Logo scenario that Exponent presents to support its case suddenly contradicts it. It makes their primary conclusion on page 55 simply wrong:

“Based on the above conclusions, although the relative ‘explainability’ of the results from Game Day are dependent on which gauge was used by Walt Anderson prior to the game, given the most likely timing of events during halftime, the Patriots halftime measurements do not appear to be explained by the environmental factors tested, regardless of the gauge used.”

Correcting this huge error would fundamentally alter this conclusion.

Incorrectly claiming that the pre-game temperature is set to help the Patriots

They continue to write, on page 54, that

“it is important again to note that values for the pre-game and halftime locker room temperatures shown in Figure 27 put the Patriots transient curves at their lowest possible positions.”

But this is completely backwards — yet another anti-Patriot error. In order to generate the lowest starting transient curve within the HVAC parameters, the pre-game temperature would be 74 degrees, producing a starting halftime value of 10.86 PSI. 67 degrees is actually the worst starting value for the Patriot differentials.

Inability to conceive of wetness as the explainable natural factor

The icing on the cake is that the differences in the Colt and Patriot measurements are in all likelihood the difference in their exposure to rain. For the uninitiated, this can be clearly seen in the gradient of differences among the Patriot balls that suggests some Patriot balls were exposed to more rain, and in particular those balls on the final drive of the half.

Yet on page 55, when discussing wetness as a factor, they write:

“According to Paul, Weiss, [a majority of wet balls] were most likely not present on Game Day.”

How can they say that, given the factors around wetness? They mention nothing of the Patriot balls being used more, and being in play at the end of the half. This is yet another anti-Patriot oversight. Remember, they presented back-to-back graphics in which water made on the order of 0.2-0.4 PSI of difference from the “dry” condition, based on their own misting procedure. Despite the game being played in rain, Exponent concludes that results of the exact same magnitude cannot be explained by rain.

Conclusion

All told, the only time they seem to do something that isn’t anti-Patriot is when they create a row in Tables 13 and 14 for average measurement times that are improbably early in the locker room period. Otherwise, every misstep, omission and blatant error is decidedly anti-Patriot, and often committed in inexplicable fashion. In summary, Exponent demonstrates its bias by:

  • Failing to account for the timing of halftime measurements when publishing p-values, despite knowing time of measurement is critical
  • Switching to an (unnecessarily) extremely low temperature projection for the Patriot Logo gauge
  • Misting footballs to simulate rain (and immediately toweling them off)
  • Not publishing the actual PSI differences between halftime measurements and expected measurements
  • Assuming the Colt balls are measured improbably early in the locker room period, and not considering later measurement times
  • Presenting Figures 26 and 30 with completely false transient curves, thereby altering their conclusions vis-a-vis the Logo gauge
  • Incorrectly claiming the pre-game indoor temperature of 67 degrees is a best-case for the Patriots
  • Not considering wetness as an explanation for the few tenths difference despite finding a few tenths difference from wetness

The Deflategate Science and Data Wiki

If you haven’t seen it, I’ve compiled a wiki (found in the menu above) that comprehensively analyzes all science and data related to deflategate. Many thanks to the readers who contributed — if anyone has further comments or contributions please continue to send them in.

Video: Why Wells Report is Wrong and Actually Exonerates Patriots

Below is the table presented at the end of the video summarizing the findings of the time-based permutations of each scenario. Note that this video uses Exponent’s “wet” curve projection. Unadjusted data can be seen in the Deflategate science wiki.

Deflate Gate Summary

Do the Patriots Fumble Less at Home?

Despite the mountain of evidence that there was no tampering in Deflate Gate, there’s a very basic and very obvious question to be asked: Do the Patriots fumble less at home? After all, that’s where the Wells Report alleges they deflate footballs for a competitive advantage — during home games where New England personnel can take possession of the balls outside of the referee’s watch. Is there a smoking gun in the form of their home game fumbling rates?

Spoiler: Nope. Not even close.

Ironically, out of 32 NFL teams, the Patriots come away looking like the least likely team to have a home-away discrepancy in football quality. New England has actually fumbled more frequently at home than on the road since 2007!

Home Fumble Improvement 07-14

2014 was actually the Patriots’ best home year of the period, fumbling at a 0.45% lower rate at home than on the road…good for 4th in the league this year in home fumbling improvement. (Data from PFR across all offensive plays, save for kneels.) Tampa Bay and Oakland led the league, with a fumble rate that was 1.5% lower at home than on the road. If you aren’t familiar with how big of a difference that is, here’s a refresher on fumbling data. Also worth noting is that the three dome teams thrown out in the infamously bad “Sharp” fumble analysis — Atlanta, Indianapolis and New Orleans — are either similar on the road (New Orleans) or fumble less away from the dome; the top road fumbling teams over the period are (1) New England, 1.08% (2) Atlanta, 1.21% (3) Indianapolis, 1.35%.
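For clarity, “home fumbling improvement” here is simply the gap between road and home fumble rates, where the rate is fumbles per offensive play (kneels excluded). A minimal sketch with placeholder counts, not PFR’s actual splits:

```python
# Fumble rate = fumbles / offensive plays (kneels excluded).
# "Improvement" is the road rate minus the home rate: positive means fewer fumbles at home.
def fumble_rate(fumbles, plays):
    return fumbles / plays

home_rate = fumble_rate(fumbles=10, plays=520)   # hypothetical home-game split
road_rate = fumble_rate(fumbles=13, plays=510)   # hypothetical road-game split

improvement = road_rate - home_rate
print(f"Home fumbling improvement: {improvement:.2%}")
```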

Additionally, since 2007, the Patriots have the most consistent performance in the NFL between home and away fumbling rates. Year by year, they fumble at similar rates in home and away games. Below are the year-by-year fumbling rate improvements for each team from 2007-2014 — notice how the Patriots straddle the league average like a metronome, exhibiting the least amount of variation among any team:

Home Fumble Variance 07-14

Curiously, Tampa Bay also holds the best single-season improvement for home fumbling rates over the last eight seasons, fumbling at an amazing 2.38% lower rate at home in 2011.

The Patriots are alleged to have deflated footballs at home, yet the Patriots fumble slightly more at home while the average team fumbles less at home, Tom Brady is statistically better on the road, and the measurements at halftime of the AFC Championship are incredibly similar to the Colts’ non-tampered footballs. I think it’s pretty clear what’s going on here…

The best team of the last decade can’t figure out how to properly deflate footballs.

 

Debunking Exponent’s Methodology in the Wells Report

Let’s ignore smaller hand-waving techniques like deciding to ignore one of the measurements for the Colt balls because it “looked strange.” Or that there might be compelling evidence in the report that Walt Anderson did indeed use the Logo gauge in his pre-game measurements. Instead, let’s just focus on the two major methods Exponent used to reach its faulty conclusion in Deflate Gate’s Wells Report.

  1. The “p-value” without controlling for time
  2. The use of a “visual proof” that is biased toward dry balls measured later in the locker-room period

To show exactly what’s so silly (and biased) about this methodology, imagine the following thought experiment:

  • All footballs, for whatever reason, are slightly below where we’d expect them at halftime, about 0.4 PSI below expected
  • The Patriots balls are dry instead of wet
  • The Patriots balls are measured second instead of first
  • The Colts balls are wet instead of dry
  • Finally, imagine that the Patriots deliberately released 0.15-0.3 PSI from some of their footballs after referee Walt Anderson approved them

Doing that produces a table with the following hypothetical halftime measurements:

Fake Exponent Table

As you can see, the Patriot balls have a much higher halftime average measurement because they were dry, inflated to 13.0 PSI in the pre-game and were measured after the Colt balls, allowing them more time to recalibrate to pre-game PSI levels. Now let’s use Exponent’s two main methodologies to see if we can detect the tampering that we’ve built in to the hypothetical.

Exponent Method #1: p-value independent of time

First, let’s run a statistical (independent t) test to see the likelihood that these two groups of footballs come from the same (untampered) population. If we do what Exponent did, which is ignore time of measurement and compare the pre-game level of the Patriot balls (13.0 PSI in our hypothetical) and Colt balls (12.5 PSI) to where they measured at halftime, we get a p-value of 0.0097. That’s almost exactly what Exponent came up with in the real-life scenario for the Patriots.

Only in our thought experiment, it’s the Colts who come out looking guilty of tampering. Exponent’s method literally picks out the wrong team because it ignores major, confounding variables like time. Pause for a second and appreciate how bad this is: I’ve created a scenario where the Patriots cheat, and Exponent’s methodology points to the Colts…because Exponent’s methodology is biased toward any team that was measured second in the locker room period (and also had wet footballs).

In our hypothetical, the Colt balls were measured first, and thus had less time to recalibrate. Furthermore, the Colt balls were treated as the wet group in this scenario, so they drop further in PSI compared to the dry Patriot balls (the opposite of the real-life AFC title game). So, using Exponent’s nonsensical test, it’s fairly easy to demonstrate “statistical significance” that one team tampered with the balls…even though it was not the team that actually tampered. It was just the team who had their wet footballs measured first.
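For reference, this is roughly what the time-blind test looks like in code. A minimal sketch with made-up PSI drops standing in for the hypothetical table above, just to show the mechanics of what the test does and does not control for:

```python
from scipy import stats

# Drop in PSI from the pre-game setting to the halftime measurement, one value per ball.
# These numbers are illustrative only; they mimic a dry, later-measured set of balls
# (smaller drops) versus a wet, first-measured set of balls (larger drops).
dry_later_drops = [0.55, 0.62, 0.48, 0.58, 0.60, 0.52, 0.57, 0.61, 0.50, 0.59, 0.56]
wet_first_drops = [1.05, 1.12, 0.98, 1.08]

# Exponent-style test: compare the raw drops with no adjustment for when each ball
# was measured or whether it was wet. Time and wetness are confounds the test ignores.
t_stat, p_value = stats.ttest_ind(dry_later_drops, wet_first_drops, equal_var=True)
print(t_stat, p_value)
```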

Exponent Method #2: a picture that lies

Exponent also attempts to use a “visual proof” of sorts to demonstrate that something is wrong with the Patriot footballs and not the Colt footballs. This approach is Exponent’s acknowledgement that time (“transient curves”) is a relevant variable in the measurement process, but their demonstration is simply incorrect.

Without getting into the nitty gritty, we can simply draw the exact same picture that Exponent draws (starting with Figure 24, pg. 210) to completely disprove their “proof.” (Note: they draw a picture because if they ran an actual statistical test on the data using a transient curve, they would reach the opposite conclusion.) I’ve taken the same data and drawn an Exponent-like picture below. According to Exponent’s “logic,” if the “window” between a dry and wet football doesn’t intersect with the band of what was actually measured (within +/- 2 standard errors from the mean) then it demonstrates something outside the physical boundaries of what is possible.

Here’s our hypothetical data presented Exponent style:

Fake Exponent Graph

Lo and behold…the Colt band does not intersect with a 12.5 PSI projection curve and the Patriot band does intersect with its 13.0 PSI projection curve. According to Exponent, this shows that the Colt balls must have had something additional done to them. Except that in this scenario, it was the Patriot balls that were tampered with!

This is possible because this “visual proof” approach is biased toward a dry ball that was measured later in the locker room. Just like their p-value is biased toward a dry ball measured later in the locker room period. In the case of the visual, if normal variance (from any non-tampering factor) moves dry-ball measurements slightly below what is expected — as we’ve done in this hypothetical for the Patriots — it simply shifts the team’s band below the dry-ball upper boundary but still above the wet-ball lower boundary. If a team actually had a wet ball, they are instead shifted below the lower band, outside of what Exponent considers physically acceptable.

With regards to time, the Patriot balls in our hypothetical comfortably intersect with the acceptable region. This is because they were measured later in the locker-room period; despite dropping in PSI from the fake deflation of seven balls, there is still a band with which to intersect earlier in the locker room period. Conceptually, this is simply taking the point where the hypothetical region and bounded region (the two red regions) intersect and “shifting it left.” The team measured later in the locker-room period has room to “shift left.”

The Colts, however, by virtue of being measured first in this hypothetical, can only “shift left” for about 2 minutes, because that’s when their balls were measured. The Colts can’t “shift left” 4 minutes, because they would no longer be in the locker-room period. The team measured later can “shift left” 4 minutes, because it just means their “possible scenario” occurred earlier in the locker-room period.

I’ve tampered with the Patriot balls only, but both of Exponent’s major methods strongly suggest it’s the Colt balls who have been tampered with! For the record, if we use a proper methodology as shown in the last post — one that accounts for time — a t-test will produce a statistically significant result (p-value of 0.046) that correctly identifies the Patriot balls as being tampered with.

Conclusion

There are other peripheral weirdnesses in Exponent’s methods, but we don’t need to move beyond the two core issues that lead them to their conclusions. In our fake scenario, in which the Patriots deflated seven balls, both of Exponent’s methods would find the wrong team guilty of tampering with the balls! The method used in the last post that controls for time — specifically, taking the difference of each ball at the time it is measured and seeing how far it is from the projected PSI — instead correctly identified the tampered balls with statistical significance.

Yes, a proper method can demonstrate this despite a sample size of just four footballs from the control group. This is possible because of the consistency of the measurements in our hypothetical. You know what set of data did not show the same consistency? The real Colt balls, measured at halftime of the AFC Championship game. That Exponent jumps through hoops to try and demonstrate a lack of variability in measurements is fine and dandy…but as Rasheed Wallace once said, “ball don’t lie.” And the four Colt balls don’t lie — there is enough variability in the data set that, unsurprisingly, a 0.2 PSI difference in expected measurements at halftime is not statistically significant, and in many cases, not even close.

A proper analysis from Exponent, given the real halftime data presented in the Wells Report, would have found this.

 

 

The Statistical Improbability of Deflate Gate

On Sunday I broke down some of the common misconceptions surrounding the Wells Report, including the social science involved, the statistical misinterpretations and the lack of coherence in the NFL’s story based on its own evidence. Then, on Wednesday, I provided a time-based visualization of all the measurements presented in the Report based on where we’d expect them to be at a given time as the balls warmed up in the locker room. Visually, it’s fairly clear that the Colt balls and Patriot balls have similar issues, as many are “under-inflated” by similar degrees. But what does this mean in terms of probability if we actually run some statistical tests on the data?

To reiterate, time is a major variable in this case because the PSI of the balls was increasing with every minute that they were in the locker room at halftime. Thus, the time that each ball was measured becomes critical in trying to analyze the discrepancy between where a ball was measured and where a ball “should” be using Ideal Gas Law parameters. Below is one such scenario presented in the previous post. The blue line is where we’d expect a Colt ball to measure given the time indoors and the gold line where we’d expect a Patriot ball to be (based on Fig. 22 of Wells Report):

Deflate Gate Logo Scenario

What we want to calculate is the difference between each ball at a given point in time (a circle) and where we’d expect the ball to be based on how long it’s been in the locker room (the solid line). For instance, in the above graph, Patriot ball #1 is about 0.25 PSI above where we’d expect. Ball #2 is 0.4 below where we’d expect. These values will be different depending on when the balls were measured, so our parameters for simulating the actual measuring circumstances are (assuming the report is correct in that the balls were correctly recorded in order, and were set to 12.5 and 13.0 PSI respectively in the pre-game):

  • Set Up time (2-4 minutes according to accounts)
  • Measurement time (21.8 – 27.3 seconds per ball)
  • Inflation time (2-5 minutes)
  • Packing time (unstated, but assumed to require some small degree of time between last measurement and re-emergence from locker room)

If we use Exponent’s Ideal Gas Law calculations that assume 71 degrees pre-game — which may be slightly low, as noted in the last post — and add a small “wetness” factor per their report, we can then simulate a bunch of these scenarios to see what was likely and unlikely. The scenario above attempts to average all the accounts; set up time is “medium” (3 minutes, halfway between 2 and 4), measurement time is “medium” (~25 seconds) and inflation time is “medium” (3.5 minutes). But we can also examine other scenarios — instances where the Patriot balls were tested after 2 minutes or 4 minutes, quickly or slowly re-inflated, etc. If we do that, we’re left with a number of basic permutations we can study:

Deflate Gate p-value

Experiment parameters: Footballs were gauged at 71 degrees pre-game and were 48 degrees coming off the field with an atmospheric pressure of 14.636. Transient recalibration curve based on Fig. 22 of the Wells Report.

So what do these numbers mean? The “Patriot-Colt mean difference from expected” column calculates where each ball should be based on the time it’s measured, takes the average of all such Patriot balls and subtracts it from the average of all such Colt balls. If we take the mean of all six hypothetical scenarios, the average Patriot ball is about 0.003 PSI below where it should be at the time of measurement relative to the Colt balls (i.e. using the Colt balls as a control group). The p-value is the probability that a gap at least this large would arise by chance if both sets of balls were treated identically; the lower it is, the stronger the suggestion that one set of balls had something done to them that the other set didn’t.
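Here is a minimal sketch of that per-ball calculation. The warming curve, measured values and measurement times below are placeholders rather than the real Fig. 22 curve or the actual halftime data; the point is only to show the structure of the test (difference from the time-appropriate expectation, then an independent t-test):

```python
from scipy import stats

def expected_psi(setpoint, minutes_indoors):
    # Placeholder warming curve: a ball rebounds toward its pre-game setpoint the
    # longer it sits in the locker room. The real curve is Fig. 22 of the Wells
    # Report; this linear stand-in is for illustration only.
    field_value = setpoint - 1.4          # assumed drop at 48 degrees on the field
    recovery_per_minute = 0.08            # assumed rebound rate indoors
    return min(setpoint, field_value + recovery_per_minute * minutes_indoors)

# (measured PSI, minutes indoors when gauged) -- illustrative values, not the real data
patriot_balls = [(11.1, 3.0), (11.3, 3.5), (11.0, 4.0), (11.4, 4.5)]
colt_balls = [(12.2, 11.0), (12.4, 11.5), (12.1, 12.0), (12.3, 12.5)]

# Difference from expectation at the moment each ball was actually measured.
patriot_diffs = [psi - expected_psi(12.5, t) for psi, t in patriot_balls]
colt_diffs = [psi - expected_psi(13.0, t) for psi, t in colt_balls]

# Compare differences-from-expected, not raw drops, so measurement time is controlled for.
print(stats.ttest_ind(patriot_diffs, colt_diffs, equal_var=True))
```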

  • The best-case statistical scenario for the Patriots is that Walt Anderson used the Logo Gauge pre-game, that the balls were measured at 2 minutes, each took about 22 seconds to measure and that the officials took 5 minutes to re-inflate the balls (labeled “Early Start, Fast Measure, Long Inflate” above). That produces a mean where the Patriots balls are higher than the Colts, meaning it’s impossible for the Patriots balls to come from a population that is inherently lower than the Colts balls.
  • Three of the six scenarios in which the Logo Gauge was used pre-game completely exonerate the Patriots
  • The worst-case scenario for the Patriots is that Walt Anderson used the Non-Logo Gauge in the pre-game, and that the balls were measured at 4 minutes, each took 27 seconds to measure and that the officials took 2 minutes to re-inflate the balls (labeled “Late Start, Slow Measure, Quick Inflate” above). That produces a p-value of 0.247, which means that if our assumptions are true, there is a 75.3% chance the Patriots balls come from a different population.

Although it falls far short of “statistical significance,” 75.3% might sound like a lot. But what is that number actually saying? For that, we have to look at the observed difference in the averages to put this into perspective: there’s a 75% chance that the 0.3 PSI difference is not simply from variance and is part of a different population (i.e. tampered balls).

Depending on the distribution, 0.3 PSI could easily be 99.99% likely to come from a different sample…which would suggest, what? There’s a 99.99% chance that the Patriots systematically released an average of 0.3 PSI per football? And that’s the worst-case scenario? That strains common sense.

In total, the independent t-test results show that it is incredibly unlikely the Patriot balls behaved any differently from the Colt balls under the assumptions presented in the Wells Report. And in the unlikely event that they did behave differently (roughly a 15% chance if Anderson used the Logo Gauge pre-game and 57% if he used the Non-Logo Gauge), the degree of difference is nonsensically small. We would expect a “small” amount of deliberate deflation to be something like 1.0-2.0 PSI; the initial reports claimed 11 balls were more than 2 PSI below regulation, and the NFL itself falsely pegged another ball at 10.1 PSI. The data tell a completely different story: the Patriot balls are sometimes higher than the Colt balls relative to what we’d expect, and the worst-case scenario for New England suggests a non-significant likelihood of tampering of a magnitude so small it is comparable to the variance between the two gauges used to measure the balls.

The next post details exactly how Exponent used faulty methods to reach faulty conclusions in the Wells Report. 

P.S. What happens if the Colt balls were measured immediately after the Patriot balls, and the Patriot balls were re-inflated only afterward? This still produces results that strongly suggest non-tampering, as shown below. If the Colt four-ball group is treated as a control group, we would expect to see the Patriot measurements 13.6% of the time in their “worst-case scenario,” not an occurrence the scientific community would consider “significant.” If Anderson used the Logo Gauge pre-game, the Patriot measurements are roughly 0.1 to 0.2 PSI below the Colt measurements and far from statistically significant.

Deflate Gate Fast Recal Patriots Inflate Last

Experiment parameters: Footballs were gauged at 71 degrees pre-game and were 48 degrees coming off the field with an atmospheric pressure of 14.636. Transient recalibration curve based on Fig. 22 of the Wells Report.

June 19, 2015 Update: Joe Arthur has asked me to run the calculation using a flatter transient curve; in other words, what happens if we expect the footballs to warm up more slowly than projected in Fig. 22 of the Wells Report? For a “slower” expected recalibration, we can use Fig. 24 of the Wells Report. If any physics experts out there can say at exactly what rate the balls should recalibrate, it would be much appreciated. In the meantime, the numbers above are derived with a “fast” recalibration curve and the results below with a “slow” one; based on other amateur experiments I’ve found, it seems likely the true recalibration rate falls somewhere between these two extremes. Worth noting: on the Logo Gauge, the Colt balls are almost exactly where we’d expect.

Slow Recalibration Model Deflate Gate

Experiment parameters: Footballs were gauged at 71 degrees pre-game and were 48 degrees coming off the field with an atmospheric pressure of 14.636. Transient recalibration curve based on Fig. 24 of the Wells Report.
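For a sense of how much the choice of transient curve matters, here is a hypothetical comparison of a “fast” and a “slow” exponential warm-up. The time constants are my assumptions, not values taken from the Wells Report; the point is simply that the faster the balls warm, the higher their expected PSI at any given measurement time, and the larger the apparent shortfall of the measured balls.

```python
import math

def warmup_temp(minutes, tau, start_f=48.0, room_f=71.0):
    """Hypothetical exponential warm-up toward room temperature.

    tau (in minutes) sets the speed. This is only a stand-in for the transient
    curves in Fig. 22 ("fast") and Fig. 24 ("slow") of the Wells Report, whose
    exact shapes are not reproduced here."""
    return room_f - (room_f - start_f) * math.exp(-minutes / tau)

for minutes in (0, 2, 4, 6, 8, 10, 12):
    fast = warmup_temp(minutes, tau=4.0)   # assumed "fast" time constant
    slow = warmup_temp(minutes, tau=9.0)   # assumed "slow" time constant
    print(f"{minutes:>2} min indoors: fast {fast:5.1f} F, slow {slow:5.1f} F")
```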

Follow-Up: The Evidence for Non-Tampering in 2 Pictures

The last post on the cognitive and statistical biases in Deflate Gate included a visual of the “Logo Gauge” scenario, added based on the timeline provided on page 70 of the Wells Report. In that post I discussed the problems with the Colt balls but failed to include a visual for the scenarios. Such a visual, for both the Logo and Non-Logo Gauge readings, clearly demonstrates how similar the Colt balls are to the Patriot balls relative to what a given ball’s PSI should be at a given time.

Below are the measurements taken at halftime, from both the “Logo” gauge and the “Non-Logo” gauge. Each line is the expected PSI of the balls (blue for Indianapolis, gold for New England) as they warm up during the locker-room period. The dots are the actual measurements of the balls. Note the differences between the actual balls and where we expect them to be based on temperature and the Ideal Gas Law:

Deflate Gate Logo Scenario

Deflate Gate Non Logo 12.95

The projections use the following assumptions:

  • The temperature indoors was 71 degrees pre-game and 48 degrees outdoors, with an atmospheric pressure of 14.636.
  • The balls were measured in order (this is alluded to but not explicitly stated).
  • It took 3 minutes in the locker room before testing began (an average of Exponent’s 2-4 minute guess).
  • It took 25 seconds to test a ball (Exponent’s 4-5 minute estimate for measuring the Patriot balls implies a range of 22-27 seconds per ball).
  • It took 3.5 minutes to re-inflate the Patriot balls (again, an average of Exponent’s 2-5 minute estimate).
  • It took just over 90 seconds to pack up and leave, assuming that packing takes less time than setup.
  • The Patriot balls were all of the same wetness, following a “wet” curve estimated by Exponent’s wet test.
  • The Colt balls were all of the same dryness.
  • Most importantly, every Patriot ball was exactly 12.5 PSI pre-game and every Colt ball exactly 13.0 PSI, as claimed by referee Walt Anderson.
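Here is a minimal sketch of how those assumptions turn into the expected-PSI lines in the charts above. The pre-game pressures, temperatures, atmospheric pressure and timing values come from the list; the exponential warm-up and the flat 0.3 PSI wetness adjustment are my own hypothetical stand-ins for Exponent’s Fig. 22 transient curve and wet-ball test, so the exact outputs will differ from the post’s figures.

```python
import math

ATM, ROOM_F, FIELD_F = 14.636, 71.0, 48.0
SETUP_MIN, MEASURE_SEC, INFLATE_MIN = 3.0, 25.0, 3.5
WET_DROP = 0.3   # crude stand-in for Exponent's wetness adjustment (assumption)

def ball_temp(minutes_indoors, tau=6.0):
    # Hypothetical exponential warm-up toward room temperature (tau is assumed).
    return ROOM_F - (ROOM_F - FIELD_F) * math.exp(-minutes_indoors / tau)

def expected_psi(pregame_psi, temp_f, wet_drop=0.0):
    # Ideal Gas Law projection from the pre-game reading taken at ROOM_F.
    return (pregame_psi + ATM) * (temp_f + 459.67) / (ROOM_F + 459.67) - ATM - wet_drop

# Patriot balls gauged first, in order; Colt balls gauged after the Patriot balls
# were re-inflated, matching the main scenario described in the post.
for i in range(11):
    t = SETUP_MIN + i * MEASURE_SEC / 60.0
    print(f"Patriot ball {i + 1:>2} at {t:4.1f} min: expect {expected_psi(12.5, ball_temp(t), WET_DROP):.2f} PSI")

colt_start = SETUP_MIN + 11 * MEASURE_SEC / 60.0 + INFLATE_MIN
for i in range(4):
    t = colt_start + i * MEASURE_SEC / 60.0
    print(f"Colt ball    {i + 1:>2} at {t:4.1f} min: expect {expected_psi(13.0, ball_temp(t)):.2f} PSI")
```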

 

Notice how the Colt balls are “shifted down” below where they should be in much the same way as the Patriot balls. The measurements likely reflect the natural variance we see when gauging actual game-play footballs, since neither the Patriot nor the Colt balls sit where we’d “expect” based on the parameters Exponent assumes and the Ideal Gas Law. (This variance can come from the operator or from other subtle environmental factors not captured by temperature and atmospheric pressure. It can also come from the balls not all being at exactly their claimed pre-game pressures of 12.5 and 13.0 PSI, or from the temperature not being exactly 71 degrees F.) We can say this for two major reasons:

  1. Some Patriots balls are above where we’d expect them based on a 12.5 PSI pre-game measurement and the Ideal Gas Law
  2. 7 of the 8 Colt ball readings (four balls, each measured on two gauges) are also below where we’d expect them based on a 13.0 PSI pre-game measurement and the Ideal Gas Law

The Colt balls are actually the best evidence for the Patriots: they are the only four other footballs ever measured at halftime of a game, and they too depart from what is expected despite not having been tampered with. (Note: here I am not arbitrarily treating Colt ball #3 as a transcription error, as Wells does.)


This essentially exonerates the Patriots in the Non-Logo scenario, which is the very scenario Exponent used to reach its conclusion: three of the four Colt balls are more than 0.5 PSI under the expected range, with one of the four about 0.75 PSI below expected. By comparison, 7 of 11 Patriot balls were more than 0.5 PSI below expected and 5 of 11 were more than 0.75 PSI below. As stated in the last post, Exponent overlooked this because it ignored the variable of time (the balls heating up) and presented all of the balls as though they were measured at the same moment.

The final post in this series examines the statistical improbability that there was any tampering during Deflate Gate.

EDIT: Reader “George” astutely noted that simply raising the assumed pre-game indoor temperature by a degree or three, to account for a slight discrepancy between the HVAC setting and the actual room temperature (bodies in the room give off heat), helps explain much of the slightly below-expected values from both teams. In that case, each team’s expected PSI line shifts down slightly, explaining the results for both sets of balls, as shown below:

Deflate Gate Logo 74 Degrees
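As a quick sanity check on that adjustment, the Ideal Gas Law arithmetic below (a minimal sketch using the post’s parameters) shows that each extra degree of assumed pre-game room temperature lowers a 12.5 PSI ball’s expected cold-weather reading by roughly 0.05 PSI, so assuming 74 degrees rather than 71 shifts the expected line down by about 0.15 PSI.

```python
ATM = 14.636  # atmospheric pressure used throughout the post (psi)

def expected_at_48(pregame_psi, room_f):
    """Ideal Gas Law projection of a ball's gauge PSI at 48 F, given its pre-game room temperature."""
    return (pregame_psi + ATM) * (48 + 459.67) / (room_f + 459.67) - ATM

for room_f in (71, 72, 73, 74):
    print(f"Pre-game room at {room_f} F -> expected {expected_at_48(12.5, room_f):.2f} PSI at 48 F")
```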