Saturday, March 6, 2010

Good offense beats good defense?

I suspect I’m like a lot of you in that I often have more faith than I ought to in Pomeroy’s adjusted efficiency numbers, and more specifically in his single-game predictions. I sometimes take it as gospel that, given a team’s adjusted offensive efficiency and their opponent’s adjusted defensive efficiency, and no other information, the best prediction we can make for the team’s offensive efficiency is:

Predicted Off Eff = (Team Adj Off Eff + HFA) × (Opponent Adj Def Eff + HFA) / Ave Eff
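As a minimal sketch, that calculation might look like the following in Python. The function name, parameter names, and the additive home-floor-advantage convention here are my own illustrative choices, not anything taken from kenpom.com:

```python
def predicted_off_eff(team_adj_oe, opp_adj_de, avg_eff, hfa=0.0):
    """Predicted offensive efficiency (points per 100 possessions).

    team_adj_oe: team's adjusted offensive efficiency
    opp_adj_de:  opponent's adjusted defensive efficiency
    avg_eff:     national average efficiency
    hfa:         home-floor advantage in efficiency points; positive for
                 the home team, negative for the visitor, zero on a
                 neutral floor (an illustrative convention, and the size
                 of the adjustment is an assumption)
    """
    return (team_adj_oe + hfa) * (opp_adj_de + hfa) / avg_eff

# Example: a 115.0 offense at home against a 95.0 defense,
# with a national average of 100.0 and a 1.4-point HFA
print(predicted_off_eff(115.0, 95.0, 100.0, hfa=1.4))  # ~112.2
```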

This formula is used as the basis both for creating the adjusted numbers from the raw numbers and for the individual game predictions shown on kenpom.com.  In general it does a very good job of predicting game efficiencies – there’s the gospel part – but I wondered if it might break down at the extremes.  Why?  Simply because close games demand a different level of effort from a team than blowouts do.  When Kansas played Alcorn State earlier this year, the game was over before it started, and I wouldn’t have blamed a soul on either team if they didn’t give 100% effort that night.  Maybe this evens out in the end – the offense plays at 90% effort, the defense plays at 90% effort, and it cancels out.  But maybe offense is more fun, so players don’t slack off as much at that end.  Or maybe effort matters more to offensive rebounding than to defensive rebounding, so offensive efficiency suffers more than defensive.  At any rate, I wanted to check, so I did what I do: dumped thousands of data points into an Excel spreadsheet and made some pretty charts.

I took the offensive and defensive efficiencies of all the teams from 2009 and grouped them into fifths by percentile rank.  For example, Utah State was ranked 17th in offense but 158th in defense, so their offense fell in the “top 20%” group and their defense in the “average” group.  I then took every game from the 2009 season and calculated the predicted offensive efficiency for each team, using the formula above.  I subtracted that prediction from the actual recorded efficiency to get a value for how much the offense over- or under-performed in that game.  Then I binned those numbers according to the offensive and defensive groups to produce the chart below.
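For anyone who would rather script this than wrangle it in Excel, here is a rough Python sketch of the same pipeline. Every column name is invented and all the data is synthetic, purely for illustration; it only mirrors the shape of the calculation (quintile groups, predicted vs. actual efficiency, residuals binned into a 5×5 table) and omits the home-floor adjustment for brevity:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 2009 data (the real inputs would be
# season-long adjusted ratings and game-by-game efficiencies).
n_teams = 50
teams = pd.DataFrame({
    "team": range(n_teams),
    "adj_oe": rng.normal(100, 8, n_teams),
    "adj_de": rng.normal(100, 8, n_teams),
})

# Quintile labels from percentile rank, mirroring the post's grouping.
labels = ["bottom 20%", "below avg", "average", "above avg", "top 20%"]
teams["off_group"] = pd.qcut(teams["adj_oe"], 5, labels=labels)
# For defense, LOWER efficiency allowed is better, so reverse the labels.
teams["def_group"] = pd.qcut(teams["adj_de"], 5, labels=labels[::-1])

# Fake games: each row is one team's offensive performance in one game.
games = pd.DataFrame({
    "off_team": rng.integers(0, n_teams, 2000),
    "def_team": rng.integers(0, n_teams, 2000),
})
avg_eff = teams["adj_oe"].mean()
games = games.merge(teams[["team", "adj_oe", "off_group"]],
                    left_on="off_team", right_on="team")
games = games.merge(teams[["team", "adj_de", "def_group"]],
                    left_on="def_team", right_on="team")
games["predicted_oe"] = games["adj_oe"] * games["adj_de"] / avg_eff
games["actual_oe"] = games["predicted_oe"] + rng.normal(0, 10, len(games))

# Residual = actual minus predicted, binned by the two quality groups.
games["residual"] = games["actual_oe"] - games["predicted_oe"]
table = games.pivot_table(index="off_group", columns="def_group",
                          values="residual", aggfunc="mean")
print(table.round(2))
```

On real data the pivot table at the end is exactly the chart described next: rows are offense quintiles, columns are defense quintiles, and each cell is the average over- or under-performance in that matchup.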

[Chart: average offensive over-/under-performance vs. the formula’s prediction, in points per 100 possessions, with offense quality groups as rows and defense quality groups as columns]

To read this, look up the quality of the offense on the left, and the quality of the defense at the top – the cell where their respective row and column intersect shows how the offense performed on average, compared to what the efficiency formula predicted (in points per 100 possessions).

You can see a clear pattern – the highest values are in the upper left and lower right corners, with positive values strung out between them, while the other two corners are severely negative.  What this says is that when both the offense and the defense were very good or very bad, the offense did better than you’d expect.  On the other hand, when there was a severe mismatch – either the offense was way better than the defense, or the defense was way better than the offense – the offense did worse than expected. (Keep in mind that the quality of the offense/defense should already be taken into account by the formula).

Why would this pattern exist?  I’m sure it’s a combination of reasons, but I think the main underlying factor might be how close the game is.  Those negative corners should be where the blowouts are concentrated, while the positive areas are between more closely matched teams.  It seems natural that either A) offenses play a little sloppier once a game is lopsided, or B) once the scrubs take the court, their poor shooting and execution lowers offensive efficiency.

Hopefully soon I can take a look at the numbers from other years and see if the pattern is consistent.
