Using statistical evidence to judge defensive performance
Nick Bishop’s recent article on the apparent growing divergence between analysis and statistics was surprising and disappointing. The Leinster analyst – who also worked with Stuart Lancaster for England – uses a single metric to support his claim, so the article really establishes only that there is a growing divergence between analysis and the use of tackle percentage as a viable metric of defensive performance. Indeed, some of Bishop’s own work is testament to how persuasive and effective statistics can be when used alongside technical and visual analysis.
Bishop’s piece is a classic example of sporting scepticism of statistical study, and is of a piece with articles by mainstream journalists who cite a number to support their case while feeling the need to include the standard disclaimer that ‘statistics don’t tell the whole story’. A particular instance of poor statistical analysis does not invalidate the entire field, in the same way that a particular instance of poor tactical analysis does not invalidate that field. If anything, Bishop’s piece is itself an excellent piece of statistical analysis: it uses evidence to dispute the presumed correlation between defensive performance and tackle completion percentage. The next step is to work out which metrics effectively describe different types of defence, and the 2016 Hurricanes are a good case study.
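The test implied by the argument above, checking whether tackle completion percentage actually tracks a defensive outcome, can be sketched as a simple correlation. The figures below are invented purely for illustration (the article cites no per-match data); the point is the method, not the numbers.

```python
import numpy as np

# Hypothetical per-match figures, invented for illustration only.
# tackle_pct: tackle completion percentage in each match.
# points_conceded: points the defence allowed in that match.
tackle_pct = np.array([85.0, 88.5, 91.0, 82.0, 94.0, 87.0, 90.0, 86.5])
points_conceded = np.array([24.0, 10.0, 21.0, 17.0, 27.0, 13.0, 9.0, 20.0])

# Pearson correlation between the metric and the outcome it is supposed
# to reflect. An r close to zero would support the claim that tackle
# percentage, on its own, says little about defensive quality.
r = np.corrcoef(tackle_pct, points_conceded)[0, 1]
print(f"Pearson r = {r:.2f}")
```

With real match data, the same two lines of NumPy would let a reader check any proposed defensive metric against points conceded before trusting it.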