Why a scorecard of quality in the arts is a very bad idea
- Written by Julian Meyrick, Professor of Creative Arts, Flinders University
Foxes were introduced into Australia from Britain in the 19th century for the recreation of faux-English huntsmen. They destroyed dozens of native species. In the 21st century, a parallel is at hand in the export of cultural metrics from Australia to the UK. The impact may be equally damaging.
Culture Counts, developed in Western Australia by the Department of Culture and the Arts, is a dashboard-style data program designed to be used across art forms. It is currently being trialled for wider rollout by Arts Council England. Its aim, according to a Manchester-based pilot, is “a sector-led metrics framework to capture the quality and reach of arts and cultural productions”.
What is proposed is substantial, serious, and no doubt well-intentioned. Unusually for a government-led measurement scheme, arts practitioners as well as policy experts have helped develop it. Yet we at Laboratory Adelaide - an ARC Linkage research project into culture’s value - view the venture with dismay. We argue that the approach is wrong-headed and open to political abuse.
In essence, Culture Counts is a quantitative scorecard for artistic quality: a set of standardised categories that translate verbal descriptions into numbers.
For example, if a surveyed audience can be prompted to say of a cultural experience that “it felt new and different” or “it was full of surprises”, it would rate highly on a 5-point scale for “originality”. That number would then sit on the dashboard beside other numbers for “risk” and “relevance”.
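The mechanics described above can be sketched in a few lines. The dimension names come from the article; the grouping of prompts and the mean-based aggregation are assumptions for illustration, not the actual Culture Counts implementation.

```python
# Illustrative sketch of a dashboard-style quality metric of the kind
# the article critiques. The aggregation method (simple means) is an
# assumption; it is not the Culture Counts system itself.
from statistics import mean

# Each audience member rates prompts on a 5-point scale
# (1 = strongly disagree, 5 = strongly agree), grouped by dimension.
responses = {
    "originality": [[5, 4, 4], [3, 5, 4]],  # e.g. "it felt new and different"
    "risk":        [[4, 3, 5], [2, 4, 3]],
    "relevance":   [[5, 5, 4], [4, 4, 4]],
}

def dashboard(responses):
    """Collapse per-respondent ratings into one number per dimension --
    exactly the flattening of context the article warns about."""
    return {
        dim: round(mean(mean(r) for r in ratings), 2)
        for dim, ratings in responses.items()
    }

print(dashboard(responses))
# {'originality': 4.17, 'risk': 3.5, 'relevance': 4.33}
```

Whatever nuance lay behind each individual response is gone by the time the single headline figure reaches the dashboard.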
The categories are nuanced enough to provide usable feedback for practitioners and bureaucrats with the time and desire to think hard about what the numbers mean. And we understand the pressure cultural organisations face to justify their activities in quantified ways.
But will funders analyse the numbers with care? Will artists resist the temptation to trumpet “a 92 in the ACE metric” any more than vice chancellors have refrained from boasting of their rankings in university league tables?
We think not. A quantitative approach to quality betrays naivety about how people look at dashboard data, privileging a master figure or, at best, two or three figures. Context is lost to the viewer, and the more authoritative a number is presumed to be, the more completely it is lost.
A dread homogeneity
The second problem with a metric for artistic quality is the homogeneity of purpose it implies. A theatre in Leeds, an orchestra in London and a gallery in Loughborough not only do different things in different places, their values are different too. They can be compared, but it requires critical assessment, not numerical scaling.
This view was discussed at length in the UK’s 2008 McMaster Review, Supporting Excellence in the Arts: From Measurement to Judgement. It is to be regretted that the current UK government has failed to heed the advice of this insightful document.
A third problem with the approach is the political manipulation it invites. Metrical scores look objective even when reflecting buried assumptions. If a government decides it wants to support (say) “innovation”, different projects can be surreptitiously graded by that criterion and “failures” de-funded. The following year the desideratum might be “excellence” and a different crunch would occur. Supposition is camouflaged by abstraction and the pseudo-neutrality of quantitative presentation.
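This re-grading risk is easy to demonstrate: the same organisations, scored on the same dimensions, yield a different “failure” depending on which criterion a funder privileges in a given year. The project names and scores below are invented for illustration.

```python
# Illustrative sketch of re-ranking by a shifting criterion. All names
# and scores are hypothetical; the point is that no data changes, yet
# the bottom-ranked "failure" does.
projects = {
    "Theatre A":   {"innovation": 4.6, "excellence": 3.1},
    "Orchestra B": {"innovation": 2.9, "excellence": 4.8},
    "Gallery C":   {"innovation": 3.8, "excellence": 3.9},
}

def ranked(projects, criterion):
    """Order projects from highest to lowest on a single criterion."""
    return sorted(projects, key=lambda p: projects[p][criterion], reverse=True)

print(ranked(projects, "innovation"))  # Orchestra B ranks last
print(ranked(projects, "excellence"))  # Theatre A ranks last
```

In the “innovation” year Orchestra B would be the candidate for de-funding; in the “excellence” year, Theatre A. The numbers look objective; the choice of criterion does the political work.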
Arts Council England’s metrics will be expensive. They will demand time, money and attention from resource-strapped cultural organisations that cannot spare them. Is it worth it? This is a vital point. The introduction of a new quantitative indicator should tell us something we didn’t know before. It is not enough to translate verbal descriptions into numbers as a matter of course; doing so must yield knowledge we did not already have.
If the only answer is “by using numbers we can benchmark cultural projects more easily”, then we have a fourth problem. The incommensurability inherent in concrete instances of creative practice is not something that will be addressed by improving standardised measurement techniques.
In fact, the more sophisticated the Council’s approach becomes, the more its numbers will stand out as two-dimensional. In this way a scorecard of artistic quality is not only misrepresentative, it is self-defeating.
More than other areas of life, art and culture are full of outliers and singularities, things that do not fit easily into standardised categories. A good example is The Record, a recent production at Adelaide’s OzAsia Festival. The Record was a genre-less, story-less dance piece with 45 strangers in the cast who moved around a 40ft square stage in their work clothes, at varying speeds. It had little interest in meeting conventional audience expectations, promoting an explicit message, or displaying visual or choreographic prowess. Yet in its originality and social engagement, it showed a profound sense of human connection.
We doubt it would score well on Arts Council England’s metrics.
“So much, so obvious”, we think at Laboratory Adelaide. A deeper question is why. What’s driving our insatiable desire to quantify things that self-evidently do not lend themselves to enumeration?
Addicted to ratings
One answer is that modern society has become addicted to ratings and rankings that convey a comforting sense of clarity and control (league tables for everything). When a football match is decided by a point, arguments that the margin is not statistically significant hold no water. There is a clear winner and loser. In arts and culture, however, all-too-human processes of judgement lie behind ostensibly impersonal outcomes. This fact gets lost when numbers step forward as a mark of value.
Arts and culture are not the only domains that must be wary of going down the metrics route. For some time, academic research has also been locked in a struggle with dead-souled quantification that reduces it to simplistic aggregates of “outputs” and questionable citation indices.
Read more http://theconversation.com/why-a-scorecard-of-quality-in-the-arts-is-a-very-bad-idea-66685