We must not measure for measuring’s sake

August 18, 2021

Sandy Gould examines the dangers of excessive performance measurement as we move into a hybrid working environment where monitoring technology is likely to see wider use.

Sandy is a Senior Lecturer at the School of Computer Science and Informatics at Cardiff University. He previously worked at the University of Birmingham’s School of Computer Science, where he co-authored research published by Microsoft on remote-working strategies.

Borges’s ‘On Exactitude in Science’ concerns an empire obsessed with producing maps in ever finer detail. Eventually, its cartographers produce a “Map of the Empire whose size was that of the Empire”. It is worth reflecting on this folly when talking about workplace tracking.

Tracking workers with technology isn’t a new phenomenon, but the pandemic may have encouraged organisations to bring surveillance ‘bossware’ into wider use. These tools relay the output of hardware such as cameras, keyboards, and microphones, along with readings from ‘virtual sensors’ such as window switching or time-on-task, to a central location for analysis.
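To make this concrete, below is a minimal sketch of the kind of ‘virtual sensor’ event such a tool might relay to a central collector. The event schema, field names, and values are hypothetical, invented here for illustration rather than drawn from any particular product.

```python
import json
import time
from dataclasses import dataclass, asdict

# A hypothetical 'virtual sensor' event of the kind bossware relays
# for central analysis. All field names and values are illustrative.
@dataclass
class ActivityEvent:
    worker_id: str
    sensor: str       # e.g. "window_switch", "time_on_task", "keyboard"
    value: str
    timestamp: float

def emit(event: ActivityEvent) -> str:
    # A real tool would transmit this to a central collector; here we
    # simply serialise the payload to show what gets sent.
    return json.dumps(asdict(event))

print(emit(ActivityEvent("w-042", "window_switch",
                         "browser -> spreadsheet", time.time())))
```

Even this toy example shows how cheaply such events are produced, and how quickly they accumulate into the haystacks discussed below.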

Like Borges’s mapmakers, we find it easy to believe that recording everything possible yields a more useful picture of reality. Advances in computational statistics mean that we are better than ever at finding needles in haystacks, but that shouldn’t be an invitation to simply pile on the hay.

Collecting excessive amounts of workplace activity data isn’t in keeping with the good practice of data minimisation: collecting only what is necessary and keeping it only as long as needed. It exposes organisations to the risks of holding more personal data than they require. It impinges on workers’ autonomy and privacy. And having mountains of data makes finding that buried needle considerably more difficult, inviting statistically dubious inferences from ‘fishing expeditions’ through datasets.
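The statistical hazard is easy to demonstrate. The sketch below, a hypothetical example using only the standard library, generates 1,000 tracked metrics that are pure noise and tests each against a performance rating; at a conventional significance threshold, roughly fifty will ‘correlate’ by chance alone.

```python
import random
import statistics

random.seed(42)
workers = 50
# A performance rating that is, by construction, random noise.
rating = [random.gauss(0, 1) for _ in range(workers)]

def correlation(xs, ys):
    # Pearson correlation coefficient.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

spurious = 0
for _ in range(1000):            # 1,000 tracked metrics, all pure noise
    metric = [random.gauss(0, 1) for _ in range(workers)]
    if abs(correlation(metric, rating)) > 0.28:  # ~p < 0.05 for n = 50
        spurious += 1

print(f"{spurious} of 1000 random metrics 'correlate' with the rating")
# Expect around 50: about 5% of pure-noise metrics clear the bar by chance.
```

The more metrics an organisation hoards, the more of these phantom ‘findings’ a fishing expedition will surface.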

A deeper concern is ‘construct validity’ – the extent to which a measure captures the thing it claims to measure. There isn’t always a way of measuring what we actually want to know. Instead, we are left with proxy measures: carefully recording the shadows cast by the actual phenomenon. Given the diversity of organisational activity, how likely is it that a ‘productivity score’ generated from measures like email response time has construct validity for productivity? There is a risk that the tail begins to wag the dog and that we choose measures simply because they can be taken. As James C. Scott noted in ‘Seeing Like a State’, what we can easily and repeatedly measure becomes the thing that is valued. This is not the same as saying it ought to be valued.
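To see how easily a proxy diverges from its construct, consider a hypothetical ‘productivity score’ built from exactly the kinds of measures described above. Every input, weighting, and name here is invented for illustration; no real product is implied.

```python
from statistics import mean

def productivity_score(email_response_minutes, keystrokes_per_hour,
                       active_window_minutes):
    # Faster replies, more typing, more 'active' time => a higher score.
    responsiveness = 100 / (1 + mean(email_response_minutes))
    activity = keystrokes_per_hour / 100
    presence = active_window_minutes / 60
    return responsiveness + activity + presence

# A worker who spends the morning thinking through a hard design problem:
print(productivity_score([240], keystrokes_per_hour=300,
                         active_window_minutes=90))    # ~4.9
# A worker who answers email instantly while producing nothing durable:
print(productivity_score([2], keystrokes_per_hour=4000,
                         active_window_minutes=480))   # ~81.3
```

The score rewards the second worker handsomely: it has measured visible busyness, the shadow, rather than the construct of productivity itself.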

These are epistemological questions – questions about how knowledge is created and used; in this case, how organisations build knowledge about themselves. Does the system you’re being sold measure what you need to measure? Or does it measure only what its constraints permit it to measure? A better alternative might be to ask individual workers and their teams what kinds of things they need to track, and to give them tools for creating bespoke measures that better support their activities. After all, if it were so easy to operationalise these aspects of work, you’d probably have automated them already.