What counts?

After hating the book of the moment, Shoshana Zuboff’s much-praised Surveillance Capitalism, perhaps it underlines my contrariness if I tell you how much I loved my latest read, a book about classification. It was Sorting Things Out by Geoffrey Bowker and Susan Star, quite old now (1999). I can’t remember how I stumbled across it, but it absolutely speaks to my preoccupation with the fact that we see what we count, and not the other way around.

The book investigates the confluence of social organisation, ethics and technologies of record-keeping as manifest in the establishment of systems of classification and standards. The examples it uses are medical systems such as diagnostic manuals, but the arguments apply more broadly. The point it makes about the role of record-keeping technologies reminded me of a terrific book I read last year, Accounting for Slavery by Caitlin Rosenthal, which explored the role of commercially produced record books in the managerialism of large slave plantations in the US. The argument that a classification system lends the authority of something seemingly technocratic to highly political or ethical choices echoes Tom Stapleford’s wonderful book The Cost of Living in America.

As Bowker and Star point out, classification systems shape people’s behaviour. They come to seem like natural rather than constructed objects. They also fix perceptions of social relations, as a classification framework or set of standards “[M]akes a certain set of discoveries, which validate its own framework, much more likely than an alternative set outside the framework.” To switch frameworks requires overcoming a bootstrapping problem – you can’t demonstrate that a new one is superior because you don’t yet have the units of data on which it relies. People can’t see what they take for granted until there is an alternative version not taking the same things for granted.

And, although this book was written early in the internet era, the authors note that “Software is frozen organisational and policy discourse” – as we are learning with the burgeoning debate about algorithmic accountability. The essential ambiguity of politics is impossible to embed in code. The big data and AI era will force some of the fudged issues into the open.

An unpopular confession

It isn’t often I give up on a book, still less one that has arrived garlanded with praise, and which I’m predisposed to agree with. However, I can’t manage another word of Shoshana Zuboff’s Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.

From what I’d gathered from early reviews, her argument is that Google and Facebook have become too big, deploy great power unaccountably, pose grave threats to democracy and accumulate for profit masses of data about all the individuals using their services. It’s rather hard to disagree with this, and indeed legislators and regulators around the world are gearing up to respond, albeit more slowly than would have been ideal. Only this month Germany’s Bundeskartellamt banned Facebook from connecting data about individuals from different sources, and India forbade Amazon to sell its own products on its platform. Those with powers to fine and to put people in prison are coming for the digital usurpers.

Having read a few chapters, this is still what I take the argument to be. You just wouldn’t know it from the extraordinarily impenetrable language. For example:

“Surveillance capitalism rules by instrumentarian power through its materialization in Big Other, which, like the ancient tyrant, exists out of mankind while paradoxically assuming human shape.”

Or:

“As for the Spanish Data Protection Agency and the European Court of Justice, the passage of time is likely to reveal their achievements [with regard to the right to be forgotten] as stirring an early chapter in the longer story of our fight for a third modern that is first and foremost a human future, rooted in an inclusive democracy and committed to an individual’s right to effective life. Their message is carefully inscribed for our children to ponder: technological inevitability is as light as democracy is heavy, as temporary as the scent of rose petals and the taste of honey are enduring.” [italics hers]

There are over 500 pages of this, and it was too much when I found myself having to read everything several times to work out the meaning. What’s more, there are some analytical lacunae – no Bentham, no Foucault in a book about surveillance? And Zuboff clearly believes what the digital titans claim about the effectiveness of their data gathering in selling to us; yet we’ve all had the experience of being followed by ads for the thing we just bought, or being creeped out by evidence of such joining up in a way that will make us never shop at a certain outlet again. They have become conduits for alarming shifts in people’s beliefs and behaviour, for sure, but in an accidental way. I don’t know if it would be more or less scary if they were actually in control of the social trends they’ve unleashed.

Mine is a minority view, as all the reviews of the book I’ve seen have been almost adulatory. No doubt you should believe those who had more patience than me and read the whole damn thing.

The globots are coming

Richard Baldwin’s latest book, The Globotics Revolution, is a terrific primer on two trends promising to disrupt the world of middle class work in the rich economies. One is competition from ‘Remote Intelligence’, or in other words a tidal wave of talent in countries such as China and India increasingly well able to compete with better-paid professionals in the OECD. The other is competition from AI, increasingly well able to compete with etc etc. Combine more globalisation and robotics and you get the ‘globotics’ of the title (a terrible word, but never mind). The book argues that the combination is something new and significant in scale, more than just a bit more of existing trends.

The bulk of the book considers each of the two elements in turn, providing excellent, accessible summaries of the economic research and the projections of the likely impact on work. Some of the forces identified may not manifest as fast as expected – the spread of autonomous vehicles, for instance. The book is also more gung-ho about the continuation of Moore’s Law than many others who pay close attention to the computer industry.

I’m also a little sceptical about the extent to which remote workers will substitute for highly paid professionals, mainly because there is something separately valuable in the know-how and experience gained from face-to-face contact in specific places. With hindsight, it was a mistake for so much manufacturing to be offshored because of the loss of engineering know-how (see for example this great article by Gregory Tassey); this will be truer still in services. Mancur Olson’s point in Big Bills Left on the Sidewalk – that an immigrant to a rich country from a poor one becomes more productive overnight because of the social and physical capital around them in their new environment – applies.

Even so, the bottom line is that job disruption at the lesser and slower end of the range of possibilities will still have a profound impact on people’s livelihoods. We should be getting prepared. Baldwin argues that there is no mystery about the policies needed. He argues for Denmark-style flexicurity, with ease of being fired compensated by significant transitional funding and training – or even for slowing down the pace of change by making it harder to fire people (despite the evidence this contributes to high unemployment rates). With the need to prepare – and to implement far more effective policies than was the case in the earlier phases of deindustrialisation and automation – it’s surely impossible to disagree.

The book ends on an oddly positive note, given the jobs-ocalypse it predicts: “I am optimistic about the long run.” In the very long term it foresees an economy centred on the things machines (and, I guess, offshore workers) cannot do: a more local, more human and (thanks, robots!) more prosperous society. “Our work lives will be filled with far more caring, sharing, understanding, creating, empathizing, innovating and managing …. The sense of belonging to a community will rise and people will support each other.” This is wonderfully upbeat, a world where machines do all the drudge work and humans brew craft beer and care for each other. It’s hard to see how to get there from today’s fractious world, where the absence of a sense of community is pretty manifest in many places and only the few can afford the craft beer. I hope he’s right, though.

Agree with the book’s rosy long-term vision or not, it’s a thorough introduction to the economic debates about globalization and automation, and the forces that are going to change our world in the next few decades, populist backlash or no.

Vision and serendipity

As the year hurtles toward its end, and what looks sure to be a tumultuous 2019, I’ve been retreating under the duvet with Mitchell Waldrop’s The Dream Machine, published in a handsome edition by Stripe Press. The book is a history of the early years of the computer industry in the US, centred around JCR Licklider and his vision of human-computer symbiosis.

It therefore has quite a narrow focus, being a detailed history of the people involved in a small slice of the effort that went into creating today’s connected, online world. Licklider played a decisive role at DARPA in prompting and funding the creation of the Arpanet and hence ultimately the Internet. I got quite caught up in the detail – the triumphs and setbacks of particular researchers, their job moves, who fell out with whom, and so on. (Better than the painful minutiae of our Brexit humiliation, for sure.)

One of the striking aspects of the tale is how serendipitous the outcomes were. There are some popular Whig interpretations of digital innovation, as if the creation of the personal computer, GUI, Internet etc were purposive. It wasn’t like that at all. Licklider for sure had a vision. It might or might not have worked. It was sort of chance that he ended up in DARPA with his hands on a suitable budget to fund the networking. It certainly wasn’t an intentional US government industrial strategy, as some accounts would have it. The Dream Machine was a Heath Robinson contraption. There are lessons in such histories both for scholars of innovation and for would-be industrial strategists.

Humanity’s future…

I read On the Future: Prospects for Humanity on the train back from the Festival of Economics. (See the #EconomicsFest hashtag – recordings will go online soon.) This short and compelling book by Martin Rees, the Astronomer Royal (and a Cambridge colleague), was a bit of a dampener on my good cheer. Our prospects are not great. It turns out that the risk of a large asteroid causing mass extinction is one of the lesser worries about our future. Other existential risks have a higher probability with the same mass death/end of civilisation impact.

Take biotech terrorism: “Whatever can be done will be done by someone, somewhere,” the book calmly states in passing. Even more exotically, another “scary possibility is that the quarks [produced by high energy physics experiments] would reassemble themselves into compressed objects called strangelets. That in itself would be harmless. However, under some hypotheses, a strangelet could, by contagion, convert anything else it encountered into a new form of matter, transforming the entire Earth into a hyperdense sphere about a hundred meters across.” I gather this is a remote prospect indeed, but it takes some of the gloss off the Large Hadron Collider. Strangelets, eh.

The early scientists (natural philosophers, as they called themselves) were considered ‘merchants of light’ yet science and technology have come to seem pretty scary. This book is a perfect antidote to worrying about Brexit or Donald Trump or neo-fascism, as it offers so many much bigger problems to worry about.

It tries to strike a positive note by pointing out that science and tech offer potential solutions too. Martin Rees thus ends by calling for scientists to engage more with philosophy. I think they should be engaging more with social scientists. The barriers to taking action to safeguard humanity from the devastating effects of climate change or AI are not mainly about science and technology, but rather about what people believe and how they behave.
