Tech self-governance

The question of how to govern and regulate new technologies has long interested me, including in the context of a Bennett Institute and Open Data Institute report on the (social welfare) value of data, which we'll be publishing in a few days' time. One of the pressing issues in crystallising the positive spillovers from data (so much of the public debate focuses only on the negative spillovers) is the development of trustworthy institutions to handle access rights. We're going to be doing more work on the governance of technologies, taking a historical perspective – more on that another time.

Anyway, this interest made me delighted to learn – chatting to him at the annual TSE digital conference – that Stephen Maurer had recently published Self-governance in Science: Community-Based Strategies for Managing Dangerous Knowledge. It’s terrifically interesting & I recommend it to anyone interested in this area.

The book looks at two areas, commerce and academic research, in two ways: through historical case studies and through economic theory. There are examples of success and of failure in both the commercial and academic worlds, and the economic models summarise the characteristics that determine whether or not self-governance can be sustained.

So for instance in the commercial world, food safety and sustainable fisheries standards have been adopted and largely maintained through private governance initiatives and mechanisms; synthetic biology much less so, with an alphabet soup of competing standards. Competitive markets are not well able to sustain private standards, Maurer suggests: "Competitive markets can only address problems where society has previously attached some price tag to the issue." Externalities do not carry these price tags. Hence supply chains with anchor firms are better able to bear the costs of compliance with standards – the big purchasing firm can require its suppliers to adhere.

Similarly, in the case of academic science the issue is whether there are viable mechanisms to force dissenting minorities to adhere to standards such as moratoria on certain kinds of research. The case studies suggest it is actually harder to bring about self-governance in scientific research, as the sanctions available are weaker than the financial ones at play in the commercial world. Success hinges on the community having a high level of mutual trust, and sometimes on the threat of formal government regulation. The book offers some useful strategies for scientific self-governance, such as building coalitions of the willing over time (small-p politics) and co-opting the editors of significant journals – as the race to publish first is so often the reason research moratoria fail to last.

The one element I thought was largely missing from the analytical sections was the extent to which the character of the technologies or goods themselves affects the likelihood of successful self-governance. This is one aspect that has come up in our preparatory work – the cost and accessibility of different production technologies. The book's analysis focuses instead on the costs of implementing standards, and on monitoring and enforcement.

This is a fascinating book, including the case studies, which range from atomic physics to fair trade coffee. It isn’t intended to be a practical guide (& the title is hardly the airport bookstore variety) but anybody interested in raising standards in supply chains or finding ways to manage the deployment of new technologies will find a lot of useful insights here.


Humans in the machine

There's some very interesting insight into the human workforce that makes the digital platforms work in Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary Gray and Siddharth Suri. The book as a whole doesn't quite cohere, though, nor deliver on the promise of the subtitle. The bulk of the book draws on interviews and surveys of people who work via platforms like Amazon's famous Mechanical Turk, but also the internal Microsoft equivalent, UHRS, and a smaller social enterprise version, Amara.

This is all extremely interesting, about how people work – in the US and Bangalore – their tactics for making money, dealing with stress, how many hours they have to work and when, how much or little agency they have, and so on. Not least, it reminds or informs readers that a lot of AI is based on the labelling done by humans to create training data sets. However, not all the ghost work described is of this kind and some, indeed, has little to do with Silicon Valley except that a digital platform mediates between the employer and the seeker of work. As the authors note, this latter type is a continuation of the history of automation, the role of new pools of cheap labour in industrial capitalism, and the division of labour markets into privileged insiders and contingent – badly paid, insecure – outsiders. The new global underclass is just one step up from the old global underclass; at least they have a smartphone or computer and internet access.

The survey results confirm that some of the digital ghost workers value the flexibility they get quite highly – although with a wide variance in the distribution. Not surprisingly, those with the least pressing need for income value the flexibility most. Some of the women workers in India also valued the connection to the labour market when they were unable to work outside the home because of childcare or family expectations. Similarly, with the Amara platform, "Workers can make ghost work a navigable path out of challenging circumstances, meeting a basic need for autonomy and independence that is necessary for pursuing other interests, bigger than money."

The book's recommendations boil down to platforms introducing double bottom line accounting – in other words, finding a social conscience alongside their desire for profit. Without a discussion of their (lack of) incentives to do so, this is a bit thin. Still, the book is well worth reading for the fascinating anthropological insights from the field work, and for the reminder about the humans in the machine.


Varieties of neoliberalism

I've polished off Angus Burgin's The Great Persuasion, recommended to me by a colleague after Daniel Stedman Jones's Masters of the Universe cropped up in conversation. I read the latter when it came out in 2012. It made a strong impression on me with its account of how the climate of ideas conducive to Thatcherism and Reaganism was created as a conscious long-term project by a number of free market think tanks on both sides of the Atlantic, as well as some leading economists.

Of these, Milton Friedman features as the key figure in The Great Persuasion. This is an account of the same political project, told through the evolution of the Mont Pelerin Society specifically. At its formation in 1947, it was both firmly interdisciplinary, fostering a debate about society's moral values as well as its economic organisation, and resigned to the interventionist climate of the day. The Victorian laissez faire version of liberal markets was firmly rejected. The aim was to ensure there remained space for market forces in the context of the Keynesian, planification spirit of the times.

However, as the postwar decades went by, and founding figures moved on (in some cases with acrimony), Milton Friedman's vision came to predominate. The society turned into an economics-only shop. Conservative foundations funded ever more free market research. The restrained neoliberalism of the 1950s – so named as a contrast to laissez faire liberalism – gave way to the harsher, 'neoliberal' version of the 1980s onwards.

Two things particularly struck me in reading the book. One was the importance of patient funders, not demanding short term political 'impact', but instead understanding that the wider intellectual climate needed to change and that they would be in for the long haul. The other was the conviction of both Hayek and Friedman that ideas can move mountains. Burgin writes: "Friedman maintained a relentless faith in the ability of unpopular ideas to gain recognition and over the course of decades to effect political change." I share the view that ideas are powerful but am less sure it always takes decades: people seem able to jump from one moral universe to another relatively quickly, as we have seen in other contexts.


Coffee table economics

I've been enjoying paging through Steven Medema's The Economics Book: From Xenophon to Cryptocurrency, 250 Milestones in the History of Economics, not least because it has lots of lovely pictures. It's a history of economic concepts that runs from Hesiod in 700 BCE to cryptocurrencies in 2009. Each entry has a beautiful illustration, no mean feat when it comes to illustrating Dynamic Stochastic General Equilibrium (a photo of the ECB), National Income Accounting (women washing dishes at home and hence not contributing to GDP), or Utilitarianism (Jeremy Bentham's catalogue of the different sources of pleasure and pain). The pictures make it exactly the kind of book you'd be happy to have on the coffee table, but it's more than that: the selection of concepts and the capsule explanations make it a useful starting point for people who've maybe come across the terms, or think they ought to know something about economics but have no idea where to start. They can start here without embarrassment (Hicks-Allen consumer theory, the School of Salamanca, the Shapley value….) and follow up elsewhere. It's also a bargain – get the hardback, not the Kindle version.



The wreck of welfare economics

I just re-read Will Baumol's Welfare Economics and the Theory of the State, first published in 1952 and based on his doctoral dissertation, with a second edition in 1965. I started mulling over welfare economics while writing my latest book, Markets, State and People: Economics for Public Policy (OUT THIS WEEK – TA-DAH!).


This is the area of economics concerned with the question of what it means for society to be better off. As a branch of theory, welfare economics is highly abstract and mathematical, covering the existence of a general equilibrium, its optimality properties (and the extent to which these are delivered by the market economy), and the various impossibility theorems about aggregating individual utilities into social welfare. As a matter of practice, some hazy sense of all this theory lies behind policies such as the use of cost-benefit analysis. In writing the book, I became increasingly and uncomfortably aware of the theory-practice gap.

Better economists than me were on to this earlier. In 2001 the late, great Tony Atkinson wrote a powerful article noting the 'strange disappearance of welfare economics', largely ignored since the 1970s – unfortunately it was published in a journal (Kyklos) unknown to many economists. As it turns out, Baumol skewered the basic problem in this book. "Mathematical manipulation can yield no more than is contained in the premises which are being examined. Walras [in his work on general equilibrium], by assuming that every individual independently sought his own ends, obtained mathematical statements which amounted to the not excessively surprising assertion that every individual did as well for himself as possible under the circumstances."

In other words, if you assume individuals are wholly independent, you conclude that the optimal economic organisation simply requires individual decision-making. But as Baumol points out in the final chapter, titled The Wreck of Welfare Economics, any brush with empirical reality underlines the interdependence of both production and consumption decisions. His conclusion: if economics is to say anything of practical use about economic progress, we need to start filling – with both theory and empirics – the currently empty boxes with labels like ‘externalities’ and ‘increasing returns’.
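To make the logic concrete, here is a stripped-down sketch in standard textbook notation (mine, not Baumol's): suppose consumer 2's utility depends not only on her own consumption but also on consumer 1's – a consumption externality:

\[
U_1 = u_1(x_1), \qquad U_2 = u_2(x_2, x_1).
\]

Acting independently, consumer 1 consumes up to the point where private marginal utility equals the price, \( u_1'(x_1) = p \). A planner maximising \( U_1 + U_2 \) would instead require

\[
u_1'(x_1) + \frac{\partial u_2}{\partial x_1} = p,
\]

and the two coincide only if the cross-effect \( \partial u_2 / \partial x_1 \) is zero – that is, only under the very independence assumption Baumol identifies as doing all the work in the Walrasian result.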