How do we know if things have got better?

As someone interested in public policy – from the practitioner's perspective as well as for research and teaching – thinking about social welfare is at the heart of matters for me. What does it mean for a government intervention or other collective action to 'make things better' – for whom, and by how much? I think economists have been pretty poor at addressing these questions, despite past waves of interest in welfare economics – the last of them quite some time ago.

So I pounced on Matthew Adler's Measuring Social Welfare: An Introduction when my colleague Anna Alexandrova pointed it out to me. The book does a thorough job of setting out, in a not too technical manner, how to apply a social welfare approach to public choices. Adler advocates a 'welfare-consequentialist' approach: decisions need to be evaluated according to their outcomes, which depend on the pattern of individual well-being outcomes. He also argues that interpersonal comparisons of well-being – and hence rules for ranking distributions of it – are essential. The inability of cost-benefit analysis – ranking policies by summing the monetary equivalents of individuals' well-being – to take account of distributional issues, and of the differing marginal utilities of income and consumption at different income levels, makes it inadequate. I agree: CBA pretends policy decisions can be non-normative, which is clearly incorrect (and what's more, this pretence has had significant distributional and hence ethical consequences).
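
To see the contrast in stylised form – my own notation, not Adler's exact formalism – CBA ranks a policy by the unweighted sum of individuals' money-metric gains, whereas a prioritarian social welfare function of the kind Adler favours applies a strictly concave transformation to well-being, so that a given gain counts for more the worse off its recipient:

\[
\text{CBA:}\quad \sum_{i} \Delta m_i
\qquad\text{versus}\qquad
\text{SWF:}\quad \sum_{i} g(w_i),\quad g' > 0,\ g'' < 0
\]

Here \(\Delta m_i\) is person \(i\)'s monetary equivalent of the policy's effect and \(w_i\) their resulting well-being level; the concavity of \(g\) is what builds distributional weight into the ranking.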

The book spares the reader the algebra of social choice theory but remains quite theoretical; later chapters do include some examples, looking at regulations that aim to limit risks (such as health and safety rules or pollution rules). I'm still thinking through what Adler's social welfare approach might mean in practice, though. It is a long way from being translatable into a set of policy rules like the Treasury's Green Book. Nevertheless, this is an important read if economists are going to renew welfare economics.


Tech self-governance

The question of how to govern and regulate new technologies has long interested me, including in the context of a Bennett Institute and Open Data Institute report on the (social welfare) value of data, which we'll be publishing in a few days' time. One of the pressing issues in crystallising the positive spillovers from data (so much of the public debate focuses only on the negative ones) is the development of trustworthy institutions to handle access rights. We're going to be doing more work on the governance of technologies, taking a historical perspective – more on that another time.

Anyway, this interest made me delighted to learn – chatting to him at the annual TSE digital conference – that Stephen Maurer had recently published Self-governance in Science: Community-Based Strategies for Managing Dangerous Knowledge. It’s terrifically interesting & I recommend it to anyone interested in this area.

The book looks at two areas, commerce and academic research, in two ways: through historical case studies and through economic theory. There are examples of success and of failure in both the commercial and academic worlds, and the economic models summarise the characteristics that explain whether or not self-governance can be sustained.

So, for instance, in the commercial world food safety and sustainable fisheries standards have been adopted and maintained largely through private governance initiatives and mechanisms; synthetic biology much less so, with an alphabet soup of competing standards. Competitive markets are not well able to sustain private standards, Maurer suggests: "Competitive markets can only address problems where society has previously attached some price tag to the issue." Externalities do not carry these price tags. Hence supply chains with anchor firms are better able to bear the costs of compliance with standards – the big purchasing firm can require its suppliers to adhere.

Similarly, in the case of academic science the issue is whether there are viable mechanisms to force dissenting minorities to adhere to standards such as moratoria on certain kinds of research. The case studies suggest it is actually harder to bring about self-governance in scientific research, because the sanctions available are weaker than the financial ones at play in the commercial world. Success hinges on the community having a high level of mutual trust, and sometimes on the threat of formal government regulation. The book offers some useful strategies for scientific self-governance, such as building coalitions of the willing over time (small-p politics) and co-opting the editors of significant journals – the race to publish first being so often the reason research moratoria fail to last.

The one element I thought was largely missing from the analytical sections was the extent to which the character of the technologies or goods themselves affects the likelihood of successful self-governance. This is one aspect that has come up in our own preparatory work – the cost and accessibility of different production technologies. The book's analysis focuses instead on the costs of implementing standards, and on monitoring and enforcement.

This is a fascinating book, with case studies ranging from atomic physics to fair trade coffee. It isn't intended to be a practical guide (& the title is hardly of the airport bookstore variety) but anybody interested in raising standards in supply chains, or in finding ways to manage the deployment of new technologies, will find a lot of useful insights here.


Humans in the machine

There’s some very interesting insight into the human workforce making the digital platforms work in Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass by Mary Gray and Siddarth Suri. The book as a whole doesn’t quite cohere, though, nor deliver on the promise of the subtitle. The bulk of the book draws on interviews and surveys of people who work via platforms like Amazon’s famous Mechanical Turk, but also the internal Microsoft equivalent, UHRS, and a smaller social enterprise version, Amara.

This is all extremely interesting: how people work – in the US and Bangalore – their tactics for making money, how they deal with stress, how many hours they have to work and when, how much or how little agency they have, and so on. Not least, it reminds or informs readers that a lot of AI is based on the labelling done by humans to create training data sets. However, not all the ghost work described is of this kind, and some, indeed, has little to do with Silicon Valley except that a digital platform mediates between the employer and the seeker of work. As the authors note, this latter type is a continuation of the history of automation, of the role of new pools of cheap labour in industrial capitalism, and of the division of labour markets into privileged insiders and contingent – badly paid, insecure – outsiders. The new global underclass is just one step up from the old global underclass; at least they have a smartphone or computer and internet access.

The survey results confirm that some of the digital ghost workers value the flexibility they get reasonably highly – although with quite a high variance in the distribution. Not surprisingly, those with the least pressing need for income value the flexibility most. Some of the women workers in India also valued the connection to the labour market at times when they were unable to work outside the home because of childcare or family expectations. Similarly, with the Amara platform, “Workers can make ghost work a navigable path out of challenging circumstances, meeting a basic need for autonomy and independence that is necessary for pursuing other interests, bigger than money.”

The book’s recommendations boil down to urging platforms to adopt double bottom line accounting – in other words, to find a social conscience alongside their desire for profit. Without a discussion of their (lack of) incentives to do so, this is a bit thin. Still, the book is well worth reading for the fascinating anthropological insights from the fieldwork, and for the reminder about the humans in the machine.


Varieties of neoliberalism

I’ve polished off Angus Burgin’s The Great Persuasion, recommended to me by a colleague after Daniel Stedman Jones’s Masters of the Universe cropped up in conversation. I read the latter when it came out in 2012, and it made a strong impression on me with its account of how the climate of ideas conducive to Thatcherism and Reaganism was created as a conscious long-term project by a number of free market think tanks on both sides of the Atlantic, as well as some leading economists.

Of these, Milton Friedman features as the key figure in The Great Persuasion. This is an account of the same political project, told specifically through the evolution of the Mont Pelerin Society. At its formation in 1947, the Society was both firmly interdisciplinary, fostering a debate about society’s moral values as well as its economic organisation, and resigned to the interventionist climate of the times. The Victorian laissez faire version of liberal markets was firmly rejected; the aim was to ensure there remained space for market forces amid the Keynesian, planification-minded spirit of the age.

However, as the postwar decades went by and founding figures moved on (in some cases with acrimony), Milton Friedman’s vision came to predominate. The Society turned into an economics-only shop, and conservative foundations funded ever more free market research. The restrained neoliberalism of the 1950s – so named in contrast to laissez faire liberalism – gave way to the harsher ‘neoliberalism’ of the 1980s onwards.

Two things particularly struck me in reading the book. One was the importance of patient funders, not demanding short term political ‘impact’, but instead understanding that the wider intellectual climate needed to change and they would be in for the long haul. The other was the conviction of both Hayek and Friedman that ideas can move mountains. Burgin writes: “Friedman maintained a relentless faith in the ability of unpopular ideas to gain recognition and over the course of decades to effect political change.” I share the view that ideas are powerful but am less sure it always needs decades: people seem able to jump from one moral universe to another relatively quickly, as we have seen in other contexts.


Coffee table economics

I’ve been enjoying paging through Steven Medema’s The Economics Book: From Xenophon to Cryptocurrency, 250 Milestones in the History of Economics, not least because it has lots of lovely pictures. It’s a history of economic concepts that runs from Hesiod in 700 BCE to cryptocurrencies in 2009. Each entry has a beautiful illustration, no mean feat when it comes to illustrating Dynamic Stochastic General Equilibrium (a photo of the ECB), National Income Accounting (women washing dishes at home, and hence not contributing to GDP), or Utilitarianism (Jeremy Bentham’s catalogue of the different sources of pleasure and pain). The pictures make it exactly the kind of book you’d be happy to have on the coffee table, but it’s more than that: the selection of concepts and the capsule explanations make it a useful starting point for people who have come across the terms, or who think they ought to know something about economics but have no idea where to start. They can start here without embarrassment (Hicks-Allen consumer theory, the School of Salamanca, the Shapley value…) and follow up elsewhere. It’s also a bargain – get the hardback, not the Kindle version.
