Prospects for the Swedish model

There’s an interesting new book, Digitalization, Immigration and the Welfare State, by Marten Blix of Sweden’s Research Institute of Industrial Economics. It brings together two deep trends, technology and immigration, in the context of the relatively rigid labour market structures of Sweden and some other European countries. Blix asks: what are the implications for the welfare state, the high-tax, high-spend social contract? He argues that the combined trends are increasing inequality, and that longstanding social support for redistribution and high taxation is eroding. Sweden has been at the forefront of both trends. It ranks high on measures of digitization, and has taken in more refugees per capita than most other European countries. It has consequently had one of the biggest increases in income inequality in the OECD (though the level of inequality is still relatively low – similar to Canada or Germany).

Ultimately, the book suggests the Swedish model of social democracy can potentially survive, thanks to the country’s high productivity and high initial levels of social capital. Sweden’s public finances are also in better shape than those of many other countries. However, it certainly doesn’t look like an easy path. Absorbing the new immigrants will require a focus on enhancing their skills – and also those of existing residents. One prescription is reducing the rigidities in the labour and housing markets. Another area where greater flexibility will be needed is accommodating the increase in work – via digital platforms, for instance – outside traditional collective wage bargaining. Some Swedish unions are apparently working to establish employment standards on the digital platforms.

As the book concludes, however, the obstacles to the reinvention of the Swedish model – or any other social contract – are not economic but political. Economists often talk of the need for ‘structural reform’ when this is code for ‘politically bloody difficult.’ Immigration makes the politics harder, Blix argues: “Sweden is no longer the homogeneous country it used to be and the social contract holding people together is at risk of disintegrating.” All the more dangerous, then, he says, to pretend everything is fine and nothing needs to change. The newcomers have to be brought into the fold, or the future of the Swedish model looks to be in doubt.

Much of this debate is of course familiar to those of us who know the UK and US economies better, as is the kind of political lunge to the populist right or left that accompanies these tech and migration trends. It’s interesting to read about the challenges in the context of a country that has so long been an admired model for the centre left (and even some of the centre right). I accept that it’s essential to try the kind of policy response the book suggests, hard as that is, given the do-nothing alternative. But it’s quite hard to feel optimistic these days. Even Sweden!


It’s what happens after innovation that matters for productivity

Having been guiltily reading a thriller or two, as well as David Olusoga’s Black and British, this is a brief post about an economics paper I’ve read, Paul David on Zvi Griliches and the Economics of Technology Diffusion. (Zvi was one of my econometrics teachers at Harvard, a very nice man who was still so obviously brilliant that he was a bit scary. He would ask a question which might be completely straightforward but one would have to scrutinise it carefully before answering, just in case.) Anyway, the Paul David paper is a terrific synopsis of three areas of work which are implicitly linked: how technologies diffuse in use; lags in investment, as new technologies are embodied in capital equipment or production processes; and multifactor productivity growth.

As David writes here: “The political economy of growth policy has promoted excessive attention to innovation as a determinant of technological change and productivity growth, to the neglect of attention to the role of conditions affecting access to knowledge of innovations and their actual introduction into use. The theoretical framework of aggregate production function analysis, whether in its early formulation or in the more recent genre of endogenous growth models, has simply reinforced that tendency.” He of course has been digging away at the introduction into use of technologies since before his brilliant 1990 ‘The Dynamo and the Computer’. Another important point he makes here is that there has been little attention paid to collecting the microdata that would permit deeper study of diffusion processes, not least because the incentives in academic economics do not reward the careful assembly of datasets.

By coincidence, the paper concludes with a description of a virtuous circle in innovation whereby positive feedback to revenues and profits from a successful innovation lead to both learning about what customers value and further investment in R&D. Here is the diagram from the paper.

This was exactly the argument made yesterday at a Bank of England seminar I attended by Hal Varian (now chief economist at Google, known to all economics students as the author of Microeconomic Analysis and Intermediate Microeconomics, and also, with Carl Shapiro, of Information Rules, still one of the best texts on digital economics). Varian argued there are three sources of positive feedback: demand-side economies of scale (network effects), classic supply-side economies of scale often arising from high fixed costs, and learning-by-doing. He wanted to make the case that there are no competition issues for Google, and so suggested that (a) search engines are not characterised by indirect network effects, because search users don’t care how many advertisers are present; (b) fixed costs have vanished – even for Google-sized companies – because of the cloud; (c) experience is a good thing, not a competitive barrier, and anyway becomes irrelevant when a technological jump causes an upset, as when Facebook toppled MySpace. I don’t think his audience shed its polite scepticism. Still, the argument that learning-by-doing acts as a positive feedback mechanism is interesting.
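The learning-by-doing loop is easy to see in a toy model. The sketch below is my own illustration – the functional forms and parameter values are arbitrary assumptions, not taken from David’s paper or Varian’s talk: quality improves with cumulative experience, demand responds to quality, and each feeds the other.

```python
import math

# A toy positive-feedback loop: demand responds to product quality, and
# quality improves with cumulative experience (learning-by-doing).
# All numbers and functional forms are illustrative assumptions.
def simulate(periods=20, learning_rate=0.05, base_demand=100.0):
    quality = 1.0
    cumulative_users = 0.0
    history = []
    for _ in range(periods):
        users = base_demand * quality           # demand rises with quality
        cumulative_users += users
        # learning-by-doing: quality grows with the log of cumulative use
        quality = 1.0 + learning_rate * math.log(1.0 + cumulative_users)
        history.append((users, quality))
    return history

history = simulate()
# usage and quality both rise period after period: a virtuous circle
assert all(b[0] > a[0] for a, b in zip(history, history[1:]))
assert all(b[1] > a[1] for a, b in zip(history, history[1:]))
```

With a concave (logarithmic) learning curve the loop reinforces itself without exploding; make the feedback stronger and the incumbent’s advantage compounds much faster.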

The social life of electricity, continued

I devoured Then There Was Light: Stories Powered by the Rural Electrification Scheme in Ireland on my train journey back from the Bristol Festival of Economics yesterday. It’s a delightful collection of reminiscences about this scheme taking electrification to the countryside (ie most of Ireland) during the 1940s and into the 1950s. In fact, coverage only reached 100% of all the remote islands in 2000, and some hamlets were eventually depopulated as electricity never got to them.

The book is a reminder of what a poor and agrarian country Ireland was until, well, EU accession really. The essays by men who worked digging post holes and driving trucks often start with the relief they felt on getting not only a job, but one with a government body paying decent wages. Being hired for the work changed their lives. Most are written by people who spent at least part of their childhood in darkness lit only by candles and paraffin lamps, with mothers doing all the laundry by hand. One writer calls the scheme one of the most transformative events in Ireland’s 20th-century history as a nation, and by the end of the book this doesn’t seem like hyperbole.

It’s fascinating to read about the hesitations some people had: about the cost, not only of getting hooked up in the first place, but also of the commitment to ongoing payments for power (for many new customers, this was the first regular bill they had ever faced); about the dangers – would it harm the cattle or burn down the house?; and about whether it would change the character of life for the worse. Parish priests were often key advocates for electrification, and so were women, who quickly saw the potential benefits of electrical domestic appliances. The company also had a PR and sales team – the book has illustrations showing some of the demos and delightful adverts: “Electricity saves money in the farmyard!”

Above all, the book is a reminder that all technology is social. Not only do new technologies need complementary investments – households paying for their internal wiring and switches, or street lights, for example – people have to be able to see the benefits as well as the costs (faster milking, no more washing by hand), and these often need demonstrating in practice before they are believed. Electricity in particular is also a social technology, involving collective decisions to create and sustain the incentives to make it function. The investments are long-term, they change places dramatically; and although the technology is now old, it is both essential and dangerous.

So it is that many countries still cannot provide a consistent universal electricity supply and even advanced economies experience power cuts and underinvestment. If western political systems lose their ability to create consensus and collective, long-term action there will be many bad consequences but one of them might well be disrupted electricity supplies. This is not alarmist: I spent chunks of my teenage years doing homework by candlelight too. We had the power stations and the wires, but not the social and political infrastructure in the years of unrest and strikes in the 1970s.


Algorithms and (in)justice

It’s been one of those weeks. One of those years, actually – David Bowie *and* Leonard Cohen. Listening to ‘Democracy’ as I write this.

Still, I have managed to read Cathy O’Neil’s excellent Weapons of Math Destruction, about the devastation algorithms in the hands of the powerful are wreaking on the social fabric. “Big data processes codify the past. They do not invent the future. Doing that requires moral imagination, and that’s something only humans can provide. Sometimes that will mean putting fairness ahead of profit.”

The book’s chapters explore different contexts in which algorithms are crunching big data, sucked out of all of our recorded behaviours, to take the human judgement out of decision-taking, whether that’s employing people, insuring them or giving them a loan, sentencing them in court (America, friends), ranking universities and colleges, ranking and firing teachers… in fact, the scope of algorithmic power is increasing rapidly. The problems boil down to two fundamental points.

One is that often the data on a particular behaviour or characteristic is not observed, or is unobservable – dedication to work, say, or trustworthiness. So proxies have to be used. Past health records? Postcode? But this encodes unfairness against individuals – those who are reliable even though they live on a bad estate – and does so automatically, with no transparency and no redress.

The other is that there is a self-reinforcing dynamic in the use of algorithms. Take the example of the US News college ranking. Students aim to get into institutions with a high ranking, so those institutions have to do more of whatever it takes to get a high ranking, which brings them more students, and more chance of improving their ranking. Too bad that the ranking depends on specific numbers: SAT scores of incoming freshmen, graduation rates and so on. These seemed perfectly sensible, but when the rankings they feed into are the only thing that potential students look at, institutions cheat and game to improve these metrics. This is the familiar adverse effect of target-setting, on crystal meth. Destructive feedback loops are inevitable, O’Neil points out, whenever numerical proxies are used for the criteria of interest and the algorithm is a black box with no humans intervening in the feedback loops.
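The gaming dynamic can be caricatured in a few lines of code. This is purely my own toy model – the schools, scores and ‘gaming effort’ parameter are invented for illustration, not drawn from the book: each period, every institution inflates the proxy it is ranked on, while the underlying quality never changes.

```python
# Toy model of the ranking feedback loop: institutions game a numerical
# proxy, so the proxy drifts away from the quality it was meant to
# measure. All names and numbers here are invented for illustration.
def run_rankings(true_quality, gaming_effort=0.1, periods=10):
    proxy = dict(true_quality)   # the proxy starts out honest
    for _ in range(periods):
        ranking = sorted(proxy, key=proxy.get, reverse=True)
        for position, school in enumerate(ranking):
            # schools further down the table feel more pressure to game
            pressure = position / max(1, len(ranking) - 1)
            proxy[school] += gaming_effort * (1 + pressure)
    return proxy

quality = {"A": 0.9, "B": 0.7, "C": 0.5}
scores = run_rankings(quality)
# every proxy score has inflated, while true quality is unchanged
assert all(scores[s] > quality[s] for s in quality)
```

The point of the caricature is that nothing in the loop ever consults true quality again after the first period: the metric only ever measures the effort spent gaming it.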

The book is particularly strong on the way apparently objective scoring systems embed social and economic disadvantage. When the police look at big data to decide which areas to police more heavily, the evidence of past arrests takes them to poor areas. A self-reinforcing feedback loop: they are there more, they arrest more people for minor misdemeanours, and the data confirms the area as more crime-ridden. “We criminalize poverty, believing all the while that our tools are not only scientific but fair.” Credit-scoring algorithms, teacher evaluations built on inadequate underlying models, ad sales targeting the vulnerable – the world of big data and algorithms is devastating the lives of people on low incomes. Life has always been unfair. It is now unfair at lightning speed, and wearing a cloak of spurious scientific accuracy.

O’Neil argues that legal restraints are needed on the use of algorithmic decision-making by both government agencies and the private sector. The market will not be able to end this arms race – or even want to, as it is profitable.

This is a question of justice, she argues. The book is vague on specifics, calling for transparency about what goes into the black boxes, and for a regulatory system. I don’t know how that might work. I do know that until we get effective regulation, those using big data – including especially the titans like Facebook and Google – have a special responsibility to consider the consequences.


The trade-investment-service-intellectual property nexus

I’ve managed to resist reviewing Richard Baldwin’s new book The Great Convergence: Information Technology and the New Globalization until now, and it has taken serious self-restraint, as the book is so relevant to (among other things) the Brexit debate. For one thing, I would force every Cabinet member to read it and not allow them to keep their jobs unless they could pass an exam based on it. Anyway, the book’s published on 14th November, and now that it’s November my self-denying ordinance can end.

The Great Convergence offers a compelling framework for thinking about how trade is organized, and how and why it benefits whom. The first part is a historical overview of trade leading up to the first wave, the Old Globalization of the 19th century. This phenomenon – due to steam power reducing trading costs, industrialization and a context of relative global peace – led to the Great Divergence: the major economies of Asia, which had been richer than the West, fell behind, dramatically so over the course of two centuries. The New Globalization, since the 1980s, driven by the new information and communication technologies, has taken the rich countries’ share of global output back to its 1914 level in little over two decades. China is the standout story, going from uncompetitive in 1970 to the world’s second biggest economy by 2010, but other rapidly industrializing nations in the New Globalization are Korea, India, Poland, Indonesia and Thailand (i.e. a different group from the notorious BRICs).

However, as the book goes on to document, the New Globalization is of a completely different kind. Trade over distance has three costs: the costs of moving goods, ideas and people. When moving goods got cheap, the first explosion of trade occurred, but ideas were costly to move, so the innovations of the industrial revolution were not easily exported. The Old Globalization was the result of low shipping costs and high communication costs. ICTs have reduced the latter significantly, so industrial competitiveness is now defined in terms of production networks – interlinked supply chains that cross national borders. Knowledge has been offshored, and the rapid growth in a few previously poorer countries has come about because of their geographical location, close enough to G7 industrial centres that managers can travel there, sharing knowledge within the confines of the production network.

This means the New Globalization happens at the level of stages of production and occupations, which makes it harder to predict who will be affected – which jobs will be offshored, which areas hit hardest. “Nations are no longer the only natural unit of analysis.” Much of the book describes a new data set making it possible for economists to begin to explore the ‘value added’ pattern of trade created by the switch from trading finished goods toward trading components in global production chains. The picture is going to be utterly different – the famous example being the iPhone, which is conventionally recorded as a Chinese export to the US, but where the value added is concentrated in the American business; China imports most of the components it assembles and re-exports, adding little value at that stage.
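The value-added point is simple arithmetic. The numbers below are purely hypothetical – illustrative only, not the actual iPhone figures – but they show how a gross trade statistic can wildly overstate where value is created:

```python
# Hypothetical figures, for illustration only – not actual iPhone data.
gross_export_value = 500.0      # recorded as a Chinese export to the US
imported_components = 450.0     # parts imported into China before assembly
assembly_value_added = gross_export_value - imported_components

value_added_share = assembly_value_added / gross_export_value
# only 10% of the recorded gross export is value added at the assembly stage
assert abs(value_added_share - 0.10) < 1e-9
```

On a gross basis the full $500 counts as a Chinese export; on a value-added basis only $50 does, which is why the new data set changes the picture so much.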

This is one insight the Brexiteers need to appreciate, although the Nissan letter suggests at least some members of the government realise its significance. British businesses are woven into supply chains with our near neighbours: we aren’t importing prosecco and salami so much as gearboxes. Brexit threatens to tear apart these links. If the cost appears too high, the multinationals at the head of the supply chains will relocate chunks of their production networks, and won’t care whether they’re exporting gearboxes to the Czech Republic rather than Britain.

The book adds: “Twenty-first century supply chains involve the whole trade-investment-service-intellectual property nexus, since bringing high quality, competitively priced goods to customers in a timely manner requires international coordination of production facilities via the continuous two-way flow of goods, people, ideas and investments. Threats to any of these flows become barriers to global value chain participation…” Baldwin adds that the movement of people is still a binding constraint on globalization, and face-to-face communication – and so distance – remain important. He argues that the improving quality of telepresence is changing this, but I think that remains to be seen.

Ultimately, trade policy today is not just about trade nor about nations. It involves deploying the nation’s productive resources through overseas connections. This is why 90% of the economics profession thought, and thinks, Brexit so damaging, and the idea that the UK has more economic self-determination outside the EU a delusion. The Great Convergence is not about Brexit – it ranges far wider. I can’t imagine a better and more accessible analysis of trade and globalization in the digital era.


(20 November: minor typos corrected)