Where *is* the internet?

I’m not sure ‘read’ is the right verb for Networks of New York by Ingrid Burrington, but I looked at it this week. It’s a sketchbook of various bits of internet kit, from cables below ground to antennae on rooftops, with notes about the physical hardware that makes up the system getting people online, and about who owns it. I’ve become slightly obsessed by how little is known about the physical internet. So far, James Ball’s The System and Andrew Blum’s Tubes are the only entries on my list of books about this. I was pleased to discover Ingrid Burrington, who has written a number of articles on this and related subjects. But if anybody knows of other books or papers about the physical infrastructure of the internet and its ownership (especially for the UK), I’d love to know.




Big Tech – the contrarian view

There’s a torrent of material to read about competition (or the lack of it) in digital markets, which of course goes much wider than economics and law. Indeed, I’ve contributed to the many pages written, including as a member of the Furman Review team. The general theme is that Big Tech does indeed pose a challenge for competition policy, with individual conclusions running from ‘some adjustment within the current framework is needed’ all the way to ‘destroy them’.

What makes Big Tech and the Digital Economy by legal scholar Nicolas Petit so refreshing is its absolutely contrarian perspective. He has coined the term ‘moligopoly’ to describe Big Tech, and argues that while there has certainly been increasing concentration in digital markets, there is also vigorous competition that goes unremarked by the commentators. I’d describe what he calls competition as oligopolistic rivalry, but the book does document the ways in which the GAFA and others compete with each other.

Some of the empirical evidence is rather interesting. For example, the book looks at what the big companies describe as the major risks facing them in their SEC filings: all but Facebook rank competition as their first or second biggest threat – they would say that, of course, but it intrigues me that Facebook doesn’t bother (competition comes fourth or fifth in its ranking). The others do all see each other as their main rivals. Among the book’s other evidence is their high rate of spending on R&D – though I’d like to know more about what it is they’re researching.

The ultimate question is not about current competitors, however, but about potential ones. If you believe digital markets tend to winner-takes-all because of network effects, and you can live with concentration because of the large consumer benefits, then what matters is whether new rivals with great technology and products can capture the markets Big Tech currently holds. In that case, moligopolistic rivalry along various dimensions is not only fine but in any case inevitable.

The book didn’t win me over in the sense that I concluded there is no reason to be concerned about digital competition. Without intervention, it’s hard to see anybody rivalling Google in search, or Apple and Google in mobile operating systems. To be fair, the author doesn’t quite argue that there’s no cause for concern. He is issuing a useful warning that we should think carefully and in detail about what harms we believe Big Tech is causing. This book is a distinctive corrective against the current tendency toward groupthink on this subject. As we said right at the start of the Furman Review, Big Tech has brought many benefits, and there is growing evidence about how much people value its products. Anyone certain they know Big Tech needs fixing should read this more nuanced argument with an open mind.



Are humans or computers more reasonable?

This essay, The Long History of Algorithmic Fairness, sent me to some references new to me, among them How Reason Almost Lost its Mind by Paul Erickson and five other authors. The book is the collective output of a six-week stint in 2010 at the Max Planck Institute for the History of Science in Berlin. That alone endeared it to me – just imagine being able to spend six weeks Abroad. And in Berlin, which was indeed my last trip Abroad, in the brief period in September 2020 when travel was possible again. I started the book with some trepidation, as collectives of academics aren’t known for crisp writing, but it’s actually very well written. I suspect this is a positive side-effect of interdisciplinarity: the way to learn each other’s disciplinary language is to be as clear as possible.

The book is very interesting, tracing the status of ‘rationality’ in the sense of logical or algorithmic reasoning, from the low status of human ‘computers’ (generally poorly-paid women) in the early part of the 20th century, to the high status of Cold War experts devising game theory and building ‘computers’, to the contestation about the meaning of rationality in more recent times: is it logical calculation, or is it what Herbert Simon called ‘procedural rationality’? That contest is most recently manifested in the debate between the Kahneman/Tversky representation of human decision-making as ‘biased’ (as compared with the logical ideal) and the Gerd Gigerenzer argument that heuristics are a rational use of constrained mental resources.

How Reason… concludes, “The contemporary equivalents of Life and Business Week no longer feature admiring portraits of ‘action intellectuals’ or ‘Pentagon planners’, although these types are alive and well.” The arc of status is bending down again, although arguably it’s machine learning and AI – ur-rational calculators – rather than other types of humans gaining the top dog slot nowadays. As I’ve written in the economic methodology context, it’s odd that computers and also creatures from rats to pigeons to fungi are seen as rational calculators whereas humans are irrational.

Anyway, the book is mainly about the Cold War and how the technocrats reasoned about the existentially lethal game in which they were participants, and has lots of fascinating detail (and photos) about the period. From Schelling and Simon to the influence of operations research (my first micro textbook was Will Baumol’s Economic Theory and Operations Analysis) and shadow prices in economic allocation, the impact on economics was immense. (Philip Mirowski’s Machine Dreams covers some of that territory too, although I found it rather tendentious when I read it a while ago.) I’m interested in thinking about the implications of the use of AI for policy and in policy, and as it embeds a specific kind of calculating reason, thought How Reason Almost Lost its Mind was a very useful read.



Technology old and new

For the usual kind of slightly random reason, I re-read David Edgerton’s excellent book The Shock of the Old this past week (having read it when published in 2006, as he was an interviewee on an Analysis I was presenting [http://news.bbc.co.uk/nol/shared/spl/hi/programmes/analysis/transcripts/27_07_06.txt]). It’s generally aged very well, and is of course a necessary corrective to technology hype. The main argument is that the history of technology tends to be told as a breathless account of inventors and shiny new inventions, rather than the more representative but complicated story of economic conditions and uneven diffusion and use. So at any moment in time, many overlapping technologies serving the same basic needs will be in use around the world. What’s more, the same hype gets recycled. For example, there’s a quotation from George Orwell in 1944 complaining that people were over-hyping the ‘death of distance’ due to the airplane and radio, when the same claims had been made before 1914!

It is undoubtedly true that different technologies overlap in use, and indeed there’s quite a large economics literature about diffusion and the need for complementary investments before inventions and innovations deliver productivity benefits. To this extent, Edgerton is railing against an imaginary foe. He is also very sniffy about the concept of ‘weightlessness’, which he misinterprets as a claim about declining employment shares for the primary and secondary sectors of the economy. It is not this, but rather a description of the distribution of value added in the economy, and one that has been borne out fully by trends over the past two or three decades.

The other point that he seems to me to under-play – oddly, given his emphasis on the importance of context for the use of technologies – is that they are all social. There are countries unable to provide a reliable electricity supply not only because they are low- or middle-income but because they do not have the institutions to support the complex supply arrangements: not just sub-Saharan Africa but also California. Or take the book’s example of the Pill, which it argues is an incremental change in contraceptive technologies. Yes and no. Each of the Pill’s characteristics – women in control, reliable, and not requiring a fitting by a doctor – might seem a small shift from condoms, douches, IUDs and diaphragms, but together they did deliver a compelling new method and a radical change in social relations.

Having said all this, The Shock of the Old is a bracing corrective to techno-hype, something certainly still needed.

(This is the new edition – I have the old one so haven’t read the new intro.)


Making the impossible happen

I have totally enjoyed One Giant Leap: The impossible mission that flew us to the moon, by Charles Fishman, out now as a paperback. It covers everything about the Apollo mission, from the Cold War context (the shock of Sputnik and Gagarin) and JFK’s political calculations and Congressional debates, to the practicalities of the science, design and manufacturing, to the lasting consequences for global society. The Soviet lead in space stimulated the US space effort, but Kennedy himself was lukewarm about America being first on the Moon. Fishman argues that the assassination of the President ensured the continuation of the mission because it became a memorial to him.

One key long-term consequence is that the mission to get humans on the moon brought about the digital revolution. Fishman makes a totally persuasive case that NASA was such a large-scale, demanding and perfectionist purchaser of integrated circuits that it ensured they became faster, more reliable and cheaper with every passing year. Transistors had only been around for ten years, but were too large and power-hungry for the new performance demands of manned space flight. NASA bought most of the chips made in the US during the 1960s. The first ones cost $1,000 each; by 1962 the price was under $100, by 1963 $15, and by 1965 $7.68.
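As a back-of-the-envelope check (my own arithmetic, not the book’s), the quoted prices imply a compound annual rate of decline of well over half per year:

```python
# Rough check of the implied rate of price decline for Apollo-era
# integrated circuits, using point estimates from the figures quoted
# in the text ($100 in 1962, $7.68 in 1965).

def annual_decline(p_start, p_end, years):
    """Compound annual rate of price decline between two price points."""
    return 1 - (p_end / p_start) ** (1 / years)

# From under $100 in 1962 to $7.68 in 1965:
rate = annual_decline(100, 7.68, 3)
print(f"Implied decline: {rate:.0%} per year")  # roughly 57% per year
```

Even treating the quoted figures as rough, that is an extraordinary deflation rate – a Moore’s Law-style trajectory driven by a single perfectionist customer.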

The other long-term impact was to turn ‘technology’ from something scary and Dr Strangelove-like to do with nuclear weapons and mutually assured destruction into something benign and aspirational, the challenge of conquering space for all humanity, albeit planting US flags on the Moon. “The race to the Moon … invoked the wonders of science, with about as much drama as could be imagined.”

The sections about managing the huge engineering project across multiple suppliers, manufacturing to the essential high standards, obsessing over details, and making key design decisions are all totally fascinating. MIT’s Instrumentation Lab was writing all the software – itself a new word in the early 1960s – and this threatened to delay the launch beyond Kennedy’s ‘before the decade is out’ deadline, so complex and crucial was the task. “It was the first of a whole new kind of engineering projects,” Fishman writes. There was no prior know-how about how to run these. Indeed, big, complex software engineering projects all too often still go wrong. Humans got to the moon and safely back because of the attention to detail on the part of NASA engineers.

The Apollo project was made all the harder by the fact that the onboard computer had to fit within one cubic foot, and its memory contained just 589,824 0s or 1s. So its software was – literally – woven by hand. At MIT and NASA HQ the software existed on tapes and punch cards, but there was no room for such bulky items on the spacecraft itself, which had to carry the programs required to get to the Moon, land the Lunar Module, take off again, dock in space with the Command Module, and return to Earth. The punch cards were taken to an old textile factory in Waltham, Massachusetts, where women who had woven fabric, or manufactured watches, in previous jobs now wove the software into ‘core rope memory’ at special looms. Their old skills made them the only workers with the know-how to weave computer memory. When the women went on strike for a while in the mid-1960s, everything their supervisors and managers produced until the strike was over had to be scrapped.
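To put that memory figure in modern units – again my own arithmetic, not the book’s – the conversion is simple:

```python
# Converting the quoted onboard-computer memory figure into modern units.
# (A back-of-the-envelope illustration; the 589,824 figure is from the text.)

bits = 589_824
n_bytes = bits // 8        # 8 bits per byte -> 73,728 bytes
kib = n_bytes / 1024       # -> 72 KiB
print(f"{bits:,} bits = {n_bytes:,} bytes = {kib:.0f} KiB")
```

Seventy-two kilobytes to fly people to the Moon and back: smaller than a single photograph on a phone today.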

This is the kind of detail that made me love the book. But the wonder of the Apollo Mission is also part of the enjoyment. I have a vague memory of watching Neil Armstrong, sitting in my PJs along with my older siblings; our family had got our first TV for the occasion. I ended One Giant Leap feeling vaguely optimistic as we approach the end of a dreadful year. Human societies can do impossible, wonderful things, with a combination of political vision and support, and engineers.