Science Policy, Economics, the Diffusion of Technology and Growth
Science research is the ultimate merit good. There is no mechanical connection between science and near term economic growth. Diffusion of technology turns on open economies and flexible markets.
Science leads to the advancement of knowledge, to innovation, and to improvements in how economies do things. The extent of the impact of potential innovation and improvements in technology depends on how quickly and how extensively available technology is diffused throughout the economy. The process of innovation, its full consequences, and the speed at which the full potential of a technical improvement is realised are difficult to get a purchase on. The invention of the steam engine in 1712 did not have a pronounced effect as a key change facilitating the industrial revolution until some eighty years later. Moreover, steam’s full impact came with the development of railways in the 1830s, and its ultimate effect came when the steam turbine was invented to coincide with electric power generation and distribution.
In the second half of the 20th century science was harnessed as part of national policy and became associated with economic success. Yet high spending on research and a large science base will not translate into economic growth without a complementary set of market institutions and economic incentives that support the diffusion of technology. Being open to trade and ideas is essential to making the best of the technology available. Science has always been international, from the first transformation of the cabinets of curiosities of the 17th century into corresponding learned societies, exemplified by the Royal Society in London.
OECD data scoring science and research show that there is more research activity than ever before. The US, as the world’s largest and richest society, has the largest science base. It is followed in second place by China, then, a long way behind, by Japan, Germany, Korea, the UK, and France.
Gross domestic expenditure on R&D, selected economies, 2000-21, USD million in constant PPP prices
However, several smaller economies are much more intense in the relative effort that they put into science and technology. Israel and Korea are the most research intensive economies in terms of research as a ratio of GDP, followed some way behind by Taiwan. In general the latest OECD data show that more resources are being devoted to R&D across the OECD’s member countries.
R&D intensity: Gross domestic expenditure on R&D as a percentage of GDP
America is uncomfortable with the rapid emergence of the Chinese science and research base and perceives it not only as an economic or competitive challenge but as a threat to US national security. A Congressional Research Service note expressed it in blunt terms in 2021: ‘A primary challenge to American military technological pre-eminence is the emergence of China as a potential military adversary and as a science and technology powerhouse. China’s emergence as a global science and technology leader is evidenced in part by its rising position among nations in the funding of R&D. China’s share of global R&D rose from 4.9 per cent in 2000 to 23.9 per cent in 2019. During this same period, the United States, Japan, and Germany saw their collective share of global R&D fall from 62.6 per cent to 44.5 per cent. Moreover, while the United States remained the world’s single largest funder of R&D in 2019, spending 25 per cent more than China, China’s R&D funding has been growing at a much more rapid pace. As a result, China’s R&D expenditures passed Germany’s in 2004 and Japan’s in 2009’.
Nations with the Largest Gross Expenditures on R&D, 2019
Michael Dumont, Principal Deputy Assistant Secretary of Defense for Special Operations/Low-Intensity Conflict in the Obama Administration, said that many US ‘adversaries have acquired, developed and even stolen technologies that have put them on somewhat equal footing with the West in a range of areas … the U.S. government no longer has the leading edge developing its own leading edge capabilities, particularly in information technology’.
Growth in Gross Expenditures on R&D for Selected Nations, 2000-2019
China is perceived to be taking a multi-dimensional approach to acquiring the technology that will enable it to match the US militarily: its own research, supported by attracting foreign direct investment; getting Chinese-based venture capital, often state or publicly funded, to purchase early-stage startup businesses; Chinese investment in American venture capital; the use of private equity; and straightforward intellectual property theft through industrial espionage and cybertheft. In many respects this takes US science policy back to its post-World War II genesis. The big difference is that the source of American anxiety in the 1950s was not an economy with a large functioning market sector exposed to and involved in international trade. Apart from lacking a properly convertible capital account, China is a close, albeit not complete, approximation of a functioning market economy. China possesses an economy and political institutions that appear supportive of the diffusion of technology. Even before the market reforms initiated after the death of Mao, scientists from the British Royal Society were impressed by the work they were shown during a visit in 1962. In the 1960s China made good progress with the development of nuclear technology and managed the chemical synthesis of insulin.
World War II and the nationalisation of science
A strong and powerful science base, supported by public and private R&D expenditure, is perceived as critical for defence and national security and for generating improvements in productivity and growth in GDP. The permanent co-option of science and research as part of a national policy objective is a legacy of World War II. Until 1939 science was considered by governments to be part of general cultural life in the same way as literature or scholarship, such as history and philosophy. Individual government departments in the US and UK got involved with specific matters that affected them and supported learned societies, such as the British Royal Society and the American National Academy of Sciences, but there was no overarching science policy or effort to harness science and technology to either national security or economic policy. World War II changed that. The Manhattan Project that developed the nuclear bomb, the development and use of radar, and other applications of science, such as the use of statistics in operations analysis, changed governments’ relationship to science radically.
Vannevar Bush, President Roosevelt’s science adviser, was asked by the President in 1944 to draw on America’s wartime experience and make suggestions for peacetime. The resulting report, Science: The Endless Frontier, was delivered to President Truman in 1945. Bush argued that science is a proper concern of government and called for ‘a strong and consistent federal government commitment to scientific research to insure our health, prosperity, and security as a nation in the modern world’. Bush wrote that support for research must be ‘continuous and substantial’ to enable ‘more jobs, higher wages, shorter hours, more abundant crops, more leisure for recreation, for study, for learning how to live without the deadening drudgery which has been the burden of the common man for ages past … for higher standards of living … the prevention and cures of diseases … conservation of our limited national resources, and … means of defence against aggression’.
The Cold War, Sputnik, and the American Missile Gap with the Soviets
If anything, the post-war years reinforced the notion of the need for a national policy for science and for the application of R&D as a form of national policy essential for defence and future economic welfare. The Cold War with the USSR amplified this and created a national security neurosis. In October 1957 the successful Soviet launch of the Sputnik satellite highlighted the technological achievements of the USSR and further fed this anxiety. The US Gaither Report in 1957 identified a difference in the size of ballistic missile arsenals between the US and USSR, in the Soviets’ favour. Senator John F Kennedy, as an ambitious Democrat, astutely made political use of this concern, coining the vivid term ‘missile gap’ to describe the national security position.
During the Cold War, paranoia was not confined to national security. In the late 1950s and early 1960s, for example, there was a serious debate about whether Soviet planning and its application of modern technology could result in the socialist economies achieving higher standards of living and economic capacity than the market economies of the western democracies. There was a lively and acrimonious debate on ‘Revisionism’ between different wings of the Labour Party, focusing on the role of nationalisation and economic planning. The principal protagonists were two Labour MPs and former Oxford dons, who went on to serve as senior cabinet ministers in the 1960s. Anthony Crosland, the author of The Future of Socialism, argued that public ownership and economic controls had been overused and should not be extended, while Richard Crossman maintained the case for socialist planning and nationalisation. Much of the debate was conducted in the columns of Encounter in the early 1960s. Richard Crossman wrote that history would force capitalist countries such as America and Britain to pay attention as ‘the Communist countries demonstrate with ever increasing force the efficiency of nationalisation. Progressively year by year we shall see that, judged in terms of national security, scientific and technological development, popular education, and finally, even of mass living standards, free enterprise is losing out in the peaceful competition between East and West. It would be strange indeed for the Labour Party to abandon its belief in the central importance of public ownership at the precise moment when the superiority of socialised economies is being triumphantly vindicated in world affairs’.
Technology and economic growth
Richard Crossman was Labour’s opposition science spokesman and played a central role after 1961 in getting the British Labour Party to commit to fighting the 1964 General Election on a ‘scientific’ platform. A variety of Labour figures were involved in this enterprise, originally convened by Hugh Gaitskell. They included Alfred Robens, later, as Lord Robens, the Chairman of the National Coal Board; the physician Lord Taylor; Robert Maxwell, a publisher of scientific publications who later became a Labour MP; and Tam Dalyell, a Scottish aristocratic landowner, science writer and Labour MP.
Modern British Socialism Forged in the White Heat of Technology in the 1960s
This group agreed on a clear diagnosis of related problems that is summarised by Hilary Rose and Steven Rose in their book Science and Society. ‘Britain was facing an industrial, educational, and scientific crisis. Industrially many of our firms were uncompetitive, technologically backward, and inferior not only to the US but also to some European countries. The need was to force industry to rationalise and innovate, increase its research effort and above all to apply the results of research already done’. The Roses, writing from a left-wing perspective, noted that Labour ‘did not look for what might be regarded as a conspicuously socialist answer to the problems presented by technological and scientific explosion which had become largely removed from democratic control’. The problems were seen, in the Roses’ judgement, as part of Labour’s political and economic agenda, ‘that is how to modify the existing capitalist order so as to enable it to work more effectively’. This work culminated in a new plan for science introduced by Harold Wilson, the Leader of the Labour Party, in his keynote speech to the Labour conference at Scarborough in 1963. Mr Wilson said that science would be married to socialism to create a New Britain that ‘is going to be forged in the white heat of this scientific revolution.’
The Roses observe ‘that the placing of science and technology at the centre of a political platform during the 1964 election clearly demonstrated the extent to which the interaction of science and society had, nineteen years after the first truly science-based war in history, emerged in public consciousness’. British success and failure in relation to technology was perceived in the context of a competitive race that went beyond straightforward national security to wider anxieties about economic and political decline. There was huge popular interest in examples of British technological innovation, such as the hovercraft and the supersonic jet Concorde, developed in conjunction with France. This popular interest was reflected in the long running BBC television programme Tomorrow’s World. In British sensibility much of this interest was framed by a nostalgia provoked by a fear that the world’s first industrial economy was being left behind.
France and General de Gaulle’s exercise in scientific autarky
Britain was not alone in having a neurosis about technological and wider economic and political decline. France exhibited many of the same neuroses. Science and technology were harnessed to national security in the same way as in the UK and US, but with a significant difference. Under the leadership of President Charles de Gaulle after 1958, French national security was expressed in terms of the extent to which France had the technology to pursue geopolitical goals independently of the USA. In the 1960s it developed an independent nuclear deterrent, the Force de Frappe. France has a rich tradition of state sponsorship of both science and industry that can be traced back to Colbert in the Grand Siècle. Its network of supporting institutions was invigorated by the Commissariat du Plan and the indicative planning regime developed by Jean Monnet immediately after World War II. This tradition of dirigisme fitted easily with President de Gaulle’s change of priority. While the first four Plans placed little emphasis on science and technical research, the Fifth Plan (1966-70) set out objectives for research and development spending as a ratio of gross national income. The plan for the first time brought together private sector research with state planning, hoping to achieve a more rational and systematic use of research and development.
Superficially, French science policy in the 1960s appeared to be one of the most clearly articulated and best organised in the world. Yet this apparent Gallic Cartesian achievement came at a price and exhibited several limitations. There is a limit to the extent that science can ever be a singular national enterprise. Distinguished French scientists were under pressure to publish in French and to present in French at international conferences, rather than to engage with international journals. The objective of effective autarky in relation to nuclear weapons, computers, aircraft and science in general resulted in a huge waste of resources. This political direction of France’s science effort also provoked huge irritation among working scientists and researchers. Their complaint was that everything was too top down, too bureaucratic and too hierarchical, resulting in teaching and research using out-of-date and irrelevant material. French science institutions and culture exhibited an internal malaise that alienated its practitioners as well as failing to emulate the success and progress of US science. That malaise and alienation contributed to the prominent part that scientists played in the revolutionary insurrection against General de Gaulle’s regime in May 1968.
Japan’s postwar economic success
A distinguishing feature of the post-war economic era was the recovery and success of the Japanese economy. In the 1950s and 1960s Japan displayed huge progress in manufacturing: cars, motorbikes and electronics. Japan was open to trade and ideas, hoovered up much of the technology available and often appeared to make better use of it than the countries where it originated. The OECD estimated that Japan in the 1960s spent, as a ratio of GNP, half of what was being spent by the US and 60 per cent of what was spent in the UK. Per capita spending on research was $9.3 in Japan compared to $10.5 in the US, $39.8 in the UK and $27.1 in France. Unlike France, the UK and the US, Japan devoted none of its spending to military purposes. It is hard not to conclude that Japan’s economic success was achieved not through a matching effort in science and research but through a huge effort as a purchaser of other countries’ technology, importing goods and buying patents. These imports of technology reflected a local science and technology base sufficient to ensure that the acquisitions were astute. In recent decades, as a rich country, Japan has had a large science and research base, but that has not prevented it from stagnating economically for thirty years.
Economists offer their intellectual assistance
Economists began to take a serious interest in science, or what they termed ‘technical progress’, during the late 1950s and early 1960s. The neo-classical growth model developed by Robert Solow in A Contribution to the Theory of Economic Growth, published in the Quarterly Journal of Economics in 1956, showed that growth could not be explained solely by inputs of labour and accretions to the capital stock. This left a residual that had to be explained by improvements in productivity arising from technical progress. The construction of the Solow neo-classical growth model coincided with a huge interest in the rate of economic growth and its compounding effects. This in turn reflected the rapid recovery of industrial economies in Europe from the effects of World War II and the tendency for previously slower growing economies with large agricultural sectors, such as West Germany, France and Italy, to catch up with economies such as the UK that had industrialised more comprehensively, earlier and faster, over the preceding 130 years.
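Solow’s decomposition can be made concrete with a little arithmetic. The sketch below assumes a Cobb-Douglas production function and uses invented, purely illustrative growth rates and factor shares; none of the figures come from Solow’s paper:

```python
# Growth accounting in the spirit of Solow. With a Cobb-Douglas
# production function Y = A * K**alpha * L**(1 - alpha), output growth
# decomposes as gY = gA + alpha*gK + (1 - alpha)*gL. The residual gA is
# the part of growth that inputs of capital and labour cannot explain,
# conventionally attributed to technical progress.

def solow_residual(gY, gK, gL, alpha=0.3):
    """Implied total factor productivity growth.

    gY, gK, gL: growth rates of output, capital and labour (0.04 = 4%).
    alpha: capital's share of income (0.3 is a conventional value).
    """
    return gY - alpha * gK - (1 - alpha) * gL

# Illustrative figures: 4% output growth, 5% capital growth, 1% labour growth.
gA = solow_residual(gY=0.04, gK=0.05, gL=0.01)
print(f"Residual attributed to technical progress: {gA:.1%}")  # 1.8%
```

On numbers like these, nearly half of measured output growth is left unexplained by factor inputs; that unexplained residual is what pushed economists towards theories of technical progress.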
In the UK, which exhibited a disappointing relative economic performance in the 1950s and early 1960s, there was huge interest in much faster than historically experienced economic growth. Harold Macmillan’s years as Prime Minister were dubbed the Age of Mass Affluence. Rab Butler, the Chancellor of the Exchequer in the 1950s, had spoken to the Conservative Party Conference of the opportunity of doubling the standard of living within a generation. Yet a perception that other western European countries were growing faster and leaving the UK behind stimulated Harold Macmillan’s Conservative government to establish the National Economic Development Council and a framework of indicative economic planning on the French post-war model. A large part of Anthony Crosland’s revisionist social democrat agenda for the Labour Party was that growth enabled living standards to rise, and with rising incomes came the opportunity to use buoyant tax revenues to finance higher public spending and collectivist goals in education and other areas of policy to secure a more egalitarian society. The heart of the agenda was economic growth. When Harold Wilson’s Labour Party took power in 1964, as well as honouring its commitment to science with the creation of a Ministry of Technology, it established a Department of Economic Affairs and published a white paper setting out a National Plan with the objective of raising the rate of growth by 3 per cent. The Plan has not been regarded as either a political or an economic success. The Department of Economic Affairs, however, had one lasting legacy that contributed to the economic welfare of many people, albeit a comic one: it was the model for the fictional Department of Administrative Affairs in the BBC television series Yes Minister.
Economic Disappointment even before the inflation and oil crises of the 1970s
As the 1960s went on and economies entered the 1970s, ahead of the convulsions caused by the oil shock of 1973 that quadrupled oil prices, growth had started to falter in both North America and western Europe. The connection between science, R&D and technical innovation on the one hand and economic success on the other appeared less straightforward. Several economies that were not noted for their contemporary contribution to science or research, such as West Germany and Japan, took off, apparently by making more effective use of research from the international community, while those that spent heavily, had a long tradition of strong science and put a huge effort into science policy, such as the UK and France, appeared to find it difficult to translate fundamental research and a huge interest in its application into practical commercial and economic applications. The US was not immune from this perception of disappointment. President Lyndon Johnson expressed concern that the federal government financed too much pure research and not enough applied science that could deliver demonstrable economic benefits.
In the 1960s and 1970s this disappointment with the economic results of science policy among advanced economies was matched by a comparable intellectual disappointment. Robert Solow’s contribution to growth theory and the neo-classical model was seminal in clarifying the issues. Yet several of the principal features of the model that explained growth were outside, or exogenous to, the model and treated as assumed constants, such as population growth and technical progress. There was a perception among economists that economic growth theory was stuck in a cul-de-sac.
A New Growth Theory, Increasing Returns to Scale
In the 1980s a new generation of economists, led in America by Paul Romer, Brad DeLong and Robert Barro, looked again at the Solow growth model with a view to explaining the parts of the model that were exogenous. Paul Romer made the running with two articles in the Journal of Political Economy: Increasing Returns and Long-Run Growth in 1986 and Endogenous Technological Change in 1990.
Paul Romer and his colleagues set out to explain, and make endogenous, population growth, human capital formation and technical progress. They were helped by an interesting aspect of economics from the late 1960s, best described as the intellectual imperialism of the discipline: the use of economic analytical tools to examine education, training, sex and race discrimination, fertility and demography, as well as criminal behaviour. Much of this work was done by economists associated with the University of Chicago, such as Gary Becker, as well as by economically literate lawyers such as Richard Posner and Robert Bork, who both served as US federal appeal court judges.
The result of Paul Romer’s work, and of what is best called the family of post neoclassical endogenous growth theory, was available in the early 1990s. It identified a clear causal link between economic growth and several related things: education, training, science, R&D, capital accumulation in general, manufacturing investment in particular, and public sector infrastructure investment. This family of research also yielded several interesting propositions about the transmission mechanism involved in the process of causation, as well as several insights arresting enough to warrant the sobriquet extraordinary. An important driver of growth came from beneficial externalities, such as spillovers from research and investment that did not necessarily accrue wholly to the firms or economic agents directly involved but were enjoyed by the wider community. The arresting observation was that for much technical progress and investment diminishing returns may not be in play; instead there are constant or even increasing returns to scale. This led to the further observation that the process of catch-up between less developed economies and richer economies may not be in play, and that the lead enjoyed by a rich economy could become greater if it enhanced its institutions and policies to support growth. The new growth theory also produced research that emphasised the importance of the rule of law, appropriate incentives and controlled public deficits to lower the cost of capital.
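The contrast between constant and increasing returns to scale on which this literature turns can be illustrated with a toy production function. The functional form and parameter values below are assumptions chosen for illustration only, not a model taken from Romer’s articles:

```python
# A standard Cobb-Douglas function Y = K**0.3 * L**0.7 exhibits constant
# returns to scale: doubling both inputs doubles output. Adding a simple
# knowledge-spillover term A = K**phi, so that aggregate knowledge rides
# on the economy-wide capital stock, yields increasing returns: doubling
# both inputs more than doubles output.

def cobb_douglas(K, L, alpha=0.3):
    return K**alpha * L**(1 - alpha)

def with_spillover(K, L, alpha=0.3, phi=0.1):
    A = K**phi  # externality: knowledge grows with the capital stock
    return A * cobb_douglas(K, L, alpha)

# Constant returns: the ratio is 2 (up to floating point).
print(cobb_douglas(200, 200) / cobb_douglas(100, 100))

# Increasing returns: the ratio is 2**1.1, about 2.14.
print(with_spillover(200, 200) / with_spillover(100, 100))
```

Because 2**1.1 is greater than 2, the spillover economy pulls further ahead every time it scales up, which is the mechanism behind the observation that a rich economy’s lead need not be eroded by automatic catch-up.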
Post Endogenous Growth Theory gives America a New Democrat Policy Agenda
New growth theory was the intellectual basis for a new policy agenda and political platform. It offered a series of highly stylised policy prescriptions, including increased spending on education, science, R&D and public infrastructure investment; tax relief and capital allowances to raise the rate of private investment, and manufacturing investment in particular; tax credits and subsidies for private sector R&D; and measures to enhance property rights, such as patents, to encourage greater private sector research. This policy agenda was taken up by the moderate reformist wing of the Democrat Party and promoted by the Democratic Leadership Council, and it informed the distinctive New Democrat platform that President Clinton was elected on in 1992.
President Clinton and New Growth Theory at the US Treasury Department
The interesting thing about the Clinton administration’s economic policy was how economically orthodox it was. The US Treasury Department and the Council of Economic Advisers marshalled a fiscally conservative policy framework rooted in a neoclassical analysis of the economy. In terms of a crude left-right spectrum, the Clinton Treasury was to the right of Nigel Lawson’s British Treasury under Mrs Margaret Thatcher. To be clear, this was not simply the product of the Republicans decisively winning both houses of Congress in the 1994 mid-term elections: it was apparent when talking to Treasury officials and economists at the Council of Economic Advisers before the November 1994 elections. Several of the leading lights that had contributed to the New Growth Theory economic policy project, including Brad DeLong and Alicia Munnell, served in the administration. Treasury officials were clear that the administration’s priority was to reduce the federal deficit to facilitate lower long term bond yields and so encourage investment. They were also candid in explaining that many of the high returns attributed to things such as public infrastructure investment in the recent research literature had been too optimistic and exaggerated. There was therefore no big infrastructure programme, but instead deficit reduction and welfare reform. Fiscal caution was the hallmark of the Clinton administration, culminating in the federal government surpluses of the years when Gary Gensler served as Under Secretary for domestic finance at the Treasury Department.
Broad consensus on the connection between science, R&D and productivity and economic growth
Following the lead of Robert Solow and the New Growth economic literature, the broad consensus among economists advising governments and international organisations such as the IMF, OECD and World Bank is that there is a strong connection between science, research, innovation and economic performance. A flavour of this broad official judgement can be obtained by reading the periodic surveys produced by the OECD, which provide the authoritative international scorecard of comparative data on science and R&D expenditure. The following gobbets, taken from representative examples of OECD and IMF research, illustrate this neatly.
The OECD Working Party of National Experts on Science and Technology Indicators, for example, produced a paper in 2015, The Impact of R&D Investment on Economic Performance: A Review of the Econometric Evidence. Its overall assessment was that the ‘econometric evidence reviewed by this survey generally speaks in favour of positive and substantial impacts of R&D on productivity and economic growth at firm, industry and country levels. Private rates of return to R&D (gross of depreciation) usually outmatch those found for ordinary capital investments and the benefits that accrue to society as a result of R&D typically exceed private returns by far.’
Likewise an IMF blog in 2021, Why Basic Science Matters for Economic Growth, reported ‘that basic scientific research affects more sectors, in more countries and for a longer time than applied research (commercially oriented R&D by firms), and that for emerging market and developing economies, access to foreign research is especially important … Basic research is not tied to a particular product or country and can be combined in unpredictable ways and used in different fields. This means that it spreads more widely and remains relevant for a longer time than applied knowledge. This is evident from the difference in citations between scientific articles used for basic research, and patents (applied research). Citations for scientific articles peak at about eight years versus three years for patents’.
The IMF blog notes at the start of its analysis that ‘surprisingly, productivity growth has been declining for decades in advanced economies despite steady increases in research and development (R&D), a proxy for innovation effort’. This could be perceived either as an interesting puzzle in the analysis that asserts consistent causation between innovation and economic growth, or as an important caveat or qualification to the much agreed-on connection between science, technical innovation and steady growth. Over the last sixty years growth in advanced economies has steadily slowed. Each decade has recorded less growth than the previous one. When adjustments are made for the economic cycle, the peaks and troughs of output, and the various shocks that have influenced economic activity, such as the two oil shocks of the 1970s, the fall in equity prices in the ‘tech wreck’ of 2000, or the credit crisis and Great Recession of 2007-09, a consistent pattern appears: a slower rate of economic growth and an apparent fall in the trend rate of economic growth. This culminated in the 2010s in what appeared to approach a form of ‘secular stagnation’, or what the classical economists would have called a ‘stationary state’. Given this puzzle, it is perhaps time to interrogate the easy assertion that scientific progress automatically translates into economic growth.
Is the connection between science and economic growth so reliable?
One of the most powerful notes of dissent is Robert Gordon’s book The Rise and Fall of American Growth, published in 2016. The book was rightly described by George Akerlof, the husband of Janet Yellen, the present US Treasury Secretary, as a ‘tour de force’. The book looks at the extraordinary economic history of America after the Civil War ended in 1865. An economic revolution improved the welfare of Americans. Electric light, indoor plumbing, home appliances, motor transport and other changes transformed the welfare of people between 1870 and 1970. Many of these beneficial improvements were one-offs that cannot repeat their benefits in a dynamic manner. Gordon believes that these improvements were not fully captured in the national accounts, given the limitations of GDP and the difficulty of constructing price indices that properly take account of transformations in quality. After 1970 the palette of economic progress became narrower and there were diminishing returns to innovation.
Robert Gordon’s book complements Edmund Phelps’s book Mass Flourishing, published in 2013. Phelps argues that the process of innovation weakened decades ago as markets became distorted and incentives were blunted by rent seeking and a corporatism that places the community and state over the individual. Phelps sees the key to economic dynamism as turning on incentives and properly functioning product, saving and labour markets. The combination of slowing economic growth in advanced economies, the powerful perception of diminishing returns to innovation identified by Robert Gordon, and the blunting of market incentives to make best use of technical progress emphasised by Edmund Phelps should be enough to caution against simplistic assertions about economic growth, science and innovation.
The challenge of understanding the process of diffusion of technology
The diffusion of technology has long been recognised as a different thing from scientific progress or innovation alone. Sir Patrick Vallance and Dame Nancy Rothwell, the co-chairs of the Council for Science and Technology, wrote to the Prime Minister Boris Johnson in August 2019 explaining their concern about the lack of diffusion of existing technologies that could be used to improve the productivity and performance of UK companies. They wrote:
‘The UK’s productivity gap is well known. We produce less per hour worked than our competitors and productivity growth has stalled since the 2008-9 financial crisis. The problem is not that the UK lacks companies at the cutting edge – indeed we score well in this regard – but rather that the tail of less productive firms is larger and longer than in other countries and growing. Technology is the key to productivity – technology in this context is not just devices and equipment but also business processes, management techniques and analytical methods. Firms in the “long tail” are those that have not implemented existing technologies. We see, for example, that the UK’s productivity gap between the top and bottom-performing companies in the services and manufacturing sectors is larger than our international competitors. The issue in this context is a lack of technology diffusion (the spread of existing technologies) rather than a shortage of innovation (the creation of new and better tools or methods). It is not sufficient for a small number of individuals in a handful of companies in a few sectors in one region to be using technology.’
Barriers to technological diffusion go beyond challenges relating to education, training and analysis. Looking at the process of diffusion would appear to confirm the anxieties that Edmund Phelps has about the market incentives that support successful innovation in America and Europe. It is not clear that diffusion is blunted simply by a lack of training and unfamiliarity with analytical methods. The process of diffusion turns on the social capital that education, training and research expertise embody, but also on relative prices, labour market practices, trade union customs, government regulation, controls on land and building use, and rent-seeking lobbies, such as agricultural and manufacturing interests, that seek economic rent by supporting regulation and constraints on trade to avoid competition from firms that use cost-effective technology in production. A further constraint on the diffusion of technology arises from cultural opposition to science embedded in religious practice and in certain forms of environmental activism. In the UK, technical progress in newspaper printing and in port logistics was blocked for decades by the print unions and by the working of the Dock Work Regulation Act. In the EU, advances in biotechnology continue to be blocked by the Directive on Biotechnology, and the jurisprudence of the European Court of Justice prohibits the commercial use of genetically modified crops and discourages research into genomes. International trade is an important mechanism for the diffusion of technology, yet the regulations of the EU Single Market in many instances operate to protect the forms of technology used by European firms, from the procurement of nuclear power stations to the management of animal husbandry and the processing of foods.
Powerful national bureaucracies with entrenched agendas can also operate to block the practical application of existing technology. Civil nuclear power generation in the UK was delayed for years by arguments about the merits of the British Advanced Gas-cooled Reactor (AGR) as compared with the American-designed boiling water reactor (BWR). The AGR was a design tailored by the Central Electricity Generating Board to the specific circumstances of the UK national grid: more complex, slower to build and more expensive to maintain. The dispute delayed the programme, increased its costs and lost international commercial opportunities for export. The Chairman of the National Coal Board, Lord Robens, estimated in 1967 that the pursuit of atomic energy had cost an irretrievable £800 million. From its start in the 1950s the UK nuclear programme was an example of a nationalised industry committed to a new technology regardless of cost, empowered by a sense of nationalist zeal, parochial and obtuse to foreign developments.
Getting a purchase on the elusive process of technological diffusion
The pioneering research on the diffusion of technology came not from economists involved in high theory but from rural sociologists studying the diffusion of best practice in animal husbandry and crop production in the American Midwest in the 1920s and 1930s. Researchers began to look at the way independent farmers adopted hybrid seeds, equipment and techniques during a period when agricultural technology was advancing rapidly. In 1943 Ryan and Gross, studying the work of farmers in Iowa, distilled the process of innovation into a paradigm that became accepted as the research literature developed.
The Rural American Sociologists who were the pioneers
Everett Rogers in Diffusion of Innovations, published in 1962, synthesised this into a process of stages: awareness, interest, evaluation, trial, and adoption. Diffusion, and how effectively and swiftly it proceeds, turns on the complexity, ease and cost of the change involved and on the social capital and social networks available to support the process. Rogers thought that this stylised process could be applied to sectors other than agriculture, such as health care.
While the need to understand properly the process of diffusion is widely appreciated, in practice, as Nancy Stokey points out in her research note Technology Diffusion, ‘good data on diffusion are not readily available. Indeed, for many innovations, there are none at all.’ She looks at several examples of innovation in the US where data can be found and shows that the costs of adoption and the wage structure of the labour markets touched by change play a central role in the diffusion of technology.
New technologies are often designed to enable their users to economise on labour by making greater use of new capital. The relative attractiveness of an innovation therefore turns on the relative prices of capital and labour inputs. Wage rates vary greatly across economies; the cost of capital varies much less. Relative wage rates and the cost of labour therefore offer an important explanation for differences in the diffusion of capital among countries.
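The relative-price logic can be made concrete with a toy calculation, in which every number is hypothetical rather than drawn from the studies discussed here: a labour-saving machine is adopted only once the wage is high enough that the labour it saves outweighs its annualised capital cost.

```python
# Toy sketch of the relative-price argument; all figures are illustrative,
# not taken from the diffusion literature discussed in the text.
def annual_cost(wage, hours, capital_cost=0.0, interest=0.05, life=10):
    """Annual cost of a production method: the labour bill plus the
    annualised (annuity) cost of any capital equipment employed."""
    annuity = 0.0
    if capital_cost:
        # Standard annuity formula spreading the purchase price over the
        # equipment's life at the given interest rate.
        annuity = capital_cost * interest / (1 - (1 + interest) ** -life)
    return wage * hours + annuity

# Old method: 2,000 hours of labour a year, no machine.
# New method: halves the labour but needs a machine costing 50,000.
for wage in (5, 10, 20):
    old = annual_cost(wage, hours=2000)
    new = annual_cost(wage, hours=1000, capital_cost=50000)
    print(f"wage {wage:>2}: old {old:,.0f}  new {new:,.0f}  adopt: {new < old}")
```

With these numbers the machine does not pay at a wage of 5 but does at 10 or 20: a rise in wages, with capital costs broadly stable, tips the calculation toward adoption, which is the pattern both Stokey's examples and the tractor story exhibit.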
Hybrid corn
The process of the adoption of hybrid corn between 1933 and 1956 was studied by Griliches and reported in 1957. It is considered a classic, and the purpose of the paper was to understand ‘a body of data: the percentage of all corn acreage planted with hybrid seed, by states and by years’.
In principle commercial seed companies in more profitable regions would have a greater incentive to be early movers, and agricultural research stations serving those areas would have a greater incentive to encourage commercial growers there to adopt the new growing technique. Griliches found that two factors affect supply cost: market density, which reduces marketing costs, and a geographic factor, a given area representing a contiguous market, which lowers R&D costs. The pace of adoption of hybrid corn seed in different geographical areas is explained by the relative rate of profit across regions. The greatest use of the new seed was in the Corn Belt of the American Midwest, where the new seed was introduced first and yielded its greatest success.
Percentage of total corn acreage planted with hybrid seed. Source: U.S.D.A., Agricultural Statistics, various years
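Griliches summarised each state's adoption path with a logistic S-curve. A minimal sketch of that functional form, with made-up parameters rather than his estimates, reproduces the characteristic slow start, rapid middle and saturating finish:

```python
import math

def logistic_share(t, K=0.95, a=-4.0, b=0.8):
    """Share of acreage planted with hybrid seed t years after introduction,
    modelled as the logistic P(t) = K / (1 + exp(-(a + b*t))).
    K is the long-run ceiling, b the rate of acceptance and a fixes the
    date of origin. The parameter values here are illustrative only."""
    return K / (1 + math.exp(-(a + b * t)))

# Trace the S-curve over the first decade after introduction.
for year in range(0, 11, 2):
    print(f"year {year:>2}: {logistic_share(year):5.1%} of acreage")
```

Differences across states, such as an earlier origin, a faster rate of acceptance or a higher ceiling in the Corn Belt, are exactly what Griliches explained by the relative profitability of the new seed in each region.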
Tractors versus horses
Relative costs played a similar role in explaining the adoption of motorised tractors and the replacement of the use of horses and mules in American agriculture in the first half of the 20th century. As well as price, quality and reliability also played an important part in the rate of diffusion. Later models were better in terms of durability and the functions they could perform. It is therefore important to distinguish tractors by their vintage. While tractors were introduced in 1910, there was for example very little adoption before 1920. Farms steadily introduced tractors between 1920 and 1960. This was matched by a fall in the number of horses and mules used as the number of tractors rose. At the same time quality-adjusted tractor prices fell significantly.
During the middle decades of the 20th century wages rose. Higher labour costs were a relative price effect stimulating farm businesses to replace teams of horses, which required more labour to manage than a tractor did. So while the cost of tractors was falling and they were becoming more reliable and more versatile, the cost of the alternative was rising because of higher wages.
Cost of tractors, horses, and labour
Horses, mules, and tractors in farms: 1910-1960.
Diesel Railway Locomotives versus Steam and the Puzzle of Slow Adaptation on US Railroads
The rate of adoption of diesel railway locomotives in the US offers a puzzle to economists: it was slow. From the beginning diesel was the superior technology: it used fuel more efficiently, was faster, and required fewer refuelling stops and less maintenance. Diesel locomotives were invented in 1912 and first used in the US in 1925. Measured by the share of locomotives that were diesel, penetration of the US railway network was slow during the first twenty years and then quite rapid, rising from 10 per cent in 1945 to almost 90 per cent by 1955 and over 95 per cent by 1960. The slow pace of adoption is surprising, not least to any visitor to the Henry Ford Museum of American Innovation in Detroit who looks at the massive steam locomotive built to haul coal in the Allegheny Mountains in the 1940s. It was massive because, given the limited efficiency with which steam is converted into energy, the only way to generate the necessary power was to build a very big engine. The slow introduction of diesel may have reflected the large initial investment costs required during the economic crisis of the 1930s. After World War II higher wages stimulated the substitution of capital for labour, resulting in the rapid spread of diesel locomotives.
The clear takeaway from these painstaking attempts to interrogate episodes of technical innovation in the US economy is that the key lies in relative price structures, and that the labour market and the wage costs involved determine the pace at which capital is substituted for labour.
Soviet Planned Economy vitiated the diffusion of technical progress
Between 1950 and 1990 the two largest government-supported science establishments after that of the USA were in the USSR and India. The Soviet commitment to science and technology was manifest in the priority it gave to defence, and its technological achievements in rocketry were plain from the success of Sputnik. Soviet commitment to science was rooted in Marxist ideology and the belief that science and technology were inseparable from the social infrastructure. From the start in 1917, the Bolshevik revolutionaries were clear that socialism could only be achieved on the basis of a scientific and transforming revolution. Lenin’s bromide that socialism followed electrification was from the beginning carried out with a commitment to science. John Bernal in The Social Function of Science, published in 1939, estimated that in the mid-1930s the Soviet science budget amounted to one per cent of GNP, compared to 0.1 per cent in Britain and 0.3 per cent in the USA. The Soviet Academy of Sciences played a central role in drawing up the Five Year Plan and science’s role in it. A network of elite research institutes was created on the Moscow-Leningrad axis. To promote decentralisation and regionalisation, in 1957 as part of Khrushchev’s policy a new ‘science city’, Akademgorodok, was announced in Siberia with twenty-two new research institutes. In the late 1960s the Soviet Academician Trapeznikov, Vice Chairman of the State Committee for Science and Technology, estimated that each rouble invested in scientific research increased national income by 1.45 roubles. Yet despite these resources, the scale of the material effort and the return estimated by Trapeznikov, the results in terms of diffusion of technology and broader economic growth were disappointing. So disappointing that in a speech in May 1985 the General Secretary, Mikhail Gorbachev, set out the poor performance of the Soviet economy and embarked on a wholesale reform programme.
The Soviet economy did not have the freedom, flexibility or information, prices or institutions to make the best of technology.
India’s licence raj constipated both science and economic development
Much the same could be said of India under Pandit Nehru, its first Prime Minister. Nehru is usually described professionally as a lawyer, and he practised at the Allahabad bar, but he had read the natural sciences tripos at Cambridge, completing his degree in 1910. As Prime Minister his principal leisure activity was reading about science, and he saw the promotion of science and technology as central to India’s development as a modern secular society and a successful economy. He was instrumental in establishing India’s all-India science and engineering academies. Yet this commitment to science did not lead to a general diffusion of technology or obvious economic success. Nehru’s interest in science was matched by a commitment to Fabian socialism and to Soviet-style planning, which India replicated. India’s economic policies were distinguished by several features: planning, controls, regulations, a concentration on heavy industries such as coal and steel, and autarky in the form of local production substituting for imports. In this context neither science nor the economy flourished. Research was done in government-sponsored institutes, with little carried out in either universities or industry. There was a perception that such research as did take place was carried out despite the bureaucracy that hindered its administration and approval. Professor Maheshwari’s presidential address to the Indian National Academy of Sciences rehearsed the nightmare of delays in applying to import even a very small piece of needed equipment: the correspondence and import clearance could take up to two years. Progress in the first forty years of independence could at best be described as anaemic. India’s science and its economy were constipated by the so-called ‘licence raj’ that vitiated progress.
In the 1990s, as the Indian economy was reformed, became more open to trade and dismantled its onerous regulations, it started to reap the benefits of the Nehru legacy: a large mathematically trained workforce and an education culture that has embraced engineering. The Indian economy exhibited rapid growth in the 1990s and 2000s, arising from sectors where its science base is decisive. These include software and information technology services, as well as information technology-enabled services (ITES), including the outsourcing of business processes such as accounting, customer services, finance, and human resources. The lesson of India’s experience is that once the right market conditions were in place, its economy was able to flourish using its science and technology base.
The tenuous immediate links between science and technological diffusion
There is little immediate connection between fundamental research in science and the practical applied innovations that can be diffused as workaday technology, still less technology that can be straightforwardly shown to raise the trend rate of growth of an individual economy. What the diffusion of technology requires is learning to use the devices and technology we already have. This dimension of technical progress is explored in David Edgerton’s book The Shock of the Old. Scientific speculation and discoveries that appear to be only of intellectual or recondite interest can turn out to be useful, but often much later.
When Michael Faraday demonstrated in 1831 that electromagnetism could be used to drive a motor, the field of electricity was one of novelty, often stimulating excited interest but appearing hugely remote from any great practical application. The apocryphal story is that the Prime Minister asked Faraday what use the invention might be; Faraday’s reply was that it was as much use as a newborn baby. It was some forty years later that it demonstrated its extraordinary utility. It was through the work of Thomas Edison and Joseph Swan in inventing the electric light bulb, the work of Edison and George Westinghouse in progressing the generation of electricity and its transmission as a source of power, and Nikola Tesla’s development of the three-phase induction motor that the full potential of Faraday’s demonstrations became apparent.
Einstein’s work on general relativity was first presented to the Prussian Academy of Sciences in 1915. The theory of relativity appeared to be firmly in the category of the recondite, even though Einstein had developed his earlier work while employed in the Swiss Federal Patent Office. The British Astronomer Royal Frank Dyson recognised that the forthcoming solar eclipse of 1919 would present an opportunity to test the theory. Two British astronomers, Arthur Eddington, based in Cambridge, and Andrew Crommelin, of the Royal Greenwich Observatory in London, were sent respectively to West Africa and to Brazil to make the observations. They photographed starlight passing close to the sun and took the appropriate measurements. Their results were presented to the Royal Society in London and written up in the Philosophical Transactions of the Royal Society as A Determination of the Deflection of Light by the Sun’s Gravitational Field, from Observations Made at the Total Eclipse of May 29, 1919 (Dyson et al., 1920). Eddington’s measurements demonstrating the bending of light by gravity, confirming Einstein’s theory, were and remain an important scientific observation, yet in the 1920s they appeared remote from any practical application. Many decades later, the Global Positioning System (GPS) is only able to determine location and velocity accurately at or near the earth’s surface by taking account of the effect of relativity on the timing signals from the satellites it uses.
Science represents a merit good that yields benefits through serendipity
Many scientists involved in great discoveries have expressed scepticism about the alleged potential of what they demonstrate. Lord Rutherford’s dismissal, in a 1933 interview, of harnessing the power of the atom for practical purposes any time soon as ‘moonshine’ is a vivid example of this caution; his scepticism was shared at the time by Albert Einstein and Niels Bohr. In supporting science, Lord Snow warned governments in his Godkin lectures at Harvard not to get carried away by what he called gadgets, that is, innovations that promised easy practical benefits or rapid economic prosperity. The Tomorrow’s World enthusiasm for the Hovercraft in Britain in the 1960s and the Conservative Government’s enthusiastic promotion of graphene, a new, very strong, light and flexible material, in the 2010s are examples of what Lord Snow had in mind. Science deserves support as a merit good regardless of its likely immediate benefits. There is huge hit and miss involved in science, and often a happy and unexpected serendipity.
The Internet, the US Defence Department, Tim Berners-Lee and the coming of AI
The modern Internet emerged from practical work done for the US Defense Department during the Cold War, which wanted to ensure that communications could be maintained in the event of a nuclear attack, even if one or more nodes were destroyed. This led to the development of ARPANET (Advanced Research Projects Agency Network), which in turn evolved into the Internet. Access was initially strictly limited to certain institutions and agencies, but over time the Internet became widely used within academia. Up to the 1980s commercial networks used different, incompatible network protocols, and while the Internet protocol suite (TCP/IP) emerged in the 1980s, there were debates about its relative merits compared with competing approaches, just as at the local network level Ethernet competed with IBM’s widely used Token Ring. Consumers were limited to dial-up services such as CompuServe. The Internet protocols had to be combined with developments in software, networking, fibre optics, telecommunications and computing power in order to realise their potential. A key step was the creation at CERN by Tim Berners-Lee of the HTTP and HTML protocols, which made it easier to use the Internet to transfer information. It was this last development, which we know today as the web, that enabled mass Internet use. It allowed useful applications to be created for consumers and new opportunities to make money, which led to very high demand and constant innovation and development, almost entirely by the private sector. Adoption of a common communications standard also created scale, which reduced costs. Cost, affordability and practical application are critical to all technological diffusion. One innovation in isolation was not enough.
AI relies on the combination of a series of developments in mathematics, software, data storage and processing capability. It is not clear how a government or public-sector guiding hand could have brought this about. For example, would governments have thought it worthwhile investing public money in graphics processors for the amusement of computer gamers? The parallel processing capabilities of these processors are now critical to AI, and to the explosive growth of Nvidia. Much of the mathematics and many of the software techniques would have been considered quite esoteric until recently. Nvidia enjoys many patent protections, but much of the software used in AI is open source with no intellectual property limitations. Open source software has grown in part because the Internet has made it extremely easy to share code and work collaboratively.
While a military use initially sponsored by the public sector may result in an innovation that later has a much wider series of applications, the public sector is not normally a powerful dynamic in the transmission of new technologies throughout an economy. Public sector bodies tend to be large and bureaucratic and have few incentives to improve productivity and efficiency; indeed, they normally exhibit a version of X-inefficiency. New ways of working tend to be inhibited by caution, perverse incentives and a strong trade union presence that resists the adoption of technology.
There are also clearly cultural dimensions that influence the diffusion of technology. America and several Asian societies such as South Korea, Japan, Taiwan, and Singapore display a promethean capacity to adopt new technological developments. Technology and its dispersion is promoted by America’s distinctive capital markets, the venture capitalists, the private equity networks of investors and public equity markets such as the Nasdaq.
What will be the economic effects of AI?
Ian Stewart, the Chief Economist at Deloitte, offers an excellent survey of the latest thinking on the economic effects of the latest iteration of the modern IT and tech era in Artificial Intelligence – jobs crash, productivity boom? Generative AI models like ChatGPT have generated much excited comment and anxious predictions of mass job losses, matched by hoped-for gains in productivity. Among the various studies, Goldman Sachs estimates that two-thirds of jobs in the US are exposed to some degree of automation due to AI, and that twenty-five per cent of workers in exposed occupations could have as much as half of their work removed. Economists at the University of Pennsylvania offer similar estimates. Goldman thinks that AI has the potential to raise growth in global productivity by 1.5 percentage points.
Ian Stewart rightly observes that ‘it will take years for such effects to become apparent at a macro level. For now, trying to assess the effects of AI relies more on individual studies, cases and stories, most of which highlight the potential rather than the limitations of the technology’. He notes that
the US National Bureau of Economic Research has published a study suggesting AI could raise productivity by 13.8 per cent; and the chief executive of Octopus Energy in the UK has said that AI was doing the work of 250 customer service workers and writing emails that delivered 80 per cent customer satisfaction, well above the 65 per cent achieved by skilled, trained people. While AI demonstrates capacity in specific tasks, such as coding or customer relations, the question is whether it has the potential to be deployed much more widely across the economy in a far wider range of roles.
Ian Stewart’s point is that ‘history shows that it often takes a long time for work to be redesigned to harness the full benefit of new technologies. It took decades for personal computers to impact measured US productivity’, reminding us of the famous remark made in 1987 by the US economist Robert Solow, the father of the neoclassical growth model, that ‘You can see the computer age everywhere but in the productivity statistics’. As Ian points out, ‘productivity did eventually respond, in the 1990s, but getting there took time, involved major disruption, the deployment of large amounts of capital and a lot of trial and error.’
Caution – take a ride on Robert Fogel’s railway
Advanced market economies are complex social enterprises. Their distinguishing features are their flexibility in the face of shocks and the huge scope they exhibit for substitution effects, along with diminishing returns at the margin. Whether one thing is substituted for another turns on relative prices and costs. This makes economies stable and efficient, but it also means that there is less scope for a silver bullet to transform their performance or overall economic welfare. People often make superficial assertions about transformative innovations and events, yet when these are subjected to rigorous and careful interrogation they turn out to have less overall macroeconomic effect than is supposed. The best example is the claims made for America’s transcontinental railway infrastructure in the second half of the 19th century and Robert Fogel’s careful estimates of the overall societal gains from replacing turnpikes and canals with railways.
The Royal Swedish Academy of Sciences summarised what Robert Fogel achieved in its 1993 Nobel Prize citation: ‘Robert Fogel’s scientific breakthrough was his book (1964) on the role of the railways in the American economy. Joseph Schumpeter and Walt W. Rostow had earlier, with general agreement, asserted that modern economic growth was due to certain important discoveries having played a vital role in development. Fogel tested this hypothesis with extraordinary exactitude and rejected it. The sum of many specific technical changes, rather than a few great innovations, determined the economic development. We find it intuitively plausible that the great transport systems play a decisive role in development. Fogel constructed a hypothetical alternative, a so-called counterfactual historiography; that is he compared the actual course of events with the hypothetical to allow a judgement of the importance of the railways. He found that they were not absolutely necessary in explaining economic development and that their effect on the growth of GNP was less than three per cent.’
The diffusion of technology requires open economies and flexible markets
In a letter to Robert Hooke in 1675, Sir Isaac Newton famously wrote, referring to René Descartes, that what he ‘did was a good step. You have added much several ways, and especially in taking the colours of thin plates into philosophical consideration. If I have seen further, it is by standing on the shoulders of Giants.’ This has been taken as a metaphor for the understanding that major thinkers owe a debt to those who have gone before and paved the way for their progress. It is a statement of profound intellectual humility. Science offers knowledge and all sorts of puzzles that by happy accident over time yield huge practical benefits. Those benefits arise by serendipity; they cannot be planned or forecast. Surprise is one of the principal pleasures of science. Science deserves long-term support for itself, not for some naïve perception of its near-term utility. It is a merit good.
Neither science nor the pace or shape of technological diffusion should be directed by the state. Michael Polanyi, a medical doctor and physical chemist, visited the USSR in 1936 to deliver a lecture series at the Ministry of Health. Polanyi had the opportunity to meet Bukharin, who explained that in a socialist society all scientific research should be directed according to the requirements of the Soviet Five-Year Plan. The influential British Marxist scientist John Bernal backed a similar call for all British science to be planned by the state. Having seen the consequences of the Soviet Government’s backing for the distinctive approach to biology advocated by Trofim Lysenko, which rejected Mendelian genetics and advocated approaches to crop management that are credited with famine in the USSR in the 1930s and in China in the 1950s, Polanyi energetically opposed proposals for an all-encompassing state direction of science. He helped to found the Society for Freedom in Science and wrote The Logic of Liberty in 1951. Polanyi’s central proposition is that science flourishes when there is open debate among specialists and scientists have the freedom to pursue science as an end in itself. In many respects this fundamental freedom is what has traditionally been embodied in the Haldane principle that has guided British science funding, leaving specialists in the field to decide funding priorities rather than having research spending directed by a political process led by ministers.
There are two dimensions to this. The first is caution about the political choice of detailed research priorities and the direction of research projects. The second is avoiding political prohibitions on certain forms of inquiry. Avoiding political enthusiasm for what Lord Snow would call gadgets is important. In an emergency such as a war or a public health crisis, identifying a narrow goal and throwing everything at it can yield a result; the Manhattan Project and the nuclear bomb, and the Covid vaccines, are good examples. Yet short of such an emergency it is difficult to identify similarly appropriate priorities. Closing down a line of inquiry is equally damaging. The EU’s biotechnology directive, its ban on GM crops, its refusal to fund genome research, and the dismissal of the EU Commission’s Chief Scientific Adviser and the dismantling of the department she headed, because she was unable to give a scientific judgement about the risk of GM crops that coincided with the political and cultural judgements of the European Parliament, are all examples of what is to be avoided.
Technology should be diffused throughout an economy, but the precise process will be complex and contingent on the market structures and institutions of the society involved. The starting point should be making use of what you already have: in the UK that would include a health service that uses computers and email, a railway that uses driverless trains and train managers who make use of mobile telephony, and agricultural businesses that are allowed to make full use of developments in biotechnology such as genetically modified crops and gene editing. Diffusion turns on knowledge, skills and education, but also on openness, realism about relative prices, institutions, capital markets that can finance innovation, and free trade through which new technology can easily be imported. Technology will be diffused where mercantilism is avoided, markets are open, and product and labour markets are flexible. This requires careful thought about how economies allocate resources, price goods and services and invest, and about identifying market failures that need to be corrected and structural defects in markets that invite structural reform. Identifying these changes goes beyond platitudes extolling the merits of STEM.
Warwick Lightfoot
12 June 2023
Warwick Lightfoot is an economist and was Special Adviser to the Chancellor of the Exchequer between 1989 and 1992. He became interested in science and science policy when he had to brief ministers at the Department of Employment on issues relating to the hole in the ozone layer and climate change between 1987 and 1989. He had to undertake this because, at that time, there were no officials at the department with either a training or an interest in science available to do so. He acquired a continuing interest in science and was much encouraged in it by his friend Dr Bryan Levitt, an academic chemist at Imperial College London. This article draws heavily on Hilary and Steven Rose’s seminal analysis Science and Society, and benefitted from the recent analysis by Ian Stewart, Chief Economist at Deloitte, Artificial Intelligence – jobs crash, productivity boom? It also draws on the seminars and conferences that Warwick Lightfoot organised, with the help of Professor Lord May FRS, a former President of the Royal Society and the first modern Chief Scientific Adviser to Her Majesty’s Government, as part of the Celebration of Science Kensington and Chelsea between 2013 and 2017.