

Is AI (b)reaching its limits?

15 November 2024

Lukas Gehrig, Zurich, Switzerland, Quantitative Strategist; Nikola Vasiljevic, Ph.D., Zurich, Switzerland, Head of Quantitative Strategy; Robert Smith, CFA, London, UK, Data Scientist

Key points

  • Artificial intelligence will change the world of work. But by how much and how quickly?
  • The degree to which AI replaces employees depends on its ability to perform core skills, which is still limited. 
  • The US and China lead the field in implementing large-scale AI systems. They are unlikely to relinquish this lead, to the detriment of others.
  • Investing in applications in fields ripe for productivity gains appears particularly promising.

Economist John Maynard Keynes argued that the increase in technical efficiency was happening too fast for the labour market to adjust. And that was back in 1933. The first televisions had just entered homes, revolutionising information distribution, and automobiles were ever more prevalent in cities. He identified a new ‘disease’, technological unemployment, defined as “unemployment due to our discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour”1.

In today’s world, the promises of great productivity gains clash with Keynesian worries about job cuts and displaced labour. The time horizon over which these events might unfold, often found to be around ten years, suggests that much urgency is needed. 

In this article, we argue that despite artificial intelligence’s (AI) meteoric rates of adoption in recent years, it is still in its infancy in terms of its ability to replace labour and to drive productivity gains. The challenges facing the broad implementation of AI across industries range from its hunger for energy to the finer details of applying it to job requirements that involve questions of trust. Moreover, while the technology is often discussed as a global disruptor, research and implementation can vary from region to region.

The race is underway

Keynes argued, unsurprisingly, that countries not at the vanguard of progress would suffer relative to those that were. Indeed, through productivity increases, the speed of automation has correlated strongly with most measures of living standards, such as income per worker.

AI is interesting in this regard because, while it may enable skill gaps to close and elevate workforces in developing economies, the race for dominance in AI capabilities is well underway. Indeed, large parts of the world may never catch up with the leaders.

It’s a battle royal between the US and China

The map shows the number of AI systems identified by country, where AI models are defined as “large-scale” when their training compute is confirmed to exceed 10²³ floating-point operations.

Sources: Our World in Data, Lightcast via AI Index (2024), Barclays Private Bank. Data accessed in September 2024

The key reason for the concentration of AI developments within just two economic heavyweights is the ever-expanding cost of training the systems. While the combined costs for hardware and energy usage were measured in the thousands of dollars back in 2016, frontier models were easily consuming millions of dollars by 2023, and that was just for initial training runs. This limits the number of contributors to frontier research. 

Costs of training AI systems have increased

The combined costs in hardware and energy for training AI models are expressed in constant 2023 US dollars. The hardware expenses are amortised and calculated by multiplying the training chip-hours by the reduced hardware cost, with an additional 23% added for networking expenses

Sources: Our World in Data, Lightcast via AI Index (2024), Barclays Private Bank. Data accessed in September 2024

Six nuclear reactors to train a model

The process of training these models involves trillions of data points and vast numbers of calculations to improve accuracy and performance. This, in turn, demands large-scale data centres packed with specialised hardware, consuming large amounts of electricity.

Recent estimates suggest that the computational demands of AI models have grown by over 260% per year, on average, over the last five years2. This growth is expected to continue, as increases in model scale can contribute two-thirds of the performance gains of AI models.
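To put that growth rate in perspective, the short calculation below compounds the cited ~260% annual increase over a five-year horizon. It is a minimal, purely illustrative piece of arithmetic using the figure quoted above; the doubling-time comment is derived from the same number.

```python
# Illustrative arithmetic only: compounding the ~260% average annual growth in
# training compute cited above (footnote 2) over five years.
annual_growth = 2.60                     # 260% growth per year, i.e. compute multiplies by 3.6x annually
years = 5

multiple = (1 + annual_growth) ** years  # 3.6 ** 5
print(f"Training compute multiple over {years} years: ~{multiple:,.0f}x")  # ~605x

# At this pace, compute doubles roughly every 6-7 months: ln(2) / ln(3.6) ≈ 0.54 years
```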

However, this trajectory faces challenges, including the availability of high-quality training data, the scaling up of production of the advanced chips needed and constraints on energy supply to data centres. Of all the constraints, power supply seems to be the most significant. If AI continues to scale at the current pace, it is estimated that by 2030 training a frontier model will require six gigawatts (GW) of power3, even after factoring in future efficiencies. To put that in perspective, six GW is equivalent to the output of around six nuclear power plants4.

Ten Google searches for one GPT query

Training is only part of the equation though; once models are deployed, they require energy for ‘inference’ — the process of using trained models to provide results and solve problems. Whilst inference is far less energy-intensive than training, a single ChatGPT query is estimated to require 10 times the electricity of a Google search5. Therefore, if Google were to power its entire search engine using AI models, it would consume 29.3 terawatt hours (TWh) per year6, equivalent to Ireland’s electricity consumption over the same period.
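For readers curious how an estimate of that size is assembled, the sketch below reconstructs the order of magnitude. The per-query energy and daily search volume used here are our own ballpark assumptions for illustration only, not figures taken from the article’s sources.

```python
# Back-of-the-envelope reconstruction of an AI-powered-search energy estimate.
# Both inputs below are illustrative assumptions, not figures from the cited sources.
wh_per_ai_query = 9.0        # assumed Wh per AI-assisted search (a classic search is often put near 0.3 Wh)
searches_per_day = 9e9       # assumed global Google searches per day

twh_per_year = wh_per_ai_query * searches_per_day * 365 / 1e12
print(f"Estimated annual consumption: ~{twh_per_year:.1f} TWh")  # ~29.6 TWh, in the region of the 29.3 TWh cited
```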

Data centres account for 2-4% of energy consumption in large economies today, but this is expected to double by 2030, making up 8% of US power demand7, driven by AI growth. The current geographical concentration of these data centres poses the biggest capacity issue, with data centres already consuming over 10% of electricity in at least five US states and over 20% of electricity consumption in Ireland.

Scrambling for energy

The question, then, is not only how AI will continue to scale, but also how the energy infrastructure will keep pace. With the most recently completed nuclear reactors having taken around eight years to build, and only twelve reactors having been connected to the grid globally over the last three years8, the bottleneck is real.

Meeting AI’s future power needs will require large-scale investments in energy generation, likely far beyond what is currently planned. Google’s recent agreement to purchase nuclear power, which will supply 200 megawatts (MW) by 2030, highlights both the industry’s awareness of this issue and the massive gap that remains.

Output boost over the next decade: 1% or 100%?

Near-term physical limits to AI capabilities aside, gauging its impact on economies is difficult. It’s easy to extrapolate too far in the euphoria of the moment. By contrasting the complexity of tasks that are up for automation with present and future capabilities of AI, estimates can be derived for how much the technology may substitute labour and drive raw productivity gains. Furthermore, the shift may also generate new ways in which labour might be employed, an important concept in Keynes’ thoughts on technological progress.

Extremely optimistic estimates for efficiency gains suggest global GDP gains of between 100% and 300% over ten years, thanks to Artificial General Intelligence, which goes miles beyond current capabilities. More grounded forecasts include some that project a 7% increase in global GDP over ten years, based on more easily attainable generative AI (GenAI) capabilities. Cautious estimates see a 1.1% increase in GDP over ten years, not globally, but just for the leading country in AI research, the US9.
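To make these horizons easier to compare, the short sketch below converts each cumulative ten-year figure into an approximate annualised growth rate. The cumulative numbers are those quoted above; the conversion itself is simple compounding.

```python
# Annualising the ten-year GDP estimates quoted above (cumulative figures from the text).
ten_year_gains = {
    "AGI, optimistic (lower bound)": 1.00,   # 100% over ten years
    "GenAI, more grounded": 0.07,            # 7% over ten years
    "US only, cautious": 0.011,              # 1.1% over ten years
}

for label, cumulative in ten_year_gains.items():
    annualised = (1 + cumulative) ** (1 / 10) - 1
    print(f"{label}: {cumulative:.1%} cumulative ≈ {annualised:.2%} per year")
# Roughly 7.2%, 0.68% and 0.11% per year respectively
```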

The difficulty of estimating AI’s potential impact is best shown by contrasting estimates on computerisation made just ten years ago with today’s data. In 2013, the Oxford University academics Carl B Frey and Michael A Osborne dissected the US economy into 702 different occupations and estimated the probability of computerisation for each. Aggregating their estimates, they found that 47% of the US labour force was at high risk of being automated “relatively soon, perhaps over the next decade or two”. Workers in transport and logistics occupations, but also office and administrative support staff, were regarded as especially at risk.

E-commerce boosted transportation and logistics occupations

Number of employees in millions and growth rates from 2013 to 2023 for different occupations from US Occupational Employment and Wage Statistics

Sources: Bureau of Labor Statistics, Barclays Private Bank, April 2024

Ten years down the road, it is astonishing to find that the transport and warehousing sector has, among all US sectors, created the largest number of jobs (+44%). Zooming in on occupations, instead of sectors, we find that while office and administrative support occupations have indeed declined, e-commerce has actually boosted job growth in material-moving occupations, while also leading to above-average growth rates in motor vehicle operator occupations. This is a very powerful example of how advances in technology can create new jobs.

Current AI affects all skill levels

Whether AI ultimately replaces employees entirely, or uplifts their capabilities, depends on its capabilities and the difficulty of the tasks. Academic research divides the skills used for any job into core skills and the less important ‘side skills’. As long as AI is confined to automating side skills, like the drafting of letters for an advocate, it is complementary. Once the technology is powerful enough to automate core skills, it becomes a substitute and, depending on the cost, may replace labour entirely.

Recent research suggests that current AI capabilities already affect the side skills of low- and high-skill workers10. But the likelihood of a large-scale displacement of labour is still low for lesser-skilled workers, often found in physical jobs, where side skills are few and secondary. For high-skilled workers, where side skills are relatively more important, the current capabilities of AI are not refined enough. This is likely to limit the productivity gains economies can expect in the near term.

Are we imagining things?

AI makes mistakes, just like us. One of the main challenges, which is also holding it back from performing more of the side skills mentioned above, is that systems can inadvertently perpetuate biases present in their training data. Such embedded biases can systematically, and unfairly, discriminate against certain individuals or groups, potentially favouring others.

Moreover, GenAI introduces new risks, like so-called ‘hallucination’, where models might confidently produce incorrect but plausible outputs, and these risks may persist despite efforts to improve data quality and transparency.

Therefore, in customer-facing applications, GenAI-supported tools could offer inappropriate advice to non-experts, potentially harming both users and the reputation of the company. For this reason, sectors where data is less sensitive may embrace full automation more quickly than others. For example, GenAI use in critical industries such as finance or healthcare requires close human oversight, tailored to the level of risk posed by its application.

Sectoral differences in implementation 

Ensuring robust AI performance has become essential to safeguarding public trust. Robustness in AI spans model accuracy and governance to prevent unethical outcomes – such as bias and exclusion – and address potential privacy shortcomings. 

Enterprise-level GenAI systems are being developed to mitigate the privacy concerns found in public systems, potentially enhancing data security in sectors such as finance and healthcare. While these enterprise-level applications may reduce certain risks associated with public ones, they may not be a cost-effective solution for smaller financial institutions.

Therefore, issues like transparency, interpretability, data privacy and potential biases challenge public confidence and the pace of AI adoption, which may vary significantly from one industry to another.

Overestimate the short run, underestimate the long run

In 1933, Keynes described the new disease he feared would plague economies as a ‘temporary phase of maladjustment’. In the long run – over one hundred years – he claimed the standard of life would be four to eight times that of 1930. Ten years off the full century, we can state that US life expectancy at birth has shot up from 60 years to 79, across genders. Infant mortality dropped from over 100 deaths per 1,000 live births to below ten. And after adjusting for inflation, per-capita GDP increased 7.5-fold11.

One could argue that anything AI can add from this point onward, for the last ten years of Keynes’ projection, is just a bonus. Still, there seems to be limited potential for broad productivity gains across sectors and jurisdictions coming from generative artificial intelligence for now. As frontier models are becoming ever more resource-intensive to develop, the growth of AI capabilities is very much tied to our ability to generate more energy – something we have been struggling with lately.

In this environment, investing in task-specific AI applications in fields where AI unlocks productivity gains by automating a large share of an occupation’s tasks appears more promising to us than banking on ‘breakthrough’ developments from the competing producers of frontier models.

""

