
Close Look: March 2024

Generative artificial intelligence: the next phases

Many words have been written about the transformative potential of generative artificial intelligence (gen AI) across the industries and nations of the globe. And many billions of dollars have been made by investors willing to fund this revolutionary new technology, or simply to speculate on its success. The tally of mentions of AI by CEOs on investor conference calls has soared, and any company claiming a potential AI benefit has seen a hefty boost to its share price.

The first, or implementation, phase of gen AI, brought to wider public attention in 2022 by ChatGPT, is well under way. The graphics processing units (GPUs), or high-powered microchips, required to ‘train’ the large language models (LLMs) that generate gen AI’s responses to users, have been in hot demand. The leading designer of this technology, Nvidia, is held to be one of the few companies genuinely able to monetise the hype surrounding AI. Although future profit forecasts might be based more on speculation than on substance, Nvidia’s market value has powered through $2 trillion, making it only the third stock ever to reach that milestone.

The implementation phase of AI is now morphing into the acceleration phase. Nvidia recently showcased its new Blackwell chips. These chips, led by the B200, contain 208 billion transistors, whereas the current H100 chips have only 80 billion. The new chips will be twice as powerful in training LLMs. What is more, according to Nvidia’s CEO Jensen Huang, they are expected to be five times as powerful at the task of inference, greatly enhancing the speed at which apps such as ChatGPT or Google’s Gemini can respond to users’ queries.

However, the opportunity to make supernormal profits in the AI space has not gone unnoticed. Tech giants such as Microsoft, Google and Amazon, while partnering with Nvidia at one level, are also pouring huge amounts of investment into designing their own chips. The aim is to reduce their dependence on Nvidia and to keep existing customers locked into their own hardware and software systems. Nvidia’s new focus on the inference phase could be seen as a response to the coming increase in competition from these ‘hyperscalers’, even as Nvidia’s own customers move beyond training LLMs to the implementation phase of their AI investment.

The eventual size of the AI market remains open to debate, although estimates range from the huge to the stratospheric. Nvidia has forecast that the total value of data centre technology will reach $2 trillion over the next five years, and a recent forecast predicted a $40 trillion boost to the global economy by 2030. Regulators are monitoring developments to ensure a competitive market, while platforms failing to tackle AI-powered disinformation or deepfakes could face harsh fines. Meanwhile, Chinese and western AI scientists have agreed to co-operate, in the face of increasingly autonomous AI systems, to reduce the risk of future biological or cyber attacks.
