Google claim their ‘faster’ AI chips use ‘less power’ than Nvidia’s

By Amaar Chowdhury


The ‘AI arms race’, as observers have dubbed it, is stepping up a notch. Google are now claiming their AI supercomputer is ‘faster’ and consumes ‘less power than the Nvidia A100,’ potentially threatening Team Green’s foothold in the GPU and AI hardware market.

The Cloud TPU v4 is Google’s Tensor Processing Unit, built for machine learning and large-scale data computation. First announced by Google CEO Sundar Pichai in 2021, the technical details of the TPU v4 were officially unveiled in a research paper published at the start of April.

Featuring 4,096 chips, the supercomputer is ten times faster than the previous v3 iteration. But improving on your own predecessor is one thing; beating the market leader is another.

Nvidia, once known primarily for producing gaming graphics cards, are now an integral part of deep learning and artificial intelligence solutions. The recent growth of OpenAI, one of Nvidia’s partners, has certainly contributed to their booming stock and market dominance. Could that be disrupted by Google’s Cloud TPU v4, which threatens to offer a faster and more power-efficient AI solution?

While the Cloud TPU has previously been used to train AI and provide the backbone of data centres, Nvidia’s A100 and H100 have established a vice-like grip on the industry. You’d be hard-pressed to find a tech giant that doesn’t rely on Nvidia’s chips in some way. In fact, that reliance has allowed for the recent inflation of GPU prices, which has angered many in the gaming community.

Google are claiming the TPU v4 is “1.2x – 1.7x faster and uses 1.3x – 1.9x less power than the Nvidia A100,” though they declined to compare it against the H100, as they are unwilling to pit systems developed two years apart against each other. A fairer comparison, they suggest, would be against a TPU developed in 2023 on a comparable 4nm process.
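To see what those two quoted ranges imply when taken together, here is a minimal arithmetic sketch. The figures come from Google's claim above; the assumption that the speed and power ratios can simply be multiplied into a performance-per-watt estimate (and that they apply to the same workloads) is ours, not Google's.

```python
# Illustrative arithmetic only: combine Google's quoted ranges into a rough
# performance-per-watt estimate. Multiplying the two ratios assumes they
# apply to the same workload, which the claim does not guarantee.

def perf_per_watt_gain(speedup: float, power_ratio: float) -> float:
    """TPU v4 vs A100: 'speedup' times faster while drawing
    1/power_ratio of the power -> perf/W advantage is the product."""
    return speedup * power_ratio

low = perf_per_watt_gain(1.2, 1.3)   # conservative ends of both ranges
high = perf_per_watt_gain(1.7, 1.9)  # optimistic ends of both ranges

print(f"Estimated perf/W advantage: {low:.2f}x - {high:.2f}x")
```

Under that (hypothetical) reading, the claim amounts to roughly a 1.6x to 3.2x performance-per-watt advantage over the A100.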

Nvidia’s chips were used to train GPT-4, which has fuelled much of Big Tech’s recent interest in AI. Whether Google’s latest hardware changes that balance, we’ll just have to wait and see.

At the end of the day, can Google’s TPU v4 run Crysis?