
Google Reveals Revolutionary New AI Supercomputer

Google has revealed a new AI supercomputer which it claims is faster than competitors. Credit: mikemacmarketing / Wikimedia Commons / CC BY 2.0

Google has revealed details about its new AI (artificial intelligence) supercomputer, which it claims is faster and more efficient than the systems manufactured by rival Nvidia.

Nvidia currently dominates the market for AI model training and deployment, with popular AI platforms like OpenAI’s ChatGPT and Google’s own Bard being powered by Nvidia’s A100 chips. However, Google is hoping it can catch up with the development of its chip, the Tensor Processing Unit (TPU).

With AI now widely recognized as the next big thing in the tech industry, the stakes could not be higher. All the major tech companies, including Google, Meta, Microsoft, Amazon, IBM, and Nvidia, want as large a slice of the market as they can take. The result is a race to innovate and outperform competitors.

Google’s new AI supercomputer

AI products and platforms like ChatGPT, Notion, or Bard require many computers and hundreds of thousands of chips working in concert to train models. The computers themselves are left running constantly for weeks or months on end.

Google announced on Tuesday that it had developed a state-of-the-art system consisting of more than 4,000 TPUs and customized components, designed specifically for running and training AI models. This system has been operational since 2020 and was utilized to train Google’s PaLM model, which competes with OpenAI’s GPT model, over a period of 50 days.

According to Google researchers, the TPU-based AI supercomputer, named TPU v4, is “1.2x–1.7x faster and uses 1.3x–1.9x less power than the Nvidia A100.”
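The two quoted ranges compound: a chip that is both faster and less power-hungry gains on performance per watt by the product of the two factors. A minimal sketch of that arithmetic follows; the multiplication is a simple illustration based on the figures quoted in the article, not Google's published benchmark methodology.

```python
# Illustrative arithmetic only: combine the speed and power ranges quoted
# by Google's researchers into an implied performance-per-watt advantage.

speedup_range = (1.2, 1.7)        # TPU v4 speed relative to Nvidia A100
power_saving_range = (1.3, 1.9)   # A100 power draw divided by TPU v4 power draw

# Performance per watt scales as speedup x power saving, so the implied
# advantage spans the product of the two ranges.
perf_per_watt_low = speedup_range[0] * power_saving_range[0]
perf_per_watt_high = speedup_range[1] * power_saving_range[1]

print(f"Implied perf/watt advantage: "
      f"{perf_per_watt_low:.2f}x to {perf_per_watt_high:.2f}x")
```

By this rough reckoning, the claimed advantage in performance per watt would fall somewhere between roughly 1.6x and 3.2x over the A100.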

“The performance, scalability, and availability make TPU v4 supercomputers the workhorses of large language models,” the researchers added.

Competition

Nvidia currently holds roughly 90 percent of the market for AI model training and deployment. However, Google's new AI supercomputer could challenge that dominance.

It remains to be seen how Google's TPU stacks up against the latest Nvidia AI chip, the H100, which was unveiled last year. Google researchers did not have access to the H100, which also benefits from more advanced manufacturing techniques, at the time they were testing the TPU.

Nvidia CEO Jensen Huang is confident that the new H100 is capable of meeting the competition, however. On Wednesday, after the results and rankings from an industrywide AI chip test called MLPerf were made public, Huang said that the H100 is significantly faster than the previous generation.

“Today’s MLPerf 3.0 highlights Hopper delivering 4x more performance than A100,” the Nvidia CEO commented in a blog post. “The next level of Generative AI requires new AI infrastructure to train Large Language Models with great energy-efficiency.”

The significant computational power required for AI is costly, leading many in the industry to concentrate on creating novel chips, components (such as optical connections), or software approaches that minimize the computational demands. For the firms that successfully develop such technology, the potential financial gains will be massive.
