World’s Largest Computer Chip Can Power Supercomputer 8 Times Faster

The world’s largest computer chip is capable of training 10 times larger AI models. Credit: ESO / Wikimedia Commons / CC BY 4.0

Researchers have developed the largest computer chip ever made, packed with 4 trillion transistors, the tiny switches that perform a chip’s computations. Its creators say it is destined to power an exceptionally capable artificial intelligence (AI) supercomputer.

The new chip, called the Wafer Scale Engine 3 (WSE-3), comes from the company Cerebras. It is the third generation of the firm’s wafer-scale technology, built to power AI systems like OpenAI’s GPT-4 and Anthropic’s Claude 3 Opus.

The chip has 900,000 AI cores and is built on a silicon wafer measuring 8.5 by 8.5 inches (21.5 by 21.5 centimeters), the same size as its predecessor, the WSE-2 from 2021, as reported by Live Science.

“When we started on this journey eight years ago, everyone said wafer-scale processors were a pipe dream. We could not be more proud to be introducing the third generation of our groundbreaking wafer-scale AI chip,” said Andrew Feldman, CEO and co-founder of Cerebras.

Twice as powerful as its predecessor

Company representatives stated in a press release that the new chip uses the same amount of power as its predecessor but is twice as powerful. In comparison, the previous chip had 2.6 trillion transistors and 850,000 AI cores.

This suggests the company has roughly kept pace with Moore’s Law, the observation that the number of transistors in a computer chip doubles about every two years, according to Live Science.

By comparison, one of the most powerful chips used to train AI models, Nvidia’s H200 graphics processing unit (GPU), has only 80 billion transistors, 50 times fewer than Cerebras’ chip.
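
As a quick sanity check, the short Python sketch below reruns the arithmetic using only the transistor counts quoted in this article; the variable names are illustrative, not vendor specifications.

```python
# Back-of-the-envelope check of the transistor figures quoted in this article.
WSE3_TRANSISTORS = 4e12    # 4 trillion (Cerebras WSE-3)
WSE2_TRANSISTORS = 2.6e12  # 2.6 trillion (Cerebras WSE-2, 2021)
H200_TRANSISTORS = 80e9    # 80 billion (Nvidia H200)

# How many times more transistors does the WSE-3 pack than the H200?
print(f"WSE-3 vs H200:  {WSE3_TRANSISTORS / H200_TRANSISTORS:.0f}x")  # 50x

# Generation-over-generation growth from the WSE-2 to the WSE-3.
print(f"WSE-3 vs WSE-2: {WSE3_TRANSISTORS / WSE2_TRANSISTORS:.2f}x")  # 1.54x
```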

In a separate statement released on March 13, company representatives said the WSE-3 chip will eventually power the Condor Galaxy 3 supercomputer, to be located in Dallas, Texas.

The Condor Galaxy 3 supercomputer, currently in the works, will consist of 64 Cerebras CS-3 AI system “building blocks,” all powered by the WSE-3 chip. When these blocks are connected and switched on, the whole setup will deliver 8 exaFLOPs of computing power.

Moreover, when linked with the Condor Galaxy 1 and Condor Galaxy 2 systems, the entire network will deliver a total of 16 exaFLOPs.

For reference, floating-point operations per second (FLOPs) measure the numerical computing performance of a system, where 1 exaFLOP equals one quintillion (10^18) FLOPs, according to Live Science.
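
To make the unit concrete, here is a minimal sketch converting the figures above into raw FLOPs; the per-building-block value is simple division on the article’s numbers, not a published Cerebras specification.

```python
EXAFLOP = 1e18  # one quintillion floating-point operations per second

condor_galaxy_3 = 8 * EXAFLOP    # 64 CS-3 building blocks combined
combined_network = 16 * EXAFLOP  # Condor Galaxy 1 + 2 + 3 linked together

# Derived by simple division, not an official Cerebras figure:
per_cs3_block = condor_galaxy_3 / 64

print(f"Condor Galaxy 3 total:  {condor_galaxy_3:.1e} FLOPs")   # 8.0e+18
print(f"Average per CS-3 block: {per_cs3_block:.2e} FLOPs")     # 1.25e+17
print(f"Combined network:       {combined_network:.1e} FLOPs")  # 1.6e+19
```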

Condor Galaxy 3 will train AI models 10 times larger than ChatGPT

The current titleholder for the world’s most powerful supercomputer is Oak Ridge National Laboratory’s Frontier, which delivers approximately 1 exaFLOP of computing power.

Company representatives stated that the Condor Galaxy 3 supercomputer will be utilized to train future AI systems, which could be up to 10 times larger than GPT-4 or Google’s Gemini.

For example, GPT-4 is rumored to use around 1.76 trillion variables, known as parameters, during training. The Condor Galaxy 3, by contrast, could reportedly handle AI models with around 24 trillion parameters.
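
Taking the rumored GPT-4 figure at face value, the ratio is easy to verify; a minimal sketch, assuming both headline numbers are accurate:

```python
GPT4_PARAMS = 1.76e12        # rumored GPT-4 parameter count, unconfirmed
CONDOR_3_MAX_PARAMS = 24e12  # model size Cerebras says Condor Galaxy 3 can handle

ratio = CONDOR_3_MAX_PARAMS / GPT4_PARAMS
print(f"{ratio:.1f}x")  # ~13.6x the rumored GPT-4 size
```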
