A company has managed to create the largest processing chip ever built, far exceeding anything Intel or AMD has ever produced. With an insane 1.2 trillion transistors on a single silicon wafer, the processor is by far the biggest semiconductor chip ever made. The company behind it plans to dedicate the chip to accelerating Artificial Intelligence (AI) workloads.
The Cerebras Wafer Scale Engine, made by the new artificial intelligence company Cerebras Systems, is the largest semiconductor chip ever built. The processor has 1.2 trillion transistors, the basic on-off electronic switches that make up any silicon chip. By comparison, a recently manufactured high-end Advanced Micro Devices (AMD) processor has 32 billion transistors. Needless to say, the transistor count of the Cerebras Wafer Scale Engine far exceeds even top-end AMD and Intel CPUs and GPUs.
Cerebras Wafer Scale Engine Is The Largest Single-Chip Processor Ever Built:
The Cerebras WSE is a humongous 46,225 square millimeters of silicon that houses 400,000 AI-optimized, no-cache, no-overhead compute cores and 18 gigabytes of local, distributed, superfast SRAM memory as the one and only level of the memory hierarchy. In comparison, the largest NVIDIA GPU measures 815 square millimeters and packs 21.1 billion transistors. Simple math indicates the Cerebras WSE is 56.7 times larger than the high-end NVIDIA GPU.
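That size comparison is easy to verify from the two die areas quoted above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the size comparison, using the
# die areas quoted in the article.
wse_area_mm2 = 46_225   # Cerebras WSE silicon area
gpu_area_mm2 = 815      # largest NVIDIA GPU die cited in the article

ratio = wse_area_mm2 / gpu_area_mm2
print(f"WSE is {ratio:.1f}x larger")   # ~56.7x, matching the article
```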
Today is the perfect day to share our first tweet as we have announced the largest chip ever built to accelerate AI compute! Read all about the Cerebras Wafer Scale Engine (WSE) via @WIRED and @tsimonite https://t.co/ATaBRnxnGD pic.twitter.com/dX0xetyX6X
— Cerebras Systems (@CerebrasSystems) August 19, 2019
The Cerebras WSE’s memory bandwidth is 9 petabytes per second. In other words, the world’s largest processor offers 3,000 times more high-speed, on-chip memory and 10,000 times more memory bandwidth than the leading GPU. The processor’s cores are linked together with a fine-grained, all-hardware, on-chip mesh-connected communication network. Owing to the simplified architecture and the huge die size, combined with ultra-high bandwidth, the fabric can deliver an aggregate bandwidth of 100 petabits per second. Simply put, the Cerebras WSE’s large number of cores, greater local memory, and low-latency, high-bandwidth fabric make it an ideal processor to significantly accelerate Artificial Intelligence tasks.
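The 3,000x and 10,000x multiples can be sanity-checked against a contemporary GPU. The article does not name the comparison device, so the GPU baselines below are assumptions (roughly a V100-class part with ~6 MB of on-chip SRAM and ~0.9 TB/s of memory bandwidth):

```python
# Sanity check of the "3,000x memory / 10,000x bandwidth" claims.
# WSE figures come from the article; the GPU baselines are ASSUMED
# (approximately a V100-class GPU) since the article names no device.
wse_sram_bytes = 18e9     # 18 GB of on-chip SRAM (from the article)
wse_bw_bytes_s = 9e15     # 9 PB/s memory bandwidth (from the article)

gpu_sram_bytes = 6e6      # assumed: ~6 MB of on-chip SRAM
gpu_bw_bytes_s = 0.9e12   # assumed: ~0.9 TB/s HBM2 bandwidth

print(f"memory ratio:    {wse_sram_bytes / gpu_sram_bytes:,.0f}x")   # ~3,000x
print(f"bandwidth ratio: {wse_bw_bytes_s / gpu_bw_bytes_s:,.0f}x")   # ~10,000x
```

With those assumed baselines, the two ratios come out to exactly the multiples Cerebras quotes, which suggests the comparison point is a top-end GPU of that era.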
Why Aren’t Intel And AMD Making Such Custom-Designed Huge CPUs And GPUs?
Intel, AMD, and most other silicon chip makers take a completely different, traditional approach. Commonly available CPUs and GPUs start life as a batch of many individual chips patterned across a 12-inch silicon wafer, which is then cut apart into the separate processors that ship in products. The Cerebras WSE, on the other hand, is a single chip spanning an entire wafer. Simply put, all 1.2 trillion transistors on the largest processor truly work together as one giant silicon chip.
There’s a rather simple reason why companies like Intel and AMD do not build such insanely large single-chip wafers. A silicon wafer invariably contains a few impurities, any one of which can cause a chip to fail. Chipmakers are well aware of this and design their processors accordingly: because each wafer is diced into many small chips, a defect ruins only the chip it lands on, and the rest of the wafer still yields working parts. If the entire wafer is a single chip, however, the odds that at least one defect lands somewhere on it and disables it are extremely high.
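A standard first-order yield model (the Poisson model) makes the problem concrete. The defect density below is an illustrative assumption, not Cerebras's actual figure:

```python
import math

# Illustrative Poisson yield model (a textbook first-order model,
# NOT Cerebras's actual numbers). The probability that a die of a
# given area contains zero random defects is exp(-D * A), where
# D is the defect density and A is the die area.
defect_density_per_mm2 = 0.001   # assumed: 0.1 defects per cm^2

def die_yield(area_mm2: float) -> float:
    """Probability that a die of this area has no killer defects."""
    return math.exp(-defect_density_per_mm2 * area_mm2)

gpu_die = 815        # a large conventional GPU die (mm^2)
wafer_scale = 46_225 # the Cerebras WSE (mm^2)

print(f"815 mm^2 die yield:       {die_yield(gpu_die):.1%}")
print(f"46,225 mm^2 chip yield:   {die_yield(wafer_scale):.2e}")
```

Under this assumed defect density, a GPU-sized die comes out defect-free roughly 44% of the time, while a defect-free full-wafer chip is essentially impossible, which is why a wafer-scale design has to tolerate defects rather than avoid them.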
Cerebras Wafer Scale Engine (WSE) is a single chip that contains more than 1.2 trillion transistors pic.twitter.com/LvyQuBPdSE
— HPC Guru (@HPC_Guru) August 19, 2019
Interestingly, while other companies haven’t figured out a workable solution, Cerebras has reportedly designed its chip with built-in redundancy, so a single impurity won’t disable the whole chip, according to Andrew Feldman, who co-founded Cerebras Systems and serves as its CEO. “Designed from the ground up for AI work, the Cerebras WSE contains fundamental innovations that advance the state-of-the-art by solving decades-old technical challenges that limited chip sizes — such as cross-reticle connectivity, yield, power delivery, and packaging. Every architectural decision was made to optimize performance for AI work. The result is that the Cerebras WSE delivers, depending on workload, hundreds or thousands of times the performance of existing solutions at a tiny fraction of the power draw and space.”
AI Tasks Will Continue To Demand Larger Chips:
The new processor is custom-built for AI tasks primarily because larger chips process information more quickly, producing answers in less time. Many tech companies say the fundamental limitation of today’s AI is that training models takes too long. Hence, some tech leaders are attempting to optimize their AI algorithms to rely on smaller datasets, even though models generally improve with more training data. Cutting training time by increasing chip size is one way to boost processing without compromising the quality of the resulting AI.
The inter-processor communication fabric deployed on the Cerebras WSE is one of a kind as well. The low-latency, high-bandwidth 2D mesh links all 400,000 cores on the WSE with an aggregate 100 petabits per second of bandwidth. Additionally, the cores are Sparse Linear Algebra Cores (SLAC), which are optimized for neural network compute primitives. Both aspects put the chip far ahead for AI tasks. Hence, it is unlikely that gamers will ever be able to buy this biggest and most powerful of processors for their PCs.