No other processor contains as many computational cores (400,000) and transistors (1.2 trillion), or is nearly as large (215 × 215 mm, 462 cm²), as Cerebras' Wafer-Scale Engine, an AI accelerator unveiled back in 2019. Compared to current high-end GPUs and CPUs, the chip exceeds them in every one of these categories by a factor of roughly 50. Now, at the 2020 Supercomputing Conference, a scholarly paper has been presented according to which the Cerebras Wafer-Scale Engine can also convince in practice.
Supercomputer Joule 2.0 clearly beaten
According to Kamil Rocki's team, a Cerebras CS-1 computer outperforms the supercomputer Joule 2.0 by a factor of 200. In the TOP500 list of the fastest supercomputers, Joule 2.0 still ranks 82nd. The Joule 2.0 cluster uses 4,320 Xeon Gold 6148 processors (20 cores each) and thus has a total of 86,400 cores. The supercomputer is operated by the National Energy Technology Laboratory (NETL) of the U.S. Department of Energy (DOE).
What was measured was the performance in solving a complex system of linear equations. In science, such calculations are needed for weather forecasts and flow simulations, among other things. According to Cerebras, the remarkable performance of the CS-1 rests on its high memory bandwidth and the fast interconnect between the individual cores, which clearly surpasses that of typical cloud clusters.
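To illustrate the kind of workload involved, here is a minimal sketch of solving a system of linear equations with a simple iterative method (Jacobi iteration). This is a hypothetical toy example for intuition only, not the method or scale used in the benchmark; iterative schemes like this favor high memory bandwidth and fast core-to-core communication, which is exactly where the CS-1 is said to excel.

```python
import numpy as np

def jacobi(A, b, iterations=100):
    """Iteratively solve A x = b; converges for diagonally dominant A."""
    D = np.diag(A)            # diagonal entries of A
    R = A - np.diagflat(D)    # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        # Every unknown is updated independently -- the step that
        # parallelizes naturally across many cores.
        x = (b - R @ x) / D
    return x

# Small diagonally dominant example system (illustrative values)
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
```

Real flow simulations solve systems with millions of unknowns, so the per-iteration data movement, rather than raw arithmetic, tends to dominate the runtime.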
Cerebras has since introduced the even faster Wafer Scale Engine 2, which has 850,000 cores and 2.6 trillion transistors. As in the Wafer Scale Engine 1, the individual cores are connected via a 2D mesh.