“We Are the Accelerated Data Center”: GPUs, CPUs, Networking, and Systems
As anyone not comatose knows, today’s modern data center workloads — like AI, HPC, and machine learning — absolutely demand acceleration. And the appetite for acceleration seems insatiable, in part because CPUs can no longer keep up with Moore’s Law, but more importantly because algorithmic advances in parallelism are opening our eyes to possibilities that were inconceivable just a few years ago. This trend impacts chemistry, physics, atmospheric sciences, astronomy, medicine, transportation, communications, pharmaceuticals, photography, retail, finance, and entertainment. Have we left anything out? If so, please add it, because accelerated computing is becoming completely pervasive.
There’s another term for accelerated computing: “NVIDIA”. The company has been accelerating workloads and algorithms since it was founded in 1993, and in the data center for over 10 years. Of course, success breeds mimicry, and there are now over 100 companies developing hardware to compete with Jensen Huang’s juggernaut. And we hope some will be successful. Competition is good. But even so, the absolute scale of NVIDIA’s impact on the industry is unassailable, a fact other large-scale semiconductor companies don’t like to hear.
About four years ago, Mr. Huang audaciously declared that he intends to be in the optimized data center business. Not chips. Not even servers or switches. But the data center in its entirety. Since then he has delivered, building three of the world’s largest supercomputers from the ground up, using NVIDIA GPUs and networking. And next year he will add NVIDIA’s own Grace Arm CPU to that story. That’s because accelerated computing requires data-center-scale innovations and an intense focus on co-designing software with the hardware.
So, it isn’t hard to imagine what the NVIDIA schedule at Hot Chips ’22 will look like next week:
- Grace – NVIDIA’s first data-center grade CPU
- Hopper – The next generation GPU with a Transformer engine
- NVLink – fast networking with some new tricks up its sleeve
- Orin – the industry’s go-to SoC for the smart edge
I’m especially keen to learn how NVIDIA’s newfound penchant for openness might apply to NVLink, which delivers twice the performance of any contender in chip-to-chip interconnect and could play a new role in the emerging chiplet revolution.
What sets NVIDIA apart is its ecosystem and its design philosophy, in which GPU, DPU, and CPU act as peer processors, orchestrated by software, to relieve platform bottlenecks and deliver optimal application performance. NVIDIA’s primary competitors, AMD and Intel, don’t even have a server business; they remain in the component business and so will have to compete there, which is not easy, while NVIDIA touts system-level design and the benefits of end-to-end integration across an accelerated data center.
Until competitors realize that the game has changed, and that it isn’t just about fast chips, they will be unable to challenge NVIDIA.