How GPUs are Disrupting High-Performance Computing


By Chris Dickert, Visual Capitalist


The following content is sponsored by HIVE Digital
Infographic: a diagram comparing graphics processing units (GPUs) to traditional central processing units (CPUs), showing that the former's many parallel cores make it ideal for a range of high-performance computing applications.

GPUs and High-Performance Computing

Graphics Processing Units, or GPUs, have moved beyond their original role of rendering video game graphics and are now used in a variety of high-performance computing (HPC) applications, from AI training to zooplankton classification. To help understand this pivot, we’ve teamed up with HIVE Digital to look at how GPUs differ from traditional CPUs and what gives them an edge.

CPUs vs. GPUs

Both CPUs (Central Processing Units) and GPUs generally have three main elements:
  • compute elements (technically arithmetic logic units, or ALUs) that perform calculations and carry out operations;
  • a control element that coordinates the operations of the above; and
  • various levels of memory, including dynamic random access memory (DRAM), a kind of RAM or short-term memory used in the main memory of computers, and caches.
CPUs typically have one or a few extremely powerful cores, each made up of independent compute, control, and cache elements. A GPU, on the other hand, has far more, less powerful cores, each with multiple ALUs that share common cache and control elements.
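
To make the contrast concrete, here is a minimal CUDA sketch that is not part of the original infographic: add_cpu walks the data with a single loop on one core, while the add_gpu kernel hands one element to each of many lightweight threads. The function names, data sizes, and use of unified memory are our own illustrative assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// CPU version: one powerful core walks through the data serially.
void add_cpu(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

// GPU version: many lightweight threads each handle a single element.
__global__ void add_gpu(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // about one million elements
    size_t bytes = n * sizeof(float);

    // Unified memory keeps the sketch short; real code often manages
    // host and device buffers separately.
    float *a, *b, *out;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add_gpu<<<blocks, threads>>>(a, b, out, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

The 256-thread block size is just one reasonable choice; the point is that the GPU spreads the same arithmetic across roughly a million threads instead of iterating on a single core.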

Core Values and the Value of Cores

The number of cores is important, especially when it comes to image processing. In order to display an image on your screen, the computer has to read, process, and display data for each pixel, and a single 1080p frame contains more than two million of them. Because each pixel can be handled independently, that work maps naturally onto the GPU's many parallel cores.
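
As a rough illustration of that per-pixel work, the sketch below (again our own assumption, not something taken from the infographic) uses a CUDA kernel to convert an RGB frame to grayscale, launching one thread per pixel; the frame size, dummy image data, and function names are illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel: each thread reads its RGB triple and writes
// a single grayscale value, so a 1920x1080 frame becomes ~2 million
// independent pieces of work for the GPU's cores to process in parallel.
__global__ void rgb_to_gray(const unsigned char* rgb, unsigned char* gray,
                            int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    const unsigned char* p = rgb + 3 * idx;
    // Standard luminance weights for converting RGB to grayscale.
    gray[idx] = (unsigned char)(0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2]);
}

int main() {
    const int width = 1920, height = 1080;
    unsigned char *rgb, *gray;
    cudaMallocManaged(&rgb, 3 * width * height);
    cudaMallocManaged(&gray, width * height);
    for (int i = 0; i < 3 * width * height; ++i) rgb[i] = 128;  // dummy image

    // 16x16 thread blocks tiled across the whole frame.
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x, (height + block.y - 1) / block.y);
    rgb_to_gray<<<grid, block>>>(rgb, gray, width, height);
    cudaDeviceSynchronize();

    printf("gray[0] = %d\n", gray[0]);
    cudaFree(rgb); cudaFree(gray);
    return 0;
}
```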