The Insane Leaps in GPU Technology That Seemed Impossible!

From the Introduction of the Graphics Card All the Way to the Tensor and Ray Tracing Cores!

Ready to learn all things graphics? 👀 

We all hear so much news about the latest and greatest GPU blowing away the scene, but I think we should take a step back to remember where it all started. Let’s dive back to 1999 and take a look at the very first GPU from NVIDIA, the GeForce 256!

The CUDA Core

Now, I should say first that the term GPU, or graphics processing unit, has been around since the 1980s. Back then, the CPU (the central processing unit, the brains of the computer) relied on integrated graphics processors to produce graphics, but the limited size of those chips capped what they could do. In 1999, NVIDIA took graphics technology to new heights by unveiling the GeForce 256, an add-in board for your computer that it marketed as the world’s first GPU.

NVIDIA’s very first GPU, the GeForce 256!

The introduction of this GPU was exciting, yet some people met it with skepticism, as its general gaming performance was barely any faster than the established tech of the time, even though the card by itself could process a minimum of 10 million polygons per second! It was still plenty for standard gaming at the time, with 64 MB of DDR SDRAM running at speeds up to 166 MHz. Shortly after, NVIDIA’s rise accelerated as the quality of its GPUs kept improving. Fast-forward to 2006 and we see NVIDIA introduce CUDA cores with the launch of its then-flagship GPU, the GeForce 8800 GTX.

NVIDIA’s GeForce 8800 GTX!

Since a GPU is built around a parallel processing structure, the addition of CUDA core technology significantly improves performance: the cores are highly parallel themselves, so many operations can run simultaneously across them. They also take advantage of higher memory bandwidth, so large amounts of data can be accessed faster and more easily. The 8800 GTX shipped with 128 CUDA cores, a 575 MHz core clock, and 768 MB of GDDR3 memory. People were blown away by this graphics card: not only could they run games at higher resolutions and detail, they could now play Crysis without any issues! This was by far a HUGE advancement over the first GeForce 256 GPU from seven years earlier, and CUDA cores are the very stepping stones toward the immense quality and high-definition gaming that we see today.
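
To make that parallelism concrete, here’s a minimal sketch of what CUDA code looks like (the kernel name addVectors and the sizes are just illustrative, not from any particular NVIDIA sample): each thread computes exactly one element of a vector sum, so a million additions get spread across all the cores at once.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one pair of elements, so the whole array is
// processed in parallel across the GPU's CUDA cores.
__global__ void addVectors(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // one million elements
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);     // unified memory keeps the sketch short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    addVectors<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %.1f\n", c[0]);    // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Scale that idea from the 8800 GTX’s 128 cores up to the thousands in a modern card and you get a feel for where GPU speedups come from.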

From Kepler to the Tensor Core

The next impressive generation of GPUs came along in 2012, when NVIDIA introduced its new Kepler microarchitecture. This architecture focused on making the GPU more energy efficient while pushing out just as much graphics processing power as previous GPUs. NVIDIA underlined this focus by announcing that two Kepler cores would use only 90% of the energy consumed by a single previous-gen Fermi core.

Remarkably, this not only let the GPU’s die space be used more efficiently but also increased performance per watt. This new technology, along with more CUDA cores and upgrades to memory size and speed, allowed GPUs to excel at HD gaming. 4K resolution also started to make its way into the picture, especially with the release of the epic GeForce GTX Titan X in 2015 (built on Kepler’s successor architecture, Maxwell)!

An inside shot of the GK110 A1 GPU die found in GeForce GTX Titan cards!

Everyone thought the release of the GeForce GTX Titan X would be the GPU to end all other GPUs. With a base clock speed of 1000 MHz, 3072 CUDA cores, and a fast 12 GB of GDDR5 video memory, even I thought it would be king for years to come. Well, safe to say this was not true, as NVIDIA was already working on the next generation of graphics technology. The Kepler microarchitecture was a great improvement in the timeline of graphics technology, but NVIDIA took the world by storm once again with the introduction of Tensor cores and Ray Tracing cores in its new RTX GPUs! The beauty of real-time ray tracing in gaming was born.

The effect of gaming with ray tracing vs. without it!

The Turing microarchitecture was introduced in 2018 for the new Quadro RTX cards. It brought Tensor cores and Ray Tracing cores into the architecture, speeding up large, complex workloads and enabling the detail of real-time ray tracing. So basically, video game graphics got insanely more realistic-looking as they were rendered, because the hardware could now trace how light travels through a scene as each frame is drawn.
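
To give a feel for what ray tracing actually computes, here’s a minimal sketch (the names Vec3 and hitsSphere are purely illustrative): for every pixel, a ray is fired from the camera into the scene and the renderer asks what it hits. The simplest such test checks whether a ray intersects a sphere by solving a quadratic, and RT cores exist to run billions of intersection tests like this fast enough for real time.

```cuda
#include <cstdio>

struct Vec3 { float x, y, z; };

__host__ __device__ float dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A ray is origin + t * dir. Substituting it into the sphere equation
// |p - center|^2 = r^2 gives a quadratic in t; a non-negative
// discriminant means the ray hits the sphere.
__host__ __device__ bool hitsSphere(Vec3 origin, Vec3 dir, Vec3 center, float r) {
    Vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    float a = dot(dir, dir);
    float b = 2.0f * dot(oc, dir);
    float c = dot(oc, oc) - r * r;
    return b * b - 4.0f * a * c >= 0.0f;   // discriminant test
}

int main() {
    Vec3 camera = {0, 0, 0}, forward = {0, 0, -1}, sphere = {0, 0, -5};
    printf("hit: %d\n", hitsSphere(camera, forward, sphere, 1.0f));  // prints 1
    return 0;
}
```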

Well, what is a Tensor core? To improve upon the Cuda core, NVIDIA developed their first-generation core in the Volta architecture of cards in 2017. Their original and first usage was in deep learning to help with the large data load throughput. Today, they’re given a number of applications mainly in GPU’s to improve the speed at which frames are processed, and more importantly the development of AI. In that short-time we’ve seen increasingly accelerated research speed of many AI based projects, only time will tell where we’ll end up in 5 years.
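
For the curious, here’s a minimal sketch of driving a Tensor core through CUDA’s public WMMA API (the kernel name wmma_16x16 is illustrative, and real code would tile a much larger matrix): one warp of threads cooperatively computes a 16×16 matrix multiply-accumulate as a single hardware operation.

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp cooperatively computes D = A * B + C for a single
// 16x16x16 tile in one Tensor Core matrix-multiply-accumulate.
// Inputs are half precision; the accumulator is float.
__global__ void wmma_16x16(const half *a, const half *b, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;

    wmma::fill_fragment(acc_frag, 0.0f);          // start with C = 0
    wmma::load_matrix_sync(a_frag, a, 16);        // load the A tile
    wmma::load_matrix_sync(b_frag, b, 16);        // load the B tile
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag);  // Tensor Core op
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}
// Compile with: nvcc -arch=sm_70 (Volta or newer required)
```

One hardware instruction doing an entire tile of a matrix multiply is exactly why these cores are such a big deal for deep learning.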

Before you go, a word from AE Studio as they showcase the Tensor core in action.

85% of all AI Projects Fail, but AE Studio Delivers

If you have a big idea and think AI should be part of it, meet AE.

We’re a development, data science and design studio working with founders and execs on custom software solutions. We turn AI/ML ideas into realities, from chatbots to NLP and more.

Tell us about your visionary concept or work challenge and we’ll make it real. The secret to our success is treating your project as if it were our own startup.

Thank you for reading about these rapidly growing developments; we hope you enjoyed it! If you have any questions or want to leave feedback, you’ll find a survey, comments, and our email below!
