Unleashing the Power of Local AI: Nvidia’s New Graphics Cards



Nvidia found itself at the forefront of the artificial intelligence boom last year with its powerful server graphics processors. Now, the company is shifting its focus to consumer GPUs for “local” AI applications that can run on PCs and laptops.

Introducing the New Graphics Cards

Nvidia recently announced three new graphics cards: the RTX 4070 Super, RTX 4070 Ti Super, and RTX 4080 Super. These cards are equipped with additional “tensor cores” designed specifically for running generative AI applications. They range in price from $599 to $999, and the GPUs will also be available in laptops from companies like Acer, Dell, and Lenovo.

The demand for Nvidia’s enterprise GPUs, which cost tens of thousands of dollars each and often ship in systems containing multiple GPUs, drove a surge in sales and pushed the company’s market value above $1 trillion.

Improvements for AI Models

While Nvidia’s GPUs have traditionally been used for gaming, the company has made significant improvements to this year’s graphics cards to accommodate AI models. These cards are now capable of running AI applications without relying on cloud services.

According to Nvidia, the RTX 4080 Super can generate AI video 150% faster than the previous generation model. The company has also introduced software enhancements that make large language model processing five times faster.

Expanding AI Applications

Nvidia expects a wave of new AI applications to emerge in the coming year that will take advantage of the increased processing power. Microsoft is set to release Windows 12, which will further leverage AI chips.

The new chips can be used for various applications, such as generating images in Adobe Photoshop’s Firefly generator and removing backgrounds in video calls. Nvidia is also developing tools to help game developers integrate generative AI into their titles, enabling the generation of dialogue for nonplayer characters.

Edge vs. Server

Nvidia’s recent chip announcements indicate that the company is not only focused on server GPUs but also competing with Intel, AMD, and Qualcomm in the field of local AI. All three competitors have announced new chips designed for “AI PCs” with specialized components for machine learning.

This shift comes as the tech industry explores different ways to deploy generative AI, which requires substantial computing power and can be costly to run on cloud services.

One proposed solution is the concept of “AI PCs” or “edge compute.” Instead of relying on powerful supercomputers over the internet, devices would have powerful AI chips integrated into them, allowing them to run large language models or image generators locally, albeit with some limitations.

Nvidia suggests using a cloud model for complex tasks and a local AI model for time-sensitive applications. This approach allows Nvidia GPUs in the cloud to handle large AI models, while the RTX tensor cores in PCs focus on latency-sensitive AI applications.
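The routing idea described above can be sketched in a few lines of Python. This is purely illustrative: the function names, thresholds, and decision rules are assumptions for the sake of the example, not an actual Nvidia API.

```python
# Hypothetical sketch of hybrid cloud/local routing: latency-sensitive
# requests run on the local GPU, heavy requests go to cloud GPUs.
# All names and thresholds here are illustrative assumptions.

LOCAL_MAX_TOKENS = 512    # assumed capacity of the smaller on-device model
TIGHT_DEADLINE_MS = 100   # assumed threshold for "time-sensitive" work

def route_request(prompt_tokens: int, latency_budget_ms: int) -> str:
    """Pick an execution target ("local" or "cloud") for one request."""
    if latency_budget_ms <= TIGHT_DEADLINE_MS:
        # Tight deadline: skip the network round trip and run on the
        # local RTX tensor cores, even though the local model is smaller.
        return "local"
    if prompt_tokens > LOCAL_MAX_TOKENS:
        # Too large for the on-device model: hand off to cloud GPUs.
        return "cloud"
    return "local"

if __name__ == "__main__":
    print(route_request(prompt_tokens=2000, latency_budget_ms=5000))
    print(route_request(prompt_tokens=200, latency_budget_ms=50))
```

In this sketch the deadline check comes first, reflecting the article’s point that latency-sensitive applications are the main reason to keep inference on the device.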

The new graphics cards will comply with export controls and can be shipped to China, providing an alternative for Chinese researchers and companies that cannot access Nvidia’s most powerful server GPUs.