An Interview with Nvidia CEO Jensen Huang about Manufacturing Intelligence

2022-03-28 · By Ben Thompson · Originally published on Stratechery


It took a few moments to realize what was striking about the opening video for Nvidia’s GTC conference: the complete absence of humans.

That the video ended with Jensen Huang, the founder and CEO of Nvidia, is the exception that accentuates the takeaway. On the one hand, the theme of Huang’s keynote was the idea of AI creating AI via machine learning; he called the idea “intelligence manufacturing”:

None of these capabilities were remotely possible a decade ago. Accelerated computing, at data center scale, and combined with machine learning, has sped up computing by a million-x. Accelerated computing has enabled revolutionary AI models like the transformer, and made self-supervised learning possible. AI has fundamentally changed what software can make, and how you make software. Companies are processing and refining their data, making AI software, becoming intelligence manufacturers. Their data centers are becoming AI factories. The first wave of AI learned perception and inference, like recognizing images, understanding speech, recommending a video, or an item to buy. The next wave of AI is robotics: AI planning actions. Digital robots, avatars, and physical robots will perceive, plan, and act, and just as AI frameworks like TensorFlow and PyTorch have become integral to AI software, Omniverse will be essential to making robotics software. Omniverse will enable the next wave of AI.

We will talk about the next million-x, and other dynamics shaping our industry, this GTC. Over the past decade, Nvidia-accelerated computing delivered a million-x speed-up in AI, and started the modern AI revolution. Now AI will revolutionize all industries. The CUDA libraries, the Nvidia SDKs, are at the heart of accelerated computing. With each new SDK, new science, new applications, and new industries can tap into the power of Nvidia computing. These SDKs tackle the immense complexity at the intersection of computing, algorithms, and science. The compound effect of Nvidia’s full-stack approach resulted in a million-x speed-up. Today, Nvidia accelerates millions of developers, and tens of thousands of companies and startups. GTC is for all of you.

The core idea behind machine learning is that computers, presented with massive amounts of data, can extract insights and ideas from that data that no human ever could; to put it another way, the development of not just insights but, going forward, software itself, is an emergent process. Nvidia’s role is making massively parallel computing platforms that do the calculations necessary for this emergent process far more quickly than was ever possible with general purpose computing platforms like those undergirding the PC or smartphone.
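To make the “emergent process” idea concrete, here is a minimal, hypothetical sketch (PyTorch, with illustrative data and sizes of my own choosing, not anything Nvidia ships): instead of hand-coding a rule, a training loop recovers it from data, and the same loop runs on parallel hardware when a GPU is available.

```python
# A minimal sketch of "emergent software": rather than hand-coding the rule
# y = 3x + 2, we let gradient descent recover it from data.
# The model, data, and sizes here are illustrative only.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"  # use parallel hardware if present

# Synthetic stand-in for "massive data": inputs and targets generated by a hidden rule.
x = torch.randn(10_000, 1, device=device)
y = 3.0 * x + 2.0 + 0.01 * torch.randn_like(x)

model = torch.nn.Linear(1, 1).to(device)       # the "software" whose behavior is learned
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(200):                            # training loop: behavior emerges from data
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Prints values close to 3.0 and 2.0 -- recovered from data, never written by hand.
print(model.weight.item(), model.bias.item())
```

The toy example is trivial, but the pattern scales: the larger the model and the dataset, the more the bottleneck becomes exactly the kind of parallel arithmetic Nvidia's hardware accelerates.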

What is so striking about Nvidia generally and Huang in particular, though, is the extent to which this capability is the result of the precise opposite of an emergent process: Nvidia the company feels like a deliberate design, nearly 29 years in the making. The company started accelerating defined graphical functions, then invented the shader, which made it possible to program the hardware doing that acceleration. This new approach to processing, though, required new tools, so Nvidia invented them, and has been building on their fully integrated stack ever since.

The deliberateness of Nvidia’s vision is one of the core themes I explored in this interview with Huang recorded shortly after his GTC keynote. We also touch on Huang’s background, including immigrating to the United States as a child, Nvidia’s failed ARM acquisition, and more. One particularly striking takeaway for me came at the end of the interview, where Huang said:

Intelligence is the ability to recognize patterns, recognize relationships, reason about it and make a prediction or plan an action. That’s what intelligence is. It has nothing to do with general intelligence, intelligence is just solving problems. We now have the ability to write software, we now have the ability to partner with computers to write software, that can solve many types of intelligence, make many types of predictions at scales and at levels that no humans can.

For example, we know that there are a trillion things on the Internet and the number of things on the Internet is large and expanding incredibly fast, and yet we have this little tiny personal computer called a phone, how do we possibly figure out, of the trillion things on the internet, what we want to see on our little tiny phone? Well, there needs to be a filter in between, what people call the personalized internet, but basically an AI, a recommender system. A recommender that figures out, based on the nature of the content, the characteristics of the content, the features of the content, based on your explicit and implicit preferences, finds a way through all of that to predict what you would like to see. I mean, that’s a miracle! That’s really quite a miracle to be able to do that at scale for everything from movies and books and music and news and videos and you name it, products and things like that. To be able to predict what Ben would want to see, predict what you would want to click on, predict what is useful to you. I’m talking about things that are consumer-oriented stuff, but in the future it’ll be predict what is the best financial strategy for you, predict what is the best medical therapy for you, predict what is the best health regimen for you, what’s the best vacation plan for you. All of these things are going to be possible with AI.
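Stripped to its core, what Huang is describing is a scoring problem: represent users and items as learned feature vectors, then rank every item against a given user. A minimal, hypothetical sketch of that pattern (PyTorch; the names, sizes, and dot-product scoring rule are illustrative, not Nvidia’s or any Aggregator’s actual system) looks like this:

```python
# A minimal, hypothetical sketch of the recommender idea Huang describes:
# users and items become learned feature vectors (embeddings), and we predict
# what a user wants by scoring every item against that user's vector.
import torch

num_users, num_items, dim = 1_000, 50_000, 64
user_emb = torch.nn.Embedding(num_users, dim)   # stands in for implicit/explicit preferences
item_emb = torch.nn.Embedding(num_items, dim)   # stands in for content characteristics/features
# In a real system these vectors are learned from interaction data;
# here they are randomly initialized purely for illustration.

def recommend(user_id: int, k: int = 10) -> torch.Tensor:
    """Return the ids of the k items predicted to interest this user most."""
    u = user_emb(torch.tensor(user_id))          # this user's preference vector
    scores = item_emb.weight @ u                 # one score per item (dot product)
    return scores.topk(k).indices

print(recommend(user_id=42))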

As I note in the interview, this should ring a bell for Stratechery readers: what Huang is describing is the computing functionality that undergirds Aggregation Theory, wherein value in a world of abundance accrues to those entities geared towards discovery and providing means of navigating this world that is fundamentally disconnected from the constraints of physical goods and geography. Nvidia’s role in this world is to provide the hardware capability for Aggregation, to be the Intel to Aggregators’ Windows. That, needless to say, is an attractive position to be in; like many such attractive positions, it is one that was built not in months or years, but over decades.

Read the full interview with Huang here.

