Call Start: 9:35 AM ET, Call End: 10:05 AM ET
NVIDIA Corporation (NASDAQ:NVDA)
UBS Global Technology Conference Call
December 3, 2024 9:35 AM ET
Company Participants
Colette Kress - Executive Vice President and Chief Financial Officer
Conference Call Participants
Tim Arcuri - UBS
Tim Arcuri
Good morning. Thank you. I am Tim Arcuri. I am very pleased to have NVIDIA with us. Pleased to have Colette with us. Thank you, Colette, for the time. And we're just going to kick it off. And Colette, it's been an incredible past couple of years. You're still growing very fast. I'm wondering if you can talk about some of the use cases, how they've evolved? We're having enterprises adopt AI widely. How has the demand picture sort of evolved in terms of Cloud, Consumer, Internet, and Enterprise, and maybe if you can talk about some of the use cases that you see that are very exciting, as well?
Colette Kress
Okay. Let me first start with a statement that I must read. As a reminder, this presentation contains forward-looking statements, and investors are advised to read our reports filed with the SEC for information related to risks and uncertainties facing our business.
Okay, really great to see everybody here, and thank you so much for hosting us. Let me talk about what we have seen; it certainly has been a fast journey, even over just the last several quarters. But keep in mind, we are 30 years into the business that we're doing, and we certainly are in a very important phase.
When we think about the phase that we are in and what we are seeing, we do believe that the computing platform many of us have been using for more than 20 to 30 years is here to transform for the decades going forward. What that means is we are seeing customers reexamine their existing computing platform, which may include general-purpose computing, and shift to accelerated computing.
But a new piece was also added beyond the focus on accelerated computing, and that was the focus on AI, and on Generative AI. One of the new dynamics we have seen over the last several quarters is the size of models. If you recall, before the very onset of Generative AI, we talked about large language models.
And they were important for much of the work of the consumer Internet companies and their recommender engines, and model sizes continue to get larger. Right now, foundational models are very key to the work you see happening. But you also see the scaling of those models, as customers focus on many different types of foundational models and multi-modal models being built.
The next phase of this transition also focuses on the inferencing phase. You'll see more and more work in terms of the types of systems we are bringing to support inferencing once the large language models have been developed. All of our different types of customers are unique.
We are working with everyone from the start-ups to the researchers. The CSPs are some of the most important, standing up compute very fast for the end enterprises that need to use it. We are working with consumer Internet companies, but more importantly, we are working globally to support this initiative.
We have seen nothing like this before in terms of the speed and the understanding of what's going to be in front of us. So, those are some of the things that we're seeing today.
Tim Arcuri
And I think, we all worry that eventually we're going to build too much capacity. And I think your view is that we're nowhere close to that. So, maybe can you just speak to sort of how much visibility you have and maybe just the demand picture relative to what you're able to supply?
Colette Kress
Yeah, when we look at the past quarters and our scaling, our number one goal in scaling is working with all of our partners, downstream with our customers, but also bringing in supply. No, we are not at any point now where we are seeing any type of slowdown. What we are seeing right now is demand continuing to be fueled by the size of models and the complexity of inferencing, and we are still getting ready for our next architecture, Blackwell.
And right now, what we see with Blackwell, which will be here this quarter, is a supply constraint that is probably going to take us well into our next fiscal year, several quarters from now. So, no, we don't see a slowdown. We continue to see tremendous demand and interest, particularly for our new architecture that's coming out.
Tim Arcuri
Well, just on that point, so you’ve put up a great quarter. You’re shipping Blackwell in January and you are actually shipping more Blackwell than you thought you would three months ago. And at the same time, we do hear a lot of chop in the supply chain. You hear a lot of articles written and things like that. Can you just speak to how Blackwell is different than prior product cycles?
Colette Kress
Yes, our Blackwell architecture is unique. What we are doing here is building at datacenter scale. Don't get us wrong, we have been working on platforms for many years, not just at the chip level, building something end-to-end and built to scale. But when you look at our Hopper architecture, Hopper was designed for rack scale, and that is the work we have done.
Essentially, with our Blackwell architecture, some configurations we will completely build inside of our supply chain: get it ready, stand it up, take it down, ship it, and they will stand it up together on site. And what we are doing is offering a greater portion of choices for customers, depending on where they are in their lifecycle.
And what we mean by that is datacenters are complex, and the things they do to make themselves the most efficient mean each of them is at a different stage. So you have the opportunity to choose between a lot of different options of what we're doing. That means we can do liquid-cooled. We can continue with air-cooled. You can incorporate an ARM CPU, or x86 if you want.
There are many different networking options that we have, whether that's InfiniBand or Ethernet, and many different switch choices in there. That decision is with the customer, across many different configurations. When we think about where we are right now with Blackwell for this quarter, all is done in terms of the chip.
The chip is fine. The chip and the work that we have done have moved along quite well. Right now, we are standing up configurations for so many of our different customers. You will all see pictures on the Internet, pictures with happy faces as they get excited standing up the first one and getting ready to put the whole datacenter and all of the different racks together. That's where we stand today.
Tim Arcuri
Great. And there's just a lot going on with the product cadence and you have B100, you have B200, you have the racks with [TB], then you have Blackwell Ultra coming about six months later. So, is there a risk that customers wait, because you have products coming so quickly?
Colette Kress
No, when you think about what is necessary in designing the datacenter, it does take planning. And what we have seen, at least over the last five to ten years, is more work with all of our different customers as they plan every six months: What is here? What am I going to build? And they need to be ready for compute at the time they are working on their projects.
So given that there are things still short in supply, we are still serving our customers with an amazing configuration, the Hopper H200. That is an opportunity for them to begin some of the work they are doing with an HGX system, even if they haven't yet touched a GPU, just because the demand has been so strong. Folks have worked with us to understand how they build out the datacenters. The datacenters often have already been procured; they just need to know what will fit inside of those. That's why you see us with two architectures.
Additionally, as you know, we will do Vera Rubin going forward. That again will be a discussion we have with customers that says: here are the potential offerings; how can we help you think through what you may need going forward?
Tim Arcuri
But I guess it's a bit similar to the H100 and H200, where customers didn't wait, and that all went great. So I guess there is evidence out there that customers did not wait when you had a product come out quickly after those products?
Colette Kress
The customers are eager. "Wait" is a strong word. I would say they call every day asking when they can see the compute, and we are working feverishly on that new supply. But what's important to remember is that this is a journey that's probably going to take place over two decades. Everybody will get onboard. Our architecture allows an end-to-end scaling approach for them to do whatever they need in the world of accelerated computing and AI. And we're a very strong candidate to help them, not only with that infrastructure, but also with the software.
Tim Arcuri
Thank you. So I wanted to ask you the cash question. We actually talked about this last night. You've generated about $56 billion in cash flow over the past four quarters. I have you generating about $120 billion in calendar '25. And even if I assume that you buy back about $50 billion per year, you're going to end up with $100 billion in cash at the end of next year and potentially $200 billion in cash at the end of 2026. These are obviously massive, massive numbers; even Apple, which got to $100 billion in cash, ended up working that down. How do you think about what you're going to do with the cash?
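Tim's back-of-envelope can be sketched with his own round numbers. Everything here is his estimate, not NVIDIA guidance, and the starting cash and 2026 cash flow figures below are hypothetical inputs chosen only to reproduce the $100 billion and $200 billion endpoints he cites:

```python
def project_cash(start_cash_b, fcf_by_year_b, buyback_per_year_b):
    """Return year-end cash balances in $B, net of a fixed annual buyback."""
    balances = []
    cash = start_cash_b
    for fcf in fcf_by_year_b:
        cash += fcf - buyback_per_year_b  # add cash flow, subtract repurchases
        balances.append(cash)
    return balances

# Assumed ~$30B starting cash; ~$120B FCF in '25 and a hypothetical ~$150B
# in '26, against ~$50B/yr of buybacks.
print(project_cash(30, [120, 150], 50))  # → [100, 200]
```

The point of the sketch is simply that, at this scale of cash generation, buybacks alone do not prevent the balance from compounding.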
Colette Kress
Our use of cash is one of the most important things any company has to look at. We work with a great group of people, thinking it through against our strategy and our needs. The first thing you consider is the cash we need for innovation and to support all of the scaling that we are doing. That's the first piece. And we know that innovation and everything we are doing in R&D will be an important part of that.
Now, we can do that both through learnings from other companies, and by bringing on great teams in some M&A form. That's a great opportunity for us, and we will continue to work in that area as well. Then it leads to thinking about new types of business models that we may want to add and focus on in new areas of AI.
And there is the support we can provide, not only for building out software, but for building out full systems for others, and we'll be investing in that. After we determine those types of investments, our work turns to returning cash to shareholders. It always will, and our focus there is a combination of share repurchases and dividends. You'll continue to see this; we watch it very carefully.
We're not a fan of excess cash. So we are going to watch this carefully, and you'll continue to see that over the next quarters.
Tim Arcuri
Great. Thank you. Let's talk about gross margin for a minute. You did say that gross margin comes down in fiscal Q1 of next year as Blackwell ramps; you said it comes down to the low 70s, which you then clarified as 71 to 72.5 on the call. But it comes back up after that, to the mid-70s, as you get to the end of the year.
Can you talk about how confident you are in that? And with Rubin coming after that, there are some people who think, "Well, gross margin will be under pressure once again once Rubin begins to ramp." So, can you talk about how confident you are that you can maintain mid-70s over time?
Colette Kress
Okay. So Blackwell is unique, as we've discussed, in its different configurations, and we're standing up quite a few different ones that you are going to see go to market, even in this quarter. We're not just shipping one version; there will be several. So the volume at this time is quite small.
As we continue to scale throughout the year, we will be able to improve gross margins once we get into scaling all of our different system configurations. When we think about going forward, though, Vera Rubin is a little far out; we still have to run through an analysis of the TCO and the things we would do.
So we'll hold off on that until a little later. But I'd say we're in a unique position right now just with Blackwell and what we're seeing.
Tim Arcuri
And how have you been able to move gross margin up so much? Because when Hopper launched, I remember during the early phases of Hopper gross margin was in the mid-60s, and now you're going to end up in the mid-70s. So, how have you moved gross margin up so much?
Colette Kress
Okay. There are many different things we consider when determining the value we have provided to customers. It's not just about performance, and not just the performance of the chips. It is about the end-to-end solution, and what the customer is able to do to find the lowest TCO for themselves. That helps determine how we go to market and the price of that piece.
That TCO value essentially looks at the full end-to-end picture. How would you complete the software? How would you complete your full datacenter architecture? Would you need other teams? It's more than just looking at the different components, and it is not a components, cost-plus model.
So, because of that, and because of the strong performance, the strong efficiency, and the best TCO, we are able to price to a full TCO incorporating all of the software that we provide inside the systems, and the support throughout their lifetime.
Tim Arcuri
Got it. Can we talk about your Networking business? I get a lot of questions; it was down last quarter when it was expected to be up. And some people think, "Well, as you transition from InfiniBand to Ethernet, your position in networking is a little less strong." So can you just talk about the Networking business, and should we expect it to grow alongside the compute business?
Colette Kress
Our Networking business is one of the most important additions we made when we went to datacenter scale. It is essential to think through not just the time when the work is being done in data processing or on the compute and/or the GPU, but Networking's position inside that datacenter.
So we have two different offerings. We have InfiniBand, and InfiniBand has had tremendous success with many of the largest supercomputers in the world for decades. That has been very important in terms of the size and speed of the data going through, and it takes a different view of how to deal with the traffic that will be there.
Ethernet is a great configuration that is the standard for many enterprises. But Ethernet was not built for AI; Ethernet was built for the general networking inside of datacenters. So we are taking some of the best of breed of what you see in InfiniBand and creating Ethernet for AI. That gives customers the choice between the two. We can be a full end-to-end system with InfiniBand, and now you also have your choice in terms of what we do with Ethernet.
Both of these are growth options for us. This last quarter, we had some timing issues. But what you will see going forward is that our Networking will definitely grow. With our Networking designed together with our compute, some of the strongest clusters being built are also using our Networking. That connection has been a very important part of the work we've done since the acquisition of Mellanox. Folks do understand our use of networking and how it can help their overall system as a whole.
Tim Arcuri
Great. Can we talk about scaling of these large language models? There have been some articles written that Google and OpenAI are having a hard time getting better results out of these larger models. But on the other hand, you had Meta on their earnings call, and others like Anthropic, saying that scaling is alive and well.
So can you just speak to that? I know that there are some nuances in post-training, and of course, there's the test-time difference with some of these new models from OpenAI. Is the scaling question something that investors should be thinking about?
Colette Kress
When we look at the size of the clusters being built and the work that many of our customers are looking to do, the scaling laws that we see, particularly in training, are still here. I think you will see more and larger, more complex models in this next generation with Blackwell. What that means is there is a phase of post-training coming back, with reinforcement learning that truly looks for the human piece of it, and also using synthetic data to fine-tune the models.
Another way of saying that is training's never done, and a lot of work continues. But there have also been new scaling laws focused on the inferencing phase. If you recall, we are the largest inferencing provider that exists today. We do the most inferencing versus any other type of configuration. Why? It's very hard.
And what we are seeing from scaling, an important shift from the onset of Generative AI to now, is more focus on reasoning, on deep thinking, and taking the time to do that. That is still going to require an additional amount of compute, and compute able to deliver the least amount of latency for the time you spend on the reasoning part of it.
So we still see those scaling laws being important, and more new laws will probably be formed over the next decades.
Tim Arcuri
You talk about a new and emerging piece of the demand picture, which is more government-backed projects, Sovereign as we call them. You said that you're going to do double-digit billions of dollars this year for those projects as a whole. Can you talk about some examples of where that demand is coming from, and how to think about how big it could be? I tend to think some of these larger projects in the Middle East could eventually be as large as a US CSP.
So this could be a very large piece of demand, and I am wondering how you think about what you could get?
Colette Kress
So Sovereign AI has been a very interesting part of what we've seen with Generative AI. Very simply said, every country saw what we had here in the US and said, I want that too. And they want a model, a foundational model, in their own language and their own culture, to support their nation, as they see the importance of what AI will be in the decades going forward.
So the number of different countries we are working with, or the regions you mentioned, is a very large part of our work; AI spending is happening globally. It is not just the West Coast of the US; it is really taking place around the world. And not all of it, only parts of it, is government funding. Many involve very large companies that will start a new type of regional CSP able to support accelerated computing, perhaps with a set of tenants, and will likely have a foundational model with them in order to support the enterprises.
You've seen our talk about what is happening in Japan, an area where SoftBank is very interested in building a very, very sizable model. You see it in India, with many of the CSPs there working on what they will incorporate as well.
That moves all the way to Europe and the Nordics, and it's also a very important part of the Asia-Pacific area. So this will continue. One way to think about part of it is as the next generation of what we saw with supercomputing in each of those countries. You will see from GDP what they need to do for AI and Sovereign AI.
Tim Arcuri
Great. Can you talk about your Software business for just a minute or so? You had said that it's going to cross over a $2 billion a year run-rate exiting this year. If I do some back-of-the-envelope math and try to figure out how many of your GPUs you are directly monetizing software on, I get something like a 10% attach rate, where you're roughly directly licensing software on 10% of your GPUs. So can you talk about how you think about the attach rate, and how successful you are in directly licensing software for your GPUs?
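The shape of Tim's attach-rate estimate can be sketched as follows. Only the $2 billion run-rate comes from the conversation; the installed base and per-GPU software price below are hypothetical inputs (for example, an NVIDIA AI Enterprise-style per-GPU subscription) chosen purely to illustrate the arithmetic:

```python
def attach_rate(sw_runrate_b, price_per_gpu_per_year, installed_gpus_m):
    """Estimate the fraction of the installed GPU base directly licensed.

    sw_runrate_b: software run-rate in $B/year
    price_per_gpu_per_year: software price per GPU in $/year
    installed_gpus_m: installed GPU base in millions
    """
    licensed_gpus_m = sw_runrate_b * 1e9 / price_per_gpu_per_year / 1e6
    return licensed_gpus_m / installed_gpus_m

# Hypothetical: ~4.5M installed GPUs at ~$4,500/GPU/year of licensing.
rate = attach_rate(2.0, 4500, 4.5)
print(f"{rate:.0%}")  # → 10%
```

Any pair of assumptions in this neighborhood reproduces the roughly 10% figure; the exercise only shows how sensitive the estimate is to the assumed per-GPU price and installed base.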
Colette Kress
Yeah. Our software platform is essential to many of our enterprises, many of our regional CSPs, our future AI factories, and the AI foundations work that we're doing. Why is that the case? For those first steps of understanding how to get started on AI, we have thousands of different applications, as well as CUDA libraries and CUDA work that we have done for each industry, and for each major workload within those industries.
Your enterprises have to have that piece, not only for the work they need to do on their own, but for the work they need to do to support the infrastructure in that datacenter. So we are building that for them, and in many cases, enterprise customers will have a very strong attachment to that software, as they will need it for the work they're doing.
Those that have very, very large software teams and have been self-building for several decades are different. But as you can imagine, the world can't go back and train all of those software engineers. And so we have spent a quality amount of time helping a lot of enterprises.
It's working with enterprises the entire way: from the onset, when they choose what type of compute they need, to delivery, to helping them with their models and setting up all of their different apps and all of the overall inferencing. So it's more than just the actual software; that software also comes with true support and services from the company.
Tim Arcuri
So if we looked at just the enterprise market, your attach rate could actually be quite a bit higher than that?
Colette Kress
Absolutely. That’s correct.
Tim Arcuri
Got it. Great. We all talk about the datacenter, but inference at the Edge is going to become a much bigger theme. So can you talk about your position at the Edge? You have a large installed base in PC; that should play pretty well for you, and you have Omniverse. Robotics is a huge theme. So can you talk about some of your offerings and how we should think of you as a player at the Edge?
Colette Kress
Yes, Edge Computing and Edge AI are very similar and will likely go hand-in-hand. What that means is, you will have factories. You will see folks collecting data in their datacenter and feeding that data into many of the overall Edge appliances, appliances such as cars, the cars that are autonomously driven.
The next phase will likely be robotics, a very, very big industry where the data and the learning happen back in the datacenter, and those different devices include our capabilities to support them. It's an important industry. We do know the datacenter piece of that is a very large market and very important for what we will see going forward.
That incorporates even a new set of software that we're doing. As you know, we're doing the software for autonomous vehicles that will come to market later in this next calendar year. Additionally, think about the work we can do inside robotics, and from that software, the work we can do with many of the factories with Omniverse and the overall layout of how that will work.
So these are very strong areas of focus, even outside of just the standard datacenter. But yes, Edge Computing will be an important piece too.
Tim Arcuri
Great. Can we talk just for a minute about inference versus training? You have been saying that inference is about 40% of your revenue. Can you talk about how you see that evolving?
Colette Kress
It is about 40%. When we communicated that, we had started thinking through the use cases we are seeing, and we see a lot of time spent on inferencing. And this is even before many of the Generative AI applications that are still in the works have been put out there.
So the recommender engines are a very significant part of inferencing today, and growth from that 40% will likely be seen as we move forward. But as I discussed earlier, we are still the largest in inferencing. And when we think about the Blackwell architecture, particularly the GB200 NVL, that is an important configuration with a 30x improvement in inferencing performance over our current generation.
That is such an important piece for many of the customers. They will likely use it at the very onset to build what they need for their foundational model. But that important inferencing part of Blackwell going forward has been very well received by many of our customers.
Tim Arcuri
Great. And then, you talked about some constraints on Blackwell, and it sounds like they're going to begin to ease maybe mid-'25. Can you talk about what some of those are? And is it right to assume that they do begin to go away in the middle of '25?
Colette Kress
When we think through the building of Blackwell, the design, and working with our customers on configuration, the demand came fast and furious; the demand is exceptional. And we are working with a tremendous set of partners. We talk with our suppliers each and every day to help them. So right now, yes, we need to scale to build enough Blackwell for the demand we see in front of us.
And we are probably going to be supply constrained pretty much through the first parts of the new fiscal year. Where is the constraint? It depends on the configuration, but some of the challenges are again in the [CoWoS] packaging space, plus the work you need to do across the different configurations and all of the work we do in Networking and the switching to get that right. Depending on those configurations, supply can be constrained. But right out of the gate this quarter, we are on track to ship Blackwell. Blackwell is doing just fine, and we're very excited to bring multiple configurations to our customers this quarter.
Tim Arcuri
Well, I think it's going to be an amazing year next year, for sure. So, anyway, thank you, Colette. You are much appreciated.
Colette Kress
Yes. Okay. Bye.