NVIDIA Corporation (NASDAQ:NVDA) Q4 2024 Earnings Conference Call February 21, 2024 5:00 PM ET
Company Participants
Simona Jankowski - VP, IR
Colette Kress - EVP & CFO
Jensen Huang - President & CEO
Conference Call Participants
Toshiya Hari - Goldman Sachs
Joe Moore - Morgan Stanley
Stacy Rasgon - Bernstein Research
Matt Ramsay - TD Cowen
Timothy Arcuri - UBS
Ben Reitzes - Melius Research
C.J. Muse - Cantor Fitzgerald
Aaron Rakers - Wells Fargo
Harsh Kumar - Piper Sandler
Operator
Good afternoon. My name is Rob and I'll be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's Fourth Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. [Operator Instructions]
Thank you. Simona Jankowski, you may begin your conference.
Simona Jankowski
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter and fiscal 2024. With me today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2025. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
(An odd statement, given that material of this kind is all public anyway.)
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
All our statements are made as of today, February 21, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that let me turn the call over to Colette.
Colette Kress
Thanks, Simona. Q4 was another record quarter. Revenue of $22.1 billion was up 22% sequentially, up 265% year-on-year and well above our outlook of $20 billion. For fiscal 2024, revenue was $60.9 billion, up 126% from the prior year.
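For readers following along, the growth rates above can be cross-checked with a little arithmetic. The implied prior-period figures below are approximate, since the quoted percentages are rounded:

```python
# Back out the prior-period revenue implied by the reported growth rates.
# All figures are in billions of USD as stated on the call; results are
# approximate because the quoted percentages are rounded.

q4_fy24 = 22.1      # Q4 FY2024 revenue
fy24 = 60.9         # full fiscal 2024 revenue

implied_q3_fy24 = q4_fy24 / 1.22   # "up 22% sequentially"
implied_q4_fy23 = q4_fy24 / 3.65   # "up 265% year-on-year"
implied_fy23 = fy24 / 2.26         # "up 126% from the prior year"

print(f"Implied Q3 FY24: ${implied_q3_fy24:.1f}B")  # ~ $18.1B
print(f"Implied Q4 FY23: ${implied_q4_fy23:.1f}B")  # ~ $6.1B
print(f"Implied FY23:    ${implied_fy23:.1f}B")     # ~ $26.9B
```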
Starting with data center. Data center revenue for the fiscal 2024 year was $47.5 billion, more than tripling from the prior year. The world has reached the tipping point of a new computing era. The $1 trillion installed base of data center infrastructure is rapidly transitioning from general purpose to accelerated computing.
As Moore's Law slows while computing demand continues to skyrocket, companies are accelerating every workload possible to drive future improvement in performance, TCO and energy efficiency. At the same time, companies have started to build the next generation of modern data centers, what we refer to as AI factories, purpose-built to refine raw data and produce valuable intelligence in the era of generative AI.
In the fourth quarter, data center revenue of $18.4 billion was a record, up 27% sequentially and up 409% year-over-year, driven by the NVIDIA Hopper GPU computing platform along with InfiniBand end-to-end networking. Compute revenue grew more than 5x and networking revenue tripled from last year. We are delighted that supply of Hopper architecture products is improving. Demand for Hopper remains very strong. We expect our next-generation products to be supply constrained as demand far exceeds supply.
Fourth quarter data center growth was driven by both training and inference of generative AI and large language models across a broad set of industries, use cases and regions. The versatility and leading performance of our data center platform enables a high return on investment for many use cases, including AI training and inference, data processing and a broad range of CUDA accelerated workloads. We estimate in the past year approximately 40% of data center revenue was for AI inference.
Building and deploying AI solutions has reached virtually every industry. Many companies across industries are training and operating their AI models and services at scale. Enterprises access NVIDIA AI infrastructure through cloud providers, including hyperscale, GPU-specialized and private clouds, or on premise.
NVIDIA's computing stack extends seamlessly across cloud and on-premise environments, allowing customers to deploy with a multi-cloud or hybrid-cloud strategy. In the fourth quarter, large cloud providers represented more than half of our data center revenue, supporting both internal workloads and external public cloud customers.
Microsoft recently noted that more than 50,000 organizations use GitHub Copilot business to supercharge the productivity of their developers, contributing to GitHub revenue growth accelerating to 40% year-over-year. And Copilot for Microsoft 365 adoption grew faster in its first two months than the two previous major Microsoft 365 enterprise suite releases did.
(AI boosting software development productivity is the most immediate reality.)
Consumer internet companies have been early adopters of AI and represent one of our largest customer categories. Companies from search to e-commerce, social media, news and video services and entertainment are using AI for deep learning-based recommendation systems. These AI investments are generating a strong return by improving customer engagement, ad conversion and click-through rates.
Meta in its latest quarter cited more accurate predictions and improved advertiser performance as contributing to the significant acceleration in its revenue. In addition, consumer internet companies are investing in generative AI to support content creators, advertisers and customers through automation tools for content and ad creation, online product descriptions and AI shopping assistance.
Enterprise software companies are applying generative AI to help customers realize productivity gains. Early customers we've partnered with for both training and inference of generative AI are already seeing notable commercial success.
ServiceNow's generative AI products in their latest quarter drove their largest ever net new annual contract value contribution of any new product family release. We are working with many other leading AI and enterprise software platforms as well, including Adobe, Databricks, Getty Images, SAP and Snowflake.
(This is the same type of demand as GitHub Copilot.)
The field of foundation large-language models is thriving. Anthropic, Google, Inflection, Microsoft, OpenAI and xAI are leading with continued amazing breakthroughs in generative AI. Exciting companies like Adept, AI21, Character.ai, Cohere, Mistral, Perplexity and Runway are building platforms to serve enterprises and creators. New startups are creating LLMs to serve the specific languages, cultures and customs of the world's many regions.
And others are creating foundation models to address entirely different industries, like Recursion Pharmaceuticals and Generate:Biomedicines for biology. These companies are driving demand for NVIDIA AI infrastructure through hyperscale or GPU-specialized cloud providers. Just this morning, we announced that we've collaborated with Google to optimize its state-of-the-art new Gemma language models to accelerate their inference performance on NVIDIA GPUs in the cloud, data center and PC.
One of the most notable trends over the past year is the significant adoption of AI by enterprises across industry verticals such as automotive, healthcare and financial services. NVIDIA offers multiple application frameworks to help companies adopt AI in vertical domains such as autonomous driving, drug discovery, low-latency machine learning for fraud detection, and robotics, leveraging our full-stack accelerated computing platform.
We estimate the data center revenue contribution of the automotive vertical through the cloud or on-prem exceeded $1 billion last year. NVIDIA DRIVE infrastructure solutions includes systems and software for the development of autonomous driving, including data ingestion, creation, labeling and AI training, plus validation through simulation.
Almost 80 vehicle manufacturers across global OEMs, new energy vehicles, trucking, robotaxi and Tier 1 suppliers are using NVIDIA's AI infrastructure to train LLMs and other AI models for automated driving and AI cockpit applications. And in fact, nearly every automotive company working on AI is working with NVIDIA. As AV algorithms move to video transformers and more cars are equipped with cameras, we expect NVIDIA's automotive data center processing demand to grow significantly.
In healthcare, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging and wearable devices. We have built deep domain expertise in healthcare over the past decade, creating the NVIDIA Clara healthcare platform and NVIDIA BioNeMo, a generative AI service to develop, customize and deploy AI foundation models for computer-aided drug discovery.
BioNeMo features a growing collection of pre-trained biomolecular AI models that can be applied to end-to-end drug discovery processes. We announced Recursion is making its proprietary AI model available through BioNeMo for the drug discovery ecosystem. In financial services, customers are using AI for a growing set of use cases, from trading and risk management to customer service and fraud detection. For example, American Express improved fraud detection accuracy by 6% using NVIDIA AI.
(American Express historically fared worse than Visa, yet Buffett chose American Express over Visa. American Express has a vertically integrated structure; that kind of structure has more autonomy and more certainty, and can fully control its own software systems.)
Shifting to our data center revenue by geography. Growth was strong across all regions except for China, where our data center revenue declined significantly following the U.S. government export control regulations imposed in October. Although we have not received licenses from the U.S. government to ship restricted products to China, we have started shipping alternatives that don't require a license for the China market. China represented a mid-single-digit percentage of our data center revenue in Q4, and we expect it to stay in a similar range in the first quarter.
In regions outside of the U.S. and China, sovereign AI has become an additional demand driver. Countries around the world are investing in AI infrastructure to support the building of large-language models in their own language, on domestic data and in support of their local research and enterprise ecosystems. From a product perspective, the vast majority of revenue was driven by our Hopper architecture along with InfiniBand networking. Together, they have emerged as the de-facto standard for accelerated computing and AI infrastructure.
We are on track to ramp H200 with initial shipments in the second quarter. Demand is strong as H200 nearly doubles the inference performance of H100. Networking exceeded a $13 billion annualized revenue run rate. Our end-to-end networking solutions define modern AI data centers. Our Quantum InfiniBand solutions grew more than 5x year on year.
NVIDIA Quantum InfiniBand is the standard for the highest-performance AI-dedicated infrastructures. We are now entering the ethernet networking space with the launch of our new Spectrum-X end-to-end offering, designed to deliver AI-optimized networking for the data center. Spectrum-X introduces new technologies over ethernet that are purpose-built for AI. Technologies incorporated in our Spectrum switch, BlueField DPU and software stack deliver 1.6x higher networking performance for AI processing compared with traditional ethernet.
Leading OEMs, including Dell, HPE, Lenovo and Super Micro, with their global sales channels, are partnering with us to expand our AI solution to enterprises worldwide. We are on track to ship Spectrum-X this quarter. We also made great progress with our software and services offerings, which reached an annualized revenue run rate of $1 billion in Q4. We announced that NVIDIA DGX Cloud will expand its list of partners to include Amazon's AWS, joining Microsoft Azure, Google Cloud and Oracle Cloud. DGX Cloud is used for NVIDIA's own AI R&D and custom model development as well as NVIDIA developers. It brings the CUDA ecosystem to NVIDIA CSP partners.
Okay, moving to gaming. Gaming revenue was $2.87 billion, flat sequentially and up 56% year on year, better than our outlook, on solid consumer demand for NVIDIA GeForce RTX GPUs during the holidays. Fiscal year revenue of $10.45 billion was up 15%. At CES, we announced our GeForce RTX 40 Super Series family of GPUs. Starting at $599, they deliver incredible gaming performance and generative AI capabilities. Sales are off to a great start.
AI Tensor Cores in these GPUs deliver up to 836 AI TOPS, perfect for powering AI for gaming, creation and everyday productivity. The rich software stack we offer with our RTX GPUs further accelerates AI. With our DLSS technologies, seven out of eight pixels can be AI generated, resulting in up to 4x faster ray tracing and better image quality. And with TensorRT-LLM for Windows, our open-source library that accelerates inference performance for the latest large language models, generative AI can run up to 5x faster on RTX AI PCs.
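The "seven out of eight pixels" figure follows from combining two DLSS techniques: super resolution renders roughly one in four pixels and upscales the rest, while frame generation produces every other frame entirely with AI. The split below is an illustrative reading of that figure, not an official description of the pipeline:

```python
# Fraction of displayed pixels that end up AI generated when
# DLSS super resolution (render 1 of 4 pixels) is combined with
# DLSS frame generation (render 1 of 2 frames).

rendered_pixel_fraction = 1 / 4   # super resolution: 4x pixel upscaling (assumed ratio)
rendered_frame_fraction = 1 / 2   # frame generation: alternate frames rendered

rendered = rendered_pixel_fraction * rendered_frame_fraction  # 1/8 of pixels rendered
ai_generated = 1 - rendered                                   # 7/8 AI generated

print(f"AI-generated share of pixels: {ai_generated:.3f}")  # 0.875, i.e. 7 out of 8
```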
At CES, we also announced a wave of new RTX 40 Series AI laptops from every major OEM. These bring high-performance gaming and AI capabilities to a wide range of form factors, including 14-inch thin-and-light laptops. With up to 686 TOPS of AI performance, these next-generation AI PCs increase generative AI performance by up to 60x, making them the best-performing AI PC platforms. At CES, we announced NVIDIA Avatar Cloud Engine microservices, which allow developers to integrate state-of-the-art generative AI models into digital avatars. ACE won several Best of CES 2024 awards.
NVIDIA has an end-to-end platform for building and deploying generative AI applications for RTX PCs and workstations. This includes libraries, SDKs, tools and services developers can incorporate into their generative AI workloads. NVIDIA is fueling the next wave of generative AI applications coming to the PC. With over 100 million RTX PCs in the installed-base and over 500 AI-enabled PC applications and games, we are on our way.
Moving to Pro Visualization. Revenue of $463 million was up 11% sequentially and up 105% year on year. Fiscal year revenue of $1.55 billion was up 1%. Sequential growth in the quarter was driven by a rich mix of RTX Ada architecture GPUs continuing to ramp. Enterprises are refreshing their workstations to support generative AI-related workloads, such as data preparation, LLM fine-tuning and retrieval augmented generation.
These key industrial verticals driving demand include manufacturing, automotive and robotics. The automotive industry has also been an early adopter of NVIDIA Omniverse as it seeks to digitize work flows from design to build, simulate, operate and experience their factories and cars. At CES, we announced that creative partners and developers including Brickland, WPP and ZeroLight are building Omniverse-powered car configurators. Leading automakers like LOTUS are adopting the technology to bring new levels of personalization, realism and interactivity to the car buying experience.
Moving to Automotive. Revenue was $281 million, up 8% sequentially and down 4% year on year. Fiscal year revenue of $1.09 billion was up 21%, crossing the $1 billion mark for the first time on continued adoption of the NVIDIA DRIVE platform by automakers. NVIDIA DRIVE Orin is the AI car computer of choice for software-defined AV fleets.
Its successor, NVIDIA DRIVE Thor, designed for vision transformers, offers more AI performance and integrates a wide range of intelligent capabilities into a single AI compute platform, including autonomous driving and parking, driver and passenger monitoring and AI cockpit functionality, and will be available next year. There were several automotive customer announcements this quarter. Li Auto, Great Wall Motor, ZEEKR, the premium EV subsidiary of Geely, and Xiaomi EV all announced new vehicles built on NVIDIA.
Moving to the rest of the P&L. GAAP gross margins expanded sequentially to 76% and non-GAAP gross margins to 76.7% on strong data center growth and mix. Our gross margins in Q4 benefited from favorable component costs. Sequentially, GAAP operating expenses were up 6% and non-GAAP operating expenses were up 9%, primarily reflecting higher compute and infrastructure investments and employee growth.
In Q4, we returned $2.8 billion to shareholders in the form of share repurchases and cash dividends. During fiscal year '24, we utilized cash of $9.9 billion towards shareholder returns, including $9.5 billion in share repurchases.
Let me turn to the outlook for the first quarter. Total revenue is expected to be $24 billion, plus or minus 2%. We expect sequential growth in data center and ProViz, partially offset by a seasonal decline in gaming. GAAP and non-GAAP gross margins are expected to be 76.3% and 77%, respectively, plus or minus 50 basis points. Similar to Q4, Q1 gross margins are benefiting from favorable component costs. Beyond Q1, for the remainder of the year, we expect gross margins to return to the mid-70s percent range.
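Spelled out, the guidance ranges above work out to explicit bounds with simple arithmetic:

```python
# Turn the Q1 FY2025 outlook into explicit low/high bounds.

rev_mid = 24.0                        # billions of USD, plus or minus 2%
rev_low, rev_high = rev_mid * 0.98, rev_mid * 1.02

bp = 0.50                             # 50 basis points, in percentage points
gaap_gm, non_gaap_gm = 76.3, 77.0     # midpoint gross margins, percent

print(f"Revenue:     ${rev_low:.2f}B to ${rev_high:.2f}B")                    # $23.52B to $24.48B
print(f"GAAP GM:     {gaap_gm - bp:.1f}% to {gaap_gm + bp:.1f}%")             # 75.8% to 76.8%
print(f"Non-GAAP GM: {non_gaap_gm - bp:.1f}% to {non_gaap_gm + bp:.1f}%")     # 76.5% to 77.5%
```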
GAAP and non-GAAP operating expenses are expected to be approximately $3.5 billion and $2.5 billion respectively. Fiscal year 2025 GAAP and non-GAAP operating expenses are expected to grow in the mid-30% range as we continue to invest in the large opportunities ahead of us.
GAAP and non-GAAP other income and expenses are expected to be an income of approximately $250 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1% excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website.
In closing, let me highlight some upcoming events for the financial community. We will attend the Morgan Stanley Technology, Media and Telecom Conference in San Francisco on March 4 and TD Cowen's 44th Annual Healthcare Conference in Boston on March 5. And of course, please join us for our annual GTC conference starting Monday, March 18, in San Jose, California, to be held in person for the first time in five years. GTC will kick off with Jen-Hsun's keynote and we will host a Q&A session for financial analysts the next day, March 19.
At this time, we will now open the call for questions. Operator, would you please poll for questions?
Question-and-Answer Session
Operator
[Operator Instructions] Your first question comes from the line of Toshiya Hari from Goldman Sachs. Your line is open.
Toshiya Hari
Hi. Thank you so much for taking the question and congratulations on the really strong results. My question is for Jen-Hsun on the data center business. Clearly, you're doing extremely well in the business. I'm curious how your expectations for calendar '24 and '25 have evolved over the past 90 days.
And as you answer the question, I was hoping you can touch on some of the newer buckets within data center, things like software. Sovereign AI, I think you've been pretty vocal about how to think about that medium-to-long term. And recently, there was an article about NVIDIA potentially participating in the ASIC market. Is there any credence to that, and if so, how should we think about you guys playing in that market over the next several years? Thank you.
Jensen Huang
Thanks, Toshiya. Let's see. There were three questions, one more time. First question was -- can you -- well?
Toshiya Hari
I guess your expectations for data center, how they've evolved. Thank you.
Jensen Huang
Okay. Yeah. Well, we guide one quarter at a time. But fundamentally, the conditions are excellent for continued growth in calendar '24, calendar '25 and beyond. And let me tell you why. We're at the beginning of two industry-wide transitions and both of them are industry wide. The first one is a transition from general to accelerated computing. General-purpose computing, as you know, is starting to run out of steam. And you can tell by the CSPs and many data centers, including our own, extending the depreciation of general-purpose computing infrastructure from four to six years.
There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance throughput like you used to. And so you have to accelerate everything. This is what NVIDIA has been pioneering for some time. And with accelerated computing, you can dramatically improve your energy efficiency. You can dramatically reduce your cost of data processing, by 20 to 1. Huge numbers. And of course, the speed. That speed is so incredible that we enabled a second industry-wide transition called generative AI.
Generative AI, I'm sure we're going to talk plenty -- plenty about it during the call. But remember, generative AI is a new application. It is enabling a new way of doing software, new types of software are being created. It is a new way of computing. You can't do generative AI on traditional general-purpose computing. You have to accelerate it.
And the third is it is enabling a whole new industry, and this is something worthwhile to take a step back and look at and it connects to your last question about sovereign AI. A whole new industry in the sense that for the very first time a data center is not just about computing data and storing data and serving the employees of a company. We now have a new type of data center that is about AI generation, an AI generation factory.
And you've heard me describe it as AI factories. But basically, it takes raw material, which is data, it transforms it with these AI supercomputers that NVIDIA builds, and it turns them into incredibly valuable tokens. These tokens are what people experience on the amazing ChatGPT or Midjourney, or on search, which these days is augmented by that. All of your recommender systems are now augmented by that, along with the hyper-personalization that goes with it.
All of these incredible startups in digital biology, generating proteins and generating chemicals and the list goes on. And so all of these tokens are generated in a very specialized type of data center. And this data center we call AI supercomputers and AI generation factories. But we're seeing diversity -- one of the other reasons -- so at the foundation is that. The way it manifests into new markets is in all of the diversity that you're seeing us in.
One, the amount of inference that we do is just off the charts now. Almost every single time you interact with ChatGPT, we're inferencing. Every time you use Midjourney, we're inferencing. Every time you see these amazing Sora videos being generated, or Runway, the videos that they're editing, or Firefly, NVIDIA is doing inferencing. The inference part of our business has grown tremendously. We estimate it at about 40%. The amount of training is continuing, because these models are getting larger and larger, and the amount of inference is increasing.
But we're also diversifying into new industries. The large CSPs are still continuing to build out. You can see from their CapEx and their discussions, but there's a whole new category called GPU specialized CSPs. They specialize in NVIDIA AI infrastructure, GPU specialized CSPs. You're seeing enterprise software platforms deploying AI. ServiceNow is just a really, really great example. You see Adobe. There's the others, SAP and others. You see consumer Internet services that are now augmenting all of their services of the past with generative AI. So they can have even more hyper-personalized content to be created.
但我们也在向新的行业多元化发展。大型 CSP 仍在继续扩建。您可以从他们的资本支出和讨论中看到,但还有一个全新的类别叫做 GPU 专业化 CSP。他们专门从事 NVIDIA 人工智能基础设施,GPU 专业化 CSP。您会看到企业软件平台正在部署人工智能。ServiceNow 就是一个非常好的例子。您会看到 Adobe。还有其他的,比如 SAP 等。您会看到消费者互联网服务现在正在用生成式人工智能来增强他们过去的所有服务。这样他们可以创造出更加超个性化的内容。
You see us talking about industrial generative AI. Now our industries represent multi-billion dollar businesses, auto, health, financial services. In total, our vertical industries are multi-billion dollar businesses now. And of course sovereign AI. The reason for sovereign AI has to do with the fact that the language, the knowledge, the history, the culture of each region are different and they own their own data.
您看到我们在谈论工业生成式人工智能。现在我们的各个行业板块——汽车、医疗、金融服务——都代表着数十亿美元规模的业务。总体而言,我们的垂直行业现在都是数十亿美元规模的业务。当然还有主权人工智能。主权人工智能的存在,是因为每个地区的语言、知识、历史和文化各不相同,而且他们拥有自己的数据。
They would like to use their data, train with it to create their own digital intelligence, and provision it to harness that raw material themselves. It belongs to them, each one of the regions around the world. The data belongs to them. The data is most useful to their society. And so they want to protect the data. They want to transform it themselves, value-added transformation, into AI and provision those services themselves.
他们希望使用自己的数据,用它来训练、创建自己的数字智能,并自己部署服务来利用这些原材料。数据属于他们,属于世界各地的每一个地区。数据属于他们,而且数据对他们自己的社会最有用。因此,他们希望保护这些数据,希望自己完成增值转化,把数据转化为人工智能,并自行提供这些服务。
So we're seeing sovereign AI infrastructure being built in Japan, in Canada, in France, and so many other regions. And so my expectation is that what is being experienced here in the United States, in the West, will surely be replicated around the world, and these AI generation factories are going to be in every industry, every company, every region. And so I think this last year, we've seen generative AI really becoming a whole new application space, a whole new way of doing computing; a whole new industry is being formed, and that's driving our growth.
因此,我们看到主权人工智能基础设施正在日本、加拿大、法国等许多其他地区建设。因此,我的期望是,美国、西方所经历的情况必将在全球范围内复制,这些人工智能生成工厂将出现在每个行业、每家公司、每个地区。因此,我认为在过去的一年里,我们看到生成式人工智能真正成为一个全新的应用领域,一种全新的计算方式,一个全新的产业正在形成,推动着我们的增长。
Operator 操作员
Your next question comes from the line of Joe Moore from Morgan Stanley. Your line is open.
您的下一个问题来自摩根士丹利的乔·摩尔。您可以发言了。
Joe Moore 乔·摩尔
Great. Thank you. I wanted to follow up on the 40% of revenues coming from inference. That's a bigger number than I expected. Can you give us some sense of where that number was maybe a year before, how much you're seeing growth around LLMs from inference? And how are you measuring that? Is that -- I assume it's in some cases the same GPUs you use for training and inference. How solid is that measurement? Thank you.
好的。谢谢。我想跟进一下来自推断的收入占比 40%。这个数字比我预期的要大。您能否让我们了解一下,也许一年前这个数字是多少,您从推断中看到了多少增长?您是如何衡量这一点的?我猜想在某些情况下,您用于训练和推断的 GPU 可能是相同的。这个测量结果有多可靠?谢谢。
Jensen Huang 黄仁勋
I'll go backwards. The estimate is probably understated -- but we estimated it. And let me tell you why. A year ago, the recommender systems that people use -- when you run the internet, the news, the videos, the music, the products being recommended to you -- because, as you know, the internet has trillions -- I don't know how many trillions, but trillions of things out there, and your phone is 3 inches square. And so the ability to fit all of that information down to such a small real estate is through a system, an amazing system, called recommender systems.
我倒过来回答。这个估计很可能还偏保守了,但我们确实做了估算。让我告诉你为什么。一年前,人们所使用的推荐系统——当你浏览互联网、新闻、视频、音乐,以及看到向你推荐的商品时——因为如你所知,互联网上有数万亿——我不知道具体多少万亿,反正是数万亿的东西,而你的手机屏幕只有 3 英寸见方。因此,能把所有这些信息浓缩到这么小的一块屏幕上,靠的是一个系统,一个叫做推荐系统的了不起的系统。
These recommender systems used to be all based on CPU approaches. But the recent migration to deep learning and now generative AI has really put these recommender systems now directly into the path of GPU acceleration. It needs GPU acceleration for the embeddings. It needs GPU acceleration for the nearest neighbor search. It needs GPU acceleration for the re-ranking and it needs GPU acceleration to generate the augmented information for you.
这些推荐系统过去都是基于 CPU 方法的。但最近迁移到深度学习,现在又发展到生成式人工智能,这确实让这些推荐系统直接进入了 GPU 加速的路径。它需要 GPU 加速来进行嵌入。它需要 GPU 加速来进行最近邻搜索。它需要 GPU 加速来进行重新排序,也需要 GPU 加速为您生成增强信息。
So GPUs are in every single step of a recommender system now. And as you know, the recommender system is the single largest software engine on the planet. Almost every major company in the world has to run these large recommender systems. Whenever you use ChatGPT, it's being inferenced. Whenever you hear about Midjourney and just the number of things that they're generating for consumers, when you see Getty -- the work that we do with Getty -- and Firefly from Adobe, these are all generative models. The list goes on. And none of these, as I mentioned, existed a year ago. 100% new.
因此,GPU 现在参与推荐系统的每一个步骤。如您所知,推荐系统是地球上规模最大的软件引擎。世界上几乎每一家大公司都必须运行这些大型推荐系统。每当您使用 ChatGPT 时,都在进行推理。每当您听说 Midjourney 以及他们为消费者生成的海量内容,当您看到 Getty——我们与 Getty 合作的工作——以及 Adobe 的 Firefly,这些都是生成式模型。例子还有很多。正如我提到的,这些在一年前都不存在,100%是全新的。
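The four GPU-accelerated stages described above (embedding lookup, nearest-neighbor search, re-ranking, generation) can be sketched as a minimal retrieval-and-rerank pipeline. This is an illustrative NumPy toy, not NVIDIA's implementation; the catalog size, embedding width, and scoring functions are all assumptions:
上文提到的四个 GPU 加速阶段(嵌入查找、最近邻搜索、重排序、生成)可以用一个最小的"召回+重排"流水线来示意。以下是一个说明性的 NumPy 玩具示例,并非 NVIDIA 的实现;目录规模、嵌入维度和打分函数均为假设:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative catalog: 10,000 items with 64-dim embeddings (assumed sizes).
item_emb = rng.standard_normal((10_000, 64)).astype(np.float32)

# Stage 1: embedding lookup -- here a random vector stands in for the
# output of a learned user/query embedding table.
user_emb = rng.standard_normal(64).astype(np.float32)

# Stage 2: nearest-neighbor search -- retrieve the top-100 candidates
# by dot-product similarity over the whole catalog.
scores = item_emb @ user_emb
candidates = np.argpartition(scores, -100)[-100:]

# Stage 3: re-ranking -- a second-stage model rescores only the candidates;
# a noisy copy of the retrieval score stands in for that model here.
rerank = scores[candidates] + 0.1 * rng.standard_normal(candidates.size)
top10 = candidates[np.argsort(rerank)[::-1][:10]]

# Stage 4 (generation) would produce the personalized content for these
# item ids; that step is a generative model and is omitted here.
print(top10)
```

On a real system every one of these stages is a dense linear-algebra workload that runs on the GPU, which is the point Huang is making about why the whole pipeline moved onto accelerators.
在真实系统中,上述每个阶段都是在 GPU 上运行的稠密线性代数负载,这正是黄仁勋所说的整条流水线迁移到加速器上的原因。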
Operator 操作员
Your next question comes from the line of Stacy Rasgon from Bernstein Research. Your line is open.
您的下一个问题来自伯恩斯坦研究的 Stacy Rasgon。您可以发言了。
Stacy Rasgon 斯泰西·拉斯贡
Hi, guys. Thanks for taking my question. I wanted Colette -- I wanted to touch on your comment that you expected the next generation of products -- I assume that meant Blackwell, to be supply constrained. Could you dig into that a little bit, what is the driver of that? Why does that get constrained as Hopper is easing up? And how long do you expect that to be constrained, like do you expect the next generation to be constrained like all the way through calendar '25, like when do those start to ease?
嗨,各位。感谢您们回答我的问题。我想问 Colette -- 我想谈一下您提到的您预计下一代产品 -- 我猜这意味着 Blackwell,会受到供应限制。您能详细解释一下吗?是什么驱动了这一点?为什么在 Hopper 放松的情况下会受到限制?您预计这种限制会持续多久,比如您是否预计下一代产品会一直受限制到 2025 年,那么何时开始放松?
Jensen Huang 黄仁勋
Yeah. The first thing is, overall, our supply is improving. Our supply chain is just doing an incredible job for us -- everything from, of course, the wafers, the packaging, the memories, all of the power regulators, to transceivers and networking and cables, you name it. The list of components that we ship -- as you know, people think of the NVIDIA GPU as a chip. But the NVIDIA Hopper GPU has 35,000 parts. It weighs 70 pounds. These are really complicated things we've built. People call it an AI supercomputer for good reason. If you ever look at the back of the data center, the systems, the cabling system is mind-boggling. It is the most dense, complex cabling system for networking the world's ever seen.
是的。首先,总体来说,我们的供应正在改善,总体来说。我们的供应链为我们做了令人难以置信的工作,从晶圆、封装、存储器,所有的电源调节器,到收发器、网络和电缆等等。我们发货的零部件清单很长——正如你所知,人们认为英伟达的 GPU 就像一块芯片。但英伟达的 Hopper GPU 有 35,000 个零部件。它重 70 磅。这些确实是我们建造的非常复杂的东西。人们称其为人工智能超级计算机是有充分理由的。如果你曾经看过数据中心的后面,系统、布线系统令人难以置信。这是世界上见过的最密集复杂的网络布线系统。
Our InfiniBand business grew 5x year over year. The supply chain is really doing fantastic supporting us. And so overall, the supply is improving. We expect the demand will continue to be stronger than our supply provides and -- through the year and we'll do our best. The cycle times are improving and we're going to continue to do our best. However, whenever we have new products, as you know, it ramps from zero to a very large number. And you can't do that overnight. Everything is ramped up. It doesn't step up.
我们的 InfiniBand 业务同比增长了 5 倍。供应链真的很出色地支持我们。因此,总体而言,供应正在改善。我们预计需求将继续强于我们提供的供应,并且在整个年度内我们会尽力而为。周期时间正在改善,我们将继续尽力而为。然而,正如您所知,每当我们推出新产品时,从零到非常大的数字都需要逐步增加。这不是一蹴而就的。一切都是逐步增加的,而不是一步到位。
And so whenever we have a new generation of products -- and right now, we are ramping H200s -- there is no way we can reasonably keep up with demand in the short term as we ramp. We're ramping Spectrum-X. We're doing incredibly well with Spectrum-X. It's our brand-new product into the world of ethernet. InfiniBand is the standard for AI-dedicated systems. Ethernet with Spectrum-X -- ethernet by itself is just not a very good scale-out system.
所以每当我们推出新一代产品时——现在,我们正在对 H200 进行产能爬坡——在爬坡的短期内,我们无法合理地跟上需求。我们正在爬坡 Spectrum-X,并且在 Spectrum-X 上表现得非常出色。这是我们进入以太网世界的全新产品。InfiniBand 是 AI 专用系统的标准。而搭配 Spectrum-X 的以太网——单纯的以太网并不是一个很好的横向扩展系统。
But with Spectrum-X, we've augmented -- layered on top of ethernet -- fundamental new capabilities like adaptive routing, congestion control, and noise isolation or traffic isolation, so that we could optimize ethernet for AI. And so InfiniBand will be our AI-dedicated infrastructure. Spectrum-X will be our AI-optimized networking, and that is ramping. And so with all of the new products, demand is greater than supply. That's just the nature of new products, and so we work as fast as we can to capture the demand. But overall, net-net, our supply is increasing very nicely.
但是通过 Spectrum-X,我们在以太网之上叠加、增强了一些根本性的新能力,如自适应路由、拥塞控制、噪声隔离(或流量隔离),从而能够针对 AI 优化以太网。因此,InfiniBand 将是我们的 AI 专用基础设施,Spectrum-X 将是我们的 AI 优化网络,目前正在爬坡。所以,对于所有新产品来说,需求都大于供应,这就是新产品的特性,我们会尽快行动以满足需求。但总体而言,从净效果上看,我们的供应正在非常好地增长。
Operator 操作员
Your next question comes from the line of Matt Ramsay from TD Cowen. Your line is open.
您的下一个问题来自 TD Cowen 的 Matt Ramsay。请发问。
Matt Ramsay 马特·拉姆齐
Good afternoon, Jensen, Colette. Congrats on the results. I wanted to ask, I guess, a two-part question, and it comes back to what Stacy was just getting at -- your demand being significantly more than your supply, even though supply is improving. And I guess the two sides of the question are, first for Colette: how are you guys thinking about allocation of product in terms of customer readiness to deploy, and sort of monitoring whether there's any kind of build-up of product that might not yet be turned on?
下午好,詹森,科莱特。恭喜你们的成绩。我想问一个两部分的问题,就是关于斯泰西刚刚提到的你们的需求明显高于供应,尽管供应正在改善。我猜问题有两方面,第一个问题是关于科莱特的,你们如何考虑产品的分配,以及客户准备部署的情况,以及监控是否有任何可能尚未启动的产品积压的情况?
And then I guess Jen-Hsun, for you, I'd be really interested to hear you speak a bit about the thought that you and your company are putting into the allocation of your product across customers, many of which compete with each other, across industries to smaller startup companies, to things in the healthcare arena to government. It's a very, very unique technology that you're enabling and I'd be really interested to hear you speak a bit about how you think about quote/unquote fairly allocating sort of for the good of your company, but also for the good of the industry. Thanks.
然后我想,Jen-Hsun,我对听到您谈一下您和您的公司正在考虑如何在竞争激烈的客户之间,跨行业分配产品的想法非常感兴趣,包括小型初创公司、医疗保健领域和政府。您正在实现的技术非常独特,我很想听听您对如何考虑“公平分配”以促进公司利益和行业利益的看法。谢谢。
Colette Kress 柯蕾特·克雷斯
Let me first start with your question, thanks, about how we are working with our customers as they look into how they are building out their GPU instances and our allocation process. The folks that we work with, our customers that we work with, have been partners with us for many years as we have been assisting them both in what they set up in the cloud, as well as what they are setting up internally.
让我首先从您的问题开始,谢谢,关于我们如何与客户合作,因为他们正在研究如何构建 GPU 实例以及我们的分配过程。我们合作的人,我们的客户,多年来一直与我们合作,我们一直在协助他们在云端建立的内容,以及他们正在内部建立的内容。
Many of these providers have multiple products going at one time to serve so many different needs across their end customers but also what they need internally. So they are working in advance, of course, thinking about those new clusters that they will need. And our discussions with them continue not only on our Hopper architecture, but helping them understand the next wave and getting their interest and getting their outlook for the demand that they want.
许多这些供应商同时推出多种产品,以满足终端客户的许多不同需求,同时也考虑到他们内部的需求。因此,他们当然是提前工作,考虑到他们将需要的新集群。我们与他们的讨论不仅仅局限于我们的 Hopper 架构,还帮助他们了解下一个浪潮,引起他们的兴趣,并了解他们想要的需求前景。
So it's always a moving process in terms of what they will purchase, what is still being built and what is in use for our end customers. But the relationships that we've built and their understanding of the sophistication of the build has really helped us with that allocation and both helped us with our communications with them.
所以在他们购买什么、仍在建设中的项目以及我们最终客户正在使用的项目方面,这始终是一个不断变化的过程。但我们建立的关系以及他们对建设复杂性的理解确实帮助我们进行分配,并且帮助我们与他们的沟通。
Jensen Huang 黄仁勋
First, our CSPs have a very clear view of our product road map and transitions. And that transparency with our CSPs gives them the confidence of which products to place, and where and when. And so they know the timing to the best of our ability, and they know quantities and, of course, allocation. We allocate fairly. We do the best we can to allocate fairly and to avoid allocating unnecessarily.
首先,我们的 CSP 对我们的产品路线图和产品过渡有非常清晰的了解。这种对 CSP 的透明度,让他们有信心决定在何时、何地部署哪些产品。因此,在我们力所能及的范围内,他们了解时间安排,也了解数量,当然还有分配。我们公平分配,尽最大努力公平分配,并避免不必要的分配。
As you mentioned earlier, why allocate something when the data center's not ready? Nothing is more difficult than to have anything sit around. And so: allocate fairly, and avoid allocating unnecessarily. And on the question that you asked about the end markets: we have an excellent ecosystem with OEMs, ODMs, CSPs and, very importantly, end markets. What is really unique about NVIDIA is that we bring our partners, CSPs and OEMs -- we bring them customers.
正如您之前提到的,当数据中心尚未准备就绪时,为什么要分配产品呢?没有什么比让东西闲置更难受的了。所以:公平分配,并避免不必要的分配。至于您提出的关于终端市场的问题:我们与 OEM、ODM、CSP 以及非常重要的终端市场,建立了卓越的生态系统。NVIDIA 真正独特之处在于,我们为我们的合作伙伴——CSP 和 OEM——带来客户。
The biology companies, the healthcare companies, financial services companies, AI developers, large-language model developers, autonomous vehicle companies, robotics companies. There's just a giant suite of robotics companies that are emerging. There are warehouse robotics to surgical robotics to humanoid robotics, all kinds of really interesting robotics companies, agriculture robotics companies. All of these startups, large companies, healthcare, financial services and auto and such are working on NVIDIA's platform. We support them directly.
生物科技公司、医疗保健公司、金融服务公司、人工智能开发者、大型语言模型开发者、自动驾驶车辆公司、机器人公司。现在出现了一大批机器人公司。有仓储机器人、外科机器人、仿人机器人,各种非常有趣的机器人公司,还有农业机器人公司。所有这些初创公司、大公司、医疗保健、金融服务、汽车等公司都在 NVIDIA 的平台上开展工作。我们直接支持它们。
And oftentimes, we can have a twofer by allocating to a CSP and bringing the customer to the CSP at the same time. And so this ecosystem, you're absolutely right that it's vibrant. But at the core of it, we want to allocate fairly with avoiding waste and looking for opportunities to connect partners and end users. We're looking for those opportunities all the time.
往往情况下,我们可以通过向 CSP 分配资源并同时将客户引入 CSP 来实现双赢。因此,正如您所说,这个生态系统确实充满活力。但在其核心,我们希望公平分配资源,避免浪费,并寻找连接合作伙伴和最终用户的机会。我们一直在寻找这些机会。
Operator 操作员
Your next question comes from the line of Timothy Arcuri from UBS. Your line is open.
您的下一个问题来自瑞银的 Timothy Arcuri。请发问。
Timothy Arcuri 蒂莫西·阿库里
Thanks a lot. I wanted to ask about how you're converting backlog into revenue. Obviously, lead times for your products have come down quite a bit. Colette, you didn't talk about the inventory purchase commitments. But if I sort of add up your inventory plus the purchase commits and your prepaid supply, sort of the aggregate of your supply, it was actually down a touch. How should we read that? Is that just you saying that you don't need to make as much of a financial commitment to your suppliers because the lead times are lower or is that maybe you're reaching some sort of steady state where you're closer to filling your order book and your backlog? Thanks.
非常感谢。我想问一下关于您如何将积压订单转化为收入的问题。显然,您的产品交货时间已经大大缩短。Colette,您没有谈论库存采购承诺。但如果我将您的库存加上采购承诺以及预付供应相加,也就是您的供应总量,实际上略有下降。我们应该如何解读这一点?这只是您在说您不需要向供应商做出太多财务承诺,因为交货时间更短,还是可能您正在达到某种稳定状态,更接近填补订单簿和积压订单?谢谢。
Colette Kress 柯蕾特·克雷斯
Yeah. So let me highlight those three different areas of how we look at our supply. You're correct. Given the allocation process we're on, as things come into inventory on hand, we immediately work to ship them to our customers. I think our customers appreciate our ability to meet the schedules we've set.
是的。那么让我重点说明我们看待供应的这三个不同方面。你说得对。在我们目前的分配流程下,产品一进入库存,我们就会立即安排发货给客户。我认为我们的客户很认可我们按既定时间表交付的能力。
The second piece of it is our purchase commitments. Our purchase commitments have many different components into it, components that we need for manufacturing. But also, often we are procuring capacity that we may need. The length of that need for capacity or the length for the components are all different. Some of them may be for the next two quarters, but some of them may be for multiple years.
它的第二部分是我们的采购承诺。我们的采购承诺中包含许多不同的组成部分,这些组成部分是我们制造所需的。但通常我们也在采购可能需要的产能。对产能需求的持续时间或对组件的持续时间都各不相同。有些可能是未来两个季度的,但有些可能是多年的。
I can say the same regarding our prepaids. Our prepaids are designed to make sure that we have the reserve capacity that we need at several of our manufacturing suppliers as we look forward. So I wouldn't read anything into the totals being approximately the same, as we are increasing our supply. All of them just have different lengths, as we have sometimes had to buy things with long lead times, or things that needed capacity to be built for us.
关于预付款,我也可以这么说。我们的预付款旨在确保我们在几家制造供应商那里拥有面向未来所需的预留产能。因此,不要因为总额大致持平就过度解读——我们的供应实际上在增加。它们只是期限各不相同,因为我们有时必须采购交货周期很长的物料,或需要供应商为我们建设产能的项目。
Operator 操作员
Your next question comes from the line of Ben Reitzes from Melius Research. Your line is open.
您的下一个问题来自 Melius Research 的 Ben Reitzes。请发问。
Ben Reitzes 本·赖泽斯
Yeah. Thanks. Congratulations on the results. Colette, I wanted to talk about your comment regarding gross margins and that they should go back to the mid-70s. If you don't mind unpacking that. And also, is that due to the HBM content in the new products and what do you think are the drivers of that comment? Thanks so much.
是的。谢谢。恭喜你们的业绩。Colette,我想谈谈你关于毛利率的评论,即毛利率应该回到 70%中段的水平。能否请你展开讲讲?另外,这是否与新产品中的 HBM 含量有关?你认为这一判断背后的驱动因素是什么?非常感谢。
Colette Kress 柯蕾特·克雷斯
Yeah. Thanks for the question. We highlighted in our opening remarks really about our Q4 results and our outlook for Q1. Both of those quarters are unique. Those two quarters are unique in their gross margin as they include some benefit from favorable component cost in the supply chain kind of across both our compute and networking and also in several different stages of our manufacturing process.
是的。感谢提问。我们在开场白中重点介绍了我们的第四季度业绩和第一季度展望。这两个季度都很独特。这两个季度的毛利率独特之处在于它们包含了供应链中有利的零部件成本优势,涵盖了我们的计算和网络以及制造过程的几个不同阶段。
So looking forward, we have visibility into a mid-70s gross margin for the rest of the fiscal year, taking us back to where we were before this Q4 and Q1 peak that we've had here. So we're really looking at just a balance of our mix. Mix is always going to be our largest driver of what we will be shipping for the rest of the year. And those are really just the drivers.
因此展望未来,我们预计本财年剩余时间的毛利率将处于 70%中段,回到我们在这次 Q4 和 Q1 峰值之前的水平。所以我们真正关注的只是产品组合的平衡。在本年度剩余时间里,产品组合始终将是我们出货的最大驱动因素。这些就是真正的驱动因素。
Operator 操作员
Your next question comes from the line of C.J. Muse from Cantor Fitzgerald. Your line is open.
您的下一个问题来自 Cantor Fitzgerald 的 C.J. Muse。您可以发言了。
C.J. Muse
Yeah. Good afternoon, and thank you for taking the question. Bigger picture question for you, Jen-Hsun. When you think about the million-x improvement in GPU compute over the last decade and expectations for similar improvements in the next, how do your customers think about the long-term usability of their NVIDIA investments that they're making today? Do today's training clusters become tomorrow's inference clusters? How do you see this playing out? Thank you.
是的。下午好,感谢您提问。对于您,Jen-Hsun,有一个更宏观的问题。当您考虑过去十年中 GPU 计算能力的百万倍提升以及对未来类似提升的期望时,您的客户如何看待他们今天所做的 NVIDIA 投资的长期可用性?今天的训练集群会成为明天的推断集群吗?您如何看待这种发展?谢谢。
Jensen Huang 黄仁勋
Hey, CJ. Thanks for the question. Yeah, that's the really cool part. If you look at the reason why we're able to improve performance so much, it's because we have two characteristics about our platform. One is that it's accelerated. And two, it's programmable. It's not brittle. NVIDIA is the only architecture that has gone from the very, very beginning -- literally the very beginning, when CNNs and Alex Krizhevsky and Ilya Sutskever and Geoff Hinton first revealed AlexNet -- all the way through to RNNs, to LSTMs, to RLs, to deep learning RLs, to transformers, to every single version.
嘿,CJ。感谢提问。是的,这正是非常酷的部分。如果你看看我们能把性能提升这么多的原因,那是因为我们的平台有两个特性。其一,它是加速的;其二,它是可编程的,不是僵化脆弱的。NVIDIA 是唯一一个从最初——真正的最初,即 Alex Krizhevsky、Ilya Sutskever 和 Geoff Hinton 首次发布 AlexNet、CNN 兴起之时——一路走过 RNN、LSTM、强化学习、深度强化学习、Transformer,直至每一个版本都能支持的架构。
Every single version and every species that have come along, vision transformers, multi-modality transformers, every single -- and now time sequence stuff, and every single variation, every single species of AI that has come along, we've been able to support it, optimize our stack for it and deploy it into our installed base. This is really the great amazing part. On the one hand, we can invent new architectures and new technologies like our Tensor cores, like our transformer engine for Tensor cores, improved new numerical formats and structures of processing like we've done with the different generations of Tensor cores, meanwhile, supporting the installed base at the same time.
随之出现的每一个版本、每一种模型——Vision Transformer、多模态 Transformer、如今的时间序列模型,以及每一个变体、每一种出现的 AI"物种"——我们都能够支持,针对它优化我们的软件栈,并将其部署到我们的已装机基础上。这才是真正了不起的地方。一方面,我们可以发明新的架构和新技术,比如我们的 Tensor Core、面向 Tensor Core 的 Transformer 引擎,以及改进的新数值格式和处理结构,就像我们在各代 Tensor Core 中所做的那样;与此同时,我们还能支持已有的装机基础。
And so, as a result, we take all of our new software algorithm inventions -- all of the industry's new model inventions -- and they run on our installed base on the one hand. On the other hand, whenever we see something revolutionary, like transformers, we can create something brand new like the Hopper Transformer Engine and implement it into the future. And so we simultaneously have this ability to bring software to the installed base and keep making it better and better, so our customers' installed base is enriched over time with our new software.
因此,我们把所有新的软件算法发明——整个行业所有新的模型发明——一方面放到我们的已装机基础上运行。另一方面,每当我们看到某种革命性的东西,比如 Transformer,我们就可以创造出像 Hopper Transformer 引擎这样全新的东西,并在未来的产品中实现它。因此,我们同时具备这样的能力:把软件带到已装机基础上并不断改进,让我们客户的已装机基础随着我们的新软件日益增值。
On the other hand, for new technologies, we create revolutionary capabilities. Don't be surprised if, in our future generations, all of a sudden amazing breakthroughs in large language models are made possible. And those breakthroughs, some of which will be in software because they run CUDA, will be made available to the installed base. And so we carry everybody with us on the one hand, and we make giant breakthroughs on the other hand.
另一方面,对于新技术,创造革命性的能力。如果在我们的未来一代中,突然出现了大型语言模型方面的惊人突破,那也不足为奇。这些突破中的一些将是软件方面的,因为它们运行 CUDA,将会提供给已安装的基础设施。因此,我们一方面带领着每个人。另一方面,我们取得了巨大的突破。
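The Tensor Core idea Huang references -- new numerical formats, with low-precision multiplies feeding a higher-precision accumulator -- can be imitated in a few lines. This NumPy sketch only simulates the numerics (FP16 storage, FP32 accumulation); real Tensor Cores do this in hardware, and the matrix sizes here are arbitrary assumptions:
黄仁勋提到的 Tensor Core 思路——采用新的数值格式,以低精度做乘法、以更高精度做累加——可以用几行代码来模拟。下面的 NumPy 草图只模拟其数值行为(FP16 存储、FP32 累加);真正的 Tensor Core 是在硬件中完成的,这里的矩阵规模也是任意假设:

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((128, 128)).astype(np.float32)
b = rng.standard_normal((128, 128)).astype(np.float32)

# Operands stored in half precision, as a Tensor Core would consume them.
a16 = a.astype(np.float16)
b16 = b.astype(np.float16)

# Multiply in low precision but accumulate dot products in float32 --
# the higher-precision accumulator is what keeps the result usable.
out = a16.astype(np.float32) @ b16.astype(np.float32)

# Compare against the full-precision reference matmul.
err = float(np.abs(out - a @ b).max())
print(f"max abs error vs FP32: {err:.4f}")
```

The rounding error stays small relative to the outputs, which is why the format trade is worthwhile: far less memory traffic and silicon per multiply for a tolerable precision loss -- the same idea the Transformer Engine extends to FP8.
舍入误差相对于输出值很小,这就是这种数值格式权衡划算的原因:每次乘法占用的显存带宽和芯片面积大幅减少,而精度损失可以接受——Transformer 引擎把同样的思路进一步扩展到了 FP8。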
Operator 操作员
Your next question comes from the line of Aaron Rakers from Wells Fargo. Your line is open.
您的下一个问题来自富国银行的 Aaron Rakers。请发问。
Aaron Rakers
Yeah. Thanks for taking the question. I wanted to ask about the China business. I know that in your prepared comments you said that you started shipping some alternative solutions into China. You also put it out that you expect that contribution to continue to be about a mid-single digit percent of your total data center business. So I guess the question is what is the extent of products that you're shipping today into the China market and why should we not expect that maybe other alternative solutions come to the market and expand your breadth to participate in that in that opportunity again? Thank you.
是的。感谢您提问。我想问一下关于中国业务的事情。我知道在您的准备好的评论中,您提到您已经开始向中国运送一些替代解决方案。您还表示您预计这一贡献将继续占您整个数据中心业务的中位数个位数百分比。所以我想问的是,您今天向中国市场运送的产品范围是什么程度,为什么我们不应该期望其他替代解决方案进入市场并扩大您的广度以参与那个机会呢?谢谢。
Jensen Huang 黄仁勋
Think of it, at the core, this way: the US government wants to limit the latest capabilities of NVIDIA's accelerated computing and AI to the Chinese market, and the U.S. government would like to see us be as successful in China as possible. Within those two constraints, within those two pillars if you will, are the restrictions. And so we had to pause when the new restrictions came out. We immediately paused, so that we understood what the restrictions are, and reconfigured our products in a way that is not software hackable in any way. And that took some time. And so we reset our product offering to China, and now we're sampling to customers in China.
从根本上想一想,记住美国政府希望限制英伟达加速计算和人工智能的最新能力在中国市场的应用。美国政府希望我们在中国取得尽可能大的成功。在这两个限制条件下,在这两个支柱中,有一些限制,所以当新的限制出台时,我们不得不暂停。我们立即停止了。因此,我们了解了限制是什么,重新配置了我们的产品,以确保在任何情况下都不会被软件黑客攻击。这需要一些时间。因此,我们重新设定了我们在中国的产品供应,并现在向中国客户提供样品。
And we're going to do our best to compete in that marketplace and succeed in that marketplace within the -- within the specifications of the restriction. And so that's it. We -- this last quarter, we -- our business significantly declined as we -- as we paused in the marketplace. We stopped shipping in the marketplace. We expect this quarter to be about the same. But after, that hopefully we can go compete for our business and do our best, and we'll see how it turns out.
我们将尽最大努力在市场上竞争并在市场上取得成功,符合限制规定。这就是我们的计划。上个季度,我们的业务显著下降,因为我们在市场上暂停了。我们停止了在市场上的发货。我们预计本季度情况大致相同。但希望之后我们能够竞争我们的业务并尽力而为,看看结果如何。
Operator 操作员
Your next question comes from the line of Harsh Kumar from Piper Sandler. Your line is open.
您的下一个问题来自 Piper Sandler 的 Harsh Kumar。您可以发言了。
Harsh Kumar 哈什·库马
Yeah. Hey, Jen-Hsun, Colette and NVIDIA team. First of all, congratulations on a stunning quarter and guide. I wanted to talk a little bit about your software business. It's pleasing to hear that it's over $1 billion, but I was hoping, Jen-Hsun or Colette, you could just help us understand what the different parts and pieces of the software business are? In other words, just help us unpack it a little bit, so we can get a better understanding of where that growth is coming from.
是的。嗨,Jen-Hsun,Colette 和 NVIDIA 团队。首先,恭喜你们取得了惊人的季度业绩和指引。我想谈一下,关于你们的软件业务,很高兴听到它超过了 10 亿美元,但我希望 Jen-Hsun 或 Colette 能帮助我们了解软件业务的不同部分和组成部分是什么?换句话说,帮助我们稍微解开一下,这样我们就能更好地了解增长来自何处。
Jensen Huang 黄仁勋
Let me take a step back and explain the fundamental reason why NVIDIA will be very successful in software. So first, as you know, accelerated computing really grew in the cloud. In the cloud, the cloud service providers have really large engineering teams and we work with them in a way that allows them to operate and manage their own business. And whenever there are any issues, we have large teams assigned to them. And their engineering teams are working directly with our engineering teams and we enhance, we fix, we maintain, we patch the complicated stack of software that's involved in accelerated computing.
让我退后一步,解释一下英伟达在软件方面将会非常成功的根本原因。首先,正如您所知,加速计算真正发展起来是在云端。在云端,云服务提供商拥有非常庞大的工程团队,我们与他们合作的方式使他们能够运营和管理自己的业务。每当出现任何问题时,我们都会为他们指派大型团队。他们的工程团队直接与我们的工程团队合作,我们增强、修复、维护、打补丁加速计算所涉及的复杂软件堆栈。
As you know, accelerated computing is very different than general-purpose computing. You're not starting from a program like C++, where you compile it and things run on all your CPUs. The stacks of software necessary for every domain -- from data processing of SQL structured data versus all the images and text and PDF, which are unstructured, to classical machine learning, to computer vision, to speech, to large language models, to recommender systems -- all of these things require different software stacks. That's the reason why NVIDIA has hundreds of libraries. If you don't have software, you can't open new markets. If you don't have software, you can't open and enable new applications.
正如您所知,加速计算与通用计算有很大不同。您不是从像 C++这样的程序开始。您编译它,然后在所有 CPU 上运行。每个领域所需的软件堆栈都不同,从数据处理 SQL 结构化数据与所有图像、文本和 PDF(非结构化数据),到经典机器学习、计算机视觉、语音、大型语言模型,以及推荐系统。所有这些都需要不同的软件堆栈。这就是为什么英伟达拥有数百个库的原因。如果没有软件,您就无法开拓新市场。如果没有软件,您就无法开发和启用新应用。
Software is fundamentally necessary for accelerated computing. This is the fundamental difference between accelerated computing and general-purpose computing that most people took a long time to understand. And now, people understand that the software is really key. And the way that we work with CSPs, that's really easy. We have large teams that are working with their large teams.
软件对于加速计算是基本必需的。这是加速计算和通用计算之间的根本区别,大多数人花了很长时间才明白。现在,人们明白软件真的很关键。我们与 CSP 合作的方式非常简单。我们有大型团队与他们的大型团队合作。
However, now that generative AI is enabling every enterprise and every enterprise software company to embrace accelerated computing -- and it is now essential to embrace accelerated computing, because it is no longer possible, no longer likely anyhow, to sustain improved throughput through just general-purpose computing -- all of these enterprise software companies and enterprise companies don't have large engineering teams to be able to maintain and optimize their software stacks to run across all of the world's clouds and private clouds and on-prem.
然而,现在生成式人工智能使每家企业和每家企业软件公司都能拥抱加速计算——而且——现在必须拥抱加速计算,因为仅仅通过通用计算已经不再可能,也不太可能维持改进的吞吐量。所有这些企业软件公司和企业公司都没有庞大的工程团队来维护和优化他们的软件堆栈,使其能够在全球所有云和私有云以及本地环境中运行。
So we are going to do the management, the optimization, the patching, the tuning, the installed-base optimization for all of their software stacks. And we containerize them into our stack. We call it NVIDIA AI Enterprise. And the way we go to market with it is that think of that NVIDIA AI Enterprise now as a run time like an operating system, it's an operating system for artificial intelligence.
因此,我们将为他们所有的软件堆栈进行管理、优化、打补丁、调优和安装基础优化。然后将它们容器化到我们的堆栈中。我们称之为 NVIDIA AI Enterprise。我们推出的市场策略是,将 NVIDIA AI Enterprise 视为一个运行时,就像一个操作系统,它是人工智能的操作系统。
And we charge $4,500 per GPU per year. And my guess is that every enterprise in the world, every software enterprise company that are deploying software in all the clouds and private clouds and on-prem, will run on NVIDIA AI Enterprise, especially obviously for our GPUs. And so this is going to likely be a very significant business over time. We're off to a great start. And Colette mentioned that it's already at $1 billion run rate and we're really just getting started.
我们每个 GPU 每年收费 4,500 美元。我猜想,世界上每一家企业、每一家在各类公有云、私有云和本地环境中部署软件的企业软件公司,都会运行 NVIDIA AI Enterprise,尤其显然是在我们的 GPU 上。因此,随着时间推移,这很可能会成为一项非常重要的业务。我们已经有了良好的开端。Colette 提到它的年化运行率(run rate)已达到 10 亿美元,而我们真的才刚刚起步。
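At the stated $4,500 per GPU per year, the NVIDIA AI Enterprise run rate for a fleet is straightforward arithmetic; the fleet sizes below are hypothetical examples, not figures from the call:
按电话会议中给出的每 GPU 每年 4,500 美元计算,一个 GPU 集群的 NVIDIA AI Enterprise 年费只是简单的算术;下面的集群规模是假设的示例,并非会议中提到的数字:

```python
# Stated on the call: NVIDIA AI Enterprise is priced per GPU per year.
PRICE_PER_GPU_PER_YEAR_USD = 4_500

def ai_enterprise_annual_cost(num_gpus: int) -> int:
    """Annual licensing run rate (USD) for a fleet of `num_gpus` GPUs."""
    return num_gpus * PRICE_PER_GPU_PER_YEAR_USD

# Hypothetical fleets: one 8-GPU server, and a 1,000-GPU cluster.
print(ai_enterprise_annual_cost(8))      # 36000
print(ai_enterprise_annual_cost(1_000))  # 4500000
```

As a rough illustration only: if the ~$1 billion run rate were entirely per-GPU subscriptions at this price, it would correspond to on the order of 220,000 licensed GPUs.
仅作粗略示意:如果约 10 亿美元的年化运行率全部来自按此价格计费的每 GPU 订阅,对应的授权 GPU 数量约在 22 万颗的量级。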
Operator 操作员
Thank you. I will now turn the call back over to Jen-Hsun Huang, CEO, for closing remarks.
谢谢。现在我将把电话交回给首席执行官黄仁勋,以便结束讲话。
Jensen Huang 黄仁勋
The computer industry is making two simultaneous platform shifts at the same time. The trillion-dollar installed base of data centers is transitioning from general-purpose to accelerated computing. Every data center will be accelerated so the world can keep up with the computing demand, with increasing throughput, while managing costs and energy. The incredible speedup that NVIDIA enabled made possible a whole new computing paradigm, generative AI, where software can learn, understand and generate any information, from human language to the structure of biology and the 3D world.
计算机行业正在同时经历两场平台转型。价值万亿美元的数据中心装机量正在从通用计算向加速计算转型。每个数据中心都将被加速,使世界能够在控制成本和能耗的同时,以不断提升的吞吐量跟上计算需求。NVIDIA 所实现的惊人加速,催生了一种全新的计算范式——生成式人工智能:软件能够学习、理解并生成任何信息,从人类语言到生物结构和 3D 世界。
We are now at the beginning of a new industry where AI-dedicated data centers process massive raw data to refine it into digital intelligence. Like AC power generation plants of the last industrial revolution, NVIDIA AI supercomputers are essentially AI generation factories of this Industrial Revolution. Every company in every industry is fundamentally built on their proprietary business intelligence, and in the future, their proprietary generative AI.
我们现在正处于一个新行业的开端,人工智能专用数据中心处理海量原始数据,将其精炼为数字智能。就像上一次工业革命的交流电发电厂一样,NVIDIA 人工智能超级计算机本质上是这次工业革命的人工智能生成工厂。每个行业的每家公司基本上都是建立在他们专有的商业智能之上,未来也将建立在他们专有的生成式人工智能之上。
Generative AI has kicked off a whole new investment cycle to build the next trillion dollars of infrastructure of AI generation factories. We believe these two trends will drive a doubling of the world's data center infrastructure installed base in the next five years and will represent an annual market opportunity in the hundreds of billions. This new AI infrastructure will open up a whole new world of applications not possible today. We started the AI journey with the hyperscale cloud providers and consumer internet companies. And now, every industry is on board, from automotive to healthcare to financial services, to industrial to telecom, media and entertainment.
生成式人工智能开启了一个全新的投资周期,以建设下一个价值万亿美元的 AI 生成工厂基础设施。我们相信,这两大趋势将推动全球数据中心基础设施装机量在未来五年内翻一番,并代表每年数千亿美元的市场机会。这种新的 AI 基础设施将开启一个今天还无法实现的全新应用世界。我们与超大规模云服务提供商和消费互联网公司一起开启了 AI 之旅。而现在,每个行业都已加入进来——从汽车到医疗,到金融服务,到工业,再到电信、媒体和娱乐。
NVIDIA's full stack computing platform with industry-specific applications frameworks and a huge developer and partner ecosystem, gives us the speed, scale and reach to help every company -- to help companies in every industry become an AI company. We have so much to share with you at next month's GTC in San Jose. So be sure to join us. We look forward to updating you on our progress next quarter.
英伟达的全栈计算平台具有行业特定应用框架和庞大的开发者和合作伙伴生态系统,为我们提供了速度、规模和覆盖范围,帮助每家公司——帮助各行各业的公司成为人工智能公司。我们在下个月的圣何塞 GTC 大会上有很多内容要与您分享。请务必加入我们。我们期待在下个季度向您更新我们的进展。
Operator 操作员
This concludes today's conference call. You may now disconnect.
今天的电话会议到此结束。您可以挂断电话了。