2024-08-28 NVIDIA Corporation (NVDA) Q2 2025 Earnings Call Transcript

NVIDIA Corporation (NASDAQ:NVDA) Q2 2025 Earnings Conference Call August 28, 2024 5:00 PM ET

Company Participants

Stewart Stecker - Investor Relations
Colette Kress - Executive Vice President and Chief Financial Officer
Jensen Huang - President and Chief Executive Officer

Conference Call Participants

Vivek Arya - Bank of America Securities
Toshiya Hari - Goldman Sachs
Joe Moore - Morgan Stanley
Matt Ramsay - TD Cowen
Timothy Arcuri - UBS
Stacy Rasgon - Bernstein Research
Ben Reitzes - Melius
C.J. Muse - Cantor Fitzgerald
Aaron Rakers - Wells Fargo

Operator

Good afternoon. My name is Abby, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's second quarter earnings call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. [Operator Instructions] Thank you.

And Mr. Stewart Stecker, you may begin your conference.

Stewart Stecker

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the second quarter of fiscal 2025.

With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

I would like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2025. The content of today's call is NVIDIA's property. It cannot be reproduced or transcribed without prior written consent. During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially.

For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 28th, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

Let me highlight an upcoming event for the financial community. We will be attending the Goldman Sachs Communacopia and Technology Conference on September 11 in San Francisco, where Jensen will participate in a keynote fireside chat. Our earnings call to discuss the results of our third quarter of fiscal 2025 is scheduled for Wednesday, November 20th, 2024.

With that, let me turn the call over to Colette.

Colette Kress

Thanks, Stewart. Q2 was another record quarter. Revenue of $30 billion was up 15% sequentially and up 122% year-on-year and well above our outlook of $28 billion. Starting with data center, data center revenue of $26.3 billion was a record, up 16% sequentially and up 154% year-on-year, driven by strong demand for NVIDIA Hopper, GPU computing, and our networking platforms.

Compute revenue grew more than 2.5 times and networking revenue more than 2 times from last year. Cloud service providers represented roughly 45% of our data center revenue, and more than 50% stemmed from consumer Internet and enterprise companies.

Customers continue to accelerate their Hopper architecture purchases while gearing up to adopt Blackwell. Key workloads driving our data center growth include generative AI model training and inferencing; video, image, and text data pre- and post-processing with CUDA and AI workloads; synthetic data generation; AI-powered recommender systems; and SQL and vector database processing.

Next-generation models will require 10 to 20 times more compute to train with significantly more data. The trend is expected to continue. Over the trailing four quarters, we estimate that inference drove more than 40% of our data center revenue. CSPs, consumer Internet companies, and enterprises benefit from the incredible throughput and efficiency of NVIDIA's inference platform.

Demand for NVIDIA is coming from frontier model makers, consumer Internet services, and tens of thousands of companies and startups building generative AI applications for consumers, advertising, education, enterprise and healthcare, and robotics. Developers desire NVIDIA's rich ecosystem and availability in every cloud. CSPs appreciate the broad adoption of NVIDIA and are growing their NVIDIA capacity given the high demand.

The NVIDIA H200 platform began ramping in Q2, shipping to large CSPs, consumer Internet, and enterprise companies. The NVIDIA H200 builds upon the strength of our Hopper architecture, offering over 40% more memory bandwidth compared to the H100.

Our data center revenue in China grew sequentially in Q2 and is a significant contributor to our data center revenue. As a percentage of total data center revenue, it remains below levels seen prior to the imposition of export controls. We continue to expect the China market to be very competitive going forward. The latest round of MLPerf inference benchmarks highlighted NVIDIA's inference leadership, with both the NVIDIA Hopper and Blackwell platforms combining to win gold medals on all tasks.

At Computex, NVIDIA, with the top computer manufacturers, unveiled an array of Blackwell architecture-powered systems and NVIDIA networking for building AI factories and data centers. With the NVIDIA MGX modular reference architecture, our OEM and ODM partners are building more than 100 Blackwell-based systems, designed quickly and cost-effectively.

The NVIDIA Blackwell platform brings together multiple GPUs, CPUs, DPUs, NVLink, NVLink Switch, and networking chips, systems, and NVIDIA CUDA software to power the next generation of AI across use cases, industries, and countries. The NVIDIA GB200 NVL72 system with fifth-generation NVLink enables all 72 GPUs to act as a single GPU and deliver up to 30 times faster inference for LLM workloads, unlocking the ability to run trillion-parameter models in real time.

Hopper demand is strong, and Blackwell is widely sampling. We executed a change to the Blackwell GPU mask to improve production yields. Blackwell production ramp is scheduled to begin in the fourth quarter and continue into fiscal year '26.

In Q4, we expect to ship several billion dollars in Blackwell revenue. Hopper shipments are expected to increase in the second half of fiscal 2025. Hopper supply and availability have improved. Demand for Blackwell platforms is well above supply, and we expect this to continue into next year. Networking revenue increased 16% sequentially.

Our Ethernet for AI revenue, which includes our Spectrum-X end-to-end Ethernet platform, doubled sequentially, with hundreds of customers adopting our Ethernet offerings. Spectrum-X has broad market support from OEM and ODM partners and is being adopted by CSPs, GPU cloud providers, and enterprises, including xAI to connect the largest GPU compute cluster in the world.

Spectrum-X supercharges Ethernet for AI processing and delivers 1.6 times the performance of traditional Ethernet. We plan to launch new Spectrum-X products every year to support demand for scaling compute clusters from tens of thousands of GPUs today to millions of GPUs in the near future. Spectrum-X is well on track to begin a multibillion-dollar product line within a year. Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their society and industries.

Japan's National Institute of Advanced Industrial Science and Technology is building its AI Bridging Cloud Infrastructure 3.0 supercomputer with NVIDIA. We believe sovereign AI revenue will reach low double-digit billions this year. The enterprise AI wave has started. Enterprises also drove sequential revenue growth in the quarter. We are working with most of the Fortune 100 companies on AI initiatives across industries and geographies. A range of applications are fueling our growth, including AI-powered chatbots, generative AI copilots, and agents to build new, monetizable business applications and enhance employee productivity.

Amdocs is using NVIDIA generative AI for their smart agent, transforming the customer experience and reducing customer service costs by 30%. ServiceNow is using NVIDIA for its Now Assist offering, the fastest-growing new product in the company's history. SAP is using NVIDIA to build Joule copilots. Cohesity is using NVIDIA to build their generative AI agent and lower generative AI development costs. Snowflake, which serves over 3 billion queries a day for over 10,000 enterprise customers, is working with NVIDIA to build copilots. And lastly, Wistron is using NVIDIA AI and Omniverse to reduce end-to-end cycle times for their factories by 50%.

Automotive was a key growth driver for the quarter as every automaker developing autonomous vehicle technology is using NVIDIA in their data centers. Automotive will drive multi-billion dollars in revenue across on-prem and cloud consumption and will grow as next-generation AV models require significantly more compute. Healthcare is also on its way to being a multi-billion dollar business as AI revolutionizes medical imaging, surgical robots, patient care, electronic health record processing, and drug discovery.

During the quarter, we announced a new NVIDIA AI Foundry service to supercharge generative AI for the world's enterprises with Meta's Llama 3.1 collection of models. This marks a watershed moment for enterprise AI. Companies for the first time can leverage the capabilities of an open-source, frontier-level model to develop customized AI applications to encode their institutional knowledge into an AI flywheel to automate and accelerate their business.

Accenture is the first to adopt the new service to build custom Llama 3.1 models, both for its own use and to assist clients seeking to deploy generative AI applications. NVIDIA NIMs accelerate and simplify model deployment. Companies across healthcare, energy, financial services, retail, transportation, and telecommunications are adopting NIMs, including Aramco, Lowe's, and Uber. AT&T realized 70% cost savings and an 8 times latency reduction after moving to NIMs for generative AI call transcription and classification.

Over 150 partners are embedding NIMs across every layer of the AI ecosystem. We announced NIM Agent Blueprints, a catalog of customizable reference applications that include a full suite of software for building and deploying enterprise generative AI applications. With NIM Agent Blueprints, enterprises can refine their AI applications over time, creating a data-driven AI flywheel. The first NIM Agent Blueprints include workloads for customer service, computer-aided drug discovery, and enterprise retrieval-augmented generation.

Our system integrators, technology solution providers, and system builders are bringing NVIDIA NIM Agent Blueprints to enterprises. NVIDIA NIM and NIM Agent Blueprints are available through the NVIDIA AI Enterprise software platform, which has great momentum. We expect our software, SaaS, and support revenue to approach a $2 billion annual run rate exiting this year, with NVIDIA AI Enterprise notably contributing to growth.

Moving to gaming and AI PCs. Gaming revenue of $2.88 billion increased 9% sequentially and 16% year-on-year. We saw sequential growth in console, notebook, and desktop revenue; demand is strong and growing, and channel inventory remains healthy. Every PC with RTX is an AI PC. RTX PCs can deliver up to 1,300 AI TOPS, and there are now over 200 RTX AI laptop designs from leading PC manufacturers.

With 600 AI-powered applications and games and an installed base of 100 million devices, RTX is set to revolutionize consumer experiences with generative AI. NVIDIA ACE, a suite of generative AI technologies, is available for RTX AI PCs. Mecha BREAK is the first game to use NVIDIA ACE, including our small language model, Minitron-4B, optimized for on-device inference.

The NVIDIA gaming ecosystem continues to grow, recently added RTX and DLSS titles including Indiana Jones and the Great Circle, Dune Awakening, and Dragon Age: The Veilguard. The GeForce NOW library continues to expand with total catalog size of over 2,000 titles, the most content of any cloud gaming service.

Moving to Pro visualization. Revenue of $454 million was up 6% sequentially and 20% year-on-year. Demand is being driven by AI and graphics use cases, including model fine-tuning and Omniverse-related workloads. Automotive and manufacturing were among the key industry verticals driving growth this quarter. Companies are racing to digitalize workflows to drive efficiency across their operations.

The world's largest electronics manufacturer, Foxconn, is using NVIDIA Omniverse to power digital twins of the physical plants that produce NVIDIA Blackwell systems. And several large global enterprises, including Mercedes-Benz, signed multi-year contracts for NVIDIA Omniverse Cloud to build industrial digital twins of factories. We announced new NVIDIA USD NIMs and connectors to open Omniverse to new industries and enable developers to incorporate generative AI copilots and agents into USD workflows, accelerating their ability to build highly accurate virtual worlds. WPP is implementing USD NIM microservices in its generative AI-enabled content creation pipeline for customers such as The Coca-Cola Company.

Moving to automotive and robotics. Revenue was $346 million, up 5% sequentially and up 37% year-on-year. Year-on-year growth was driven by new customer ramps in self-driving platforms and increased demand for AI cockpit solutions. At the Computer Vision and Pattern Recognition conference, NVIDIA won the Autonomous Grand Challenge in the End-to-End Driving at Scale category, outperforming more than 400 entries worldwide. Boston Dynamics, BYD Electronics, Figure, Intrinsic, Siemens, Skild AI, and Teradyne Robotics are using the NVIDIA Isaac robotics platform for autonomous robot arms, humanoids, and mobile robots.

Now moving to the rest of the P&L. GAAP gross margins were 75.1% and non-GAAP gross margins were 75.7%, down sequentially due to a higher mix of new products within data center and inventory provisions for low-yielding Blackwell material. Sequentially, GAAP and non-GAAP operating expenses were up 12%, primarily reflecting higher compensation-related costs. Cash flow from operations was $14.5 billion.

In Q2, we utilized cash of $7.4 billion towards shareholder returns in the form of share repurchases and cash dividends, reflecting the increase in dividend per share. Our Board of Directors recently approved a $50 billion share repurchase authorization to add to our remaining $7.5 billion of authorization at the end of Q2.

Let me turn to the outlook for the third quarter. Total revenue is expected to be $32.5 billion, plus or minus 2%. Our third-quarter revenue outlook incorporates continued growth of our Hopper architecture and sampling of our Blackwell products. We expect Blackwell production to ramp in Q4. GAAP and non-GAAP gross margins are expected to be 74.4% and 75%, respectively, plus or minus 50 basis points.

As our data center mix continues to shift to new products, we expect this trend to continue into the fourth quarter of fiscal 2025. For the full-year, we expect gross margins to be in the mid-70% range. GAAP and non-GAAP operating expenses are expected to be approximately $4.3 billion and $3.0 billion, respectively.

Full-year operating expenses are expected to grow in the mid-to-upper 40% range as we work on developing our next generation of products. GAAP and non-GAAP other income and expenses are expected to be about $350 million, including gains and losses from non-affiliated investments and publicly-held equity securities.

GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website. We are now going to open the call for questions.

Operator, would you please help us poll for questions.

Question-and-Answer Session

Operator

Thank you. [Operator Instructions] And your first question comes from the line of Vivek Arya with Bank of America Securities. Your line is open.

Vivek Arya

Thanks for taking my question. Jensen, you mentioned in the prepared comments that there is a change in the Blackwell GPU mask. I'm curious, are there any other incremental changes in back-end packaging or anything else? And relatedly, you suggested that you could ship several billion dollars of Blackwell in Q4 despite the change in the design. Is it because all these issues will be solved by then? Just help us size the overall impact of any changes in Blackwell timing, what that means to your revenue profile, and how are customers reacting to it?

Jensen Huang

Yes. Thanks, Vivek. The change to the mask is complete. There were no functional changes necessary. And so we're sampling functional samples of Blackwell -- Grace Blackwell -- in a variety of system configurations as we speak. There are some 100 different types of Blackwell-based systems that were shown at Computex, and we're enabling our ecosystem to start sampling those. The functionality of Blackwell is as it is, and we expect to start production in Q4.

Operator

And your next question comes from the line of Toshiya Hari with Goldman Sachs. Your line is open.

Toshiya Hari

Hi, thank you so much for taking the question. Jensen, I had a relatively longer-term question. As you may know, there's a pretty heated debate in the market on your customers and customer's customers return on investment and what that means for the sustainability of CapEx going forward. Internally at NVIDIA, like what are you guys watching? What's on your dashboard as you try to gauge customer return and how that impacts CapEx?

And then a quick follow-up maybe for Colette. I think your sovereign AI number for the full-year went up maybe a couple of billion. What's driving the improved outlook? And how should we think about fiscal '26? Thank you.

Jensen Huang

Thanks, Toshiya. First of all, when I said ship production in Q4, I mean shipping out. I don't mean starting production, but shipping out. On the longer-term question, let's take a step back. You've heard me say that we're going through two platform transitions at the same time. The first one is transitioning from general-purpose computing to accelerated computing. And the reason for that is because CPU scaling has been known to be slowing for some time, and it has slowed to a crawl. And yet the amount of computing demand continues to grow quite significantly. You could maybe even estimate it to be doubling every single year.

And so if we don't have a new approach, computing inflation would be driving up the cost for every company, and it would be driving up the energy consumption of data centers around the world. In fact, you're seeing that. And so the answer is accelerated computing. We know that accelerated computing, of course, speeds up applications. It also enables you to do computing at a much larger scale, for example, scientific simulations or database processing. But what that translates directly to is lower cost and lower energy consumed.

And in fact, this week, a blog came out that talked about a whole bunch of new libraries that we offer. And that's really the core of the first platform transition, going from general-purpose computing to accelerated computing. And it's not unusual to see someone save 90% of their computing cost. And the reason for that is, of course, that if you just sped up an application 50x, you would expect the computing cost to decline quite significantly.

The second was enabled by accelerated computing, because we drove down the cost of training large language models, of training deep learning, so incredibly that it is now possible to have gigantic-scale models, multi-trillion-parameter models, and pre-train them on just about the world's knowledge corpus and let the model go figure out how to understand human language representation, how to codify knowledge into its neural networks, and how to learn reasoning -- which caused the generative AI revolution.

Now, generative AI. Taking a step back about why it is that we went so deeply into it: it's not just a feature, it's not just a capability, it's a fundamental new way of doing software. Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what the expected answers are, what our previous observations are, and then have it figure out what the algorithm is, what the function is. AI is a bit of a universal function approximator, and it learns the function.

And so you could learn the function of almost anything: anything that's predictable, anything that has structure, anything that you have previous examples of. And so now here we are with generative AI. It's a fundamental new form of computer science. It's affecting how every layer of computing is done, from CPU to GPU, from human-engineered algorithms to machine-learned algorithms. And the type of applications you could now develop and produce is fundamentally remarkable.

And there are several things that are happening in generative AI. So the first thing that's happening is the frontier models are growing in quite substantial scale, and we're still all seeing the benefits of scaling. Whenever you double the size of a model, you also have to more than double the size of the dataset to go train it. And so the amount of flops necessary in order to create that model goes up quadratically. And so it's not unexpected to see that next-generation models could take 10, 20, 40 times more compute than the last generation.

So we have to continue to drive the generational performance up quite significantly, so we can drive down the energy consumed and drive down the cost necessary to do it. So the first one is: there are larger frontier models trained on more modalities, and surprisingly, there are more frontier model makers than last year. And so you have more on more on more. That's one of the dynamics going on in generative AI. The second is, although it's below the tip of the iceberg, what we see are ChatGPT, image generators, and coding. We use generative AI for coding quite extensively here at NVIDIA now.

We, of course, have a lot of digital designers and things like that. But those are kind of the tip of the iceberg. What's below the iceberg are the largest computing systems in the world today, which -- and you've heard me talk about this in the past -- are recommender systems, now moving from CPUs to generative AI. So recommender systems, ad generation, custom ad generation targeting ads at very large scale and quite hyper-targeted, search, and user-generated content. These are all very large-scale applications that have now evolved to generative AI.

Of course, generative AI startups are generating tens of billions of dollars of cloud renting opportunities for our cloud partners, and then there is sovereign AI: countries that are now realizing that their data is their natural and national resource, and that they have to use AI and build their own AI infrastructure so that they can have their own digital intelligence.
当然,生成式AI初创公司的数量为我们的云合作伙伴和主权AI带来了数百亿美元的云租赁机会。各国现在意识到,他们的数据是他们的自然资源和国家资源,他们必须使用AI,构建自己的AI基础设施,这样他们才能拥有自己的数字智能。

Enterprise AI, as Colette mentioned earlier, is starting, and you might have seen our announcement that the world's leading IT companies are joining us to take the NVIDIA AI Enterprise platform to the world's enterprises. So many of the companies that we're talking to are just incredibly excited to drive more productivity out of their company.
企业人工智能,正如科莱特之前提到的,正在起步,您可能已经看到我们的公告,世界领先的 IT 公司正在加入我们,将 NVIDIA AI 企业平台带入全球企业。我们正在与之交谈的公司中,许多公司对提高公司生产力感到无比兴奋。

And then general robotics. The big transformation last year was that we are now able to learn physical AI from watching video and human demonstration, and from synthetic data generation with reinforcement learning from systems like Omniverse. We are now able to work with just about every robotics company to start thinking about and building general robotics. And so you can see that there are just so many different directions that generative AI is going. And so we're actually seeing the momentum of generative AI accelerating.
然后我——然后是通用机器人。去年发生了重大转变,因为我们现在能够通过观看视频和人类演示以及从像 Omniverse 这样的系统中进行强化学习生成合成数据来学习物理 AI。我们现在能够与几乎所有的机器人公司合作,开始思考并构建通用机器人。因此,您可以看到生成 AI 正在朝着许多不同的方向发展。因此,我们实际上看到生成 AI 的势头在加速。
A long answer that never addresses the question of inputs versus outputs, only a vaguely correct implication: for humanity, most humans are dead weight.

Colette Kress 科莱特·克雷斯

And Toshiya, to answer your question regarding sovereign AI and our goals in terms of growth and revenue: it certainly is a unique and growing opportunity, something that surfaced with generative AI and the desire of countries around the world to have their own generative AI that can incorporate their own language, their own culture, and their own data in that country. So there is more and more excitement around these models and what they can do specifically for those countries. So yes, we are seeing some growth opportunity in front of us.
Toshiya,关于你提到的主权AI和我们在增长和收入方面的目标,这是一个独特且不断增长的机会。这一机会伴随着生成式AI的出现,以及全球各国希望拥有自己的生成式AI的愿望,这些AI能够整合他们自己的语言、文化和本国的数据。因此,围绕这些模型的兴趣越来越浓厚,它们能够为这些国家提供特定的功能。所以,是的,我们确实看到了面前的一些增长机会。

Operator 主持人

And your next question comes from the line of Joe Moore with Morgan Stanley. Your line is open.
您的下一个问题来自摩根士丹利的乔·摩尔。您的线路已开启。

Joe Moore 乔·摩尔

Great. Thank you. Jensen, in the press release, you talked about Blackwell anticipation being incredible, but it seems like Hopper demand is also really strong. I mean, you're guiding for a very strong quarter without Blackwell in October. So how long do you see sort of coexisting strong demand for both? And can you talk about the transition to Blackwell? Do you see people intermixing clusters? Do you think most of the Blackwell activity is new clusters? Just some sense of what that transition looks like?
很好。谢谢你。詹森,在新闻稿中,你提到对Blackwell的期待非常高,但似乎Hopper的需求也非常强劲。我的意思是,你在指导没有Blackwell的情况下,预计第四季度会非常强劲。那么你认为这种强劲需求共存的情况会持续多久?你能谈谈向Blackwell的过渡吗?你认为人们会混合使用集群吗?你认为大多数Blackwell活动是新的集群吗?能否给我们一些关于这个过渡的感觉?

Jensen Huang 黄仁勋

Yes. Thanks, Joe. The demand for Hopper is really strong, it's true, and the demand for Blackwell is incredible. There are a couple of reasons for that. The first reason is, if you just look at the world's cloud service providers and the amount of GPU capacity they have available, it's basically none. And the reason for that is because it's either being deployed internally for accelerating their own workloads -- data processing, for example.
好的,谢谢你,Joe。Hopper的需求确实非常强劲,而Blackwell的需求则更是难以置信。这有几个原因。首先,如果你看看全球云服务提供商所拥有的GPU容量,基本上是没有的。而这背后的原因是,这些GPU要么被内部部署,用于加速他们自己的工作负载,比如数据处理。

Data processing, we hardly ever talk about it because it's mundane. It's not very cool because it doesn't generate a picture or generate words, but almost every single company in the world processes data in the background. And NVIDIA's GPUs are the only accelerators on the planet that process and accelerate data: SQL data, data science toolkits like Pandas and the new one, Polars. These are the most popular data processing platforms in the world.
数据处理,我们几乎从不谈论它,因为它很平凡。它并不是很酷,因为它不会生成图像或文字,但几乎世界上每一家公司都在后台处理数据。而 NVIDIA 的 GPU 是地球上唯一能够处理和加速数据的加速器。SQL 数据、Pandas 数据科学、像 Pandas 和新的 Polars 这样的工具包,这些是世界上最受欢迎的数据处理平台。
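As a concrete illustration of the Pandas workloads Jensen mentions, here is a minimal sketch in plain pandas. NVIDIA's cudf.pandas accelerator mode (a separate install, e.g. `python -m cudf.pandas script.py`) is designed to run such code unchanged on a GPU; nothing below requires one, and the data is invented for illustration:

```python
import pandas as pd

# Plain pandas: the kind of mundane background data processing
# described on the call. Under cudf.pandas accelerator mode this
# same code would execute on a GPU; here it runs on the CPU.
df = pd.DataFrame({
    "region": ["us", "eu", "us", "apac", "eu", "us"],
    "revenue": [120.0, 80.0, 95.0, 60.0, 70.0, 110.0],
})

# Aggregate revenue per region, largest first.
by_region = df.groupby("region", as_index=False)["revenue"].sum()
print(by_region.sort_values("revenue", ascending=False))
```

The point of the zero-code-change accelerator approach is exactly this: the dataframe API stays the same while the execution backend moves from CPU to GPU.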

And aside from CPUs, which as I've mentioned before are really running out of steam, NVIDIA's accelerated computing is really the only way to get a boost in performance out of that. And so that's number one: the primary use case, long before generative AI came along, is the migration of applications one after another to accelerated computing.
除了CPU之外,正如我之前提到的,CPU的性能提升已经遇到瓶颈,而Nvidia的加速计算实际上是唯一能够显著提升性能的方法。因此,首先——在生成式AI出现之前,最主要的用例就是将应用程序一个接一个地迁移到加速计算上。

The second is, of course, the rentals. They're renting capacity to model makers or renting it to startup companies, and a generative AI company spends the vast majority of their invested capital on infrastructure so that they can use an AI to help them create products. And so these companies need it now. They simply can't wait: you just raised money, and they want you to put it to use now. You have processing that you have to do. You can't do it next year, you've got to do it today. And so that's one reason.
其次,当然就是租赁了。他们将计算能力租赁给模型开发者或初创公司,而生成式AI公司将大部分投资资本用于基础设施建设,以便利用AI来帮助他们创造产品。因此,这些公司现在就需要这些资源。他们无法等待——你刚刚筹集到资金,投资者希望你马上将其投入使用。你有需要处理的计算任务,不能等到明年,必须今天就完成。所以这是其中的一个原因。

The second reason for Hopper demand right now is the race to the next plateau. The first person to the next plateau gets to introduce a revolutionary level of AI; the second person who gets there is incrementally better or about the same. And so the ability to systematically and consistently race to the next plateau and be the first one there is how you establish leadership. NVIDIA is constantly doing that, and we show that to the world in the GPUs we make, the AI factories we make, the networking systems we make, the SoCs we create.
目前对Hopper需求的第二个原因是争夺下一个技术高地的竞赛。第一个达到下一个高地的人将能够引入革命性的AI水平。而第二个到达的人则只能在此基础上有所改进或保持相似。因此,系统化且持续不断地冲向下一个高地,并成为第一个到达的人,这就是确立领导地位的方式。NVIDIA一直在这样做,并通过我们制造的GPU、AI工厂、网络系统和我们创建的SOC向世界展示这一点。
Selling fear. This is very short-term behavior; the first to build an airplane and the first to build a computer gained no lasting advantage.

I mean, we want to set the pace. We want to be consistently the world's best. And that's the reason why, we drive ourselves so hard. And of course, we also want to see our dreams come true, and all of the capabilities that we imagine in the future, and the benefits that we can bring to society. We want to see all that come true. And so these model makers are the same. They're -- of course, they want to be the world's best, they want to be the world's first. And although Blackwell will start shipping out in billions of dollars at the end of this year, the standing up of the capacity is still probably weeks and a month or so away. And so between now and then is a lot of generative AI market dynamic.
我意思是,我们想要设定节奏。我们希望始终成为世界上最优秀的。这就是我们如此努力的原因。当然,我们也希望看到我们的梦想成真,以及我们想象中的未来所有能力,以及我们能为社会带来的好处。我们希望看到这一切成真。因此,这些模型制造商也是如此。他们当然想成为世界上最优秀的,想成为世界第一。尽管Blackwell将在今年年底开始以数十亿美元的规模发货,但产能的建立可能仍然需要几周到一个月的时间。因此,在此之前,生成性人工智能市场的动态将会非常活跃。

And so everybody is just really in a hurry. It's either operational reasons that they need it for -- they need accelerated computing, and they don't want to build any more general-purpose computing infrastructure. And even Hopper -- and of course H200 is state-of-the-art -- if you have a choice right now between building CPU infrastructure for your business or Hopper infrastructure for your business, that decision is relatively clear. And so I think people are just clamoring to transition the trillion dollars of established installed infrastructure to a modern infrastructure, and Hopper is state-of-the-art.
所以每个人都非常着急。要么是出于运营原因,他们需要加速计算。他们不想再建立任何通用计算基础设施,甚至 Hopper,当然,H200 是最先进的。如果你现在有选择在为业务建立 CPU 基础设施还是为业务建立 Hopper 基础设施,这个决定相对明确。因此,我认为人们只是急于将万亿美元的现有安装基础设施转变为 Hopper 最先进的现代基础设施。
Further selling fear, and taking a shot at the traditional CPU manufacturers.

Operator 主持人

And your next question comes from the line of Matt Ramsay with TD Cowen. Your line is open.
您的下一个问题来自 TD Cowen 的 Matt Ramsay。您的线路已开启。

Matt Ramsay 马特·拉姆塞

Thank you very much. Good afternoon, everybody. I wanted to kind of circle back to an earlier question about the debate that investors are having about, I don't know, the ROI on all of this CapEx, and hopefully this question and the distinction will make some sense. But what I'm having discussions about is with like the percentage of folks that you see that are spending all of this money and looking to sort of push the frontier towards AGI convergence and as you just said, a new plateau and capability.
非常感谢。下午好,大家好。我想回到之前的问题,关于投资者对所有这些资本支出的投资回报率(ROI)的讨论。希望我的问题和区分能够有所帮助。我所讨论的是,你们看到的那些投入大量资金的公司,他们希望推动前沿技术向AGI(通用人工智能)收敛,正如你刚才所说的,迈向一个新的技术高地和能力。

And they're going to spend regardless to get to that level of capability because it opens up so many doors for the industry and for their company versus customers that are really, really focused today on CapEx versus ROI. I don't know if that distinction makes sense. I'm just trying to get a sense of how you're seeing the priorities of people that are putting the dollars in the ground on this new technology and what their priorities are and their time frames are for that investment? Thanks.
他们将不惜一切代价达到那种能力水平,因为这为行业和他们的公司打开了许多大门,而与那些今天非常关注资本支出(CapEx)而非投资回报率(ROI)的客户相比。我不知道这个区别是否有意义。我只是想了解一下您如何看待那些在这项新技术上投入资金的人们的优先事项,以及他们的优先级和投资时间框架是什么?谢谢。

Jensen Huang 黄仁勋

Thanks, Matt. The people who are investing in NVIDIA infrastructure are getting returns on it right away. It's the best-ROI computing infrastructure investment you can make today. And so one way to think it through -- probably the easiest way -- is to just go back to first principles. You have a trillion dollars' worth of general-purpose computing infrastructure, and the question is, do you want to build more of that or not? For every $1 billion worth of general CPU-based infrastructure that you stand up, you probably rent it out for less than $1 billion, because it's commoditized. There's already a trillion dollars on the ground. What's the point of getting more?
谢谢你,Matt。那些投资于NVIDIA基础设施的人立刻就能获得回报。这是当前最好的投资回报率基础设施——计算基础设施投资。因此,一种思考方式,可能最简单的思考方式,就是回到最基本的原则。你拥有价值数万亿美元的通用计算基础设施,问题是你是否还想再建更多这样的基础设施。对于每建立10亿美元的基于通用CPU的基础设施,你可能会租赁不到10亿美元的收入,因为这已经是一个商品化的市场。地面上已经有了万亿美元的基础设施,再建更多的意义何在?
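Jensen's first-principles comparison reduces to simple rental-yield arithmetic. The figures below are purely hypothetical placeholders, not numbers from the call; the call only states that $1 billion of CPU infrastructure rents out for less than $1 billion:

```python
def rental_roi(capex: float, annual_rental: float) -> float:
    # Simple annual rental yield on infrastructure capital spend.
    return annual_rental / capex

# Commoditized CPU infrastructure: rents for less than it costs
# to stand up (hypothetical figure illustrating "less than $1B").
cpu = rental_roi(capex=1.0e9, annual_rental=0.8e9)

# Scarce accelerated infrastructure: capacity gets rented out
# immediately (hypothetical figure, for contrast only).
gpu = rental_roi(capex=1.0e9, annual_rental=1.5e9)

print(cpu < 1.0 < gpu)  # True under these assumed numbers
```

The argument is that a yield below 1.0 means standing up more commodity capacity destroys value, while a yield above 1.0 means the capacity pays for itself.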

And so for the people who are clamoring to get this infrastructure: one, when they build out Hopper-based infrastructure and soon Blackwell-based infrastructure, they start saving money. That's tremendous return on investment. And the reason why they start saving money is because data processing saves money, and data processing is already a giant part of computing. And recommender systems save money, and so on and so forth, okay? And so you start saving money.
因此,那些渴望获得这一基础设施的人,当他们构建基于Hopper的基础设施以及即将到来的基于Blackwell的基础设施时,他们开始节省资金。这是巨大的投资回报。之所以开始节省资金,是因为数据处理节省了成本,而数据处理本身已经占据了计算的很大一部分。因此,推荐系统节省资金,等等,好吗?所以你开始节省资金。

The second thing is that everything you stand up is going to get rented, because so many companies are being founded to create generative AI. And so your capacity gets rented right away, and the return on investment of that is really good. And then the third reason is your own business: do you want to create the next frontier yourself, or have your own Internet services benefit from a next-generation ad system, a next-generation recommender system, or a next-generation search system?
第二件事是你所建立的一切都会被租用,因为有很多公司正在成立以创建生成式人工智能。因此,你的能力会立即被租用,而投资回报率非常好。第三个原因是你自己的业务。你是想自己创造下一个前沿,还是让你自己的互联网服务受益于下一代广告系统、下一代推荐系统或下一代搜索系统?

So for your own services, for your own stores, for your own user-generated content and social media platforms, generative AI is also a fast ROI. And so there are a lot of ways you can think it through. But at the core, it's because it is the best computing infrastructure you can put in the ground today. The world of general-purpose computing is shifting to accelerated computing. The world of human-engineered software is moving to generative AI software. If you were to build infrastructure to modernize your cloud and your data centers, build it with NVIDIA accelerated computing. That's the best way to do it.
所以对于您自己的服务、您自己的商店、您自己的用户生成内容、社交媒体平台,生成式人工智能也是一个快速的投资回报。因此,您可以通过很多方式来思考这个问题。但从根本上说,这是因为它是您今天可以投入的最佳计算基础设施。通用计算的世界正在转向加速计算。人造软件的世界正在转向生成式人工智能软件。如果您要构建基础设施以现代化您的云和数据中心,请使用加速计算的 NVIDIA 来构建。这是最佳的做法。

Operator 主持人

And your next question comes from the line of Timothy Arcuri with UBS. Your line is open.
您的下一个问题来自 UBS 的 Timothy Arcuri。您的线路已开启。

Timothy Arcuri 蒂莫西·阿尔库里

Thanks a lot. I had a question on the shape of the revenue growth both near and longer-term. I know, Colette, you did increase OpEx for the year. And if I look at the increase in your purchase commitments and your supply obligations, that's also quite bullish. On the other hand, there's some school of thought that not that many customers really seem ready for liquid cooling and I do recognize that some of these racks can be air-cooled. But Jensen, is that something to consider sort of on the shape of how Blackwell is going to ramp?
非常感谢。我有一个关于收入增长形态的问题,无论是短期还是长期。我知道,Colette,你确实增加了今年的运营支出。如果我查看你们的采购承诺和供应义务的增加,这也相当乐观。另一方面,有一种观点认为并不是很多客户真的准备好液体冷却,我也意识到其中一些机架可以进行空气冷却。但是,Jensen,这是否是考虑Blackwell将如何增长的一个方面?

And then I guess when you look beyond next year, which is obviously going to be a great year and you look into '26, do you worry about any other gating factors like, say, the power, supply-chain or at some point, models start to get smaller? I'm just wondering if you could speak to that? Thanks.
然后我想,当你展望明年,显然那将是一个伟大的一年,接着你再看 2026 年,你是否担心其他的制约因素,比如说,电力、供应链,或者在某个时刻,模型开始变得更小?我只是想知道你能否谈谈这个?谢谢。

Jensen Huang 黄仁勋

I'm going to work backwards. I really appreciate the question, Tim. So remember, the world is moving from general-purpose computing to accelerated computing. And the world builds about $1 trillion worth of data centers; in a few years, $1 trillion worth of data centers will be all accelerated computing. In the past, no GPUs were in data centers, just CPUs. In the future, every single data center will have GPUs. And the reason for that is very clear: we need to accelerate workloads so that we can continue to be sustainable, continue to drive down the cost of computing, so that when we do more computing, we don't experience computing inflation.
我将从后往前说。我非常感谢你的提问,Tim。所以请记住,世界正在从通用计算转向加速计算。在未来几年,世界将建设价值约 1 万亿美元的数据中心——价值 1 万亿美元的数据中心将全部是加速计算。在过去,数据中心没有 GPU,只有 CPU。在未来,每个数据中心都会有 GPU。原因非常明确,因为我们需要加速工作负载,以便能够继续保持可持续性,继续降低计算成本,这样当我们进行更多计算时,我们就不会经历计算通货膨胀。

Second, we need GPUs for a new computing model called generative AI that we can all acknowledge is going to be quite transformative to the future of computing. And so I think working backwards, the way to think about that is the next trillion dollars of the world's infrastructure will clearly be different than the last trillion, and it will be vastly accelerated.
其次,我们需要 GPU 来支持一种新的计算模型,称为生成性人工智能,我们都可以承认这将对未来的计算产生相当大的变革。因此,我认为反向思考,未来一万亿美元的全球基础设施显然将与过去的一万亿美元不同,并且将大大加速。

With respect to the shape of our ramp, we offer multiple configurations of Blackwell. Blackwell comes in either a Blackwell classic, if you will, that uses the HGX form factor that we pioneered with Volta -- I think it was Volta -- and we've been shipping the HGX form factor for some time. It is air-cooled. The Grace Blackwell is liquid-cooled. However, the number of data centers that want to go liquid-cooled is quite significant.
关于我们产能爬坡的形态,我们提供多种配置的Blackwell。Blackwell有经典版本,如果你愿意这么称呼的话,使用我们从Volta开始首创的HGX形态(我记得是Volta)。因此,我们已经出货HGX形态一段时间了。它是风冷的。Grace Blackwell是液冷的。然而,想要采用液冷的数据中心数量相当可观。

And the reason for that is because in a liquid-cooled data center -- in any power-limited data center, whatever size data center you choose -- you can install and deploy anywhere from 3 times to 5 times the AI throughput compared to the past. And so liquid cooling is cheaper, liquid cooling TCO is better, and liquid cooling allows you to have the benefit of this capability we call NVLink, which allows us to expand to 72 Grace Blackwell packages, which have essentially 144 GPUs.
原因在于,在液冷数据中心中,无论是任何规模的电力受限数据中心,您都可以安装和部署比过去多 3 到 5 倍的 AI 吞吐量。因此,液冷更便宜,液冷的总拥有成本更好,液冷使您能够享受我们称之为 NVLink 的能力,这使我们能够扩展到 72 个 Grace Blackwell包,基本上有 144 个 GPU。
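The rack math stated on the call can be checked directly. The two-GPUs-per-package figure is inferred from "72 Grace Blackwell packages ... essentially 144 GPUs", and the throughput function is a purely illustrative sketch of the 3x-to-5x claim for power-limited facilities, in made-up relative units:

```python
# NVL72 rack math as stated on the call: 72 Grace Blackwell packages,
# each inferred to pair one Grace CPU with two Blackwell GPUs.
packages = 72
gpus_per_package = 2
nvlink_domain_gpus = packages * gpus_per_package
print(nvlink_domain_gpus)  # 144 GPUs in one NVLink domain

def deployable_throughput(power_budget_mw: float, multiplier: float,
                          baseline_per_mw: float = 1.0) -> float:
    # baseline_per_mw is a hypothetical normalized unit, not a real
    # figure; in a power-limited data center, throughput scales with
    # the power budget times the efficiency multiplier.
    return power_budget_mw * baseline_per_mw * multiplier

# Jensen's claim: liquid cooling yields 3x-5x the AI throughput of
# past designs for the same power envelope (here, 10 MW).
low, high = deployable_throughput(10, 3), deployable_throughput(10, 5)
print(low, high)  # 30.0 50.0 (relative units)
```

The structural point is that when power, not floor space, is the binding constraint, a cooling change that raises throughput per megawatt multiplies what one facility can deploy.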

And so imagine 144 GPUs connected over NVLink; we're increasingly showing you the benefits of that. And the next step is obviously very low-latency, very high-throughput large language model inference, and the large NVLink domain is going to be a game-changer for that. And so I think people are very comfortable deploying both, and almost every CSP we're working with is deploying some of both. And so I'm pretty confident that we'll ramp it up just fine.
想象一下,144 个 GPU 通过 NVLink 连接,这就是我们越来越多地向您展示的好处。接下来的点击显然是非常低延迟、非常高吞吐量的大型语言模型推理。大型领域将成为这一切的游戏规则改变者。因此,我认为人们在部署这两者时都非常自信。因此,我们合作的几乎每个云服务提供商都在同时部署这两者。因此,我对我们能够顺利提升这一点非常有信心。

The second part of your question is that, looking forward, yes, next year is going to be a great year. We expect to grow our data center business quite significantly next year. Blackwell is going to be a complete game-changer for the industry, and Blackwell is going to carry into the following year. And as I mentioned earlier, work backwards from first principles.
您的第三个问题中的第二个是展望未来,是的,明年将是一个伟大的年份。我们预计明年我们的数据中心业务将显著增长。Blackwell 将成为行业的彻底变革者。Blackwell 将延续到下一年。正如我之前提到的,从基本原则向后工作。

Remember that computing is going through two platform transitions at the same time and that's just really, really important to keep your head on -- your mind focused on, which is general-purpose computing is shifting to accelerated computing and human engineered software is going to transition to generative AI or artificial intelligence learned software.
请记住,计算正在经历两个平台的转变,这一点非常重要,要时刻保持清醒的头脑——专注于通用计算正在向加速计算转变,而人造软件将过渡到生成式人工智能或人工智能学习的软件。
What rises with change can also decline with change; repeatedly stressing this point is somewhat problematic. Jobs said, "have a good enough place to go" (see "2003-05-29 Steve Jobs. Talk to MBA students about his experience as CEO of Pixar and Apple."). Description = reality; sometimes the difference between companies is only the clarity of their description.

Operator 主持人

And your next question comes from the line of Stacy Rasgon with Bernstein Research. Your line is open.
您的下一个问题来自 Bernstein Research 的 Stacy Rasgon。您的线路已开启。

Stacy Rasgon 斯泰西·拉斯贡

Hi guys. Thanks for taking my questions. I have two short questions for Colette. The first, several billion dollars of Blackwell revenue in Q4. I guess is that additive? You said you expected Hopper demand to strengthen in the second half. Does that mean Hopper strengthens Q3 to Q4 as well on top of Blackwell adding several billion dollars?
嗨,大家好。感谢你们回答我的问题。我有两个简短的问题想问 Colette。第一个,第四季度的Blackwell收入是数十亿美元。我想问这是否是附加的?你说你预计 Hopper 的需求在下半年会增强。这是否意味着 Hopper 在第三季度到第四季度也会增强,此外Blackwell还会增加数十亿美元?

And the second question is on gross margins. If I have mid-70s for the year -- let me explain where I'm drawing that from -- if I have 75% for the year, I'd get something like 71% to 72% for Q4, somewhere in that range. Is that the kind of exit rate for gross margins that you're expecting? And how should we think about the drivers of gross margin evolution into next year as Blackwell ramps? And, I mean, hopefully the yields and the inventory reserves and everything come up.
第二个问题是关于毛利率的。如果我全年毛利率大约在75%左右,能否解释一下这个预期的依据?如果全年是75%,那么第四季度大概会在71%到72%左右。这是否是你们预期的毛利率水平?在Blackwell逐步投入市场后,我们应该如何看待明年毛利率演变的驱动因素?我希望产量、库存储备等方面都能有所提升。

Colette Kress 科莱特·克雷斯

Yes. So Stacy, let's first take your question about Hopper and Blackwell. We believe our Hopper will continue to grow into the second half. We have many new products for Hopper, and our existing products for Hopper, that we believe will continue to ramp in the next quarters, including our Q3, with those new products moving into Q4. So let's say Hopper in H2 versus H1 is a growth opportunity. Additionally, we have Blackwell on top of that, with Blackwell starting to ramp in Q4. So I hope that helps you on those two pieces.
是的。那么斯泰西,首先我们来谈谈你关于 Hopper 和Blackwell的问题。我们相信我们的 Hopper 将在下半年继续增长。我们有许多新的 Hopper 产品,以及我们现有的 Hopper 产品,我们相信这些产品将在接下来的几个季度,包括第三季度,开始持续增长,并且这些新产品将在第四季度推出。因此,可以说 Hopper 在上半年与下半年相比是一个增长机会。此外,我们还有Blackwell,以及Blackwell将在第四季度开始 ramping。希望这对你理解这两个方面有所帮助。

Your second piece is in terms of our gross margin. We provided gross margin guidance for our Q3 on a non-GAAP basis at about 75%. We'll work through all the different transitions that we're going through, but we do believe we can do that 75% in Q3. We also indicated that we're still on track for the full year in the mid-70s, or approximately 75%. So we may possibly see some slight difference in Q4.
您的第二个问题是关于我们的毛利率。我们提供了第三季度的毛利率。我们的非 GAAP 毛利率约为 75%。我们将处理我们正在经历的所有不同过渡,但我们确实相信在第三季度可以实现 75%。我们提供的信息是,我们全年仍然保持在中 70%左右或大约 75%。因此,我们可能会在第四季度看到一些轻微的差异。

Again, that's with our transitions and the different cost structures that we have on our new product introductions. However, I'm not at the same number that you are. We don't have exact guidance, but I do believe you're lower than where we are.
再次提到我们的过渡以及我们在新产品推出时的不同成本结构。然而,我的数字和你那里的不一样。我们没有确切的指导,但我确实相信你的数字低于我们的水平。

Operator 主持人

And your next question comes from the line of Ben Reitzes with Melius. Your line is open.
您的下一个问题来自 Melius 的 Ben Reitzes。您的线路已开启。

Ben Reitzes 本·瑞茨斯

Yes, hey, thanks a lot for the question, Jensen and Colette. I wanted to ask about the geographies. There was the 10-Q that came out and the United States was down sequentially while several Asian geographies were up a lot sequentially. Just wondering what the dynamics are there? Obviously, China did very well. You mentioned it in your remarks, what are the puts and takes?
是的,嘿,非常感谢你的问题,Jensen 和 Colette。我想问一下地理方面的情况。刚刚发布的 10-Q 显示,美国的业绩环比下降,而几个亚洲地区的业绩环比大幅上升。我只是想知道那里的动态是什么?显然,中国表现非常好。你在发言中提到过,具体的利弊是什么?

And then I just wanted to clarify from Stacy's question, if that means the sequential overall revenue growth rates for the company accelerate in the fourth quarter given all those favorable revenue dynamics? Thanks.
然后我只是想澄清一下 Stacy 的问题,这是否意味着考虑到所有这些有利的收入动态,公司在第四季度的连续整体收入增长率加速?谢谢。

Colette Kress 科莱特·克雷斯

Let me talk a bit about our disclosure in the 10-Q -- a required disclosure -- and the choice of geographies. It is sometimes very challenging to create the right disclosure, as we have to come up with one key piece, which is who we sell to and/or specifically who we invoice. And so what you're seeing there is who we invoice. That's not necessarily where the product will eventually be, or where it may even travel to reach the end customer. These are for the most part just moving to our OEMs, our ODMs, and our system integrators across our product portfolio.
让我谈谈我们在 10-Q 报告中的披露,这是一个必要的披露,以及地理选择。有时创建正确的披露非常具有挑战性,因为我们必须提出一个关键部分。这个部分是关于我们销售给谁和/或具体开票给谁。因此,您所看到的就是我们开票的对象。这不一定是产品最终所在的位置,也不一定是它可能到达最终客户的地方。这些主要是移动到我们的 OEM、ODM 和系统集成商,涵盖我们的大部分产品组合。
If the B2B side isn't a con, it at least lacks confidence.

So what you're seeing there is sometimes just a swift shift in terms of who they are using to complete their full configuration before those things go into the data center, into notebooks, and those pieces of it. And that shift happens from time to time. But yes, our China numbers there are invoicing into China; keep in mind that incorporates gaming, data center, and automotive in those numbers that we have.
你看到的有时只是他们在完成整个配置之前,使用不同的供应商进行切换的情况,这些配置最终会进入数据中心、笔记本电脑等设备中。这种变化时有发生。不过,是的,我们在中国的数字是反映了多个方面,包括游戏、数据中心和汽车业务的数据。

Going back to your statement regarding gross margin, and also what we're seeing in terms of revenue for Hopper and Blackwell: Hopper will continue to grow in the second half; we'll continue to grow from what we are currently seeing. We don't have the exact mix for each of Q3 and Q4 here, and we are not yet guiding for Q4. But we do see right now the demand expectations, and we do have the visibility that there will be a growth opportunity in Q4. On top of that, we will have our Blackwell architecture.
回到您关于毛利率的陈述,以及我们在 Hopper 和Blackwell的收入方面所看到的情况。Hopper 将在下半年继续增长。我们将继续从目前所看到的情况中增长。在确定每个 Q3 和 Q4 的确切组合时,我们这里没有数据。我们还不在这里对 Q4 进行指导。但我们现在确实看到了需求预期。我们确实看到了在 Q4 将会有增长机会的可见性。此外,我们将拥有我们的Blackwell架构。

Operator 主持人

And your next question comes from the line of C.J. Muse with Cantor Fitzgerald. Your line is open.
您的下一个问题来自 Cantor Fitzgerald 的 C.J. Muse。您的线路已开启。

C.J. Muse

Yes, good afternoon. Thank you for taking the question. You've embarked on a remarkable annual product cadence, with challenges only likely to become greater given rising complexity and the reticle limit in the advanced packaging world. So I'm curious, if you take a step back, how does this backdrop alter your thinking around potentially greater vertical integration and supply chain partnerships, and then thinking through the consequential impact on your margin profile? Thank you.
下午好,谢谢你回答我的问题。你们已经开始了一个令人瞩目的年度产品节奏,而随着复杂性的增加以及在先进封装领域的极限挑战,这些挑战可能会越来越大。所以我很好奇,如果你退一步思考,这种背景会如何改变你们在潜在的更大垂直整合、供应链合作伙伴关系方面的思考,以及如何考虑这些因素对你们的利润率的影响?谢谢。

Jensen Huang 黄仁勋

Yes, thanks. Let's see. The answer to your first question is that the reason why our velocity is so high is, simultaneously, because the complexity of the model is growing and we want to continue to drive its cost down. It's growing, so we want to continue to increase its scale. And we believe that by continuing to scale, AI models will reach a level of extraordinary usefulness and open up, really, the next industrial revolution. We believe it.
好的,谢谢。首先,回答你的第一个问题,我们速度如此之快的原因在于,随着模型复杂性的增加,我们希望继续降低其成本。模型在变得更加复杂,因此我们希望继续扩大其规模。我们相信,通过不断扩大AI模型的规模,我们将达到一个极其有用的水平,从而开启下一次工业革命。我们对此深信不疑。

And so we're going to drive ourselves really hard to continue to go up that scale. We have a fairly unique ability to integrate, to design, an AI factory, because we have all the parts. It's not possible to come up with a new AI factory every year unless you have all the parts. And so next year, we're going to ship a lot more CPUs than we've ever had in the history of our company, more GPUs of course, but also NVLink switches, ConnectX NICs for East-West traffic, BlueField DPUs for North-South traffic and data and storage processing, InfiniBand for supercomputing centers, and Ethernet, which is a brand-new product for us that is well on its way to becoming a multi-billion-dollar business bringing AI to Ethernet.
因此,我们会非常努力地继续在这个规模上前进。我们有独特的能力整合和设计一个AI工厂,因为我们拥有所有的组件。每年推出一个新的AI工厂是不可能的,除非你拥有所有的组件。因此,明年我们将交付比公司历史上任何时候都多的CPU,当然还有更多的GPU。此外,还有NVLink交换机、CX DPU、用于东西向的ConnectX EPU、用于南北向的Bluefield DPU,以及从超级计算中心的InfiniBand到以太网的数据和存储处理。这对我们来说是一个全新的产品,并且正在迅速发展成一个数十亿美元的业务,将AI引入以太网领域。

And so the fact is that we have access to all of this. We have one architectural stack, as you know. It allows us to introduce new capabilities to the market as we complete them. Otherwise, what happens is you ship these parts, you go find customers to sell them to, and then somebody has got to build up an AI factory. And the AI factory has got a mountain of software.
因此,事实上我们能够构建——我们可以访问所有这些资源。我们有一个统一的架构堆栈,这使得我们能够在完成后迅速将新功能引入市场。否则,通常的流程是,你先交付这些部件,然后去寻找客户进行销售,然后还得有人去构建一个AI工厂。而AI工厂需要大量的软件支持。

And so it's not about who integrates it. We love the fact that our supply chain is disaggregated, in the sense that we can service Quanta, Foxconn, HP, Dell, Lenovo, Supermicro. We used to be able to serve ZT; they were recently purchased, and so on and so forth. And the number of ecosystem partners that we have -- Gigabyte, ASUS -- allows them to take our architecture, which all works, and integrate it in a bespoke way into all of the world's cloud service providers and enterprise data centers.
因此,这并不是关于谁来整合它。我们喜欢我们的供应链是分散的,这意味着我们可以为广达、富士康、惠普、戴尔、联想、超级微等公司提供服务。我们以前能够为ZT提供服务,但他们最近被收购了,等等。因此,我们拥有的生态系统合作伙伴数量,如技嘉、华硕等,允许他们以定制的方式将我们的架构整合到全球所有的云服务提供商和企业数据中心中,尽管我们的架构都能正常工作。

The scale and reach necessary from our ODMs and our integrator supply chain is vast and gigantic, because the world is huge. And so that part we don't want to do, and we're not good at doing. But we know how to design the AI infrastructure, provide it the way that customers would like it, and let the ecosystem integrate it. So anyway, that's the reason why.
我们 ODM 和集成商供应链所需的规模和覆盖面是巨大的,因为世界是庞大的。因此,我们不想做那部分工作,而且我们也不擅长这样做。但是我们知道如何设计 AI 基础设施,前提是符合客户的需求,并让生态系统能够集成。嗯,是的。所以,这就是原因所在。

Operator 主持人

And your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.
您的最后一个问题来自富国银行的亚伦·雷克斯。您的线路已开启。

Aaron Rakers 亚伦·雷克斯

Yes, thanks for taking the questions. I wanted to go back into the Blackwell product cycle. One of the questions that we tend to get asked is how you see the Rack Scale system mix dynamic as you think about leveraging NVLink? You think about GB, NVL72, and how that go-to-market dynamic looks as far as the Blackwell product cycle? I guess put this distinctly, how do you see that mix of Rack Scale systems as we start to think about the Blackwell cycle playing out?
是的,谢谢你回答这些问题。我想回到Blackwell产品周期。我们经常被问到的一个问题是,当你考虑利用 NVLink 时,你如何看待机架规模系统的组合动态?你考虑 GB、NVL72,以及在Blackwell产品周期中,市场推广动态看起来如何?我想明确地说,当我们开始考虑Blackwell周期时,你如何看待机架规模系统的组合?

Jensen Huang 黄仁勋

Yes, Aaron, thanks. The Blackwell rack system is designed and architected as a rack, but it's sold in disaggregated system components. We don't sell the whole rack. And the reason for that is because everybody's rack is a little different, surprisingly. You know, some of them are OCP standards, some of them are not, some of them are enterprise, and the power limits for everybody could be a little different. The choice of CDUs, the choice of power bus bars, the configuration and integration into people's data centers -- all different.
是的,亚伦,谢谢。Blackwell机架系统,它被设计和架构为一个机架,但它是以分散的系统组件出售的。我们不出售整个机架。这样做的原因是因为每个人的机架都有些不同,令人惊讶的是。有些是 OCP 标准,有些不是,有些是企业级的,每个人的功率限制可能也会有所不同。CDU 的选择、配电母线的选择、以及与人们数据中心的配置和集成,都是不同的。

And so the way we designed it, we architected the whole rack. The software is going to work perfectly across the whole rack. And then we provide the system components -- for example, the CPU and GPU compute board is integrated into MGX, a modular system architecture. MGX is completely ingenious. And we have MGX ODMs and integrators and OEMs all over the planet.
因此,我们的设计是从整个机架的角度进行架构的,确保软件能够在整个机架上完美运行。然后,我们提供系统组件,例如CPU和GPU计算板卡,这些组件会被集成到MGX中,这是一种模块化系统架构。MGX的设计非常巧妙,我们在全球各地都有MGX的原始设计制造商(ODM)、集成商和原始设备制造商(OEM)。

And so just about any configuration you would like, wherever you would like that 3,000-pound rack to be delivered -- it has to be integrated and assembled close to the data center, because it's fairly heavy. And so everything in the supply chain from the moment that we ship the GPUs, CPUs, the switches, the NICs -- from that point forward, the integration is done quite close to the locations of the CSPs and the locations of the data centers. And so you can imagine how many data centers in the world there are, and how many logistics hubs we've scaled out to with our ODM partners.
因此,无论你需要什么配置,都可以将这个重达3000磅的机架交付到你需要的地方。由于它相当沉重,必须在离数据中心较近的地方进行集成和组装。因此,从我们发货GPU、CPU、交换机、网卡的那一刻起,集成工作就会在接近云服务提供商(CSP)和数据中心的位置完成。你可以想象全球有多少数据中心,以及我们与ODM合作伙伴一起扩展到的物流枢纽的数量。

And so I think that because we show it as one rack, and because it's always rendered that way and shown that way, we might have left the impression that we're doing the integration. Our customers hate that we do integration. The supply chain hates us doing integration. They want to do the integration; that's their value-add. There's a final design-in, if you will. It's not quite as simple as shimmying into a data center; the design fit-in is really complicated.
所以我认为,因为我们将其展示为一个机架,并且总是以这种方式呈现和展示,我们可能给人留下了我们在进行集成的印象。我们的客户讨厌我们进行集成。供应链也讨厌我们进行集成。他们想要进行集成。这是他们的增值服务。如果你愿意的话,这里有一个最终设计。它并不像简单地进入数据中心那么简单,但设计的适配确实非常复杂。

And so the design fit-in, the installation, the bring-up, the repair and replace -- that entire cycle is done all over the world. And we have a sprawling network of ODM and OEM partners that does this incredibly well. So integration is not the reason why we're doing racks; it's the opposite. We don't want to be an integrator. We want to be a technology provider.

Operator

And I will now turn the call back over to Jensen Huang for closing remarks.

Jensen Huang

Thank you. Let me repeat a couple of the comments that I made earlier. Data centers worldwide are going full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong, and the anticipation for Blackwell is incredible. Let me highlight the top five things about our company. First, accelerated computing has reached a tipping point: CPU scaling slows, so developers must accelerate everything possible. Accelerated computing starts with CUDA-X libraries. New libraries open new markets for NVIDIA.

We released many new libraries, including accelerated Polars, Pandas, and Spark, the leading data science and data processing libraries; cuVS for vector databases, which is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole world of data centers that we can go into now; Parabricks for gene sequencing; and AlphaFold2 for protein structure prediction, now CUDA accelerated.
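To make the vector-database mention concrete: at its core, the workload a GPU vector-search library like cuVS accelerates is nearest-neighbor search over embedding vectors. Below is a minimal CPU-side sketch of that kernel, brute-force cosine-similarity top-k; all function names and data here are hypothetical illustrations, not cuVS APIs.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, corpus, k=2):
    """Indices of the k corpus vectors most similar to the query --
    the brute-force kernel that GPU vector search parallelizes at scale."""
    scored = sorted(enumerate(corpus), key=lambda iv: -cosine(query, iv[1]))
    return [i for i, _ in scored[:k]]

corpus = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(top_k([1.0, 0.1], corpus, k=2))  # [0, 2]
```

Real vector databases replace this O(n) scan with GPU-accelerated approximate indexes, which is where the speedup comes from.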

We are at the beginning of our journey to modernize $1 trillion worth of data centers from general-purpose computing to accelerated computing. That's number one. Number two, Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU. It also happens to be the name of our GPU, but it's an AI infrastructure platform.

As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's lead becomes clear. The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU; the Blackwell dual GPU in a CoWoS-L package; the ConnectX DPU for east-west traffic; the BlueField DPU for north-south and storage traffic; the NVLink switch for all-to-all GPU communications; and Quantum and Spectrum-X, so that both InfiniBand and Ethernet can support the massive burst traffic of AI. Blackwell AI factories are building-sized computers.

NVIDIA designed and optimized the Blackwell platform full stack, end to end -- chips, systems, networking, even structured cables, power and cooling, and mountains of software -- to make it fast for customers to build AI factories. These are very capital-intensive infrastructures. Customers want to deploy them as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides 3 to 5 times more AI throughput in a power-limited data center than Hopper.
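The power-limited framing is simple throughput-per-watt arithmetic: when facility power is the binding constraint, a 3x to 5x efficiency gain translates directly into 3x to 5x more output from the same building. A toy illustration with made-up, normalized numbers (the budget and baseline rate are hypothetical; only the 3x-5x multipliers come from the call):

```python
def throughput_in_budget(power_budget_mw, tokens_per_sec_per_mw):
    """AI throughput achievable inside a fixed data-center power budget."""
    return power_budget_mw * tokens_per_sec_per_mw

BUDGET_MW = 100.0    # hypothetical power-limited facility
HOPPER_RATE = 1.0    # normalized baseline: 1 throughput unit per MW

hopper = throughput_in_budget(BUDGET_MW, HOPPER_RATE)
blackwell_low = throughput_in_budget(BUDGET_MW, HOPPER_RATE * 3)   # 3x claim
blackwell_high = throughput_in_budget(BUDGET_MW, HOPPER_RATE * 5)  # 5x claim

print(hopper, blackwell_low, blackwell_high)  # 100.0 300.0 500.0
```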

The third is NVLink. This is a very big deal -- its all-to-all GPU switch is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in one rack. Just to put that in perspective, that's about 10 times higher than Hopper. And 259 terabytes per second makes sense, because you need to boost the training of multi-trillion-parameter models on trillions of tokens, and that amount of data needs to be moved around from GPU to GPU.
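The bandwidth figures quoted above can be sanity-checked with simple division. The per-GPU share below is an illustrative derivation from the call's aggregate numbers, not a quoted hardware spec:

```python
AGGREGATE_TBPS = 259     # aggregate NVLink bandwidth per rack (from the call)
GPUS_PER_DOMAIN = 144    # GPUs in one NVLink domain (72 GB200 packages)

per_gpu_tbps = AGGREGATE_TBPS / GPUS_PER_DOMAIN   # implied share per GPU
hopper_equiv_tbps = AGGREGATE_TBPS / 10           # "about 10 times higher than Hopper"

print(f"~{per_gpu_tbps:.2f} TB/s per GPU")                   # ~1.80 TB/s
print(f"~{hopper_equiv_tbps:.1f} TB/s Hopper-era aggregate")  # ~25.9 TB/s
```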

For inference, NVLink is vital for low-latency, high-throughput large language model token generation. We now have three networking platforms: NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much bigger than before.

Generative AI momentum is accelerating. Generative AI frontier model makers are racing to scale to the next AI plateau to increase model safety and IQ. We're also scaling to understand more modalities from text, images, and video to 3D, physics, chemistry, and biology. Chatbots, coding AIs, and image generators are growing fast, but it's just the tip of the iceberg.

Internet services are deploying generative AI for large-scale recommenders, ad targeting, and search systems. AI start-ups are consuming tens of billions of dollars of CSPs' cloud capacity yearly, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. And NVIDIA AI and NVIDIA Omniverse are opening up the next era of AI: general robotics.

And now the enterprise AI wave has started, and we're poised to help companies transform their businesses. The NVIDIA AI Enterprise platform consists of NeMo, NIMs, NIM Agent Blueprints, and AI Foundry, which our ecosystem partners, the world-leading IT companies, use to help companies customize AI models and build bespoke AI applications.

Enterprises can then deploy on the NVIDIA AI Enterprise runtime. At $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere, and NVIDIA's software TAM can be significant as the CUDA-compatible GPU installed base grows from millions to tens of millions. And as Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate. Thank you all for joining us today.
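The software numbers quoted here imply a rough scale. Dividing the exit run rate by the per-GPU subscription price gives an equivalent license count; this is a back-of-envelope illustration only, since not all of that run rate is necessarily AI Enterprise subscriptions:

```python
PRICE_PER_GPU_PER_YEAR = 4_500      # USD, NVIDIA AI Enterprise (from the call)
EXIT_RUN_RATE_USD = 2_000_000_000   # USD/year software run rate (from the call)

implied_gpu_years = EXIT_RUN_RATE_USD / PRICE_PER_GPU_PER_YEAR
print(f"~{implied_gpu_years:,.0f} GPU-year subscription equivalents")  # ~444,444
```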

Operator

Ladies and gentlemen, this concludes today's call and we thank you for your participation. You may now disconnect.
