2025-08-27 NVIDIA Corporation (NVDA) Q2 2026 Earnings Call Transcript

NVIDIA Corporation (NASDAQ:NVDA) Q2 2026 Earnings Conference Call August 27, 2025 5:00 PM ET

Company Participants

Colette M. Kress - Executive VP & CFO
Jen-Hsun Huang - Co-Founder, CEO, President & Director
Toshiya Hari - Vice President of Investor Relations & Strategic Finance

Conference Call Participants

Aaron Christopher Rakers - Wells Fargo Securities, LLC, Research Division
Benjamin Alexander Reitzes - Melius Research LLC
Christopher James Muse - Cantor Fitzgerald & Co., Research Division
James Edward Schneider - Goldman Sachs Group, Inc., Research Division
Joseph Lawrence Moore - Morgan Stanley, Research Division
Stacy Aaron Rasgon - Sanford C. Bernstein & Co., LLC., Research Division
Timothy Michael Arcuri - UBS Investment Bank, Research Division
Vivek Arya - BofA Securities, Research Division

Operator

Good afternoon. My name is Sarah, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA’s Second Quarter Fiscal 2026 Financial Results Conference Call. [Operator Instructions] Thank you. Toshiya Hari, you may begin your conference.

Toshiya Hari

Thank you. Good afternoon, everyone, and welcome to NVIDIA’s conference call for the second quarter of fiscal 2026. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

I’d like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2026. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 27, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

With that, let me turn the call over to Colette.

Colette M. Kress

Thank you, Toshiya. We delivered another record quarter while navigating what continues to be a dynamic external environment. Total revenue was $46.7 billion, exceeding our outlook, as we grew sequentially across all market platforms. Data center revenue grew 56% year-over-year. Data center revenue also grew sequentially despite the $4 billion decline in H20 revenue. NVIDIA’s Blackwell platform reached record levels, growing sequentially by 17%. We began production shipments of GB300 in Q2. Our full stack AI solutions for cloud service providers, neoclouds, enterprises and sovereigns are all contributing to our growth.

We are at the beginning of an industrial revolution that will transform every industry. We see $3 trillion to $4 trillion in AI infrastructure spend by the end of the decade. The scale and scope of these build-outs present significant long-term growth opportunities for NVIDIA.

The GB200 NVL system is seeing widespread adoption with deployments at CSPs and consumer Internet companies. Lighthouse model builders, including OpenAI, Meta and Mistral, are using the GB200 NVL72 at data center scale, both for training next-generation models and for serving inference models in production.

The new Blackwell Ultra platform also had a strong quarter, generating tens of billions in revenue. The transition to the GB300 has been seamless for major cloud service providers because it shares its architecture, software and physical footprint with the GB200, enabling them to build and deploy GB300 racks with ease. Factory builds in late July and early August were successfully converted to support the GB300 ramp, and today, full production is underway. The current run rate is back at full speed, producing approximately 1,000 racks per week. This output is expected to accelerate further throughout the third quarter as additional capacity comes online.

We expect widespread market availability in the second half of the year. CoreWeave is preparing to bring its GB300 instance to market and is already seeing 10x more inference performance on reasoning models compared with the H100. Compared to the previous Hopper generation, GB300 NVL72 AI factories promise a 10x improvement in tokens per watt of energy efficiency, which translates to revenue, as data centers are power limited.
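
The last point, that efficiency gains translate to revenue when power is the binding constraint, can be made concrete with a back-of-envelope calculation. In the sketch below, the power budget, token pricing and absolute efficiency numbers are hypothetical placeholders; only the roughly 10x tokens-per-watt relationship versus Hopper comes from the remarks above.

```python
# Back-of-envelope sketch of a power-limited AI factory's revenue capacity.
# All absolute figures below are hypothetical placeholders, not NVIDIA
# numbers; only the "10x tokens per watt vs. Hopper" relationship is from
# the call.

SECONDS_PER_YEAR = 3600 * 24 * 365

def annual_token_revenue(power_mw: float, tokens_per_joule: float,
                         usd_per_million_tokens: float) -> float:
    """Yearly token revenue a fixed power budget can support."""
    joules = power_mw * 1e6 * SECONDS_PER_YEAR   # MW -> joules per year
    tokens = joules * tokens_per_joule
    return tokens / 1e6 * usd_per_million_tokens

# Same 100 MW budget; 10x the tokens per joule means 10x the sellable tokens.
hopper = annual_token_revenue(100, tokens_per_joule=1.0, usd_per_million_tokens=2.0)
gb300 = annual_token_revenue(100, tokens_per_joule=10.0, usd_per_million_tokens=2.0)
print(gb300 / hopper)  # 10.0
```

Under a fixed power budget, revenue scales linearly with tokens per joule, which is why the efficiency claim is framed as a revenue claim.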

The chips of the Rubin platform are in fab: the Vera CPU, Rubin GPU, CX9 SuperNIC, NVLink 144 scale-up switch, Spectrum-X scale-out and scale-across switch, and the silicon photonics processor. Rubin remains on schedule for volume production next year. Rubin will be our third-generation NVLink rack-scale AI supercomputer with a mature and full-scale supply chain. This keeps us on track with our pace of an annual product cadence and continuous innovation across compute, networking, systems and software.

In late July, the U.S. government began reviewing licenses for sales of H20 to China customers. While a select number of our China-based customers have received licenses over the past few weeks, we have not shipped any H20 based on those licenses. USG officials have expressed an expectation that the USG will receive 15% of the revenue generated from licensed H20 sales, but to date, the USG has not published a regulation codifying such a requirement.

We have not included H20 in our Q3 outlook as we continue to work through geopolitical issues. If geopolitical issues recede, we should ship $2 billion to $5 billion in H20 revenue in Q3. And if we had more orders, we can build more. We continue to advocate for the U.S. government to approve Blackwell for China. Our products are designed and sold for beneficial commercial use, and every licensed sale we make will benefit the U.S. economy and U.S. leadership. In highly competitive markets, we want to win the support of every developer. America’s AI technology stack can be the world’s standard if we race and compete globally.

Notable in the quarter was an increase in H100 and H200 shipments. We also sold approximately $650 million of H20 in Q2 to an unrestricted customer outside of China. The sequential increase in Hopper demand indicates the breadth of data center workloads that run on accelerated computing and the power of CUDA libraries and full stack optimizations, which continuously enhance the performance and economic value of our platform.

As we continue to deliver both Hopper and Blackwell GPUs, we are focused on meeting soaring global demand. This growth is fueled by capital expenditures from the cloud to enterprises, which are on track to invest $600 billion in data center infrastructure and compute this calendar year alone, nearly doubling in 2 years. We expect annual AI infrastructure investments to continue growing, driven by several factors: reasoning and agentic AI requiring orders of magnitude more training and inference compute, global build-outs for sovereign AI, enterprise AI adoption, and the arrival of physical AI and robotics.

Blackwell has set the benchmark as the new standard for AI inference performance. The market for AI inference is expanding rapidly, with reasoning and agentic AI gaining traction across industries. Blackwell’s rack-scale NVLink and CUDA full stack architecture addresses this by redefining the economics of inference. New NVFP4 4-bit precision and NVLink 72 on the GB300 platform deliver a 50x increase in energy efficiency per token compared to Hopper, enabling companies to monetize their compute at unprecedented scale. For instance, a $3 million investment in GB200 infrastructure can generate $30 million in token revenue, a 10x return.
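
The worked example above is a single multiplication; the sketch below just makes the arithmetic explicit. Only the $3 million and $30 million figures come from the remarks; the helper function is illustrative.

```python
# The worked example from the call: $3M of GB200 infrastructure generating
# $30M in token revenue. Those two figures come from the call; the helper
# function itself is illustrative.

def token_roi(capex_usd: float, token_revenue_usd: float) -> float:
    """Revenue multiple earned on infrastructure capex."""
    return token_revenue_usd / capex_usd

multiple = token_roi(capex_usd=3_000_000, token_revenue_usd=30_000_000)
print(f"{multiple:.0f}x return")  # 10x return
```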

NVIDIA software innovation, combined with the strength of our developer ecosystem, has already improved Blackwell’s performance by more than 2x since its launch. Advances in CUDA, TensorRT-LLM and Dynamo are unlocking maximum efficiency. CUDA library contributions from the open source community, along with NVIDIA’s open libraries and frameworks, are now integrated into millions of workflows. This powerful flywheel of collaborative innovation between NVIDIA and the global community strengthens NVIDIA’s performance leadership. NVIDIA is a top contributor to OpenAI models, data and software.

Blackwell has introduced a groundbreaking numerical approach to large language model pretraining. Using NVFP4 computation, the GB300 can now achieve 7x faster training than the H100, which uses FP8. This innovation delivers the accuracy of 16-bit precision with the speed and efficiency of 4-bit, setting a new standard for AI factory efficiency and scalability.
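
The remarks do not describe the NVFP4 format itself. The sketch below illustrates the general mechanism behind 4-bit training formats, low-precision values paired with higher-precision per-block scale factors, and is an illustration rather than NVIDIA's implementation.

```python
import numpy as np

# Generic sketch of blockwise 4-bit quantization with per-block scale
# factors, the broad idea behind formats like NVFP4. The exact NVFP4 spec
# is not described on the call; this is illustrative only.

def quantize_4bit(x: np.ndarray, block: int = 16):
    """Quantize to signed 4-bit integers in [-7, 7], one scale per block."""
    blocks = x.reshape(-1, block)
    scale = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scale[scale == 0] = 1.0                      # guard all-zero blocks
    q = np.clip(np.round(blocks / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize_4bit(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scale).ravel()

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_4bit(w)
err = float(np.abs(dequantize_4bit(q, s) - w).mean())
# Per-block scales keep mean reconstruction error small despite 4-bit storage.
```

The per-block scale is what lets a 4-bit payload track the dynamic range of much wider formats, which is the intuition behind "16-bit accuracy at 4-bit speed."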

The AI industry is quickly adopting this revolutionary technology with major players such as AWS, Google Cloud, Microsoft Azure and OpenAI as well as Cohere, Mistral, Kimi AI, Perplexity, Reflection and Runway already embracing it. NVIDIA’s performance leadership was further validated in the latest MLPerf Training benchmarks, where the GB200 delivered a clean sweep. Be on the lookout for the upcoming MLPerf Inference results in September, which will include benchmarks based on the Blackwell Ultra.

NVIDIA RTX PRO servers are in full production with the world’s system makers. These are air-cooled, PCIe-based systems that integrate seamlessly into standard IT environments and run traditional enterprise IT applications as well as the most advanced agentic and physical AI applications. Nearly 90 companies, including many global leaders, are already adopting RTX PRO servers. Hitachi uses them for real-time simulation and digital twins, Lilly for drug discovery, Hyundai for factory design and AV validation, and Disney for immersive storytelling. As enterprises modernize data centers, RTX PRO servers are poised to become a multibillion-dollar product line.

Sovereign AI is on the rise, as a nation’s ability to develop its own AI using domestic infrastructure, data and talent presents a significant opportunity for NVIDIA. NVIDIA is at the forefront of landmark initiatives across the U.K. and Europe. The European Union plans to invest EUR 20 billion to establish 20 AI factories across France, Germany, Italy and Spain, including 5 gigafactories, to increase its AI compute infrastructure by tenfold.

In the U.K., the Isambard-AI supercomputer powered by NVIDIA was unveiled as the country’s most powerful AI system, delivering 21 exaflops of AI performance to accelerate breakthroughs in fields such as drug discovery and climate modeling. We are on track to achieve over [ 20 billion ] in Sovereign AI revenue this year, more than double that of last year.

Networking delivered record revenue of $7.3 billion, as the escalating demands of AI compute clusters necessitate high-efficiency, low-latency networking. This represents a 46% sequential and 98% year-on-year increase, with strong demand across Spectrum-X Ethernet, InfiniBand and NVLink. Our Spectrum-X enhanced Ethernet solutions provide the highest-throughput and lowest-latency network for Ethernet AI workloads. Spectrum-X Ethernet delivered double-digit sequential and year-over-year growth, with annualized revenue exceeding $10 billion. At Hot Chips, we introduced Spectrum-XGS Ethernet, a technology designed to unify disparate data centers into giga-scale AI super factories. [ CoreWeave ] is an initial adopter of the solution, which is projected to double GPU-to-GPU communication speed.

InfiniBand revenue nearly doubled sequentially, fueled by the adoption of XDR technology, which doubles bandwidth over its predecessor, especially valuable for model builders. The world’s fastest switch, NVLink, with 14x the bandwidth of PCIe Gen 5, delivered strong growth as customers deployed Grace Blackwell NVLink rack-scale systems.
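
As a sanity check, the 14x figure is consistent with commonly cited link rates, assuming fifth-generation NVLink at roughly 1.8 TB/s per GPU and a PCIe Gen 5 x16 slot at roughly 128 GB/s bidirectional; those two rates are assumptions, not stated on the call.

```python
# Sanity check on the "14x the bandwidth of PCIe Gen 5" claim using commonly
# cited link rates (assumed figures; the call states only the 14x ratio).

nvlink5_gb_s = 1800     # fifth-gen NVLink per GPU, bidirectional, GB/s (assumed)
pcie5_x16_gb_s = 128    # PCIe Gen 5 x16, bidirectional, ~64 GB/s each way (assumed)

ratio = nvlink5_gb_s / pcie5_x16_gb_s
print(f"{ratio:.0f}x")  # 14x
```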

The positive reception to NVLink Fusion, which allows semi-custom AI infrastructure, has been widespread. Japan’s upcoming FugakuNEXT will integrate Fujitsu’s CPUs with our architecture via NVLink Fusion. It will run a range of workloads, including AI, supercomputing and quantum computing. FugakuNEXT joins a rapidly expanding list of leading quantum supercomputing and research centers running on NVIDIA’s CUDA-Q quantum platform, including [ ULIC ], AIST, [ NNF ] and NERSC, supported by over 300 ecosystem partners, including AWS, Google Quantum AI, Quantinuum, QuEra and PsiQuantum.

Jetson Thor, our new robotics computing platform, is now available. Thor delivers an order of magnitude greater AI performance and energy efficiency than NVIDIA AGX Orin. It runs the latest generative and reasoning AI models at the edge in real time, enabling state-of-the-art robotics.

Adoption of NVIDIA’s robotics full stack platform is growing at a rapid rate, with over 2 million developers and 1,000-plus hardware, software, application and sensor partners taking our platform to market. Leading enterprises across industries have adopted Thor, including Agility Robotics, Amazon Robotics, Boston Dynamics, Caterpillar, Figure, Hexagon, Medtronic and Meta.

Robotic applications require exponentially more compute on the device and in infrastructure, representing a significant long-term demand driver for our data center platform. NVIDIA Omniverse with Cosmos is our data center physical AI digital twin platform built for the development of robots and robotic systems. This quarter, we announced a major expansion of our partnership with Siemens to enable AI-automated factories. Leading European robotics companies, including Agile Robots, NEURA Robotics and Universal Robots, are building their latest innovations with the Omniverse platform.

Transitioning to a quick summary of our revenue by geography. China declined on a sequential basis to low single-digit percentage of data center revenue. Note, our Q3 outlook does not include H20 shipments to China customers. Singapore revenue represented 22% of second quarter’s billed revenue as customers have centralized their invoicing in Singapore. Over 99% of data center compute revenue billed to Singapore was for U.S.-based customers.

Our gaming revenue was a record $4.3 billion, a 14% sequential increase and a 49% jump year-on-year. This was driven by the ramp of Blackwell GeForce GPUs, with strong sales continuing as we increased supply availability. This quarter, we shipped the GeForce RTX 5060 desktop GPU. It brings double the performance along with advanced ray tracing, neural rendering and AI-powered DLSS 4 gameplay to millions of gamers worldwide. Blackwell is coming to GeForce NOW in September. This is GeForce NOW’s most significant upgrade, offering RTX 5080-class performance, minimal latency and 5K resolution at 120 frames per second. We are also doubling the GeForce NOW catalog to over 4,500 titles, the largest library of any cloud gaming service.

For AI enthusiasts, on-device AI performs best on RTX GPUs. We partnered with OpenAI to optimize their open source GPT models for high-quality, fast and efficient inference on millions of RTX-enabled Windows devices. With the RTX platform stack, Windows developers can create AI applications designed to run on the world’s largest AI PC user base.

Professional Visualization revenue reached $601 million, a 32% year-on-year increase. Growth was driven by adoption of high-end RTX workstation GPUs and AI-powered workloads like design, simulation and prototyping. Key customers are leveraging our solutions to accelerate their operations: Activision Blizzard uses RTX workstations to enhance creative workflows, while robotics innovator Figure AI powers its humanoid robots with RTX-embedded GPUs.

Automotive revenue, which includes only in-car compute revenue, was $586 million, up 69% year-on-year, primarily driven by self-driving solutions. We have begun shipments of the NVIDIA Thor SoC, the successor to Orin. Thor’s arrival coincides with the industry’s accelerating shift to vision language model architectures, generative AI and higher levels of autonomy. Thor is the most successful robotics and AV computer we’ve ever created. Thor will power our full stack Drive AV software platform, which is now in production, opening up billions in new revenue opportunities for NVIDIA while improving vehicle safety and autonomy.

Now moving to the rest of our P&L. GAAP gross margin was 72.4% and non-GAAP gross margin was 72.7%. These figures include a $180 million, or 40 basis point, benefit from releasing previously reserved H20 inventory. Excluding this benefit, non-GAAP gross margin would have been 72.3%, still exceeding our outlook. Operating expenses rose 8% sequentially on a GAAP basis and 6% on a non-GAAP basis. This increase was driven by higher compute and infrastructure costs as well as higher compensation and benefit costs. To support the ramp of Blackwell and Blackwell Ultra, inventory increased sequentially from $11 billion to $15 billion in Q2.

While we prioritize funding our growth and strategic initiatives, in Q2, we returned $10 billion to shareholders through share repurchases and cash dividends. Our Board of Directors recently approved a $60 billion share repurchase authorization to add to our remaining $14.7 billion of authorization at the end of Q2.

Okay. Let me turn it to the outlook for the third quarter. Total revenue is expected to be $54 billion, plus or minus 2%. This represents over $7 billion in sequential growth. Again, we do not assume any H20 shipments to China customers in our outlook. GAAP and non-GAAP gross margins are expected to be 73.3% and 73.5%, respectively, plus or minus 50 basis points. We continue to expect to exit the year with non-GAAP gross margins in the mid-70s. GAAP and non-GAAP operating expenses are expected to be approximately $5.9 billion and $4.2 billion, respectively. For the full year, we expect operating expenses to grow in the high 30s range year-over-year, up from our prior expectation of the mid-30s. We are accelerating investments in the business to address the magnitude of growth opportunities that lie ahead.

GAAP and non-GAAP other income and expenses are expected to be an income of approximately $500 million, excluding gains and losses from non-marketable and publicly held equity securities. GAAP and non-GAAP tax rates are expected to be 16.5%, plus or minus 1%, excluding any discrete items. Further financial data are included in the CFO commentary and other information available on our website.

In closing, let me highlight upcoming events for the financial community. We will be at the Goldman Sachs Technology Conference on September 8 in San Francisco. Our annual NDR will commence the first part of October. GTC D.C. begins on October 27, with Jensen’s keynote scheduled for the 28th. We look forward to seeing you at these events. Our earnings call to discuss the results of our third quarter of fiscal 2026 is scheduled for November 19. We will now open the call for questions. Operator, would you please poll for questions?

Question-and-Answer Session

Operator

[Operator Instructions] Your first question comes from CJ Muse with Cantor Fitzgerald.

Christopher James Muse

I guess with wafer-in to rack-out lead times of 12 months, you confirmed on the call today that Rubin is on track to ramp in the second half. And obviously, many of these investments are multiyear projects contingent upon power, cooling, et cetera. I was hoping you could take a high-level view and speak to your vision for growth into 2026. And as part of that, if you could comment on networking versus data center, that would be very helpful.

Jen-Hsun Huang

Yes. Thanks, CJ. At the highest level, the growth driver would be the evolution, the introduction, if you will, of reasoning and agentic AI. Where chatbots used to be one shot, you give it a prompt and it would generate the answer, now the AI does research. It thinks and makes a plan, and it might use tools. And so it’s called long thinking; and the longer it thinks, oftentimes, it produces better answers.

And the amount of computation necessary for one shot versus reasoning agentic AI models could be 100x, 1,000x and potentially even more, given the amount of research, basically reading and comprehension, that it goes off to do. And so the amount of computation required by agentic AI has grown tremendously. And of course, the effectiveness has also grown tremendously. Because of agentic AI, the amount of hallucination has dropped significantly. AI can now use tools and perform tasks. Enterprise use cases have been opened up. As a result of agentic AI and vision language models, we are now seeing a breakthrough in physical AI, in robotics and autonomous systems. So over the last year, AI has made tremendous progress, and agentic systems, reasoning systems, are completely revolutionary.
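
The 100x-to-1,000x range can be made concrete with a simple token-budget comparison. All token counts below are hypothetical, chosen only to illustrate how multi-pass research and long context inflate inference compute relative to a one-shot answer.

```python
# Rough token-budget comparison: one-shot chat vs. a reasoning/agentic run.
# Every token count here is a hypothetical placeholder for illustration only.

def compute_cost(prompt_toks: int, generated_toks: int) -> int:
    """Proxy for inference compute: total tokens processed in one pass."""
    return prompt_toks + generated_toks

one_shot = compute_cost(prompt_toks=500, generated_toks=500)

# An agentic run: a plan, tool calls whose results are re-read,
# long chain-of-thought, and a final synthesis pass.
agentic = sum(
    compute_cost(prompt_toks=p, generated_toks=g)
    for p, g in [(2_000, 4_000),     # research / planning pass
                 (20_000, 8_000),    # reading retrieved documents
                 (30_000, 16_000),   # long "thinking" over tool results
                 (60_000, 10_000)]   # synthesis over accumulated context
)
print(agentic // one_shot)  # 150 -- on the order of 100x a one-shot answer
```

Adding more tool-call rounds or longer retrieved context pushes the multiple toward the 1,000x end of the range.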

Now we built the Blackwell NVLink 72 system, a rack-scale computing system, for this moment. We’ve been working on it for several years. This last year, we transitioned from NVLink 8, which is node-scale computing, where each node is a computer, to NVLink 72, where each rack is a computer. That disaggregation of NVLink 72 into a rack-scale system was extremely hard to do, but the results are extraordinary. We’re seeing orders-of-magnitude speedups, and therefore energy efficiency, and therefore cost-effectiveness of token generation, because of NVLink 72.

And so over the next couple of years – well, you asked about the longer term. Over the next 5 years, we’re going to scale with Blackwell, with Rubin and with follow-ons into effectively a $3 trillion to $4 trillion AI infrastructure opportunity. Over the last couple of years, you have seen that CapEx at just the top 4 CSPs has doubled, growing to about $600 billion. So we are at the beginning of this build-out, and advances in AI technology have really enabled AI to be adopted by, and solve problems for, many different industries.

Operator

Your next question comes from Vivek Arya with Bank of America Securities.

Vivek Arya

Colette, I just wanted to clarify the $2 billion to $5 billion in China. What needs to happen? And what is the sustainable pace of that China business as you get into Q4?

And then, Jensen, for you on the competitive landscape. Several of your large customers already have or are planning many ASIC projects. I think one of your ASIC competitors, Broadcom, signaled that they could grow their AI business almost 55%, 60% next year. Is there any scenario in which you see the market moving more towards ASICs and away from NVIDIA GPUs? Just what are you hearing from your customers? How are they managing this split between the use of merchant silicon and ASICs?
然后,Jensen,我想请你谈谈竞争格局。你们的几家大型客户已经在做或计划做很多 ASIC 项目。我认为你们的一家 ASIC 竞品 Broadcom 表示其明年的 AI 业务或将增长近 55% 到 60%。你是否看到某些情形下市场会更多转向 ASICs、而弱化 NVIDIA GPU 的使用?你从客户那里听到的反馈是什么?他们如何在通用芯片(merchant silicon)与 ASICs 之间进行分配管理?

Colette M. Kress

Thanks, Vivek. So let me first answer your question regarding what it will take for the H20s to be shipped. There is interest in our H20s. There is the initial set of licenses that we received. And then additionally, we do have supply that we are ready to ship, and that’s why we communicated that somewhere in the range of about $2 billion to $5 billion this quarter, we could potentially ship.
谢谢你,Vivek。先回答关于 H20 何时可以发货的问题。市场对我们的 H20 是有兴趣的;我们也已经拿到第一批许可证。此外,我们也具备可立即供货的产能,这就是为什么我们表示本季度潜在可发运约 20 亿至 50 亿美元的 H20。

We’re still waiting on several of the geopolitical issues going back and forth between the governments and the companies trying to determine their purchases and what they want to do. So it’s still open at this time, and we’re not exactly sure what that full amount will be this quarter. However, if more interest arrives and more licenses arrive, again, we can also still build additional H20s and ship more as well.
我们仍在等待若干地缘政治事项的推进;各国政府与公司之间在敲定采购与下一步安排。因此目前仍不确定,本季度最终金额也无法精确判断。不过,如果有更多需求、更多许可证到位,我们也能追加生产 H20 并交付更多。

Jen-Hsun Huang

NVIDIA builds very different things than ASICs. So let’s talk about ASICs first. A lot of projects are started. Many start-up companies are created. Very few products go into production. And the reason for that is it’s really hard. Accelerated computing is unlike general-purpose computing. You don’t write software and just compile it into a processor. Accelerated computing is a full-stack co-design problem. And AI factories in the last several years have become so much more complex because the scale of the problems has grown so significantly. It is really the ultimate, the most extreme computer science problem the world’s ever seen, obviously.
NVIDIA 打造的东西与 ASICs 截然不同。先谈 ASICs:很多项目会启动,很多初创公司会成立,但真正进入量产的产品很少,原因在于这件事非常难。加速计算不同于通用计算(general-purpose computing),不是写完软件直接编译到处理器上就行。加速计算是一个全栈协同设计的问题。过去几年里,随着问题规模急剧扩大,AI factories 变得复杂得多。这显然已经是计算机科学领域前所未有的“终极难题”。

And so the stack is complicated. The models are changing incredibly fast, from generative based on autoregressive to generative based on diffusion to mixed models to multi-modality. The number of different models that are coming out that are either derivatives of transformers or evolutions of transformers is just daunting.
因此软件栈非常复杂。模型演进极其迅速:从基于自回归(autoregressive)的 generative,到基于 diffusion 的 generative,再到混合模型与多模态;基于 transformer 的各种分支或演化模型层出不穷,令人目不暇接。

One of the advantages that we have is that NVIDIA is available in every cloud. We’re available from every computer company. We’re available from the cloud to on-prem to edge to robotics on the same programming model. And so it’s sensible that every framework in the world supports NVIDIA.
我们的优势之一是:NVIDIA 覆盖每一家云,出现在每一家计算机公司;同一编程模型贯穿云、本地部署(on-prem)、边缘到机器人。因此,全球所有框架都支持 NVIDIA 是合理的。

When you’re building a new model architecture, releasing it on NVIDIA is most sensible. And so the diversity of our platform, both in the ability to evolve into any architecture, the fact that we’re everywhere, and also, we accelerate the entire pipeline, everything from data processing to pretraining to post training with reinforcement learning, all the way out to inference. And so when you build a data center with NVIDIA platform in it, the utility of it is best. The lifetime usefulness is much, much longer.
当你在构建新的模型架构时,首先在 NVIDIA 上发布是最明智的。我们的平台既能演进适配任何架构,又无处不在;并且我们加速的是整条流水线:从数据处理、预训练、带强化学习的后训练,一直到推理。因此,用 NVIDIA 平台建设的数据中心,其效用最佳、使用寿命也要长得多。

And then I would just say that in addition to all of that – and it’s just a really extremely complex systems problem anymore. People talk about the chip itself. There’s one ASIC, the GPU that many people talk about. But in order to build Blackwell the platform and Rubin the platform, we had to build CPUs that connect fast memory, low – extremely energy-efficient memory for large KB caching necessary for agentic AI to the GPU to a SuperNIC to a scale up switch, we call NVLink, completely revolutionary, we’re in our fifth generation now, to a scale out switch, whether it’s Quantum or Spectrum-X Ethernet, to now scale across switches so that we could prepare for these AI super factories with multiple gigawatts of computing all connected together. We call that Spectrum-XGS. We just announced that at Hot Chips this week. And so the complications, the complexity of everything that we do is really quite extraordinary. It’s just done at a really, really extreme scale now.
此外,这已经不仅是“芯片本身”的事,而是一个极其复杂的系统工程。大家谈论的一种 ASIC,就是 GPU。但要把 Blackwell 和 Rubin 打造成“平台”,我们必须同时打造 CPU(连接高速、极高能效、满足 agentic AI 所需的大规模 KB 级缓存的内存)、GPU、SuperNIC、纵向扩展交换(scale-up switch,我们称为 NVLink,目前已到第五代,属于颠覆性技术)、横向扩展交换(scale-out switch,无论是 Quantum 还是 Spectrum-X Ethernet),再到“跨域扩展交换”(scale-across switches),为多吉瓦级算力互联的 AI 超级工厂做准备。我们称之为 Spectrum-XGS,本周刚在 Hot Chips 上发布。因此,我们所做的一切在复杂性上都非同寻常,而且是以极端的规模来实现。

And then lastly, if I could just say one more thing, we’re in every cloud for a good reason. Not only are we the most energy efficient. Our perf per watt is the best of any computing platform. And in a world of power-limited data centers, perf per watt drives directly to revenues. And you’ve heard me say before that, in a lot of ways, the more you buy, the more you grow. And because our perf per dollar, the performance per dollar is so incredible, you also have extremely great margins.
最后再补充一点:我们之所以无处不在,是有充分理由的——不仅能效最高,我们的每瓦性能(perf per watt)在所有计算平台中最佳。在受电力约束的数据中心世界里,perf per watt 会直接转化为收入。我也多次说过,在很多方面,“买得越多,增长越快”。而且由于我们的每美元性能(perf per dollar)也极其出色,客户还能获得非常可观的利润率。

So the growth opportunity with NVIDIA’s architecture and the gross margins opportunity with NVIDIA’s architecture is absolutely the best. And so there’s a lot of reasons why NVIDIA is chosen by every cloud and every start-up and every computer company. We’re really a holistic full-stack solution for AI factories.
因此,就增长机会与毛利空间而言,基于 NVIDIA 架构的选择都是最优解。这也是为什么每一家云、每一家初创公司与每一家计算机公司都会选择 NVIDIA——我们提供的是面向 AI factories 的整体性全栈解决方案。

Operator
主持人

Your next question comes from Ben Reitzes with Melius.
下一位提问来自 Melius 的 Ben Reitzes。

Benjamin Alexander Reitzes

Jensen, I wanted to ask you about your $3 trillion to $4 trillion in data center infrastructure spend by the end of the decade. Previously, you talked about something in the $1 trillion range, which I believe was just for compute by 2028. If you take past comments, $3 trillion to $4 trillion would imply maybe $2 trillion plus in compute spend. And just wanted to know if that was right and that’s what you’re seeing by the end of the decade. And wondering what you think your share will be of that. Your share right now of total
Jensen,我想请教你关于到本十年末数据中心基础设施支出达到 $3 trillion 到 $4 trillion 的判断。此前你提到过大约 $1 trillion 的规模,我理解那只是到 2028 年用于 compute 的支出。如果参照你过去的说法,$3 trillion 到 $4 trillion 或许意味着超过 $2 trillion 的 compute 支出。我想确认这是否正确,以及这是否是你对本十年末的预期。另外,你认为其中属于你们的份额会有多大。你们目前在 total 的份额——

infrastructure compute-wise is very high, so I wanted to see. And also if there’s any bottlenecks you’re concerned about like power to get to the $3 trillion to $4 trillion.
在基础设施的 compute 端非常高,所以我想了解一下。另一个问题是,为实现 $3 trillion 到 $4 trillion,你是否担心诸如电力等方面的瓶颈?

Jen-Hsun Huang

Thanks. As you know, the CapEx of just the top 4 hyperscalers has doubled in 2 years. As the AI revolution went into full steam, as the AI race is now on, the CapEx spend has doubled to $600 billion per year. There’s 5 years between now and the end of the decade, and $600 billion only represents the top 4 hyperscalers. We still have the rest of the enterprise companies building on-prem. You have cloud service providers building around the world. United States represents about 60% of the world’s compute. And over time, you would think that artificial intelligence would reflect GDP scale and growth and so – and would be, of course, accelerating GDP growth.
谢谢。如你所知,前四大 hyperscalers 的 CapEx 在两年内翻了一倍。随着 AI 革命全面加速、AI 竞赛开启,CapEx 年支出已翻倍至 $600 billion。距离本十年结束还有 5 年,而这 $600 billion 只代表前四大 hyperscalers。除此之外,还有大量企业在自建 on-prem,还有全球范围内的云服务商在建设。United States 约占全球算力的 60%。从长期看,人工智能应当反映 GDP 的体量与增长,并且当然会加速 GDP 增长。

And so our contribution to that is a large part of the AI infrastructure. Out of a gigawatt AI factory, which can cost, plus or minus 10%, let’s say $50 billion to $60 billion, we represent about $35 billion, plus or minus, of that – $35 billion out of $50 billion per gigawatt data center.
因此,我们对其中的贡献覆盖了 AI 基础设施的很大一部分。以 1 吉瓦(gigawatt)的 AI 工厂为例,总投入大约 $50 billion,上下浮动 10%,也就是 $50 billion 到 $60 billion,其中约有 $35 billion(上下浮动)由我们提供——换言之,每 1 吉瓦数据中心里的 $50 billion 中,大约 $35 billion 与我们相关。

And of course, what you get for that is not a GPU. I think people – we’re famous for building the GPU and inventing the GPU, but as you know, over the last decade, we’ve really transitioned to become an AI infrastructure company. It takes 6 different types of chips just to build a Rubin AI supercomputer. And just to scale that out to a gigawatt, you have hundreds of thousands of GPU compute nodes and a whole bunch of racks. And so we’re really an AI infrastructure company, and we’re hoping to continue to contribute to growing this industry, making AI more useful and then, very importantly, driving the performance per watt, because, as you mentioned, the limiters will always likely be power limitations or building limitations. And so we need to squeeze as much out of that factory as possible.
当然,你因此获得的不只是一个 GPU。大家常说我们因打造并发明 GPU 而闻名,但正如你所知,在过去十年里我们确实转型成为一家 AI 基础设施公司。仅仅构建一台 Rubin AI supercomputer 就需要 6 种不同类型的芯片。将其扩展到 1 吉瓦规模时,你需要数十万个 GPU 计算节点和大量机架。所以我们真正是一家 AI 基础设施公司;我们希望持续助力行业成长、让 AI 更有用,并且非常重要的是提升每瓦性能(performance per watt),因为正如你提到的,未来的限制往往是电力或建设方面的约束。因此我们需要尽可能从那座“工厂”中挤出更多产出。

NVIDIA’s performance per unit of energy used drives the revenue growth of that factory. It directly translates. If you have a 100-megawatt factory, perf per 100 megawatts drives your revenues. It’s tokens per 100 megawatts of factory. In our case also, the performance per dollar spent is so high that your gross margins are also the best. But anyhow, these are the limiters going forward, and $3 trillion to $4 trillion is fairly sensible for the next 5 years.
NVIDIA 的单位能耗性能直接推动那座“工厂”的收入增长——是直接转化的。如果你有一座 100 兆瓦(megawatt)的工厂,那么每 100 兆瓦的性能就驱动你的收入;本质上是“每 100 兆瓦可产出的 tokens 数”。同时,在我们的体系下,每美元性能也非常高,因此你的毛利率也会是最优。无论如何,这些都是未来的关键约束;而 $3 trillion 到 $4 trillion 的规模,用未来 5 年来衡量,是相当合理的。

Operator
主持人

Next question comes from Joe Moore of Morgan Stanley.
下一位提问来自 Morgan Stanley 的 Joe Moore。

Joseph Lawrence Moore

Great. Congratulations on reopening the China opportunity. Can you talk about the long-term prospects there? You’ve talked about, I think, half of AI software world being there. How much can NVIDIA grow in that business? And how important is it that you get the Blackwell architecture ultimately licensed there?
太好了,恭喜你们重启 China 的机会。能否谈谈那里的长期前景?我记得你提到过,AI software 世界大约有一半在那边。NVIDIA 在这块业务上还能增长到什么程度?此外,最终让 Blackwell 架构在那里获得许可有多重要?

Jen-Hsun Huang

The China market, I’ve estimated to be about $50 billion of opportunity for us this year if we were able to address it with competitive products. And if it’s $50 billion this year, you would expect it to grow, say, 50% per year. As the rest of the world’s AI market is growing as well.
China 市场如果我们能够以具有竞争力的产品去覆盖,我估计今年对我们来说大约有 $50 billion 的机会。如果今年是 $50 billion,可以预期其年增长大约 50%,因为全球其他地区的 AI 市场也在增长。

It is the second largest computing market in the world, and it is also the home of AI researchers. About 50% of the world’s AI researchers are in China. The vast majority of the leading open source models are created in China. And so it’s fairly important, I think, for the American technology companies to be able to address that market. And open source, as you know, is created in one country, but it’s used all over the world.
它是全球第二大计算市场,也是 AI researchers 的聚集地。全球大约 50% 的 AI researchers 在 China,绝大多数领先的 open source models 诞生于 China。因此,我认为 American technology companies 能否覆盖该市场非常重要。而且正如你所知,open source 在一个国家创造,却在全世界被使用。

The open source models that have come out of China are really excellent. DeepSeek, of course, gained global notoriety. Qwen is excellent. Kimi’s excellent. There’s a whole bunch of new models that are coming out. They’re multimodal. They’re great language models. And it’s really fueled the adoption of AI in enterprises around the world because enterprises want to build their own custom proprietary software stacks. And so open source model’s really important for enterprise. It’s really important for SaaS who also would like to build proprietary systems. It has been really incredible for robotics around the world.
来自 China 的 open source models 确实非常出色。DeepSeek 自然是全球知名;Qwen 很优秀,Kimi 也很优秀。还有一大批新模型不断涌现,它们具备 multimodal 特性,是很棒的 language models。这实际上推动了全球企业对 AI 的采用,因为企业希望构建自己的定制化 proprietary software stacks。因此 open source model 对 enterprise 至关重要,对希望打造 proprietary systems 的 SaaS 同样至关重要;它也极大地推动了全球 robotics 的发展。

And so open source is really important, and it’s important that the American companies are able to address it. This is – it’s going to be a very large market. We’re talking to the administration about the importance of American companies to be able to address the Chinese market. And as you know, H20 has been approved for companies that are not on the entities list, and many licenses have been approved. And so I think the opportunity for us to bring Blackwell to the China market is a real possibility. And so we just have to keep advocating the sensibility of and the importance of American tech companies to be able to lead and win the AI race and help make the American tech stack the global standard.
因此 open source 极其重要,而且 American companies 能够覆盖这一领域同样重要。这将会是一个非常大的市场。我们正与 the administration 沟通,强调 American companies 能够覆盖 Chinese 市场的重要性。正如你所知,H20 已获准供给不在 entities list 上的公司,且已有许多 licenses 获批。因此,我认为我们将 Blackwell 带到 China 市场是真实存在的可能性。我们需要持续阐明其合理性与重要性,使 American tech companies 能够引领并赢得 AI 竞赛,进而帮助 American tech stack 成为全球标准。

Operator
主持人

Your next question comes from the line of Aaron Rakers with Wells Fargo.
下一位提问来自 Wells Fargo 的 Aaron Rakers。

Aaron Christopher Rakers

Yes. Thank you for the question. I want to go back to the Spectrum-XGS announcement this week and thinking about the Ethernet product now pushing over $10 billion of annualized revenue. Jensen, what is the opportunity set that you see for Spectrum-XGS? Do we think about this as kind of the data center interconnect layer? Any thoughts on the sizing of this opportunity within that Ethernet portfolio?
好的,谢谢这个问题。我想回到本周的 Spectrum-XGS 发布,同时谈谈如今年化收入已超过 100 亿美元的 Ethernet 产品线。Jensen,你怎么看待 Spectrum-XGS 的机会空间?我们是否应将其理解为数据中心的互连层?在整个 Ethernet 产品组合中,这一机会的规模有何判断?

Jen-Hsun Huang

We now offer 3 networking technologies. One is for scale up. One is for scale out and one for scale across. Scale up is so that we could build the largest possible virtual GPU, the virtual compute node. NVLink is revolutionary. NVLink 72 is what made it possible for Blackwell to deliver such an extraordinary generational jump over Hopper’s NVLink 8. At a time when we have long thinking models, agentic AI reasoning systems, the NVLink basically amplifies the memory bandwidth, which is really critical for reasoning systems. And so NVLink 72 is fantastic.
我们现在提供三类网络技术:一类用于 scale up,一类用于 scale out,另一类用于 scale across。scale up 的目的,是构建尽可能大的虚拟 GPU(虚拟计算节点)。NVLink 是颠覆性的;正是 NVLink 72 让 Blackwell 相较 Hopper 的 NVLink 8 实现了如此非凡的代际跃升。在 long thinking 模型、agentic AI 推理系统兴起的当下,NVLink 实质上放大了内存带宽——这对推理系统至关重要。因此,NVLink 72 的表现非常出色。

We then scale out with networking, of which we have 2. We have InfiniBand, which is unquestionably the lowest latency, the lowest jitter, the best scale-out network. It does require more expertise in managing those networks. And for supercomputing, for the leading model makers, InfiniBand, Quantum InfiniBand, is the unambiguous choice. If you were to benchmark an AI factory, the ones with InfiniBand are the best performance.
在 scale out 方面,我们有两种网络:其一是 InfiniBand——无可置疑地具备最低时延、最低抖动,是最佳的横向扩展网络,但在运维管理上需要更高专业度。对于超级计算和领先的模型构建者而言,InfiniBand、尤其是 Quantum InfiniBand,是不言自明的首选。若对 AI 工厂做基准测试,采用 InfiniBand 的系统性能最佳。

For those who would like to use Ethernet because their whole data center is built with Ethernet, we have a new type of Ethernet called Spectrum Ethernet. Spectrum Ethernet is not off the shelf. It has a whole bunch of new technologies designed for low latency and low jitter and congestion control. And it has the ability to come closer, much, much closer to InfiniBand than anything that’s out there. And that is – we call that Spectrum-X Ethernet.
对于希望沿用 Ethernet(其数据中心完全基于以太网建设)的客户,我们提供一种新型以太网——Spectrum Ethernet。Spectrum Ethernet 并非“现成的标准货”(off the shelf),而是集成了面向低时延、低抖动与拥塞控制的一系列新技术;其性能较现有任何以太网方案都更接近 InfiniBand。我们将这一方案称为 Spectrum-X Ethernet。

And then finally, we have Spectrum-XGS, a giga scale for connecting multiple data centers, multiple AI factories into a super factory, a gigantic system. And we’re going to – you’re going to see that networking obviously is very important in AI factories. In fact, choosing the right networking, the performance, the throughput improvement, going from 65% to 85% or 90%, that kind of step-up because of your networking capability effectively makes networking free. Choosing the right networking, you’re basically paying – you’ll get a return on it like you can’t believe because the AI factory, a gigawatt, as I mentioned before, could be $50 billion. And so the ability to improve the efficiency of that factory by tens of percent is – results in $10 billion, $20 billion worth of effective benefit. And so this – the networking is a very important part of it.
最后是 Spectrum-XGS,用于以 giga scale 将多个数据中心、多个 AI 工厂互联成一座超级工厂、一个超大型系统。你会看到,网络在 AI 工厂中显然至关重要。实际上,选对网络后,性能/吞吐可从 65% 提升至 85% 或 90%——凭借网络能力带来的这种跃升,等同于让“网络成本”变为免费。选对网络,你所投入的费用将获得超乎想象的回报,因为正如我之前所说,1 吉瓦规模的 AI 工厂可能需要 500 亿美元投入;而把工厂效率提升几十个百分点,带来的有效收益就是 100 亿、200 亿美元。因此,网络是其中极为重要的组成部分。

It’s the reason why NVIDIA dedicates so much to networking. That’s the reason why we purchased Mellanox 5.5 years ago. And Spectrum-X, as we mentioned earlier, is now quite a sizable business, and it’s only about 1.5 years old. So Spectrum-X is a home run. All 3 of them are going to be fantastic: NVLink for scale up, Spectrum-X and InfiniBand for scale out, and then Spectrum-XGS for scale across.
这也是 NVIDIA 在网络领域投入巨大的原因——也是我们在 5.5 年前收购 Mellanox 的原因。正如先前所述,Spectrum-X 目前已发展成相当可观的业务,而其问世仅约 1.5 年。可以说,Spectrum-X 一鸣惊人。三条路径都会非常出色:NVLink 对应 scale up,Spectrum-X 与 InfiniBand 对应 scale out,而 Spectrum-XGS 对应 scale across。

Operator
主持人

Your next question comes from Stacy Rasgon with Bernstein Research.
下一位提问来自 Bernstein Research 的 Stacy Rasgon。

Stacy Aaron Rasgon

I have a more tactical question for Colette. So on the guidance, we’re up over $7 billion. The vast bulk of that is going to be from data center. How do I think about apportioning that $7 billion out across Blackwell versus Hopper versus networking? I mean it looks
我有一个更偏战术层面的提问给 Colette。关于指引,我们环比增加超过 70 亿美元。其中绝大部分将来自 data center。如何看待将这 70 亿美元在 Blackwell、Hopper 与 networking 之间的分配?我的意思是这看起来

like Blackwell was probably $27 billion in the quarter, up from maybe $23 billion last quarter. Hopper is still $6 billion or $7 billion post the H20. Like do you think the Hopper strength continues? Just how do I think about parsing that $7 billion out across those 3 different components?
像是本季度 Blackwell 大概是 270 亿美元,高于上季度或许 230 亿美元。Hopper 在 H20 之后仍有 60 亿或 70 亿美元。你认为 Hopper 的强势还会延续吗?我该如何在这三块之间拆分这 70 亿美元?

Colette M. Kress

Thanks, Stacy, for the question. First part of it, looking at our growth between Q2 and Q3, Blackwell is still going to be the lion’s share of what we have in terms of data center. But keep in mind, that helps both our compute side as well as it helps our networking side because we are selling those significant systems that are incorporating the NVLink that Jensen just spoke about.
谢谢你,Stacy。首先,就二三季度之间的增长而言,Blackwell 依然会占据我们 data center 业务中的“大头”。同时请记住,这既拉动了我们的 compute,也带动了 networking,因为我们在销售包含 Jensen 刚才所说 NVLink 的整套大型系统。

We are still selling Hopper – H100s and H200s. Again, they are HGX systems, and I still believe our Blackwell will be the lion’s share of what we’re doing there. So we’ll continue. We don’t have any more specific details in terms of how we’ll finish our quarter, but you should expect Blackwell again to be the driver of the growth.
至于 Hopper,我们仍在销售——H100、H200 都在卖。它们同样是 HGX 系统。不过我仍然认为,在这方面 Blackwell 会占据最大的比重。所以我们会延续这一趋势。关于季度收官的更细项拆分我们暂时没有更多细节,但你应该预期 Blackwell 仍将是增长的主要驱动。

Operator

Your next question comes from Jim Schneider of Goldman Sachs.
下一位提问来自 Goldman Sachs 的 Jim Schneider。

James Edward Schneider

You’ve been very clear about the reasoning model opportunity that you see, and you’ve also been relatively clear about technical specs for Rubin. But maybe you could provide a little bit of context about how you view the Rubin product transition going forward. What incremental capability does that offer to customers? And would you say that Rubin is a bigger, smaller or similar step-up in terms of performance from a capability perspective relative to what we saw with Blackwell?
你们对 reasoning 模型的机会阐述得很清楚,对 Rubin 的技术规格也相对明确。但能否再介绍一下你们如何看待接下来的 Rubin 产品过渡?它会为客户带来哪些增量能力?相较我们在 Blackwell 上看到的提升,Rubin 在能力与性能上的跨越会更大、更小,还是相近?

Jen-Hsun Huang

Yes, thanks. Rubin. Rubin, we’re on an annual cycle. And the reason why we’re on an annual cycle is because doing so lets us accelerate the cost reduction and maximize the revenue generation for our customers. When we increase the perf per watt, the token generation per unit of energy used, we are effectively driving the revenues of our customers. The perf per watt of Blackwell for reasoning systems will be an order of magnitude higher than Hopper’s. And so for the same amount of energy, and everybody’s data center is energy limited by definition, using Blackwell, you’ll be able to maximize your revenues compared to anything we’ve done in the past, compared to anything in the world today. And because the performance is so good, the perf per dollar invested in the capital would also allow you to improve your gross margins.
好的,谢谢。关于 Rubin——我们采用年度节奏(annual cycle)。之所以这样做,是为了更快实现成本下降、并最大化帮助客户创造收入。当我们提升每瓦性能(perf per watt)、也就是单位能耗的 token 生成量时,本质上就在推动客户的营收。就 reasoning 系统而言,Blackwell 的每瓦性能将较 Hopper 提升一个数量级。因此,在相同能耗下——按定义每一家数据中心都受能耗约束——使用 Blackwell,任何数据中心都能把收入最大化,优于我们过去做过的一切、也优于当今世界的其他方案;同时由于每美元性能(perf per dollar)也足够出色,单位资本对应的性能提升还能帮助你改善毛利率。

To the extent that we have great ideas for every single generation, we could improve the revenue generation, improve the AI capability, improve the margins of our customers by releasing new architectures. And so we advise our partners, our customers to pace themselves and to build these data centers on an annual rhythm. And Rubin is going to have a whole bunch of new ideas.
只要我们每一代都能带来新的好点子,通过发布新架构,我们就能提升客户的创收能力、AI 能力以及利润率。因此我们建议合作伙伴与客户把握好节奏,按照年度节奏建设数据中心。Rubin 将引入一整套新的理念与能力。

I’ll pause for a second because I’ve got plenty of time between now and a year from now to tell you about all the breakthroughs that Rubin is going to bring, but Rubin has a lot of great ideas. I’m anxious to tell you, but I can’t right now. And I’ll save it for GTC to tell you more and more about it. But nonetheless, for the next year, we’re ramping really hard into now Grace Blackwell, GB200, and then now Blackwell Ultra, GB300, we’re ramping really hard into data centers. This year is obviously a record-breaking year. I expect next year to be a record-breaking year. And while we continue to increase the performance of AI capabilities as we race towards artificial superintelligence on the one hand and continue to increase the revenue generation capabilities of our hyperscalers on the other hand.
我先按下不表,因为从现在到一年后的这段时间里,我会有很多机会详细介绍 Rubin 将带来的所有突破——它确实包含许多很棒的想法。我很想现在就说,但此刻还不能;我会留到 GTC 上逐步披露更多。不过无论如何,未来一年我们都会在数据中心侧加速爬坡:先是 Grace Blackwell、GB200,然后是 Blackwell Ultra、GB300。今年显然将创下新纪录;我预计明年也会是创纪录的一年。一方面我们将持续提升 AI 能力的性能,朝着 artificial superintelligence 迈进;另一方面也会持续增强 hyperscalers 的创收能力。

Operator
主持人

Your final question comes from Timothy Arcuri with UBS.
最后一个问题来自 UBS 的 Timothy Arcuri。

Timothy Michael Arcuri

Jensen, I wanted to ask you about the question you just answered. You threw out a number. You said 50% CAGR for the AI market. So I’m wondering how much visibility you have into next year. Is that a reasonable bogey in terms of how much your data center revenue should grow next year? I would think you’ll grow at least in line with that CAGR? And maybe are there any puts and takes to that?
Jensen,我想就你刚才的回答追问一下。你提到一个数字,说 AI 市场的 CAGR 为 50%。那么你们对明年的能见度有多高?把它当作明年数据中心收入增速的一个“参考目标”(bogey)是否合理?我认为你们至少会与该 CAGR 持平增长?以及其中是否存在一些正负因素(puts and takes)需要考虑?

Jen-Hsun Huang

Well, I think the best way to look at it is we have reasonable forecasts from our large customers for next year, a very, very significant forecast. And we still have a lot of business that we’re still winning and a lot of start-ups that are still being created. Don’t forget that AI-native start-ups raised $100 billion in funding last year. This year, and the year is not even over yet, it’s $180 billion funded. If you look at the top AI-native start-ups, the revenue they generated last year was $2 billion. This year, it’s $20 billion. Next year being 10x higher than this year is not inconceivable. And open source models are now opening up large enterprises, SaaS companies, industrial companies, robotics companies to join the AI revolution, another source of growth. And whether it’s AI natives or enterprise SaaS or industrial AI or start-ups, we’re just seeing an enormous amount of interest in AI and demand for AI.
我认为最好的看法是:我们从大型客户那里拿到了关于明年的合理预测,规模非常可观。同时我们仍在持续拿单,并且还有大量初创公司在不断成立。别忘了,AI-native 初创公司去年获得的融资是 1,000 亿美元;而今年尚未结束,融资已达 1,800 亿美元。若看头部 AI-native 初创公司,去年的营收是 20 亿美元,今年是 200 亿美元;明年较今年再增长 10 倍也并非不可想象。与此同时,open source models 正在让大型企业、SaaS 公司、工业企业与机器人公司加入这场 AI 革命,这是另一股增长动能。无论是 AI-native、enterprise SaaS、industrial AI 还是初创公司,我们都看到对 AI 的兴趣与需求极其强劲。

Right now, the buzz is – I’m sure all of you know about the buzz out there. The buzz is everything sold out. H100s are sold out. H200s are sold out. Large CSPs are even renting capacity from other CSPs. And so the AI-native start-ups are really scrambling to get capacity so that they can train their reasoning models. And so the demand is really, really high.
目前的市场热度——相信大家都感受到了——就是“供不应求”。H100 卖光了,H200 也卖光了;大型 CSPs 开始向其他 CSPs 租用产能。AI-native 初创公司在抢算力,好训练他们的 reasoning models。需求非常、非常旺盛。

But on the long-term outlook: from where we are today, CapEx has doubled in 2 years. It is now running about $600 billion a year just in the large hyperscalers. For us to grow into that $600 billion a year, representing a significant part of that CapEx, isn’t unreasonable. And so I think over the next several years, surely through the decade, we see really fast-growing, really significant growth opportunities ahead.
从长期前景看,相比两年前,CapEx 已经翻倍;仅大型 hyperscalers 的年 CapEx 就在约 6,000 亿美元的水平。我们在其中占据相当份额并随之成长,并非不合理。因此我认为未来数年、乃至整个十年,我们面前都是非常快速且相当可观的增长机会。

Let me conclude with this. Blackwell is the next-generation AI platform the world has been waiting for. It delivers an exceptional generational leap. NVIDIA’s NVLink 72 rack scale computing is revolutionary, arriving just in time as reasoning AI models drive order-of-magnitude increases in training and inference performance requirements. Blackwell Ultra is ramping at full speed, and the demand is extraordinary.
最后用几句话总结。Blackwell 是全球期待已久的下一代 AI 平台,带来了非凡的代际跃升。NVIDIA 的 NVLink 72 机架级计算是革命性的,恰逢 reasoning AI 模型推动训练与推理算力需求提升一个数量级之际问世。Blackwell Ultra 正在全速爬坡,需求极其强劲。

Our next platform, Rubin, is already in fab. We have 6 new chips that represent the Rubin platform. They have all taped out at TSMC. Rubin will be our third-generation NVLink rack scale AI supercomputer, and so we expect to have a much more mature and fully scaled-up supply chain. Blackwell and Rubin AI factory platforms will be scaling into the $3 trillion to $4 trillion global AI factory build-out through the end of the decade.
我们的下一代平台 Rubin 已在晶圆厂生产。代表 Rubin 平台的 6 款新芯片,已经在 TSMC 全部完成流片(taped out)。Rubin 将是我们第三代 NVLink 机架级 AI 超级计算机。由此,我们预计供应链将更加成熟并实现全面放量。Blackwell 与 Rubin 两大 AI factory 平台将伴随本十年末前全球 3 万亿至 4 万亿美元的 AI 工厂建设而扩张。

Customers are building ever greater scale AI factories from thousands of Hopper GPUs in tens of megawatt data centers to now hundreds of thousands of Blackwells in 100-megawatt facilities. And soon, we’ll be building millions of Rubin GPU platforms, powering multi-gigawatt multisite AI super factories.
客户正把 AI 工厂规模不断放大:从数以千计 Hopper GPUs 的数十兆瓦级数据中心,发展到如今拥有数十万 Blackwells 的百兆瓦级设施。不久我们将打造数百万套 Rubin GPU 平台,驱动多吉瓦、跨园区的 AI 超级工厂。

With each generation, demand only grows. One-shot chatbots have evolved into reasoning, agentic AI that can research, plan and use tools, driving an orders-of-magnitude jump in compute for both training and inference. Agentic AI is reaching maturity and has opened the enterprise market to build domain- and company-specific AI agents for enterprise workflows, products and services.
每一代产品都会带来更高的需求。one shot 聊天机器人已进化为能够 research、plan、use tools 的 reasoning/agentic AI,推动训练与推理算力需求成数量级跃升。agentic AI 正趋于成熟,已打开企业市场,用于为企业工作流、产品与服务打造特定领域与公司专属的 AI agents。

The age of physical AI has arrived, unlocking entirely new industries in robotics, industrial automation. Every industrial company will need to build 2 factories: 1 to build the machines and another to build their robotic AI.
physical AI 的时代已经到来,催生机器人与工业自动化等全新产业。每一家工业公司都需要建设两座工厂:一座造机器,另一座“制造他们的机器人 AI”。

This quarter, NVIDIA reached record revenues and an extraordinary milestone in our journey. The opportunity ahead is immense. A new industrial revolution has started. The AI race is on. Thanks for joining us today, and I look forward to addressing you next week – next earnings call. Thank you.
本季度,NVIDIA 创下历史新高的营收,达成我们历程中的重要里程碑。前方机遇无比广阔。一场新的工业革命已经开启,AI 竞赛正在进行。感谢各位今天的参与,期待下周——下次财报电话会再与各位交流。谢谢。

Operator

This concludes today's conference call. You may now disconnect.
