NVIDIA Corporation (NASDAQ:NVDA) Q4 2025 Earnings Conference Call February 26, 2025 5:00 PM ET
C.J. Muse - Cantor Fitzgerald
Operator
Good afternoon. My name is Krista and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's Fourth Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers’ remarks, there will be a question-and-answer session. [Operator Instructions]
Thank you. Stewart Stecker, you may begin your conference.
Stewart Stecker
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the fourth quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the first quarter of fiscal 2026. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without prior written consent.
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
All our statements are made as of today, February 26, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements. During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that, let me turn the call over to Colette.
Colette Kress
Thanks, Stewart. Q4 was another record quarter. Revenue of $39.3 billion was up 12% sequentially and up 78% year-on-year, and above our outlook of $37.5 billion. For fiscal 2025, revenue was $130.5 billion, up 114% from the prior year. Let's start with data center.
Data Center revenue for fiscal 2025 was $115.2 billion, more than doubling from the prior year. In the fourth quarter, data center revenue of $35.6 billion was a record, up 16% sequentially and 93% year-on-year, as the Blackwell ramp commenced and Hopper 200 continued sequential growth.
In Q4, Blackwell sales exceeded our expectations. We delivered $11 billion of Blackwell revenue to meet strong demand. This is the fastest product ramp in our company's history, unprecedented in its speed and scale. Blackwell production is in full gear across multiple configurations, and we are increasing supply quickly to expand customer adoption. Our Q4 data center compute revenue jumped 18% sequentially and over 2 times year-on-year. Customers are racing to scale infrastructure to train the next generation of cutting-edge models and unlock the next level of AI capabilities.
With Blackwell, it will be common for these clusters to start with 100,000 GPUs or more. Shipments have already started for multiple infrastructures of this size. Post-training and model customization are fueling demand for NVIDIA infrastructure and software as developers and enterprises leverage techniques such as fine tuning, reinforcement learning, and distillation to tailor models for domain-specific use cases.
Hugging Face alone hosts over 90,000 derivatives created from the Llama foundation model. The scale of post-training and model customization is massive and can collectively demand orders of magnitude more compute than pre-training. Our inference demand is accelerating, driven by test-time scaling and new reasoning models like OpenAI's o3, DeepSeek-R1, and Grok 3. Long-thinking reasoning AI can require 100 times more compute per task compared to one-shot inference.
Blackwell was architected for reasoning AI inference. Blackwell supercharges reasoning AI models with up to 25 times higher token throughput and 20 times lower cost versus Hopper 100. It is revolutionary. Its Transformer Engine is built for LLM and mixture-of-experts inference, and its NVLink domain delivers 14 times the throughput of PCIe Gen 5, ensuring the response time, throughput, and cost efficiency needed to tackle the growing complexity of inference at scale. Companies across industries are tapping into NVIDIA's full-stack inference platform to boost performance and slash costs.
[Indiscernible] tripled inference throughput and cut costs by 66% using NVIDIA TensorRT for its screenshot feature. Perplexity sees 435 million monthly queries and reduced its inference costs by 3x with NVIDIA Triton Inference Server and TensorRT-LLM. Microsoft Bing achieved a 5x speedup and major TCO savings for visual search across billions of images with NVIDIA TensorRT and acceleration libraries. Blackwell has great demand for inference. Many of the early GB200 deployments are earmarked for inference, a first for a new architecture.
Blackwell addresses the entire AI market, from pre-training and post-training to inference, across cloud, on-premise, and enterprise. CUDA's programmable architecture accelerates every AI model and over 4,400 applications, protecting large infrastructure investments against obsolescence in rapidly evolving markets. Our performance and pace of innovation are unmatched. We have driven a 200x reduction in inference costs in just the last 2 years. We deliver the lowest TCO and the highest ROI. And full-stack optimizations from NVIDIA and our large ecosystem, including 5.9 million developers, continuously improve our customers' economics.
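As a back-of-envelope check on the compounding that the 200x figure implies, the sketch below works out the sustained per-year improvement factor. The 200x-over-2-years number is from the call; everything else is illustrative arithmetic.

```python
# Back-of-envelope: what sustained per-year improvement does a
# "200x inference-cost reduction in 2 years" imply?

def implied_annual_factor(total_reduction: float, years: float) -> float:
    """Per-year cost-reduction factor implied by a total reduction over `years`."""
    return total_reduction ** (1.0 / years)

factor = implied_annual_factor(200.0, 2.0)
print(f"Implied improvement: ~{factor:.1f}x per year")  # ~14.1x per year
```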
In Q4, large CSPs represented about half of our data center revenue, and these sales increased nearly 2 times year-on-year. Large CSPs were some of the first to stand up Blackwell, with Azure, GCP, AWS, and OCI bringing GB200 systems to cloud regions around the world to meet surging customer demand for AI. Regional cloud hosting of NVIDIA GPUs increased as a percentage of data center revenue, reflecting continued AI factory build-outs globally and rapidly rising demand for AI reasoning models and agents. CoreWeave launched a 100,000 GB200 cluster-based instance with NVLink Switch and Quantum-2 InfiniBand.
Consumer Internet revenue grew 3x year-on-year, driven by an expanding set of generative AI and deep learning use cases. These include recommender systems, vision-language understanding, synthetic data generation, search, and agentic AI. For example, xAI is adopting the GB200 to train and inference its next generation of Grok AI models. Meta's cutting-edge Andromeda advertising engine runs on NVIDIA's Grace Hopper Superchip, serving vast quantities of ads across Instagram and Facebook applications.
Andromeda harnesses Grace Hopper's fast interconnect and large memory to boost inference throughput by 3x, enhance ad personalization, and deliver meaningful jumps in monetization and ROI. Enterprise revenue increased nearly 2 times year-on-year on accelerating demand for model fine-tuning, RAG and agentic AI workflows, and GPU-accelerated data processing. We introduced NVIDIA Llama Nemotron model family NIMs to help developers create and deploy AI agents across a range of applications, including customer support, fraud detection, and product supply chain and inventory management.
Leading AI agent platform providers, including SAP and ServiceNow, are among the first to use the new models. Health care leaders IQVIA, Illumina, Mayo Clinic, and Arc Institute are using NVIDIA AI to speed drug discovery, enhance genomic research, and pioneer advanced health care services with generative and agentic AI. As AI expands beyond the digital world, NVIDIA infrastructure and software platforms are increasingly being adopted to power robotics and physical AI development.
One of the earliest and largest robotics applications is autonomous vehicles, where virtually every AV company is developing on NVIDIA in the data center, the car, or both. NVIDIA's automotive vertical revenue is expected to grow to approximately $5 billion this fiscal year. At CES, Hyundai Motor Group announced it is adopting NVIDIA technologies to accelerate AV and robotics development and smart factory initiatives.
Vision transformers, self-supervised learning, multimodal sensor fusion, and high-fidelity simulation are driving breakthroughs in AV development and will require 10x more compute. At CES, we announced the NVIDIA Cosmos World Foundation Model platform. Just as language foundation models have revolutionized language AI, Cosmos is a physical AI foundation model built to revolutionize robotics. Robotics and automotive companies, including ridesharing giant Uber, are among the first to adopt the platform.
From a geographic perspective, sequential growth in our Data Center revenue was strongest in the U.S., driven by the initial ramp of Blackwell. Countries across the globe are building their AI ecosystems as demand for compute infrastructure surges. France's €200 billion AI investment and the EU's €200 billion InvestAI initiative offer a glimpse into the build-out set to redefine global AI infrastructure in the coming years.
Now, as a percentage of total Data Center revenue, data center sales in China remained well below levels seen before the onset of export controls. Absent any change in regulations, we believe China shipments will remain roughly at the current percentage. The market in China for data center solutions remains very competitive. We will continue to comply with export controls while serving our customers. Networking revenue declined 3% sequentially. Our networking attach rate to GPU compute systems is robust at over 75%.
We are transitioning from small NVLink 8 with InfiniBand to large NVLink 72 with Spectrum-X. Spectrum-X and NVLink Switch revenue increased and represents a major new growth vector. We expect networking to return to growth in Q1. AI requires a new class of networking. NVIDIA offers NVLink Switch systems for scale-up compute. For scale-out, we offer Quantum InfiniBand for HPC supercomputers and Spectrum-X for Ethernet environments. Spectrum-X enhances Ethernet for AI computing and has been a huge success. Microsoft Azure, OCI, CoreWeave, and others are building large AI factories with Spectrum-X.
The first Stargate data centers will use Spectrum-X. Yesterday, Cisco announced integrating Spectrum-X into their networking portfolio to help enterprises build AI infrastructure. With its large enterprise footprint and global reach, Cisco will bring NVIDIA Ethernet to every industry.
Now moving to Gaming and AI PCs. Gaming revenue of $2.5 billion decreased 22% sequentially and 11% year-on-year. Full year revenue of $11.4 billion increased 9% year-on-year, and demand remained strong throughout the holiday. However, Q4 shipments were impacted by supply constraints. We expect strong sequential growth in Q1 as supply increases. The new GeForce RTX 50 Series desktop and laptop GPUs are here. Built for gamers, creators, and developers, they fuse AI and graphics, redefining visual computing. Powered by the Blackwell architecture, fifth-generation Tensor Cores, and fourth-generation RT Cores, and featuring up to 3,400 AI TOPS, these GPUs deliver a 2x performance leap and new AI-driven rendering, including neural shaders, digital human technologies, geometry, and lighting.
The new DLSS 4 boosts frame rates up to 8 times with AI-driven frame generation, turning 1 rendered frame into 3. It also features the industry's first real-time application of transformer models, packing 2 times more parameters and 4 times the compute for unprecedented visual fidelity. We also announced a wave of GeForce Blackwell laptop GPUs with new NVIDIA Max-Q technology that extends battery life by up to an incredible 40%. These laptops will be available starting in March from the world's top manufacturers.
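The rough arithmetic behind those DLSS 4 figures can be sketched as follows. The overall "up to 8x" boost and the 1-rendered-frame-into-3 ratio are quoted on the call; how the remainder of the gain splits out to faster rendering (e.g. lower-resolution rendering with AI upscaling) is an illustrative assumption, not a disclosed figure.

```python
# Illustrative sketch: if frame generation contributes a 3x multiple of
# displayed frames per rendered frame, the rest of an 8x overall boost
# must come from rendering each frame faster.

def implied_render_speedup(total_boost: float, frames_per_render: float) -> float:
    """Rendering speedup needed to reach `total_boost` given that each
    rendered frame yields `frames_per_render` displayed frames."""
    return total_boost / frames_per_render

speedup = implied_render_speedup(8.0, 3.0)
print(f"Implied rendering speedup: ~{speedup:.2f}x")  # ~2.67x
```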
Moving to our professional visualization business. Revenue of $511 million was up 5% sequentially and 10% year-on-year. Full year revenue of $1.9 billion increased 21% year-on-year. Key industry verticals driving demand include automotive and health care. NVIDIA Technologies and generative AI are reshaping design, engineering and simulation workloads. Increasingly, these technologies are being leveraged in leading software platforms from ANSYS, Cadence and Siemens fueling demand for NVIDIA RTX workstations.
Now moving to Automotive. Revenue was a record $570 million, up 27% sequentially and up 103% year-on-year. Full year revenue of $1.7 billion increased 55% year-on-year. Strong growth was driven by the continued ramp in autonomous vehicles, including cars and robotaxis. At CES, we announced Toyota, the world's largest auto maker will build its next-generation vehicles on NVIDIA Orin running the safety certified NVIDIA Drive OS. We announced Aurora and Continental will deploy driverless trucks at scale powered by NVIDIA Drive Thor.
Finally, our end-to-end autonomous vehicle platform, NVIDIA Drive Hyperion, has passed industry safety assessments by TÜV SÜD and TÜV Rheinland, 2 of the industry's foremost authorities for automotive-grade safety and cybersecurity. NVIDIA is the first AV platform to receive a comprehensive set of third-party assessments.
Okay. Moving to the rest of the P&L. GAAP gross margin was 73% and non-GAAP gross margin was 73.5%, down sequentially as expected with our first deliveries of the Blackwell architecture. As discussed last quarter, Blackwell is a customizable AI infrastructure with several different types of NVIDIA-built chips, multiple networking options, and configurations for air- and liquid-cooled data centers. We exceeded our expectations in Q4 in ramping Blackwell, increasing system availability, and providing several configurations to our customers. As Blackwell ramps, we expect gross margins to be in the low 70s.
Initially, we are focused on expediting the manufacturing of Blackwell systems to meet strong customer demand as customers race to build out Blackwell infrastructure. When fully ramped, we have many opportunities to improve the cost, and gross margin will improve and return to the mid-70s late this fiscal year. Sequentially, GAAP operating expenses were up 9% and non-GAAP operating expenses were up 11%, reflecting higher engineering development costs and higher compute and infrastructure costs for new product introductions. In Q4, we returned $8.1 billion to shareholders in the form of share repurchases and cash dividends.
Let me turn to the outlook for the first quarter. Total revenue is expected to be $43 billion, plus or minus 2%. With continued strong demand, we expect a significant ramp of Blackwell in Q1. We expect sequential growth in both Data Center and Gaming. Within Data Center, we expect sequential growth from both compute and networking. GAAP and non-GAAP gross margins are expected to be 70.6% and 71%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $5.2 billion and $3.6 billion, respectively. We expect full year fiscal 2026 operating expense growth to be in the mid-30s percent range.
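The guided ranges above can be computed directly; the sketch below converts the stated midpoints and tolerances into low/high ends (the midpoints and tolerances are from the call, the code is just arithmetic).

```python
# Convert the Q1 guidance (revenue $43B +/- 2%, non-GAAP gross margin
# 71% +/- 50 bps) into explicit low/high ranges.

def guided_range(midpoint: float, tolerance: float) -> tuple:
    """Low and high ends of a guidance range for a relative +/- tolerance."""
    return midpoint * (1 - tolerance), midpoint * (1 + tolerance)

rev_low, rev_high = guided_range(43.0, 0.02)  # in $ billions
gm_low, gm_high = 71.0 - 0.5, 71.0 + 0.5      # 50 bps is an absolute 0.5 pt
print(f"Revenue: ${rev_low:.2f}B to ${rev_high:.2f}B")        # $42.14B to $43.86B
print(f"Non-GAAP gross margin: {gm_low}% to {gm_high}%")      # 70.5% to 71.5%
```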
GAAP and non-GAAP other income and expenses are expected to be an income of approximately $400 million, excluding gains and losses from non-marketable and publicly-held equity securities. GAAP and non-GAAP tax rates are expected to be 17%, plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR website, including a new financial information AI agent.
In closing, let me highlight upcoming events for the financial community. We will be at the TD Cowen Health Care Conference in Boston on March 3 and at the Morgan Stanley Technology, Media and Telecom Conference in San Francisco on March 5. Please join us for our annual GTC conference starting Monday, March 17, in San Jose, California. Jensen will deliver a news-packed keynote on March 18, and we will host a Q&A session for our financial analysts the next day, March 19.
We look forward to seeing you at these events. Our earnings call to discuss the results for our first quarter of fiscal 2026 is scheduled for May 28, 2025. We are going to open up the call, operator, to questions. If you could start that, that would be great.
Question-and-Answer Session
Operator
Thank you. [Operator Instructions]. And your first question comes from CJ Muse with Cantor Fitzgerald. Please go ahead.
C.J. Muse
Yeah, good afternoon. Thank you for taking the question. I guess for me, Jensen, as test-time compute and reinforcement learning show such promise, we're clearly seeing an increasing blurring of the lines between training and inference. What does this mean for the potential future of inference-dedicated clusters? And how do you think about the overall impact to NVIDIA and your customers? Thank you.
Jensen Huang
Yes, I appreciate that, C.J. There are now multiple scaling laws. There's the pre-training scaling law, and that's going to continue to scale, because we have multimodality, and we have data that came from reasoning that is now used to do pre-training. And then the second is the post-training scaling law, using reinforcement learning with human feedback, reinforcement learning with AI feedback, and reinforcement learning with verifiable rewards. The amount of computation you use for post-training is actually higher than pre-training. And it's kind of sensible in the sense that while you're using reinforcement learning, you can generate an enormous amount of synthetic data or synthetically generated tokens. AI models are basically generating tokens to train AI models. And that's post-training.
And the third part, the part that you mentioned, is test-time compute or reasoning, long thinking, inference scaling. They're all basically the same idea. And there you have a chain of thought, you have search. The amount of tokens generated, the amount of inference compute needed, is already 100x more than the one-shot examples and one-shot capabilities of large language models in the beginning. And that's just the beginning. This is just the beginning. The idea that the next generation could take thousands of times more compute, and hopefully that extremely thoughtful, simulation-based and search-based models could take hundreds of thousands or even millions of times more compute than today, is in our future.
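The scaling point above can be reduced to a toy model: for a fixed model, inference compute grows roughly in proportion to the tokens generated, so a reasoning trace 100x longer than a one-shot answer needs roughly 100x the compute. The token counts below are assumed for illustration; linear-in-tokens is a simplification (attention adds extra cost on long contexts).

```python
# Toy model of test-time scaling: compute multiplier of a long reasoning
# trace vs. a one-shot answer, assuming compute proportional to tokens.

def relative_compute(one_shot_tokens: int, reasoning_tokens: int) -> float:
    """Compute multiplier under a compute-proportional-to-tokens assumption."""
    return reasoning_tokens / one_shot_tokens

print(relative_compute(200, 20_000))   # 100.0  -> today's reasoning traces
print(relative_compute(200, 200_000))  # 1000.0 -> a hypothetical next generation
```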
And so the question is, how do you design such an architecture? Some of the models are autoregressive. Some of the models are diffusion-based. Sometimes you want your data center to have disaggregated inference; sometimes it is compacted. And so it's hard to figure out what the best configuration of a data center is, which is the reason why NVIDIA's architecture is so popular. We run every model. We are great at training. The vast majority of our compute today is actually inference, and Blackwell takes all of that to a new level. We designed Blackwell with the idea of reasoning models in mind. And when you look at training, it's many times more performant.
But what's really amazing is that for long-thinking, test-time scaling, reasoning AI models are tens of times faster, with 25 times higher throughput. And so Blackwell is going to be incredible across the board. And when you have a data center that allows you to configure and use your data center based on whether you are doing more pre-training now, post-training now, or scaling out your inference, our architecture is fungible and easy to use in all of those different ways. And so we're seeing, in fact, much, much more concentration of a unified architecture than ever before.
Operator
Your next question comes from the line of Joe Moore with JPMorgan. Please go ahead.
Joe Moore
Good morning, Morgan Stanley, actually. I wonder if you could talk about GB200 at CES, you sort of talked about the complexity of the rack level systems and the challenges you have. And then as you said in the prepared remarks, we've seen a lot of general availability -- where are you in terms of that ramp? Are there still bottlenecks to consider at a systems level above and beyond the chip level? And just have you maintained your enthusiasm for the NVL72 platforms?
Jensen Huang
Well, I'm more enthusiastic today than I was at CES. And the reason for that is because we've shipped a lot more since CES. We have some 350 plants manufacturing the 1.5 million components that go into each one of the Blackwell racks, the Grace Blackwell racks. Yes, it's extremely complicated. We successfully and incredibly ramped up Grace Blackwell, delivering some $11 billion of revenues last quarter. We're going to have to continue to scale as demand is quite high, and customers are anxious and impatient to get their Blackwell systems. You've probably seen on the web a fair number of celebrations about Grace Blackwell systems coming online, and we have them, of course. We have a fairly large installation of Grace Blackwell for our own engineering, our own design teams, and software teams.
CoreWeave has now been quite public about the successful bring-up of theirs. Microsoft has, of course; OpenAI has; and you're starting to see many come online. And so I think the answer to your question is, nothing is easy about what we're doing, but we're doing great, and all of our partners are doing great.
Operator
Your next question comes from the line of Vivek Arya with Bank of America Securities. Please go ahead.
Vivek Arya
Thank you for taking my questions. Colette if you wouldn't mind confirming if Q1 is the bottom for gross margins? And then Jensen, my question is for you. What is on your dashboard to give you the confidence that the strong demand can sustain into next year? And has DeepSeek and whatever innovations they came up with, has that changed that view in any way? Thank you.
Colette Kress
Let me first take the first part of the question there, regarding the gross margin. During our Blackwell ramp, our gross margins will be in the low 70s. At this point, we are focusing on expediting our manufacturing, expediting our manufacturing to make sure that we can provide to customers as soon as possible. Once our Blackwell is fully ramped -- I'm sorry, once our Blackwell fully ramps, we can improve our cost and our gross margin. So we expect to probably be in the mid-70s later this year.
Walking through what you heard Jensen speak about regarding the systems and their complexity: they are customizable in some cases; they've got multiple networking options; they have air-cooled and liquid-cooled options. So we know there is an opportunity for us to improve these gross margins going forward. But right now, we are going to focus on getting the manufacturing complete and to our customers as soon as possible.
Jensen Huang
We know several things, Vivek. We have a fairly good line of sight of the amount of capital investment that data centers are building out towards. We know that going forward, the vast majority of software is going to be based on machine learning, and so accelerated computing, generative AI, and reasoning AI are going to be the type of architecture you want in your data center.
We have, of course, forecasts and plans from our top partners. And we also know that there are many innovative, really exciting start-ups that are still coming online as new opportunities for developing the next breakthroughs in AI, whether it's agentic AI, reasoning AI, or physical AI. The number of start-ups is still quite vibrant, and each one of them needs a fair amount of computing infrastructure.
And so I think, whether it's the near-term signals or the midterm signals: near-term signals, of course, are POs and forecasts and things like that. Midterm signals would be the level of infrastructure and CapEx scale-out compared to previous years. And then the long-term signals have to do with the fact that we know fundamentally software has changed from hand coding that runs on CPUs to machine learning and AI-based software that runs on GPUs and accelerated computing systems. And so we have a fairly good sense that this is the future of software.
And then maybe as you roll it out, another way to think about it is that we've really only tapped consumer AI and search and some amount of consumer generative AI, advertising, recommenders, kind of the early days of software. The next wave is coming: agentic AI for enterprise, physical AI for robotics, and sovereign AI as different regions build out AI for their own ecosystems. Each one of these is barely off the ground, and we can see them. We can see them because, obviously, we're in the center of much of this development, and we can see great activity happening in all these different places, and these will happen. So near term, midterm, long term.
Operator
Your next question comes from the line of Harlan Sur with JPMorgan. Please go ahead.
Harlan Sur
Good afternoon. Thanks for taking my question. Your next-generation Blackwell Ultra is set to launch in the second half of this year, in line with the team's annual product cadence. Jensen, can you help us understand the demand dynamics for Ultra given that you'll still be ramping the current generation Blackwell solutions? How do your customers and the supply chain also manage the simultaneous ramps of these two products? And -- is the team still on track to execute Blackwell Ultra in the second half of this year?
Jensen Huang
Yes, Blackwell Ultra is second half. As you know, with the first Blackwell, we had a hiccup that probably cost us a couple of months. We're fully recovered, of course. The team did an amazing job recovering, and all of our supply chain partners and so many people helped us recover at the speed of light. And so now we've successfully ramped production of Blackwell.
But that doesn't stop the next train. The next train is on an annual rhythm, and Blackwell Ultra, with new networking, new memories and, of course, new processors, is all coming online. We have been working with all of our partners and customers, laying this out. They have all of the necessary information, and we'll work with everybody to do the proper transition. This time, between Blackwell and Blackwell Ultra, the system architecture is exactly the same. It was a lot harder going from Hopper to Blackwell because we went from an NVLink 8 system to an NVLink 72-based system. So the chassis, the architecture of the system, the hardware, the power delivery, all of that had to change. That was quite a challenging transition.
But the next transition will slot right in; Blackwell Ultra will slot right in. We've also already revealed, and been working very closely with all of our partners on, the click after that. The click after that is called Vera Rubin, and all of our partners are getting up to speed on that transition and preparing for it. And again, we're going to provide a big, huge step-up. So come to GTC, and I'll talk to you about Blackwell Ultra, Vera Rubin, and then show you what we have placed after that. Really exciting new products, so come to GTC, please.
Operator
Your next question comes from the line of Timothy Arcuri with UBS. Please go ahead.
Timothy Arcuri
Thanks a lot. Jensen, we heard a lot about custom ASICs. Can you speak to the balance between custom ASICs and merchant GPUs? We hear about some of these heterogeneous superclusters that use both GPUs and ASICs. Is that something customers are planning on building? Or will these infrastructures remain fairly distinct? Thanks.
Jensen Huang
Well, we build very different things than ASICs; in some ways completely different, and in some areas we intersect. We're different in several ways. One, NVIDIA's architecture is general, whether you've optimized for autoregressive models or diffusion-based models or vision-based models or multimodal models or text models. We're great at all of it.
We're great at all of it because our architecture is sensible and our software stack ecosystem is so rich that we're the initial target of the most exciting innovations and algorithms. And so by definition, we're much, much more general than narrow. We're also really good end-to-end, from data processing and the curation of the training data, to the training itself, of course, to reinforcement learning used in post-training, all the way to inference with test-time scaling. So we're general, we're end-to-end, and we're everywhere. And because we're not in just one cloud, we're in every cloud; we could be on-prem; we could be in a robot. Our architecture is much more accessible and a great initial target for anybody who's starting up a new company. And so we're everywhere.
And the third thing I would say is that our performance and our rhythm are so incredibly fast. Remember that these data centers are always fixed in size. They're fixed in size, or they're fixed in power. And if our performance per watt is anywhere from 2x to 4x to 8x, which is not unusual, it translates directly to revenues. And so if you have a 100-megawatt data center, and the performance or the throughput in that 100-megawatt or gigawatt data center is 4 times or 8 times higher, your revenues for that data center are 4 or 8 times higher.
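The arithmetic behind that claim can be sketched in a few lines. This is only an illustration: the power budget, throughput, and token price below are made-up numbers for the example, not figures from the call.

```python
# Revenue of a power-limited AI factory scales with token throughput.
# All numbers are illustrative assumptions, not NVIDIA figures.

def annual_revenue(power_mw: float, tokens_per_sec_per_mw: float,
                   usd_per_million_tokens: float) -> float:
    """Annual token revenue of a data center fixed at `power_mw` megawatts."""
    seconds_per_year = 60 * 60 * 24 * 365
    tokens_per_year = power_mw * tokens_per_sec_per_mw * seconds_per_year
    return tokens_per_year / 1e6 * usd_per_million_tokens

base = annual_revenue(100, 1e6, 2.0)   # previous-generation performance per watt
new = annual_revenue(100, 8e6, 2.0)    # 8x performance per watt, same power
print(new / base)                      # same power envelope, 8x the revenue
```

Because the power envelope is the binding constraint, the revenue ratio tracks the performance-per-watt ratio directly.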
And the reason that is so different from data centers of the past is that AI factories are directly monetizable through the tokens they generate. And so the token throughput of our architecture being so incredibly fast is just incredibly valuable to all of the companies that are building these things for revenue generation and capturing a fast ROI. So I think the third reason is performance. And then the last thing that I would say is that the software stack is incredibly hard. Building an ASIC is no different than what we do. We build a new architecture.
And the ecosystem that sits on top of our architecture is 10x more complex today than it was two years ago. And that's fairly obvious because the amount of software that the world is building on top of our architecture is growing exponentially, and AI is advancing very quickly. So bringing that whole ecosystem on top of multiple chips is hard. And so I would say those are four reasons. And then finally, I will say this: just because a chip is designed doesn't mean it gets deployed. And you've seen this over and over again. There are a lot of chips that get built, but when the time comes, a business decision has to be made, and that business decision is about deploying a new engine, a new processor into an AI factory that is limited in size, in power and in funding.

Scale has cost advantages.
And our technology is not only more advanced and higher performance; it has much, much better software capability, and, very importantly, our ability to deploy is lightning fast. And so these things are not for the faint of heart, as everybody knows now. And so there are a lot of different reasons why we do well, why we win.
Operator
Your next question comes from the line of Ben Reitzes with Melius Research. Please go ahead.
Ben Reitzes
Yeah, thanks a lot for the question. Hi, Jensen, it's a geography-related question. You did a great job explaining some of the demand factors underlying the strength. But the U.S. was up about $5 billion or so sequentially. And I think there is a concern about whether the U.S. can pick up the slack if there are regulations toward other geographies. And I was just wondering, as we go throughout the year, whether this kind of surge in the U.S. continues and whether that's okay. And if that underlies your growth rate, how can you keep growing so fast with this mix shift toward the U.S.? Your guidance looks like China is probably up sequentially. So just wondering if you could go through that dynamic, and maybe Colette can weigh in.
Jensen Huang
China is approximately the same percentage as in Q4 and in previous quarters. It's about half of what it was before the export control, but it's approximately the same in percentage terms. With respect to geographies, the takeaway is that AI is software. It's modern software, incredible modern software, and AI has gone mainstream. AI is used in delivery services everywhere, shopping services everywhere. If you were to buy a quart of milk and have it delivered to you, AI was involved.
And so almost everything that a consumer service provides has AI at the core of it. Every student will use AI as a tutor, health care services use AI, financial services use AI. No fintech company will not use AI; every fintech company will. Climate tech companies use AI. Mineral discovery now uses AI. Every higher education institution, every university uses AI. And so I think it is fairly safe to say that AI has gone mainstream and that it's being integrated into every application.
And our hope is that, of course, the technology continues to advance safely and in a way that is helpful to society. With that, I do believe that we're at the beginning of this new transition. And what I mean by the beginning is: remember, behind us are decades of data centers and decades of computers that have been built, and they've been built for a world of hand coding and general-purpose computing and CPUs and so on and so forth. Going forward, I think it's fairly safe to say that almost all software is going to be infused with AI. All software and all services will ultimately be based on machine learning; the data flywheel is going to be part of improving software and services; and the future computers will be accelerated, the future computers will be based on AI. We're really two years into that journey, into modernizing computers that have taken decades to build out. And so I'm fairly sure that we're at the beginning of this new era. And then lastly, no technology has ever had the opportunity to address a larger part of the world's GDP than AI. No software tool ever has.
And so this is now a software tool that can address a much larger part of the world's GDP than at any time in history. And so the way we think about growth, and the way we think about whether something is big or small, has to be in that context. And when you take a step back and look at it from that perspective, we're really just at the beginning.
Operator
Your next question comes from the line of Aaron Rakers with Wells Fargo. Aaron, your line is open. Your next question comes from Mark Lipacis with Evercore ISI. Please go ahead.
Mark Lipacis
I had a clarification and a question. Colette, for the clarification: did you say that enterprise within the data center grew 2x year-on-year for the January quarter? And if so, would that make it faster growing than the hyperscalers? And then, Jensen, the question for you: hyperscalers are the biggest purchasers of your solutions, but they buy equipment for both internal and external workloads, external workloads being the cloud services that enterprises use. So the question is, can you give us a sense of how that hyperscaler spend splits between external and internal workloads? And as these new AI workflows and applications come up, would you expect enterprises to become a larger part of that consumption mix? And does that impact how you develop your services and your ecosystem?
Colette Kress
Sure. Thanks for the question regarding our enterprise business. Yes, it grew 2x, very similar to what we were seeing with our large CSPs. Keep in mind, these are both important areas to understand: enterprises working with the CSPs, whether on large language models or on inference in their own work. But keep in mind, that is also where the enterprises are being serviced. Enterprises are both with the CSPs as well as building on their own, and both are growing quite well.
Jensen Huang
The CSPs are about half of our business. And the CSPs have internal consumption and external consumption, as you say. We're, of course, used for internal consumption. We work very closely with all of them to optimize workloads that are internal to them, because they have a large infrastructure of NVIDIA gear that they can take advantage of. And the fact that we can be used for AI on the one hand, video processing on the other hand, and data processing like Spark means we're fungible. And so the useful life of our infrastructure is much better. If the useful life is much longer, then the TCO is also lower.
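The useful-life point is straight-line amortization: the same capital spread over more years of fungible workloads means a lower annual cost. A minimal sketch, with invented dollar figures purely for illustration:

```python
# Amortized annual cost falls as useful life extends, because the same
# capital outlay is spread across more workload-years of fungible gear.
# All dollar figures below are illustrative assumptions.

def annual_tco(capex: float, annual_opex: float, useful_life_years: float) -> float:
    """Straight-line amortization: capex spread over useful life, plus opex."""
    return capex / useful_life_years + annual_opex

short_life = annual_tco(40_000, 2_000, 3)  # retired after 3 years
long_life = annual_tco(40_000, 2_000, 6)   # kept useful for 6 years
print(short_life, long_life)               # annual cost drops as life extends
```

Doubling the useful life here cuts the amortized capex per year in half, which is the sense in which longer useful life lowers TCO.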
And so the second part is, how do we see the growth of enterprise, or non-CSPs if you will, going forward? And the answer is, I believe long term it is by far larger, and the reason is that if you look at the computer industry today, what is not served by the computer industry is largely industrial. So let me give you an example. When we say enterprise, let's use a car company as an example, because they make both soft things and hard things. In the case of a car company, the employees will be what we call enterprise, and agentic AI and software planning systems and tools (and we have some really exciting things to share with you at GTC) build agentic systems for employees, to make employees more productive to design, to market, to plan, to operate their company. That's agentic AI.
On the other hand, the cars that they manufacture also need AI. They need an AI system that trains the cars and treats this entire giant fleet of cars. Today, there are 1 billion cars on the road. Someday, there will still be 1 billion cars on the road, and every single one of those cars will be a robotic car, and they'll all be collecting data, and we'll be improving them using an AI factory. Whereas they have a car factory today, in the future they'll have a car factory and an AI factory.
And then inside the car itself is a robotic system. So as you can see, there are three computers involved. There's the computer that helps the people. There's the computer that builds the AI for the machinery, which could, of course, be a tractor, a lawn mower, a humanoid robot that's being developed today, a building, or a warehouse. These physical systems require a new type of AI we call physical AI. They can't just understand the meaning of words and languages; they have to understand the meaning of the world: friction and inertia, object permanence, and cause and effect. All of those things are common sense to you and me, but AIs have to go learn those physical effects. So we call that physical AI.
That whole part of using agentic AI to revolutionize the way we work inside companies is just starting. This is now the beginning of the agentic AI era; you hear a lot of people talking about it, and we've got some really great things going on. Then there's physical AI after that, and then there are robotic systems after that. And so these three computers are all brand new. And my sense is that long term, this will be by far the larger of them all, which kind of makes sense. The world's GDP is largely represented by heavy industries, or industrials, and the companies that provide for those.
Operator
Your next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.
Aaron Rakers
Thanks for letting me back in. Jensen, I'm curious, as we now approach the two-year anniversary of the Hopper inflection that you saw in 2023, and of GenAI in general: when we think about the road map you have in front of us, how do you think about the infrastructure that's been deployed from a replacement-cycle perspective? And is it GB300 or the Rubin cycle where we start to see maybe some refresh opportunity? I'm just curious how you look at that.
Jensen Huang
I appreciate it. First of all, people are still using Voltas and Pascals and Amperes. And the reason for that is that there are always things to use them for, because CUDA is so programmable. With Blackwell, one of the major use cases right now is data processing and data curation. You find a circumstance that an AI model is not very good at. You present that circumstance to a vision language model; let's say it's a car. You present that circumstance to a vision language model.
The vision language model actually looks at the circumstance and explains what happened. You then take that response and prompt an AI model to go find, in your whole lake of data, other circumstances like that, whatever that circumstance was. And then you use an AI to do domain randomization and generate a whole bunch of other examples. And from that, you can go train the model. So you could use Amperes to do the data processing and data curation and machine-learning-based search. Then you create the training data set, which you then present to your Hopper systems for training. Each one of these architectures is CUDA-compatible, so everything runs on everything. But if you have infrastructure in place, you can put the less intensive workloads onto the installed base of the past. All of our GPUs are very well employed.
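The curation loop described above can be sketched as a small pipeline. Everything here is a toy stand-in (the classes and the `tag` field are invented for illustration; no real VLM or search API is implied):

```python
# Sketch of the data flywheel described above: a failure case is described,
# similar data is mined from the lake, and domain randomization expands it.
# Older GPUs handle this curation; newer ones handle the actual training.
# Every class here is a toy placeholder, not a real VLM or search API.

class ToyVLM:
    def describe(self, case):
        return case["tag"]                  # pretend caption of the scene

class ToySearcher:
    def find(self, lake, query):
        return [ex for ex in lake if ex["tag"] == query]

class ToyAugmenter:
    def randomize(self, ex):
        return {**ex, "synthetic": True}    # domain-randomized variant

def curate_hard_examples(failure, lake, vlm, searcher, augmenter):
    desc = vlm.describe(failure)                            # 1. explain the failure
    similar = searcher.find(lake, desc)                     # 2. mine similar cases
    synthetic = [augmenter.randomize(e) for e in similar]   # 3. expand via randomization
    return similar + synthetic                              # 4. new training set

lake = [{"tag": "night-rain"}, {"tag": "clear"}, {"tag": "night-rain"}]
out = curate_hard_examples({"tag": "night-rain"}, lake,
                           ToyVLM(), ToySearcher(), ToyAugmenter())
print(len(out))  # 2 mined + 2 synthetic = 4
```

The resulting set is what would then be handed to the training systems, which is why the lighter search-and-curation stages can run on the older installed base.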
Operator
We have time for one more question, and that question comes from Atif Malik with Citi. Please go ahead.
Atif Malik
I have a follow-up question on gross margins for Colette. I understand there are many moving parts: Blackwell yields, NVLink 72 and Ethernet mix. You alluded in the earlier question to the April quarter being the bottom, but the second half would have to ramp something like 200 basis points per quarter to get to the mid-70s range that you're giving for the end of the fiscal year. And we still don't know much about the tariff impact on broader semiconductors. So what gives you the confidence in that trajectory in the back half of this year?
Colette Kress
Yes, thanks for the question. Our gross margins are quite complex in terms of the materials and everything that we put together in a Blackwell system. There's a tremendous amount of opportunity to look at a lot of different pieces of that, at how we can better improve our gross margins over time.
Remember, we also have many different configurations of Blackwell that will be able to help us do that. So once we complete some of these really strong ramps for our customers, we can begin a lot of that work; if we can start sooner, we probably will. And if we can improve it in the short term, we will also do that. Tariffs, at this point, are a little bit of an unknown. It's an unknown until we understand further what the U.S. government's plan is: its timing, where, and how much. So at this time, we are awaiting that, but of course, we would always follow export controls and/or tariffs in that manner.
Operator
Ladies and gentlemen, that does conclude our question-and-answer session. I'm sorry.
Jensen Huang
Thank you.
Colette Kress
We are going to turn it over to Jensen, and I believe he has a couple of things to say.
Jensen Huang
I just wanted to thank you. Thank you, Colette. The demand for Blackwell is extraordinary. AI is evolving beyond perception and generative AI into reasoning. With reasoning AI, we're observing another scaling law: inference-time or test-time scaling. More computation, the more the model thinks, the smarter the answer. Models like OpenAI's, Grok 3, and DeepSeek-R1 are reasoning models that apply inference-time scaling. Reasoning models can consume 100x more compute, and future reasoning models can consume much more compute. DeepSeek-R1 has ignited global enthusiasm. It's an excellent innovation. But even more importantly, it has open sourced a world-class reasoning AI model. Nearly every AI developer is applying R1, or chain-of-thought and reinforcement learning techniques like R1's, to scale their model's performance.
We now have three scaling laws, as I mentioned earlier, driving the demand for AI computing. The traditional scaling law of AI remains intact. Foundation models are being enhanced with multimodality, and pretraining is still growing. But it's no longer enough. We have two additional scaling dimensions. Post-training scaling, where reinforcement learning, fine-tuning, and model distillation require orders of magnitude more compute than pretraining alone. And inference-time scaling and reasoning, where a single query can demand 100x more compute. We designed Blackwell for this moment: a single platform that can easily transition across pretraining, post-training, and test-time scaling.
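A back-of-the-envelope model shows why these dimensions compound. The multipliers below are assumptions chosen only to echo the rough ratios in the remarks ("orders of magnitude" more for post-training, up to 100x per reasoning query); none are NVIDIA figures.

```python
# Toy model of total compute demand under the three scaling dimensions.
# Multipliers are illustrative assumptions, loosely echoing the remarks.

PRETRAIN = 1.0            # normalize pretraining compute to 1 unit
POST_TRAIN_MULT = 10.0    # "orders of magnitude more" (assume 10x here)
REASONING_MULT = 100.0    # "a single query can demand 100x more compute"

def lifetime_compute(num_queries: float, per_query: float = 1e-9) -> float:
    """Training compute plus serving compute for reasoning-style inference."""
    training = PRETRAIN + PRETRAIN * POST_TRAIN_MULT
    serving = num_queries * per_query * REASONING_MULT
    return training + serving

# With enough reasoning queries, serving dominates total compute demand.
for queries in (0, 1e9, 1e12):
    print(queries, lifetime_compute(queries))
```

The point of the sketch is that once reasoning-style queries arrive in volume, the inference term overtakes the training terms, which is why all three dimensions, not just pretraining, drive demand.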
Blackwell's FP4 transformer engine, NVLink 72 scale-up fabric, and new software technologies let Blackwell process reasoning AI models 25x faster than Hopper. Blackwell in all of its configurations is in full production. Each Grace Blackwell NVLink 72 rack is an engineering marvel: 1.5 million components produced across 350 manufacturing sites by nearly 100,000 factory operators. AI is advancing at light speed. We are at the beginning of reasoning AI and inference-time scaling. But we are just at the start of the age of AI. Multimodal AI, enterprise AI, sovereign AI, and physical AI are right around the corner. We will grow strongly in 2025.
Going forward, data centers will dedicate most of their CapEx to accelerated computing and AI. Data centers will increasingly become AI factories, and every company will have one, either rented or self-operated. I want to thank all of you for joining us today. Join us at GTC in a couple of weeks. We're going to be talking about Blackwell Ultra, Rubin and other new computing, networking, reasoning AI, and physical AI products, and a whole bunch more. Thank you.
Operator
This concludes today's conference call. You may now disconnect.