2024-11-20 NVIDIA Corporation (NVDA) Q3 2025 Earnings Call Transcript

NVIDIA Corporation (NASDAQ:NVDA) Q3 2025 Earnings Conference Call November 20, 2024 5:00 PM ET

Company Participants

Stewart Stecker - Investor Relations
Colette Kress - Executive Vice President and Chief Financial Officer
Jensen Huang - President and Chief Executive Officer

Conference Call Participants

C.J. Muse - Cantor Fitzgerald
Toshiya Hari - Goldman Sachs
Timothy Arcuri - UBS
Vivek Arya - Bank of America Securities
Stacy Rasgon - Bernstein Research
Joe Moore - Morgan Stanley
Aaron Rakers - Wells Fargo
Atif Malik - Citigroup
Ben Reitzes - Melius Research
Pierre Ferragu - New Street Research

Operator

Good afternoon. My name is Joel, and I'll be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's Third Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. [Operator Instructions] Thank you.

Stewart Stecker, you may begin your conference.

Stewart Stecker

Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the third quarter of fiscal 2025. With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the fourth quarter of fiscal 2025. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, November 20, 2024, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.

With that, let me turn the call over to Colette.

Colette Kress

Thank you, Stewart. Q3 was another record quarter. We continued to deliver incredible growth. Revenue of $35.1 billion was up 17% sequentially and up 94% year-on-year, well above our outlook of $32.5 billion. All market platforms posted strong sequential and year-over-year growth, fueled by the adoption of NVIDIA accelerated computing and AI.

Starting with Data Center, another record was achieved in Data Center. Revenue of $30.8 billion was up 17% sequentially and up 112% year-on-year. NVIDIA Hopper demand is exceptional, and sequentially, NVIDIA H200 sales increased significantly to double-digit billions, the fastest product ramp in our company's history. The H200 delivers up to 2 times faster inference performance and up to 50% improved TCO. Cloud service providers were approximately half of our Data Center sales, with revenue increasing more than 2 times year-on-year.
CSPs deployed NVIDIA H200 infrastructure and high-speed networking with installations scaling to tens of thousands of GPUs to grow their business and serve rapidly rising demand for AI training and inference workloads. NVIDIA H200-powered cloud instances are now available from AWS, CoreWeave, and Microsoft Azure with Google Cloud and OCI coming soon. Alongside significant growth from our large CSPs, NVIDIA GPU regional cloud revenue jumped 2 times year-on-year as North America, EMEA, and Asia Pacific regions ramped NVIDIA cloud instances and sovereign cloud buildout.

Consumer Internet revenue more than doubled year-on-year as companies scaled their NVIDIA Hopper infrastructure to support next-generation AI models, training, multimodal and agentic AI, deep learning recommender engines, and generative AI inference and content creation workloads. NVIDIA's Ampere and Hopper infrastructures are fueling inference revenue growth for customers. NVIDIA is the largest inference platform in the world.

Our large installed base and rich software ecosystem encourage developers to optimize for NVIDIA and deliver continued performance and TCO improvements. Rapid advancements in NVIDIA's software algorithms boosted Hopper inference throughput by an incredible 5 times in one year and cut time to first token by 5 times. Our upcoming release of NVIDIA NIM will boost Hopper inference performance by an additional 2.4 times.

Continuous performance optimizations are a hallmark of NVIDIA and drive increasing economic returns for the entire NVIDIA installed base. Blackwell is in full production after a successfully executed mask change. We shipped 13,000 GPU samples to customers in the third quarter, including one of the first Blackwell DGX engineering samples to OpenAI. Blackwell is a full-stack, full-infrastructure, AI data center scale system with the customizable configurations needed to address a diverse and growing AI market: from x86 to Arm, from training to inference GPUs, from InfiniBand to Ethernet switches and NVLink, and from liquid-cooled to air-cooled. Every customer is racing to be the first to market. Blackwell is now in the hands of all of our major partners, and they are working to bring up their data centers.

We are integrating Blackwell systems into the diverse Data Center configurations of our customers. Blackwell demand is staggering and we are racing to scale supply to meet the incredible demand customers are placing on us. Customers are gearing up to deploy Blackwell at scale. Oracle announced the world's first Zettascale AI Cloud computing clusters that can scale to over 131,000 Blackwell GPUs to help enterprises train and deploy some of the most demanding next-generation AI models.

Yesterday, Microsoft announced they will be the first CSP to offer, in private preview, Blackwell-based cloud instances powered by NVIDIA GB200 and Quantum InfiniBand. Last week, Blackwell made its debut in the most recent round of MLPerf Training results, sweeping the per-GPU benchmarks and delivering a 2.2 times leap in performance over Hopper. The results also demonstrate our relentless pursuit to drive down the cost of compute.

Just 64 Blackwell GPUs are required to run the GPT-3 benchmark compared to 256 H100s, a 4 times reduction in cost. The NVIDIA Blackwell architecture with NVLink Switch enables up to 30 times faster inference performance and a new level of inference scaling in throughput and response time that is excellent for running new reasoning inference applications like OpenAI's o1 model. With every new platform shift, a wave of start-ups is created. Hundreds of AI-native companies are already delivering AI services with great success. Google, Meta, Microsoft, and OpenAI are the headliners, and Anthropic, Perplexity, Mistral, Adobe Firefly, Runway, Midjourney, Lightricks, Harvey, Codeium, Cursor, and Bridge are seeing great success, while thousands of AI-native startups are building new services.
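The GPU-count comparison above is simple arithmetic; a minimal sketch using only the 64-versus-256 figures quoted on the call (treating per-GPU cost as equal is an illustrative assumption, not a claim from the call):

```python
# Sketch of the benchmark arithmetic quoted above: 64 Blackwell GPUs vs.
# 256 H100s on the GPT-3 MLPerf benchmark. Only the GPU counts come from
# the call; equal per-GPU cost is an assumption for illustration.
h100_gpus = 256
blackwell_gpus = 64
reduction = h100_gpus / blackwell_gpus
print(f"GPU-count reduction: {reduction:.0f}x")  # prints "GPU-count reduction: 4x"
```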
The next waves of AI are Enterprise AI and Industrial AI. Enterprise AI is in full throttle. NVIDIA AI Enterprise, which includes NVIDIA NeMo and NIM microservices, is an operating platform for agentic AI. Industry leaders are using NVIDIA AI to build copilots and agents. Working with NVIDIA, Cadence, Cloudera, Cohesity, NetApp, Nutanix, Salesforce, SAP, and ServiceNow are racing to accelerate development of these applications, with the potential for billions of agents to be deployed in the coming years.

Consulting leaders like Accenture and Deloitte are taking NVIDIA AI to the world's enterprises. Accenture launched a new business group with 30,000 professionals trained on NVIDIA AI technology to help facilitate this global build-out. Additionally, Accenture with over 770,000 employees is leveraging NVIDIA-powered Agentic AI applications internally, including in one case that cuts manual steps in marketing campaigns by 25% to 35%. Nearly 1,000 companies are using NVIDIA NIM and the speed of its uptake is evident in NVIDIA AI Enterprise monetization. We expect NVIDIA AI Enterprise full year revenue to increase over 2 times from last year and our pipeline continues to build.

Overall, our software, service, and support revenue is annualizing at $1.5 billion, and we expect to exit this year annualizing at over $2 billion. Industrial AI and robotics are accelerating. This is triggered by breakthroughs in physical AI, foundation models that understand the physical world. Like NVIDIA NeMo for enterprise AI agents, we built NVIDIA Omniverse for developers to build, train, and operate industrial AI and robotics.

Some of the largest industrial manufacturers in the world are adopting NVIDIA Omniverse to accelerate their businesses, automate their workflows, and achieve new levels of operating efficiency. Foxconn, the world's largest electronics manufacturer, is using digital twins and industrial AI built on NVIDIA Omniverse to speed the bring-up of its Blackwell factories and drive new levels of efficiency. In its Mexico facility alone, Foxconn expects a reduction of over 30% in annual kilowatt-hour usage.

From a geographic perspective, our Data Center revenue in China grew sequentially due to shipments of export-compliant Hopper products to industries. As a percentage of total Data Center revenue, it remains well below levels prior to the onset of export controls. We expect the market in China to remain very competitive going forward. We will continue to comply with export controls while serving our customers. Our sovereign AI initiatives continue to gather momentum as countries embrace NVIDIA accelerated computing for a new industrial revolution powered by AI.

India's leading CSPs, including Tata Communications and Yotta Data Services, are building AI factories for tens of thousands of NVIDIA GPUs. By year-end, they will have boosted NVIDIA GPU deployments in the country by nearly 10 times. Infosys, TCS, and Wipro are adopting NVIDIA AI Enterprise and upskilling nearly 0.5 million developers and consultants to help clients build and run AI agents on our platform. In Japan, SoftBank is building the nation's most powerful AI supercomputer with NVIDIA DGX Blackwell and Quantum InfiniBand. SoftBank is also partnering with NVIDIA to transform the telecommunications network into a distributed AI network with the NVIDIA AI Aerial and AI-RAN platform that can process both 5G RAN and AI on CUDA. We are launching the same in the US with T-Mobile.

Leaders across Japan, including Fujitsu, NEC, and NTT, are adopting NVIDIA AI Enterprise, and major consulting companies, including EY Strategy and Consulting, will help bring NVIDIA AI technology to Japan's industries. Networking revenue increased 20% year-on-year. Areas of sequential revenue growth include InfiniBand and Ethernet switches, SmartNICs, and BlueField DPUs. While networking revenue was sequentially down, networking demand is strong and growing, and we anticipate sequential growth in Q4. CSPs and supercomputing centers are using and adopting the NVIDIA InfiniBand platform to power new H200 clusters. NVIDIA Spectrum-X Ethernet for AI revenue increased over 3 times year-on-year, and our pipeline continues to build, with multiple CSPs and consumer Internet companies planning large cluster deployments.

Traditional Ethernet was not designed for AI. NVIDIA Spectrum-X uniquely leverages technology previously exclusive to InfiniBand to enable customers to achieve massive scale of their GPU compute. Utilizing Spectrum-X, xAI's Colossus 100,000-Hopper supercomputer experienced zero application latency degradation and maintained 95% data throughput, versus 60% for traditional Ethernet.
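As a rough illustration of what those utilization figures mean for delivered bandwidth (the 400 Gb/s per-link line rate below is an assumed example value, not a number from the call):

```python
# Effective per-link throughput at the utilization levels quoted on the call:
# 95% for Spectrum-X vs. 60% for traditional Ethernet. The 400 Gb/s line rate
# is an assumed example value for illustration.
line_rate_gbps = 400
spectrum_x_gbps = 0.95 * line_rate_gbps   # effective Gb/s on Spectrum-X
traditional_gbps = 0.60 * line_rate_gbps  # effective Gb/s on traditional Ethernet
advantage = spectrum_x_gbps / traditional_gbps
print(spectrum_x_gbps, traditional_gbps, round(advantage, 2))
```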

Now moving to gaming and AI PCs. Gaming revenue of $3.3 billion increased 14% sequentially and 15% year-on-year. Q3 was a great quarter for gaming with notebook, console, and desktop revenue, all growing sequentially and year-on-year.

RTX end-demand was fueled by strong back-to-school sales as consumers continue to choose GeForce RTX GPUs and devices to power gaming, creative, and AI applications. Channel inventory remains healthy, and we are gearing up for the holiday season. We began shipping new GeForce RTX AI PCs with up to 321 AI TOPS from ASUS and MSI, with Microsoft's Copilot+ capabilities anticipated in Q4. These machines harness the power of RTX ray tracing and AI technologies to supercharge gaming, photo and video editing, image generation, and coding. This past quarter, we celebrated the 25th anniversary of the GeForce 256, the world's first GPU. From transforming computer graphics to igniting the AI revolution,

NVIDIA's GPUs have been the driving force behind some of the most consequential technologies of our time. Moving to ProViz. Revenue of $486 million was up 7% sequentially and 17% year-on-year. NVIDIA RTX workstations continue to be the preferred choice to power professional graphics, design, and engineering-related workloads. Additionally, AI is emerging as a powerful demand driver, including autonomous vehicle simulation, generative AI model prototyping for productivity-related use cases, and generative AI, content creation in media and entertainment.

Moving to Automotive. Revenue was a record $449 million, up 30% sequentially and up 72% year-on-year. Strong growth was driven by self-driving ramps of NVIDIA Orin and robust end-market demand for NEVs. Volvo Cars is rolling out its fully electric SUV built on NVIDIA Orin and DriveOS.
Moving to the rest of the P&L. GAAP gross margin was 74.6% and non-GAAP gross margin was 75%, down sequentially, primarily driven by a mix shift from H100 systems to more complex and higher-cost systems within Data Center. Sequentially, GAAP and non-GAAP operating expenses were up 9% due to higher compute, infrastructure, and engineering development costs for new product introductions.

In Q3, we returned $11.2 billion to shareholders in the form of share repurchases and cash dividends. Now, let me turn to the outlook for the fourth quarter. Total revenue is expected to be $37.5 billion, plus or minus 2%, which incorporates continued demand for the Hopper architecture and the initial ramp of our Blackwell products. While demand greatly exceeds supply, we are on track to exceed our previous Blackwell revenue estimate of several billion dollars as our visibility into supply continues to increase. On gaming, although sell-through was strong in Q3, we expect fourth-quarter revenue to decline sequentially due to supply constraints.

GAAP and non-GAAP gross margins are expected to be 73% and 73.5%, respectively, plus or minus 50 basis points. Blackwell is a customizable AI infrastructure with several different types of NVIDIA-built chips, multiple networking options, and configurations for both air- and liquid-cooled data centers. Our current focus is on ramping to strong demand, increasing system availability, and providing the optimal mix of configurations to our customers.

As Blackwell ramps, we expect gross margins to moderate to the low-70s. When fully ramped, we expect Blackwell margins to be in the mid-70s. GAAP and non-GAAP operating expenses are expected to be approximately $4.8 billion and $3.4 billion, respectively. We are a data center scale AI infrastructure company. Our investments include building data centers for development of our hardware and software stacks and to support new introductions.

GAAP and non-GAAP other income and expenses are expected to be an income of approximately $400 million, excluding gains and losses from non-affiliated investments. GAAP and non-GAAP tax rates are expected to be 16.5% plus or minus 1%, excluding any discrete items. Further financial details are included in the CFO commentary and other information available on our IR websites.
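The guidance figures above imply the following ranges; a quick sketch using only the numbers stated on the call:

```python
# Implied Q4 FY2025 guidance ranges from the figures stated on the call:
# revenue of $37.5B plus or minus 2%; non-GAAP gross margin of 73.5%
# plus or minus 50 basis points.
revenue_mid = 37.5e9
rev_low, rev_high = revenue_mid * 0.98, revenue_mid * 1.02
gm_mid = 0.735
gm_low, gm_high = gm_mid - 0.0050, gm_mid + 0.0050
print(f"revenue: ${rev_low/1e9:.2f}B to ${rev_high/1e9:.2f}B")
print(f"non-GAAP gross margin: {gm_low:.1%} to {gm_high:.1%}")
```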

In closing, let me highlight upcoming events for the financial community. We will be attending the UBS Global Technology and AI Conference on December 3rd in Scottsdale. Please join us at CES in Las Vegas, where Jensen will deliver a keynote on January 6th, and we will host a Q&A session for financial analysts the next day, January 7th. Our earnings call to discuss results for the fourth quarter of fiscal 2025 is scheduled for February 26th, 2025.

We will now open the call for questions. Operator, can you poll for questions, please?

Question-and-Answer Session

Operator

[Operator Instructions] Your first question comes from the line of C.J. Muse of Cantor Fitzgerald. Your line is open.

C.J. Muse

Yes, good afternoon. Thank you for taking the question. I guess just a question for you on the debate around whether scaling for large language models has stalled. Obviously, we're very early here, but would love to hear your thoughts on this front. How are you helping your customers as they work through these issues? And obviously, part of the context here is that we're discussing clusters that have yet to benefit from Blackwell. So is this driving even greater demand for Blackwell? Thank you.

Jensen Huang

Foundation model pre-training scaling is intact and it's continuing. As you know, this is an empirical law, not a fundamental physical law, but the evidence is that it continues to scale. What we're learning, however, is that it's not enough; we've now discovered two other ways to scale. One is post-training scaling. Of course, the first generation of post-training was reinforcement learning with human feedback, but now we have reinforcement learning with AI feedback and all forms of synthetically generated data that assist in post-training scaling.

And one of the biggest events and one of the most exciting developments is Strawberry, ChatGPT o1, OpenAI's o1, which does inference-time scaling, what's called test-time scaling. The longer it thinks, the better and higher-quality answer it produces. It considers approaches like chain of thought, multi-path planning, and all kinds of techniques necessary to reflect, and so on. Intuitively, it's a little bit like us thinking in our head before we answer a question.
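As a toy analogy for test-time scaling (an illustrative sketch only, not OpenAI's actual method): give a solver a larger per-query compute budget and the best answer it finds can only improve.

```python
import random

# Toy analogy for test-time scaling (illustrative only, not OpenAI's method):
# spending more compute per query (sampling more candidate solutions)
# monotonically improves the best answer found for a toy minimization task.
def solve(budget: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    # Toy task: minimize (x - 3)^2 over random candidates in [-10, 10].
    return min((rng.uniform(-10, 10) - 3) ** 2 for _ in range(budget))

quick_answer = solve(budget=10)      # little "thinking"
long_answer = solve(budget=10_000)   # much more "thinking"
assert long_answer <= quick_answer   # more test-time compute never hurts best-of-N
```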
And so we now have three ways of scaling, and we're seeing all three ways of scaling. As a result of that, the demand for our infrastructure is really great. You see now that at the tail end of the last generation of foundation models, we were at about 100,000 Hoppers. The next generation starts at 100,000 Blackwells. And so that kind of gives you a sense of where the industry is moving with respect to pre-training scaling, post-training scaling, and now, very importantly, inference-time scaling. And so the demand is really great for all of those reasons. But remember, simultaneously, we're seeing inference really starting to scale out for our company. We are the largest inference platform in the world today because our installed base is so large, and everything that was trained on Amperes and Hoppers inferences incredibly well on Amperes and Hoppers.

And as we move to Blackwells for training foundation models, it leaves behind it a large installed base of extraordinary infrastructure for inference. And so we're seeing inference demand go up. We're seeing inference-time scaling go up. We see the number of AI-native companies continue to grow. And of course, we're starting to see enterprise adoption of agentic AI, which really is the latest rage. And so we're seeing a lot of demand coming from a lot of different places.

Operator

Your next question comes from the line of Toshiya Hari of Goldman Sachs. Your line is open.

Toshiya Hari

Hi, good afternoon. Thank you so much for taking the question. Jensen, you executed the mask change earlier this year. There were some reports over the weekend about some heating issues. On the back of this, we've had investors ask about your ability to execute to the roadmap you presented at GTC this year, with Ultra coming out next year and the transition to [Ruben] (ph) in 2026. Can you speak to that? Some investors are questioning it, so if you can speak to your ability to execute on time, that would be super helpful.

And then a quick part B, on supply constraints: is it a multitude of componentry that's causing this, or is it specifically [HBM] (ph)? Are the supply constraints getting better, or are they worsening? Any sort of color on that would be super helpful as well. Thank you.

Jensen Huang

Yes, thanks. Thanks. So let's see, back to the first question. Blackwell production is in full steam. In fact, as Colette mentioned earlier, we will deliver this quarter more Blackwells than we had previously estimated. And so the supply chain team is doing an incredible job working with our supply partners to increase Blackwell, and we're going to continue to work hard to increase Blackwell through next year. It is the case that demand exceeds our supply and that's expected as we're in the beginnings of this generative AI revolution as we all know. And we're at the beginning of a new generation of foundation models that are able to do reasoning and able to do long thinking and of course, one of the really exciting areas is physical AI, AI that now understands the structure of the physical world.

And so Blackwell demand is very strong. Our execution is going well. And there's obviously a lot of engineering that we're doing across the world. You see now systems that are being stood up by Dell and CoreWeave; I think you saw systems from Oracle stood up. You have systems from Microsoft, and they're about to preview their Grace Blackwell systems. You have systems that are at Google. And so all of these CSPs are racing to be first. The engineering that we do with them is, as you know, rather complicated. And the reason for that is because, although we build full stack and full infrastructure, we disaggregate this AI supercomputer and we integrate it into all of the custom data center architectures around the world. That integration process is something we've done for several generations now. We're very good at it, but still, there's a lot of engineering that happens at this point. But as you see from all of the systems that are being stood up, Blackwell is in great shape. And as we mentioned earlier, the supply and what we're planning to ship this quarter is greater than our previous estimates.

With respect to the supply chain, there are seven different chips, seven custom chips, that we built in order for us to deliver the Blackwell systems. The Blackwell systems come air-cooled or liquid-cooled, with NVLink 8, NVLink 36, or NVLink 72, and with x86 or Grace, and the integration of all of those systems into the world's data centers is nothing short of a miracle. To appreciate the component supply chain necessary to ramp at this scale, you have to go back and take a look at how much Blackwell we shipped last quarter, which was zero.

And in terms of how much Blackwell in total systems will ship this quarter -- which is measured in billions of dollars -- the ramp is incredible. And so almost every company in the world seems to be involved in our supply chain. And we've got great partners, everybody from, of course, TSMC and Amphenol, the connector company -- an incredible company -- Vertiv and SK Hynix and Micron, SPIL, Amkor and KYEC, and there's Foxconn and the many factories that they've built, and Quanta and Wiwynn, and gosh, Dell and HP and Super Micro and Lenovo. The number of companies is just really quite incredible. And I'm sure I've missed partners that are involved in the ramping up of Blackwell, which I really appreciate. And so anyways, I think we're in great shape with respect to the Blackwell ramp at this point.
至于本季度Blackwell总系统的出货量——以十亿美元为单位——增长是惊人的。因此,世界上几乎每家公司似乎都参与了我们的供应链。我们有很棒的合作伙伴,当然包括台积电和连接器公司安费诺——一家令人难以置信的公司——Vertiv、SK 海力士、美光、SPIL、Amkor 和 KYEC,还有富士康及其建造的许多工厂,广达和纬颖,天哪,还有戴尔、惠普、超微和联想,公司的数量实在是令人难以置信。我肯定我遗漏了一些参与Blackwell扩展的合作伙伴,我真的很感激。所以无论如何,我认为我们在Blackwell扩展方面处于良好状态。

And then lastly, your question about our execution of our roadmap. We're on an annual roadmap and we're expecting to continue to execute on our annual roadmap. And by doing so, we increase the performance, of course, of our platform, but it's also really important to realize that when we're able to increase performance and do so at X factors at a time, we're reducing the cost of training, we're reducing the cost of inferencing, we're reducing the cost of AI so that it could be much more accessible.
最后,关于我们执行路线图的问题。我们有一个年度路线图,并期望继续执行我们的年度路线图。通过这样做,我们当然提高了平台的性能,但同样重要的是要意识到,当我们能够提高性能并一次性提高 X 倍时,我们正在降低训练成本,降低推理成本,降低 AI 成本,使其更加可及。

But the other factor that's very important to note is that a data center is always of some fixed size. It could be, of course, tens of megawatts in the past; now most data centers are 100 megawatts to several hundred megawatts, and we're planning on gigawatt data centers. It doesn't really matter how large the data centers are; the power is limited. And when you're in a power-limited data center, the best -- the highest -- performance per watt translates directly into the highest revenues for our partners. And so on the one hand, our annual roadmap reduces cost, but on the other hand, because our perf per watt is so good compared to anything out there, we generate for our customers the greatest possible revenues. And so that annual rhythm is really important to us, and we have every intention of continuing to do that. And everything is on track as far as I know.
但另一个非常重要的因素是,当数据中心的规模是固定的,而数据中心总是有一个固定的规模时,这一点需要注意。过去,数据中心可能只有几十兆瓦,而现在大多数数据中心的规模通常是100兆瓦到几百兆瓦,我们正在规划的是千兆瓦级别的数据中心。实际上,数据中心的大小并不重要,关键是电力的限制。当你处于电力受限的数据中心时,最高的每瓦性能直接转化为我们合作伙伴的最高收入。所以,一方面,我们的年度路线图会降低成本,但另一方面,由于我们的每瓦性能相较于市场上的任何产品都非常优秀,我们为客户创造了最大的可能收入。因此,这种年度节奏对我们来说非常重要,我们完全有意继续保持这一节奏。就我所知,一切都在按计划进行。
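Jensen's power-limited argument is simple arithmetic: once the power envelope is fixed, revenue scales linearly with performance per watt. As a hedged sketch (the token rates and prices below are made-up illustrative assumptions, not NVIDIA figures):

```python
# Sketch of the power-limited data center economics Jensen describes.
# All constants here are hypothetical illustrations, not NVIDIA data.

SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_revenue_usd(power_mw, tokens_per_joule, usd_per_million_tokens):
    """Yearly token revenue of an AI factory capped by its power budget."""
    joules_per_year = power_mw * 1e6 * SECONDS_PER_YEAR  # watts * seconds
    tokens_per_year = joules_per_year * tokens_per_joule
    return tokens_per_year / 1e6 * usd_per_million_tokens

# The same 100 MW facility with two hypothetical platforms:
baseline = annual_revenue_usd(100, tokens_per_joule=1.0, usd_per_million_tokens=0.5)
improved = annual_revenue_usd(100, tokens_per_joule=4.0, usd_per_million_tokens=0.5)

# Power is the binding constraint, so a 4x perf-per-watt gain
# is a 4x revenue gain at the same electricity bill.
print(f"baseline ${baseline/1e9:.2f}B/yr vs improved ${improved/1e9:.2f}B/yr")
```

The specific dollar values are meaningless; the point is the linear relationship between perf per watt and revenue when the power budget, not the capital budget, is what caps the facility.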

Operator 操作员

Your next question comes from the line of Timothy Arcuri of UBS. Your line is open.
接下来的问题来自瑞银的 Timothy Arcuri。您的线路已打开。

Timothy Arcuri

Thanks a lot. I'm wondering if you can talk about the trajectory of how Blackwell is going to ramp this year. I know, Jensen, you did just talk about Blackwell being better than the several billions of dollars I think you had said in January. It sounds like you're going to do more than that. But I think in recent months, you also said that Blackwell crosses over Hopper in the April quarter. So I guess I had two questions. First of all, is that still the right way to think about it -- that Blackwell will cross over Hopper in April?
非常感谢。我想知道你能否谈谈Blackwell今年的增长轨迹。我知道,Jensen,你刚刚谈到Blackwell比我想你在一月份说的几十亿美元要好。听起来你会做得更多。但我想在最近几个月,你也说过Blackwell将在四月季度超过 Hopper。所以我想我有两个问题。首先,Blackwell在四月超过 Hopper 仍然是正确的想法吗?

And then Colette, you kind of talked about Blackwell bringing down gross margin to the low-70s as it ramps. So I guess if April is the crossover, is that the worst of the pressure on gross margin? So you're going to be kind of in the low-70s as soon as April. I'm just wondering if you can sort of shape that for us. Thanks.
然后,Colette,你提到Blackwell将毛利率降到 70%出头的水平。所以我想如果四月是交叉点,那是毛利率压力最大的时期吗?所以你们四月就会处于 70%出头的水平。我只是想知道你能否为我们描述一下。谢谢。

Jensen Huang

Colette, why don't you start?
科莱特,你为什么不开始呢?

Colette Kress

Sure. Let me first start with your question, Tim, regarding our gross margins. As we are ramping Blackwell in the very beginning, with the many different configurations and the many different chips that we are bringing to market, we are going to focus on making sure we have the best experience for our customers as they stand that up. We will start growing into our gross margins, but we do believe those will be in the low 70s in that first part of the ramp. So you're correct: as you look at the quarters following after that, we will start increasing our gross margins, and we hope to get to the mid-70s quite quickly as part of that ramp.
当然。让我先从你的问题开始,Tim。感谢您关于我们毛利率的提问,我们在最初阶段讨论了我们的毛利率,因为我们正在加速Blackwell的推出,以及我们将推出市场的许多不同配置和芯片,我们将专注于确保为客户提供最佳体验。我们将开始增加我们的毛利率,但我们确实相信在加速的第一阶段,这些毛利率将处于 70%出头的水平。因此,您是正确的,当您查看之后的季度时,我们将开始提高我们的毛利率,并希望在加速的过程中很快达到 70%中段。

Jensen Huang

Hopper demand will continue through next year, surely the first several quarters of the next year. And meanwhile, we will ship more Blackwells next quarter than this one, and we'll ship more Blackwells the quarter after that than our first quarter. And so that kind of puts it in perspective. We are really at the beginning of two fundamental shifts in computing that are really quite significant. The first is moving from coding that runs on CPUs to machine learning that creates neural networks that run on GPUs.
对 Hopper 的需求将持续到明年,肯定是明年的前几个季度。同时,我们将在下个季度比这个季度运送更多的 Blackwells。再下个季度,我们将比第一个季度运送更多的 Blackwells。这就让我们有了一个视角。我们正处于计算领域两个根本性转变的开端,这确实相当重要。第一个是从在 CPU 上运行的编码转向在 GPU 上运行的创建神经网络的机器学习。

And that fundamental shift from coding to machine learning is widespread at this point. There are no companies who are not going to do machine learning. And so machine learning is also what enables generative AI. And so on the one hand, the first thing that's happening is that a trillion dollars' worth of computing systems and data centers around the world is now being modernized for machine learning. On the other hand, secondarily, on top of these systems we're going to be creating a new type of capability called AI.
从编码到机器学习的这种根本转变目前已经是普遍的了。没有任何公司能够忽视机器学习。因此,机器学习也正是生成式AI得以实现的基础。从一方面来看,首先发生的事情是,全球价值一万亿美元的计算系统和数据中心正在为机器学习进行现代化改造。另一方面,第二步,我想是,在这些系统之上,我们将创建一种新的能力,称为AI。

And when we say generative AI, we're essentially saying that these data centers are really AI factories. They're generating something. Just like we generate electricity, we're now going to be generating AI. And if the number of customers is large -- just as the number of consumers of electricity is large -- these generators are going to be running 24/7. And today, many AI services are running 24/7, just like an AI factory. And so we're going to see this new type of system come online, and I call it an AI factory because that's really as close as you can get to what it is. It's unlike a data center of the past. And so these two fundamental trends are really just beginning. We expect this growth -- this modernization and the creation of a new industry -- to go on for several years.
当我们说生成式 AI 时,我们实际上是在说这些数据中心实际上是 AI 工厂。它们正在生成某些东西。就像我们发电一样,我们现在将生成 AI。如果客户数量很大,就像电力消费者数量很大一样,这些生成器将全天候运行。如今,许多 AI 服务正像 AI 工厂一样全天候运行。因此,我们将看到这种新型系统上线,我称之为 AI 工厂,因为这确实是最接近它的东西。它不同于过去的数据中心。因此,这两种基本趋势才刚刚开始。因此,我们预计这种增长——这种现代化和新行业的创建将持续数年。

Operator 操作员

Your next question comes from the line of Vivek Arya of Bank of America Securities. Your line is open.
接下来的问题来自美国银行证券的 Vivek Arya。您的线路已打开。

Vivek Arya

Thanks for taking my question. Colette, just to clarify: do you think it's a fair assumption that NVIDIA could recover to a mid-70s gross margin in the back half of calendar 2025? Just wanted to clarify that. And then, Jensen, my main question: historically, when we have seen hardware deployment cycles, they have inevitably included some digestion along the way. When do you think we get to that phase, or is it just too premature to discuss that because you're just at the start of Blackwell? How many quarters of shipments do you think are required to satisfy this first wave? Can you continue to grow this into calendar 2026? Just how should we be prepared for what we have seen historically, right -- the periods of digestion along the way of a long-term, secular hardware deployment?
感谢你提出问题。Colette,为了澄清一下,你认为假设NVIDIA能在2025年下半年恢复到中70%的毛利率,这个假设是否合理?我只是想确认一下。然后,Jensen,我的主要问题是,历史上,当我们看到硬件部署周期时,通常会包含一些消化期。你认为我们什么时候会进入这个阶段,还是说现在讨论这个问题为时尚早,因为Blackwell才刚刚开始?你认为需要多少个季度的出货量才能满足这一波的需求?你们能否继续在2026年日历年继续增长?我们应该如何准备,去观察我们历史上看到的情况——即在长期的硬件部署过程中,往往会有消化期?

Colette Kress

Okay. Vivek, thank you for the question. Let me clarify your question regarding gross margins: could we reach the mid-70s in the second half of next year? And yes, I think it is a reasonable assumption, or a goal for us, but we'll just have to see how that mix of the ramp goes. But yes, it is definitely possible.
好的,Vivek,谢谢你的提问。让我澄清一下你关于毛利率的问题。我们能否在明年下半年达到中70%的毛利率?是的,我认为这是一个合理的假设或目标,但我们还需要观察 ramp 的混合情况。但确实,这是有可能实现的。

Jensen Huang

The way to think through that, Vivek, is I believe that there will be no digestion until we modernize a trillion dollars' worth of data centers. If you look at the world's data centers, the vast majority of them were built for a time when we wrote applications by hand and ran them on CPUs. It's just not a sensible thing to do anymore. If every company is ready to spend its CapEx to build a data center tomorrow, they ought to build it for a future of machine learning and generative AI.
Vivek,思考这个问题的方式是,我相信在我们将一万亿美元的数据中心现代化之前,不会出现消化期。如果你看看全球的数据中心,绝大多数都是在我们手动编写应用程序并运行在CPU上的时代建造的,现在已经不再是一个明智的做法了。如果每家公司准备明天建设一个数据中心,他们应该为机器学习和生成式AI的未来来建设它。

Because they have plenty of old data centers. And so what's going to happen over the course of the next X number of years -- let's assume that over the course of four years, the world's data centers could be modernized as we grow into IT. As you know, IT continues to grow about 20%, 30% a year, let's say. And let's say by 2030, the world's data centers for computing are, call it, a couple of trillion dollars. We have to grow into that. We have to modernize the data center from coding to machine learning. That's number one.
因为他们有很多旧的数据中心。所以在未来 X 年中会发生什么,让我们假设在四年内,随着我们进入 IT,世界的数据中心可以实现现代化。如你所知,IT 每年大约增长 20%、30%,我们假设到 2030 年,全球用于计算的数据中心价值达到几万亿美元。我们必须适应这一点。我们必须从编码到机器学习来实现数据中心的现代化。这是第一点。
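The modernization math Jensen sketches can be checked with a one-line compound-growth calculation. The starting base (about $1 trillion today) and the growth rates (20% to 30% a year) are his rough spoken figures; treating them as precise inputs is purely illustrative.

```python
# Hedged sketch of "grow into a couple of trillion dollars by 2030",
# using Jensen's rough spoken figures as if they were exact inputs.

def installed_base_usd_t(start_usd_t, annual_growth, years):
    """Compound growth of the data center installed base, in $ trillions."""
    return start_usd_t * (1 + annual_growth) ** years

for growth in (0.20, 0.30):
    by_2030 = installed_base_usd_t(1.0, growth, years=6)  # 2024 -> 2030
    print(f"{growth:.0%} per year -> ${by_2030:.1f}T by 2030")
# 20%/yr compounds to ~$3.0T and 30%/yr to ~$4.8T from a $1T base,
# so "a couple of trillion dollars" sits comfortably inside that range
# even if only part of total IT spend is data center compute.
```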
The second part of it is generative AI. We're now producing a new type of capability that the world has never known, a new market segment that the world has never had. If you look at OpenAI, it didn't replace anything. It's something that's completely brand new. In a lot of ways, it's like when the iPhone came -- it was completely brand new. It wasn't really replacing anything. And so we're going to see more and more companies like that. And they're going to create and generate, out of their services, essentially intelligence. Some of it would be digital artist intelligence like Runway.
它的第二部分是生成式人工智能,我们现在正在生产一种世界从未见过的新型能力,一个世界从未有过的新市场细分。如果你看看 OpenAI,它并没有取代任何东西。它是完全全新的。在很多方面,就像 iPhone 出现时,它是完全全新的。它并没有真正取代任何东西。因此,我们将看到越来越多这样的公司。它们将从其服务中创造和生成本质上的智能。其中一些将是像 Runway 这样的数字艺术家智能。

Some of it would be basic intelligence, like OpenAI. Some of it would be legal intelligence like Harvey, digital marketing intelligence like [Reuters] (ph), and so on and so forth. And the number of these companies -- these, what do they call them, AI-native companies -- is just in the hundreds. At almost every platform shift there were such companies: there were Internet companies, as you recall; there were cloud-first companies; there were mobile-first companies; and now there are AI natives. And so these companies are being created because people see that there's a platform shift and there's a brand-new opportunity to do something completely new.
其中一些将是基础智能,比如OpenAI。一些将是法律智能,如Harvey;数字营销智能,如[Reuters](音);等等。这些公司,所谓的AI本土公司,数量已经达到数百家,几乎每一次平台的转变都会出现类似的公司,比如你记得的互联网公司、云优先公司、移动优先公司,现在则是AI本土公司。正是因为人们看到平台的转变以及一个全新的机会来做一些完全不同的事情,这些公司才得以创造。

And so my sense is that we're going to continue to build out to modernize IT, modernize computing, number one. And then number two, create these AI factories that are going to be for a new industry for the production of artificial intelligence.
所以我的感觉是,我们将继续建设以实现 IT 现代化,计算现代化,这是第一。然后第二,创建这些人工智能工厂,这将成为生产人工智能的新产业。

Operator 操作员

Your next question comes from the line of Stacy Rasgon of Bernstein Research. Your line is open.
接下来的问题来自 Bernstein Research 的 Stacy Rasgon。您的线路已打开。

Stacy Rasgon

Hi, guys. Thanks for taking my questions. Colette, I had a clarification and a question for you. The clarification: just when you say low-70s gross margins, does 73.5 count as low-70s, or do you have something else in mind? And for my question, you're guiding total revenues, so total Data Center revenues in the next quarter must be up, quote-unquote, several billion dollars, but it sounds like Blackwell now should be up more than that. But you also said Hopper was still strong. So is Hopper down sequentially next quarter? And if it is, why? Is it because of the supply constraints? China has been pretty strong -- is China kind of rolling off a bit into Q4? So any color you can give us on the Blackwell ramp and the Blackwell versus Hopper behavior into Q4 would be really helpful. Thank you.
嗨,大家好。感谢你们回答我的问题。Colette,我有一个澄清和一个问题要问你。澄清一下,当你说毛利率在 70%出头时,73.5 算是 70%出头吗,还是你有其他想法?我的问题是,你在指导总收入,所以我指的是下个季度的数据中心总收入必须增加所谓的几十亿美元,但听起来Blackwell现在应该增加更多。但你也说 Hopper 仍然很强劲。那么 Hopper 下个季度是按顺序下降吗?如果是这样,为什么?是因为供应限制吗?中国一直很强劲,中国在第四季度有点回落吗?所以你能给我们一些关于Blackwell增长和Blackwell与 Hopper 在第四季度表现的见解将非常有帮助。谢谢。
Colette Kress

So first, starting on your first question there, Stacy, regarding our gross margin and how we define low: low, of course, is below the mid. Let's say we might be at 71%, maybe about 72%, 72.5%; we're going to be in that range. We could be higher than that as well. We're just going to have to see how it comes through. We do want to make sure that we are ramping and continuing that improvement -- the improvement in terms of our yields, the improvement in terms of the product -- as we go through the rest of the year. So we'll get up to the mid-70s by that point.
首先回答你的第一个问题,Stacy,关于我们的毛利率和定义的低点。低点当然是低于中位数的,我们可能在 71%左右,也许是 72%、72.5%,我们将在这个范围内。我们也可能高于这个范围。我们需要看看结果如何。我们确实希望确保我们在提高并继续改进,无论是产量方面的改进,还是产品方面的改进,随着今年剩余时间的推进。因此,到那时我们将达到 70 年代中期。

The second part was your question regarding Hopper and what Hopper is doing. We have seen substantial growth for our H200, not only in terms of orders but in the quickness with which customers are standing it up. It is an amazing product, and it's the fastest-growing ramp that we've seen. We will continue to be selling Hopper in this quarter -- in Q4, for sure -- across the board in terms of all of our different configurations, and our configurations include what we may do in terms of China. But keep in mind that folks are also, at the same time, looking to build out their Blackwell. So we've got a little bit of both happening in Q4. But yes, is it possible for Hopper to grow between Q3 and Q4? It's possible, but we'll just have to see.
第二个声明是关于我们的 Hopper 的问题,以及我们的 Hopper 在做什么。我们已经看到 H200 的显著增长,不仅在订单方面,而且在那些快速部署方面。这是一个了不起的产品,也是我们见过的增长最快的产品。我们将在本季度继续销售 Hopper,第四季度肯定会销售,这涵盖了我们所有不同的配置,包括我们可能在中国的配置。但请记住,人们同时也在寻求建立他们的Blackwell。所以在第四季度我们有两方面的事情在发生。但是,是的,Hopper 在第三季度和第四季度之间增长是有可能的,但我们只能拭目以待。

Operator 操作员

Your next question comes from the line of Joseph Moore of Morgan Stanley. Your line is open.
接下来的问题来自摩根士丹利的约瑟夫·摩尔。您的线路已打开。

Joseph Moore

Great. Thank you. I wonder if you could talk a little bit about what you're seeing in the inference market. You've talked about Strawberry and some of the ramifications of longer-scaling inference projects. But you've also talked about the possibility that as some of these Hopper clusters age, you could use some of the latent Hopper chips for inference. So do you expect inference to outgrow training in the next 12-month time frame? And just generally, your thoughts there?
很好。谢谢。我想知道您是否可以谈谈您在推理市场中看到的一些情况。您谈到了 Strawberry 和更长规模推理项目的一些影响。但您也谈到了随着一些 Hopper 集群老化,您可以使用一些 Hopper 潜在芯片进行推理的可能性。所以我想问,您是否预计在接下来的 12 个月时间框架内,推理会超过训练的增长,您对此有何看法?

Jensen Huang

Our hopes and dreams are that someday, the world does a ton of inference. That's when AI will have really succeeded, right -- when every single company is doing inference inside their companies: for the marketing department and forecasting department and supply chain group and their legal department and engineering, and coding, of course. And so we hope that every company is doing inference 24/7, and that there will be a whole bunch of AI-native startups -- thousands of AI-native startups -- generating tokens and generating AI, so that every aspect of your computer experience, from using Outlook to PowerPoint, or when you're sitting there with Excel, is constantly generating tokens.
我们的希望和梦想是,有一天,世界能够进行大量的推理。那时,人工智能才真正成功,对吧。每家公司都在其公司内部为市场部、预测部、供应链组、法律部、当然还有工程和编码进行推理。因此,我们希望每家公司都能全天候进行推理。并且会有一大批原生 AI 初创公司,成千上万的原生 AI 初创公司正在生成代币和 AI,从使用 Outlook 到 PowerPoint 或使用 Excel 的每一个计算机体验方面,你都在不断生成代币。

And every time you read a PDF -- open a PDF -- it generates a whole bunch of tokens. One of my favorite applications is NotebookLM, this Google application that came out. I use the living daylights out of it just because it's fun, and I put every PDF, every archive paper into it, just to listen to it as well as scan through it. And so the goal is to train these models so that people use them. And there's now a whole new era of AI, if you will -- a whole new genre of AI called physical AI. Just as large language models understand human language and, if you will, our thinking process, physical AI understands the physical world. It understands the meaning of structure, understands what's sensible and what's not, what could happen and what won't. And not only does it understand, it can predict and roll out a short future. That capability is incredibly valuable for industrial AI and robotics.
每次你阅读一个 PDF,打开一个 PDF,它都会生成一大堆的标记。我最喜欢的应用之一是 NotebookLM,这是谷歌推出的一个应用。我非常频繁地使用它,因为它很有趣。我把每个 PDF,每篇存档论文都放进去,不仅是为了听它,还为了浏览它。因此,我认为——目标是训练这些模型以便人们使用它。现在有一个全新的 AI 时代,如果你愿意的话,一个全新的 AI 类型,叫做物理 AI,就是那些大型语言模型理解人类语言和我们的思维过程,如果你愿意的话。物理 AI 理解物理世界,它理解结构的意义,理解什么是合理的,什么不是,什么可能发生,什么不会发生,它不仅理解,还能预测并推出一个短期的未来。这种能力对于工业 AI 和机器人技术来说是非常有价值的。
And so that's fired up so many AI-native companies and robotics companies and physical AI companies that you're probably hearing about. And it's really the reason why we built Omniverse. Omniverse is so that we can enable these AIs to be created and learn in Omniverse and learn from synthetic data generation and reinforcement learning physics feedback instead of human feedback is now physics feedback. To have these capabilities, Omniverse was created so that we can enable physical AI. And so that the goal is to generate tokens. The goal is to inference and we're starting to see that growth happening. So I'm super excited about that.
这激发了许多原生 AI 公司、机器人公司和物理 AI 公司的兴起,你可能已经听说过。这也是我们构建 Omniverse 的原因。Omniverse 的目的是让这些 AI 能够在 Omniverse 中创建和学习,并从合成数据生成和强化学习物理反馈中学习,而不是人类反馈,现在是物理反馈。为了拥有这些能力,Omniverse 被创建以支持物理 AI。目标是生成代币。目标是推理,我们开始看到这种增长的发生。所以我对此感到非常兴奋。

Now let me just say one more thing. Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand; you need the throughput to be high so that the cost could be as low as possible; but you also need the latency to be low. And computers that are high-throughput as well as low-latency are incredibly hard to build. And these applications have long context lengths, because they want to be able to inference with an understanding of the context of what they're being asked to do. And so the context length is growing larger and larger.
现在让我再说一件事。推理非常困难。推理之所以非常困难,是因为一方面你需要高精度。你需要高吞吐量以便成本尽可能低,但你也需要低延迟。而高吞吐量和低延迟的计算机非常难以构建。这些应用程序具有较长的上下文长度,因为它们希望理解,它们希望能够在理解所要求的内容的上下文中进行推理。因此,上下文长度越来越大。
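The tension Jensen describes between throughput and latency shows up even in a toy batching model: serving more requests per step raises tokens per second, but every request in the batch waits for the whole step. The timing constants below are invented for illustration and do not describe any real GPU.

```python
# Toy model of the inference serving trade-off Jensen describes:
# batching buys throughput at the price of per-request latency.
# Constants are invented assumptions, not measured hardware numbers.

FIXED_OVERHEAD_MS = 20.0  # per-step cost paid regardless of batch size
PER_SEQUENCE_MS = 1.0     # incremental cost for each sequence in the batch

def step_latency_ms(batch_size):
    """Wall-clock time for one decoding step at a given batch size."""
    return FIXED_OVERHEAD_MS + PER_SEQUENCE_MS * batch_size

def throughput_tok_s(batch_size):
    """Tokens per second, assuming one token per sequence per step."""
    return batch_size / (step_latency_ms(batch_size) / 1000.0)

for b in (1, 8, 64):
    print(f"batch={b:3d}: {step_latency_ms(b):5.1f} ms/step, "
          f"{throughput_tok_s(b):7.1f} tok/s")
# Under these made-up constants, batch=64 pays 4x the step latency of
# batch=1 but delivers 16x the throughput -- hence the balancing act.
```

Real serving stacks attack this trade-off with techniques like continuous batching and speculative decoding, but the underlying accuracy/throughput/latency tension he names is exactly this shape.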

On the other hand, the models are getting larger, they're multimodality. Just the number of dimensions that inference is innovating is incredible. And this innovation rate is what makes NVIDIA's architecture so great because our ecosystem is fantastic. Everybody knows that if they innovate on top of CUDA on top of NVIDIA's architecture, they can innovate more quickly and they know that everything should work. And if something were to happen, it's probably likely their code and not ours. And so that ability to innovate in every single direction at the same time, having a large installed base so that whatever you create could land on a NVIDIA computer and be deployed broadly all around the world in every single data center all the way out to the edge into robotic systems, that capability is really quite phenomenal.
另一方面,模型变得越来越大,它们是多模态的。仅仅是推理创新的维度数量就令人难以置信。而这种创新速度正是让 NVIDIA 架构如此出色的原因,因为我们的生态系统非常棒。大家都知道,如果他们在 CUDA 之上、在 NVIDIA 架构之上进行创新,他们可以更快地创新,并且知道一切都应该正常工作。如果出现问题,很可能是他们的代码问题而不是我们的。因此,这种在每个方向同时创新的能力,加上庞大的安装基础,使得无论你创造什么都可以在 NVIDIA 计算机上运行,并在全球每个数据中心一直到边缘的机器人系统中广泛部署,这种能力确实相当惊人。

Operator 操作员

Your next question comes from the line of Aaron Rakers of Wells Fargo. Your line is open.
接下来的问题来自富国银行的 Aaron Rakers。您的线路已打开。

Aaron Rakers

Yes, thanks for taking the question. I wanted to ask you as we kind of focus on the Blackwell cycle and think about the data center business. When I look at the results this last quarter, Colette, you mentioned that obviously, the networking business was down about 15% sequentially, but then your comments were that you were seeing very strong demand. You mentioned also that you had multiple cloud CSP design wins for these large-scale clusters. So I'm curious if you could unpack what's going on in the networking business and where maybe you've seen some constraints and just your confidence in the pace of Spectrum-X progressing to that multiple billions of dollars that you previously had talked about. Thank you.
是的,谢谢你提出这个问题。我想问一下,因为我们有点专注于Blackwell周期并考虑数据中心业务。当我查看上个季度的结果时,Colette,你提到显然网络业务环比下降了大约 15%,但你的评论是你看到了非常强劲的需求。你还提到你在这些大规模集群中赢得了多个云 CSP 设计。所以我很好奇你能否解读一下网络业务中发生的事情,以及你可能看到的一些限制,以及你对 Spectrum-X 进展到你之前谈到的数十亿美元的速度的信心。谢谢。

Colette Kress

Let's first start with the networking. The growth year-over-year is tremendous, and our focus since the beginning of our acquisition of Mellanox has really been about building up the work that we do together in the Data Center. Networking is such a critical part of that. Our ability to sell our networking with many of the systems that we are doing in the data center continues to grow and do quite well. So this quarter is just a slight dip down, and we're going to be right back up in terms of growing. They're getting ready for Blackwell and more and more systems that will be using not only our existing networking but also the networking that is going to be incorporated into a lot of the large systems that we are providing to them.
首先让我们从网络开始。同比增长非常惊人,自从我们收购 Mellanox 以来,我们的重点一直是共同构建我们在数据中心方面的工作。网络是其中一个关键部分。我们在数据中心中与许多系统一起销售网络的能力正在持续增长,并且表现良好。因此,这个季度只是略有下降,我们将在增长方面迅速回升。他们正在为Blackwell以及越来越多的系统做准备,这些系统不仅将使用我们现有的网络,还将使用我们提供给他们的许多大型系统中整合的网络。

Operator 操作员

Your next question comes from the line of Atif Malik of Citi. Your line is open.
接下来的问题来自花旗银行的 Atif Malik。您的线路已打开。

Atif Malik

Thank you for taking my question. I have two quick ones for Collette. Colette, on the last earnings call, you mentioned that sovereign demand is in low double-digit billions. Can you provide an update on that? And then can you explain the supply-constrained situation in gaming? Is that because you're shifting your supply towards data center?
感谢你提出问题。我有两个简短的问题想问Collette。Collette,在上次的财报电话会议中,你提到主权需求处于十几亿的低双位数区间。你能提供一下最新的情况吗?另外,你能解释一下游戏领域的供给受限情况吗?是因为你们将供应转向了数据中心吗?

Colette Kress

So first, starting in terms of sovereign AI: it's such an important part of growth, something that really surfaced with the onset of generative AI and the building of models in individual countries around the world. We see a lot of them, and we talked about a lot of them on the call today and the work that they are doing. So our sovereign AI and our pipeline going forward are still absolutely intact, as those countries are working to build these foundational models in their own language, in their own culture, and working with the enterprises within those countries.
因此,首先从主权 AI 的角度来看,这是增长中非常重要的一部分,随着生成式 AI 的出现以及在世界各地的各个国家建立模型,这一点真正浮出水面。我们看到很多这样的情况,并且在今天的电话会议中讨论了很多他们正在做的工作。因此,我们的主权 AI 和未来的计划仍然完全完好无损,因为他们正在努力用自己的语言、文化构建这些基础模型,并在这些国家的企业中开展工作。
And I think you'll continue to see growth opportunities with our regional clouds that are being stood up and/or those that are focusing on AI factories for many parts of sovereign AI. These are areas that are growing not only in Europe; you're also seeing this growth in Asia-Pac as well. Let me flip to your second question regarding gaming. On gaming supply right now, we're busy trying to make sure that we can ramp all of our different products, and in this case, our gaming supply, given what we saw in sell-through, was moving quite fast. Now the challenge that we have is how fast we can get that supply ready into the market for this quarter.
我认为,您将继续看到这成为增长机会,您可能会看到我们的区域云正在积累和/或那些专注于许多主权 AI 部分的 AI 工厂。这不仅在欧洲增长,而且您也会看到在亚太地区的增长。让我转到您关于游戏的第二个问题。我们目前在供应方面的游戏业务正忙于确保我们能够加速所有不同的产品。在这种情况下,我们的游戏供应,由于我们看到的销售速度相当快。现在我们面临的挑战是,我们能多快将这些供应准备好进入本季度的市场。

Not to worry, I think we'll be back on track with more supply as we turn the corner into the new calendar year. We're just going to be tight for this quarter.
不用担心,我认为随着我们进入新的一年,会有更多的供应商,我们将重回正轨。这个季度我们会比较紧张。

Operator 操作员

Your next question comes from the line of Ben Reitzes of Melius Research. Your line is open.
接下来的问题来自 Melius Research 的 Ben Reitzes。您的线路已打开。

Ben Reitzes

Yes. Hi. Thanks a lot for the question. I wanted to ask Colette and Jensen about sequential growth. So, very strong sequential growth this quarter, and you're guiding to about 7%. Do your comments on Blackwell imply that we reaccelerate from there as you get more supply? Just in the first half, it would seem that there would be some catch-up. So I was wondering how prescriptive you could be there.
是的。你好。非常感谢你的提问。我想问一下 Colette 和 Jensen 关于连续增长的问题。本季度的连续增长非常强劲,你们的指导约为 7%。你们关于Blackwell的评论是否意味着随着供应的增加,我们会从那里重新加速?仅在上半年,似乎会有一些追赶。所以我想知道你们在这方面能有多具体。

And then, Jensen, just overall, with the change in administration that's going to take place here in the US and the China situation, have you gotten any sense, or any conversations about tariffs, or anything with regard to your China business? Any sense of what may or may not go on? It's probably too early, but wondering if you had any thoughts there. Thanks so much.
然后,詹森,总的来说,随着美国政府的更迭和中国局势的变化,你有没有得到任何关于关税的感觉或对话,或者关于你在中国业务的任何事情?对可能发生或不发生的事情有什么感觉吗?可能还为时过早,但想知道你有什么想法。非常感谢。

Jensen Huang

We guide one quarter at a time.
我们一次指导一个季度。

Colette Kress

We are working right now on the quarter that we're in and building what we need to ship in terms of Blackwell. We have every supplier on the planet working seamlessly with us to do that. And once we get to next quarter, we'll help you understand the ramp that we'll see into the following quarter and after that.
我们现在正在处理我们所在的这个季度,并构建我们需要交付的Blackwell。我们与全球的每个供应商无缝合作来实现这一目标。一旦进入下个季度,我们将帮助您了解我们将在下个季度及之后看到的增长。

Jensen Huang

Whatever the new administration decides, we will of course support the administration. That's our highest mandate. And then after that, do the best we can, just as we always do. And so we have to do this simultaneously: we will comply fully with any regulation that comes along, support our customers to the best of our abilities, and compete in the marketplace. We'll do all three of these things simultaneously.
无论新政府决定什么,我们当然会支持政府。这是我们的最高使命。然后在此之后,尽我们所能做到最好。就像我们一直以来所做的那样。因此,我们必须同时做到这一点,并且我们将完全遵守任何随之而来的法规,并尽最大努力支持我们的客户并在市场上竞争。我们将同时做到这三件事。

Operator 操作员

Your final question comes from the line of Pierre Ferragu of New Street Research. Your line is open.
您最后一个问题来自 New Street Research 的 Pierre Ferragu。您的线路已打开。

Pierre Ferragu

Hey, thanks for taking my question. Jensen, you mentioned in your comments that you have the pre-training of the actual language models, you have reinforcement learning that becomes more and more important in training and in inference as well, and then you have inference itself. And I was wondering if you have a high-level, typical sense of how, across an overall AI ecosystem -- maybe one of your clients, or one of the large models that are out there -- the compute splits into each of these buckets today: how much for the pre-training, how much for the reinforcement, and how much for inference? Do you have any sense of how it's splitting, and where the growth is the most important as well?
感谢你提出问题。Jensen,你在发言中提到过预训练、实际的语言模型,还有强化学习,这在训练和推理中变得越来越重要。然后就是推理本身。我想了解的是,是否有一个大致的分配比例,像你们的客户或市场上大模型的整体AI生态系统中,今天计算资源大致是如何分配到这些不同的领域的?比如预训练占多少,强化学习占多少,推理占多少?你是否能提供一个大致的分配情况,以及在哪个领域的增长最为重要?

Jensen Huang

Well, today it's vastly in pre-training a foundation model, because as you know, post-training -- the new technologies -- are just coming online. And whatever you could do in pre-training and post-training, you would try to do, so that the inference cost could be as low as possible for everyone. However, there are only so many things that you could do a priori, and so you'll always have to do on-the-spot thinking and in-context thinking and reflection. And so I think that the fact that all three are scaling is actually very sensible based on where we are.
目前,计算资源主要用于预训练基础模型,因为如你所知,后训练的新技术才刚刚上线,无论是预训练还是后训练,你都会尽量做到让推理成本尽可能低。然而,能够优先处理的事情是有限的。因此,你总是需要进行现场思考、上下文思考和反思。我认为,所有三者都在扩展是非常合理的,这也是基于我们目前的情况。

And in the area of foundation models, we now have multimodality foundation models, and the amount of petabytes of video that these foundation models are going to be trained on is incredible. And so my expectation is that for the foreseeable future, we're going to be scaling pre-training, post-training, as well as inference-time scaling, which is the reason why I think we're going to need more and more compute. And we're going to have to drive as hard as we can to keep increasing the performance by X factors at a time, so that we can continue to drive down the cost, continue to increase revenues, and get the AI revolution going. Thank you.
在基础模型领域,现在我们已经有了多模态基础模型,这些基础模型将要训练的视频数据量以PB(拍字节)计,规模是惊人的。因此,我的预期是,在可预见的未来,我们将继续扩展预训练、后训练以及推理时间扩展,这也是我认为我们将需要越来越多计算资源的原因,我们必须尽最大努力,通过逐步提高性能,推动成本降低,继续增加收入,并推动AI革命的原因。谢谢。

Operator 操作员

Thank you. I'll now turn the call back over to Jensen Huang for closing remarks.
谢谢。现在我将把电话交还给黄仁勋做结束发言。

Jensen Huang

Thank you. The tremendous growth in our business is being fueled by two fundamental trends that are driving global adoption of NVIDIA computing. First, the computing stack is undergoing a reinvention -- a platform shift from coding to machine learning, from executing code on CPUs to processing neural networks on GPUs. The trillion-dollar installed base of traditional data center infrastructure is being rebuilt for Software 2.0, which applies machine learning to produce AI.
谢谢。我们业务的巨大增长是由推动全球采用 NVIDIA 计算的两个基本趋势推动的。首先,计算堆栈正在进行重塑,从编码到机器学习的平台转变。从在 CPU 上执行代码到在 GPU 上处理神经网络。传统数据中心基础设施的万亿美元安装基础正在为软件 2.0 重建,软件 2.0 应用机器学习来生成 AI。

Second, the age of AI is in full steam. Generative AI is not just a new software capability, but a new industry, with AI factories manufacturing digital intelligence, a new industrial revolution that can create a multi-trillion dollar AI industry. Demand for Hopper and anticipation for Blackwell, which is now in full production, are incredible for several reasons. There are more foundation model makers now than there were a year ago. The computing scale of pre-training and post-training continues to grow exponentially.

There are more AI-native start-ups than ever and the number of successful inference services is rising. And with the introduction of ChatGPT o1, OpenAI o1, a new scaling law called test time scaling has emerged. All of these consume a great deal of computing. AI is transforming every industry, company, and country. Enterprises are adopting agentic AI to revolutionize workflows. Over time, AI coworkers will assist employees in performing their jobs faster and better. Investments in industrial robotics are surging due to breakthroughs in physical AI.

These breakthroughs are driving new training infrastructure demand as researchers train world foundation models on petabytes of video and Omniverse synthetically generated data. The age of robotics is coming. Countries across the world recognize the fundamental AI trends we are seeing and have awakened to the importance of developing their national AI infrastructure. The age of AI is upon us, and it's large and diverse. NVIDIA's expertise, scale, and ability to deliver full stack and full infrastructure let us serve the entire multi-trillion dollar AI and robotics opportunity ahead, from every hyperscale cloud and enterprise private cloud to sovereign regional AI clouds, and from on-prem to the industrial edge and robotics.

Thanks for joining us today, and we'll catch up next time.

Operator

This concludes today's conference call. You may now disconnect.
