NVIDIA Corporation (NASDAQ:NVDA) Q1 2026 Earnings Conference Call May 28, 2025 5:00 PM ET
Company Participants
Toshiya Hari - Investor Relations
Colette Kress - Executive Vice President & Chief Financial Officer
Jensen Huang - President & Chief Executive Officer
Conference Call Participants
Joe Moore - Morgan Stanley
Vivek Arya - Bank of America Securities
CJ Muse - Cantor Fitzgerald
Ben Reitzes - Melius
Timothy Arcuri - UBS
Operator
Good afternoon. My name is Sarah, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's First Quarter Fiscal 2026 Financial Results Conference Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. [Operator Instructions] Thank you.
Toshiya Hari, you may begin your conference.
Toshiya Hari
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2026.
With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer.
I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2026. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission.
All our statements are made as of today, May 28, 2025, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.
During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website.
With that, let me turn the call over to Colette.
Colette Kress
Thank you, Toshiya.
We delivered another strong quarter with revenue of $44 billion, up 69% year-over-year, exceeding our outlook in what proved to be a challenging operating environment. Data Center revenue of $39 billion grew 73% year-on-year. AI workloads have transitioned strongly to inference, and AI factory buildouts are driving significant revenue. Our customers' commitments are firm.
On April 9th, the U.S. government issued new export controls on H20, our data center GPU designed specifically for the China market. We sold H20 with the approval of the previous administration. Although our H20 has been in the market for over a year and does not have a market outside of China, the new export controls on H20 did not provide a grace period to allow us to sell through our inventory.
In Q1, we recognized $4.6 billion in H20 revenue, which occurred prior to April 9, but also recognized a $4.5 billion charge as we wrote down inventory and purchase obligations tied to orders we had received prior to April 9. We were unable to ship $2.5 billion in H20 revenue in the first quarter due to the new export controls. The $4.5 billion charge was less than what we initially anticipated as we were able to reuse certain materials.
We are still evaluating our limited options to supply data center compute products compliant with the U.S. government's revised export control rules. Losing access to the China AI accelerator market, which we believe will grow to nearly $50 billion, would have a material adverse impact on our business going forward and benefit our foreign competitors in China and worldwide.
Our Blackwell ramp, the fastest in our company's history, drove a 73% year-on-year increase in Data Center revenue. Blackwell contributed nearly 70% of Data Center compute revenue in the quarter, with the transition from Hopper nearly complete. The introduction of GB200 NVL was a fundamental architectural change to enable data center scale workloads and to achieve the lowest cost per inference token. While these systems are complex to build, we have seen a significant improvement in manufacturing yields, and rack shipments to end customers are ramping at a strong rate.
GB200 NVL racks are now generally available for model builders, enterprises and sovereign customers to develop and deploy AI. On average, major hyperscalers are each deploying nearly 1,000 NVL72 racks or 72,000 Blackwell GPUs per week and are on track to further ramp output this quarter. Microsoft, for example, has already deployed tens of thousands of Blackwell GPUs and is expected to ramp to hundreds of thousands of GB200s with OpenAI as one of its key customers.
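The deployment rate above is a simple conversion between racks and GPUs; as an illustrative sanity check using only the figures cited in the call:

```python
# An NVL72 rack integrates 72 Blackwell GPUs, so the weekly rack rate
# converts directly into a weekly GPU count.
GPUS_PER_NVL72_RACK = 72
racks_per_week = 1_000  # "nearly 1,000 NVL72 racks ... per week" per hyperscaler

gpus_per_week = racks_per_week * GPUS_PER_NVL72_RACK
print(gpus_per_week)  # 72000, matching the "72,000 Blackwell GPUs per week" figure
```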
Key learnings from the GB200 ramp will allow for a smooth transition to the next phase of our product roadmap, Blackwell Ultra. Sampling of GB300 systems began earlier this month at the major CSPs, and we expect production shipments to commence later this quarter. GB300 will leverage the same architecture, same physical footprint and the same electrical and mechanical specifications as GB200. The GB300 drop-in design will allow CSPs to seamlessly transition their systems and manufacturing used for GB200 while maintaining high yields. B300 GPUs with 50% more HBM will deliver another 50% increase in dense FP4 inference compute performance compared to the B200. We remain committed to our annual product cadence with our roadmap extending through 2028, tightly aligned with the multiple year planning cycles of our customers.
We are witnessing a sharp jump in inference demand. OpenAI, Microsoft and Google are seeing a step function leap in token generation. Microsoft processed over 100 trillion tokens in Q1, a five-fold increase on a year-over-year basis. This exponential growth in Azure OpenAI is representative of strong demand for Azure AI Foundry as well as other AI services across Microsoft's platform.
Inference serving startups are now serving models using B200, tripling their token generation rates and corresponding revenues for high-value reasoning models such as DeepSeek-R1, as reported by Artificial Analysis. NVIDIA Dynamo on Blackwell NVL72 turbocharges AI inference throughput by 30x for the new reasoning models sweeping the industry. Developer engagement has increased, with adoption ranging from LLM providers such as Perplexity to financial services institutions such as Capital One, which reduced agentic chatbot latency by 5x with Dynamo.
In the latest MLPerf Inference results, we submitted our first results using GB200 NVL72, delivering up to 30x higher inference throughput compared to our 8-GPU H200 submission on the challenging Llama 3.1 benchmark. This feat was achieved through a combination of tripling the performance for GPU as well as 9x more GPUs all connected on a single NVLink domain. And while Blackwell is still early in its life cycle, software optimizations have already improved its performance by 1.5x in the last month alone. We expect to continue improving the performance of Blackwell through its operational life as we have done with Hopper and Ampere. For example, we increased the inference performance of Hopper by four times over two years. This is the benefit of NVIDIA's programmable CUDA architecture and rich ecosystem.
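The two factors cited above roughly account for the headline speedup. A back-of-envelope decomposition, assuming the factors compose multiplicatively (a simplification, not NVIDIA's published methodology):

```python
# "Up to 30x" combines roughly 3x per-GPU performance with 9x more GPUs
# (72 GPUs in a GB200 NVL72 vs. the 8-GPU H200 submission).
per_gpu_speedup = 3.0
gpu_count_ratio = 72 / 8  # = 9x more GPUs on a single NVLink domain

combined = per_gpu_speedup * gpu_count_ratio
print(combined)  # 27.0, in the ballpark of the "up to 30x" headline result
```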
The pace and scale of AI factory deployments are accelerating, with nearly 100 NVIDIA-powered AI factories in flight this quarter, a two-fold increase year-over-year, and the average number of GPUs powering each factory also doubling in the same period. More AI factory projects are starting across industries and geographies. NVIDIA's full stack architecture is underpinning AI factory deployments by industry leaders like AT&T, BYD, Capital One, Foxconn, MediaTek, and Telenor, as well as strategically vital sovereign clouds like those recently announced in Saudi Arabia, Taiwan and the UAE. We have a line of sight to projects requiring tens of gigawatts of NVIDIA AI infrastructure in the not-too-distant future.

The transition from generative to agentic AI, AI capable of receiving, reasoning, planning and acting will transform every industry, every company and country. We envision AI agents as a new digital workforce capable of handling tasks ranging from customer service to complex decision-making processes.
We introduced the Llama Nemotron family of open reasoning models designed to supercharge agentic AI platforms for enterprises. Built on the Llama architecture, these models are available as NIMs, or NVIDIA Inference Microservices, with multiple sizes to meet diverse deployment needs. Our post training enhancements have yielded a 20% accuracy boost and a 5x increase in inference speed. Leading platform companies, including Accenture, Cadence, Deloitte, and Microsoft are transforming work with our reasoning models.
NVIDIA NeMo microservices are now generally available and are being leveraged by leading enterprises across industries to build, optimize and scale AI applications. With NeMo, Cisco increased model accuracy by 40% and improved response time by 10x in its code assistant. NASDAQ realized a 30% improvement in accuracy and response time in its AI platform's search capabilities. And Shell's custom LLM achieved a 30% increase in accuracy when trained with NVIDIA NeMo. NeMo's parallelism techniques accelerated model training time by 20% compared to other frameworks.
We also announced a partnership with Yum! Brands, the world's largest restaurant company, to bring NVIDIA AI to 500 of its restaurants this year, expanding to 61,000 restaurants over time to streamline order-taking, optimize operations and enhance service across its restaurants. For AI-powered cybersecurity, leading companies like Check Point, CrowdStrike and Palo Alto Networks are using NVIDIA's AI security and software stack to build, optimize and secure agentic workflows, with CrowdStrike realizing 2x faster detection triage with 50% less compute cost.
Moving to networking. Sequential growth in networking resumed in Q1 with revenue up 64% quarter-over-quarter to $5 billion. Our customers continue to leverage our platform to efficiently scale up and scale out AI factory workloads. We created the world's fastest switch, NVLink, for scale up. Our NVLink compute fabric in its fifth generation offers 14x the bandwidth of PCIe Gen 5. NVLink72 carries 130 terabytes per second of bandwidth in a single rack, equivalent to the entirety of the world's peak Internet traffic. NVLink is a new growth vector and is off to a great start with Q1 shipments exceeding $1 billion.
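The 130 terabytes per second rack figure above is consistent with fifth-generation NVLink's published per-GPU bandwidth; that per-GPU number (1.8 TB/s) is an outside assumption from NVIDIA's public specs, not stated on the call:

```python
# 72 GPUs per NVL72 rack, each with fifth-generation NVLink bandwidth.
per_gpu_nvlink_tb_s = 1.8  # NVIDIA's published spec, assumed here
gpus_per_rack = 72

rack_bandwidth_tb_s = per_gpu_nvlink_tb_s * gpus_per_rack
print(round(rack_bandwidth_tb_s, 1))  # 129.6, consistent with the ~130 TB/s cited
```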
At COMPUTEX, we announced NVLink Fusion. Hyperscale customers can now build semi-custom CPUs and accelerators that connect directly to the NVIDIA platform with NVLink. We are now enabling key partners, including ASIC providers such as MediaTek, Marvell, Alchip Technologies and Astera Labs, as well as CPU suppliers such as Fujitsu and Qualcomm, to leverage NVLink Fusion to connect our respective ecosystems.
For scale out, our enhanced Ethernet offerings delivered the highest throughput, lowest latency networking for AI. Spectrum-X posted strong sequential and year-on-year growth and is now annualizing over $8 billion in revenue. Adoption is widespread across major CSPs and consumer Internet companies, including CoreWeave, Microsoft Azure, Oracle Cloud and xAI. This quarter, we added Google Cloud and Meta to the growing list of Spectrum-X customers.
We introduced Spectrum-X and Quantum-X silicon photonics switches featuring the world's most advanced co-packaged optics. These platforms will enable next-level AI factory scaling to millions of GPUs, improving power efficiency by 3.5x and network resiliency by 10x, while accelerating customer time to market by 1.3x.
Transitioning to a quick summary of our revenue by geography. China as a percentage of our Data Center revenue was slightly below our expectations and down sequentially due to H20 export licensing controls. For Q2, we expect a meaningful decrease in China Data Center revenue. As a reminder, while Singapore represented nearly 20% of our Q1 billed revenue as many of our large customers use Singapore for centralized invoicing, our products are almost always shipped elsewhere. Note that over 99% of H100, H200 and Blackwell Data Center compute revenue billed to Singapore was for orders from U.S.-based customers.
Moving to Gaming and AI PCs. Gaming revenue was a record $3.8 billion, increasing 48% sequentially and 42% year-on-year. Strong adoption by gamers, creators and AI enthusiasts has made Blackwell our fastest ramp ever. Against a backdrop of robust demand, we greatly improved our supply and availability in Q1 and expect to continue these efforts in Q2.
AI is transforming the PC for creators and gamers. With a 100 million user installed base, GeForce represents the largest footprint for PC developers. This quarter, we added to our AI PC laptop offerings, including models capable of running Microsoft's Copilot+. This past quarter, we brought the Blackwell architecture to mainstream gaming with the launch of the GeForce RTX 5060 and 5060 Ti, starting at just $299. The RTX 5060 also debuted in laptops starting at $1,099. These systems double frame rates while reducing latency. The GeForce RTX 5060 and 5060 Ti desktop GPUs and laptops are now available.
In console gaming, the recently unveiled Nintendo Switch 2 leverages NVIDIA's neural rendering and AI technologies, including next-generation custom RTX GPUs with DLSS technology, to deliver a giant leap in gaming performance to millions of players worldwide. Nintendo has shipped over 150 million Switch consoles to date, making it one of the most successful gaming systems in history.

Moving to Pro Visualization. Revenue of $509 million was flat sequentially and up 19% year-on-year. Tariff-related uncertainty temporarily impacted Q1 systems demand; demand for our AI workstations remains strong, and we expect sequential revenue growth to resume in Q2. NVIDIA DGX Spark and DGX Station revolutionize personal computing by putting the power of an AI supercomputer in a desktop form factor. DGX Spark delivers up to 1 petaflop of AI compute, while DGX Station offers an incredible 20 petaflops and is powered by the GB300 Superchip. DGX Spark will be available in calendar Q3 and DGX Station later this year.
We have deepened Omniverse's integration into and adoption by some of the world's leading software platforms, including Databricks, SAP and Schneider Electric. New Omniverse Blueprints, such as Mega for at-scale robotic fleet management, are being leveraged by KION Group, Pegatron, Accenture and other leading companies to enhance industrial operations. At COMPUTEX, we showcased Omniverse's great traction with technology manufacturing leaders, including TSMC, Quanta, Foxconn and Pegatron. Using Omniverse, TSMC saves months of work by designing fabs virtually, Foxconn accelerates thermal simulations by 150x, and Pegatron reduced assembly line defect rates by 67%.
Lastly with our Automotive Group. Revenue was $567 million, down 1% sequentially, but up 72% year-on-year. Year-on-year growth was driven by the ramp of self-driving across a number of customers and robust end demand for NEVs. We are partnering with GM to build the next-gen vehicles, factories and robots using NVIDIA AI, simulation and accelerated computing. And we are now in production with our full stack solution for Mercedes-Benz starting with the new CLA, hitting roads in the next few months.
We announced Isaac GR00T N1, the world's first open, fully customizable foundation model for humanoid robots, enabling generalized reasoning and skill development. We also launched new open NVIDIA Cosmos world foundation models. Leading companies, including 1X, Agility Robotics, Figure AI, Uber and Waabi, have begun integrating Cosmos into their operations for synthetic data generation, while Agility Robotics, Boston Dynamics, and XPENG Robotics are harnessing Isaac simulation to advance their humanoid efforts.
GE Healthcare is using the new NVIDIA Isaac platform for healthcare simulation built on NVIDIA Omniverse and using NVIDIA Cosmos. The platform speeds development of robotic imaging and surgery systems.
The era of robotics is here, billions of robots, hundreds of millions of autonomous vehicles and hundreds of thousands of robotic factories and warehouses will be developed.
All right. Moving to the rest of the P&L. GAAP and non-GAAP gross margins were 60.5% and 61%, respectively. Excluding the $4.5 billion charge, Q1 non-GAAP gross margins would have been 71.3%, slightly above our outlook at the beginning of the quarter.
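The ex-charge margin above can be approximately reconciled from the rounded figures in the call. Exact company figures differ slightly, so this is an illustrative sketch rather than the official reconciliation:

```python
# Add the $4.5B charge back to non-GAAP gross profit and re-divide by revenue.
revenue_b = 44.0      # Q1 revenue, rounded
non_gaap_gm = 0.61    # reported non-GAAP gross margin
h20_charge_b = 4.5    # H20 inventory/purchase-obligation charge

ex_charge_gm_pct = (revenue_b * non_gaap_gm + h20_charge_b) / revenue_b * 100
print(round(ex_charge_gm_pct, 1))  # 71.2, close to the stated 71.3% (rounding)
```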
Sequentially, GAAP operating expenses were up 7% and non-GAAP operating expenses were up 6%, reflecting higher compensation and employee growth. Our investments include expanding our infrastructure capabilities and AI solutions, and we plan to grow these investments throughout the fiscal year.
In Q1, we returned a record $14.3 billion to shareholders in the form of share repurchases and cash dividends. Our capital return program continues to be a key element of our capital allocation strategy.
Let me turn to the outlook for the second quarter. Total revenue is expected to be $45 billion, plus or minus 2%. We expect modest sequential growth across all of our platforms. In Data Center, we anticipate the continued ramp of Blackwell to be partially offset by a decline in China revenue. Note, our outlook reflects a loss in H20 revenue of approximately $8 billion for the second quarter.
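The guided range and the H20 impact above imply some simple arithmetic; the "ex-restriction" figure is an illustrative analyst-style estimate, not company guidance:

```python
# Q2 outlook: $45B plus or minus 2%, after absorbing ~$8B of lost H20 revenue.
midpoint_b = 45.0
band = 0.02
h20_loss_b = 8.0

low_b, high_b = midpoint_b * (1 - band), midpoint_b * (1 + band)
ex_restriction_b = midpoint_b + h20_loss_b  # what guidance would be absent the loss
print(round(low_b, 1), round(high_b, 1))  # 44.1 45.9
print(ex_restriction_b)                   # 53.0
```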
GAAP and non-GAAP gross margins are expected to be 71.8% and 72%, respectively, plus or minus 50 basis points. We expect Blackwell profitability to drive modest sequential improvement in gross margins. We are continuing to work towards achieving gross margins in the mid-70% range late this year.
GAAP and non-GAAP operating expenses are expected to be approximately $5.7 billion and $4 billion, respectively, and we continue to expect full year fiscal year '26 operating expense growth to be in the mid-30% range.
GAAP and non-GAAP other income and expenses are expected to be an income of approximately $450 million, excluding gains and losses from non-marketable and publicly-held equity securities.
GAAP and non-GAAP tax rates are expected to be 16.5%, plus or minus 1%, excluding any discrete items.
Further financial details are included in the CFO commentary and other information available on our IR website, including a new Financial Information AI Agent.
Let me highlight upcoming events for the financial community. We will be at the BofA Global Technology Conference in San Francisco on June 4th; the Rosenblatt Virtual AI Summit and NASDAQ Investor Conference in London on June 10; and GTC Paris at VivaTech on June 11th in Paris. We look forward to seeing you at these events. Our earnings call to discuss the results of our second quarter of fiscal 2026 is scheduled for August 27.
Well, now, let me turn it over to Jensen to make some remarks.
Jensen Huang
Thanks, Colette.
We've had a busy and productive year. Let me share my perspective on some topics we're frequently asked.
On export control: China is one of the world's largest AI markets and a springboard to global success. With half of the world's AI researchers based there, the platform that wins China is positioned to lead globally. Today, however, the $50 billion China market is effectively closed to U.S. industry. The H20 export ban ended our Hopper Data Center business in China. We cannot reduce Hopper further to comply. As a result, we are taking a multibillion-dollar write-off on inventory that cannot be sold or repurposed. We are exploring limited ways to compete, but Hopper is no longer an option.
China's AI moves on with or without U.S. chips. It needs compute to train and deploy advanced models. The question is not whether China will have AI; it already does. The question is whether one of the world's largest AI markets will run on American platforms. Shielding Chinese chipmakers from U.S. competition only strengthens them abroad and weakens America's position. Export restrictions have spurred China's innovation and scale. The AI race is not just about chips. It's about which stack the world runs on. As that stack grows to include 6G and quantum, U.S. global infrastructure leadership is at stake.
The U.S. has based its policy on the assumption that China cannot make AI chips. That assumption was always questionable, and now it's clearly wrong. China has enormous manufacturing capability. In the end, the platform that wins the AI developers wins AI. Export controls should strengthen U.S. platforms, not drive half of the world's AI talent to rivals.
On DeepSeek: DeepSeek and Qwen from China are among the best open-source AI models. Released freely, they've gained traction across the U.S., Europe and beyond. DeepSeek-R1, like ChatGPT, introduced reasoning AI that produces better answers the longer it thinks. Reasoning AI enables step-by-step problem solving, planning and tool use, turning models into intelligent agents. Reasoning is compute-intensive, requiring hundreds to thousands of times more tokens per task than previous one-shot inference. Reasoning models are driving a step-function surge in inference demand. AI scaling laws remain firmly intact: not only training, but now inference too, requires massive scale compute.

DeepSeek also underscores the strategic value of open-source AI. When popular models are trained and optimized on U.S. platforms, it drives usage, feedback and continuous improvement, reinforcing American leadership across the stack. U.S. platforms must remain the preferred platform for open-source AI. That means supporting collaboration with top developers globally, including in China. America wins when models like DeepSeek and Qwen run best on American infrastructure.
Regarding onshore manufacturing: President Trump has outlined a bold vision to reshore advanced manufacturing, create jobs and strengthen national security. Future plants will be highly computerized, with robotics. We share this vision. TSMC is building six fabs and two advanced packaging plants in Arizona to make chips for NVIDIA. Process qualification is underway, with volume production expected by year-end. SPIL and Amkor are also investing in Arizona, constructing packaging, assembly and test facilities. In Houston, we're partnering with Foxconn to construct a 1 million square foot factory to build AI supercomputers. Wistron is building a similar plant in Fort Worth, Texas. To encourage and support these investments, we've made substantial long-term purchase commitments, a deep investment in America's AI manufacturing future. Our goal: from chip to supercomputer, built in America, within a year. Each GB200 NVLink72 rack contains 1.2 million components and weighs nearly 2 tons. No one has produced supercomputers on this scale. Our partners are doing an extraordinary job.
On AI diffusion rule: President Trump rescinded the AI diffusion rule, calling it counterproductive, and proposed a new policy to promote U.S. AI tech with trusted partners. On his Middle East tour, he announced historic investments. I was honored to join him in announcing a 500-megawatt AI infrastructure project in Saudi Arabia and a 5-gigawatt AI campus in the UAE. President Trump wants U.S. tech to lead. The deals he announced are wins for America, creating jobs, advancing infrastructure, generating tax revenue and reducing the U.S. trade deficit.
关于AI扩散规则: 川普总统取消了AI扩散规则,称其适得其反,并提出了一项新政策,推动美国AI技术与可信伙伴的合作。在他的中东之行中,他宣布了具有历史意义的投资。我很荣幸与他一同宣布,沙特将建设一座500兆瓦的AI基础设施项目,阿联酋将建设一座5吉瓦的AI园区。川普总统希望美国技术处于全球领先地位,这些协议为美国带来了就业、基础设施发展、税收增长,并帮助缩小贸易逆差。
The U.S. will always be NVIDIA's largest market and home to the largest installed base of our infrastructure. Every nation now sees AI as core to the next industrial revolution, a new industry that produces intelligence and essential infrastructure for every economy. Countries are racing to build national AI platforms to elevate their digital capabilities. At COMPUTEX, we announced Taiwan's first AI factory in partnership with Foxconn and the Taiwan government. Last week, I was in Sweden to launch its first national AI infrastructure. Japan, Korea, India, Canada, France, the U.K., Germany, Italy, Spain, and more are now building national AI factories to empower startups, industries and societies. Sovereign AI is a new growth engine for NVIDIA.
美国将始终是NVIDIA最大的市场,也是我们基础设施部署最广的地方。 如今每个国家都将AI视为下一场工业革命的核心,是一个为经济社会提供智能与基础设施的新型产业。各国正竞相打造国家AI平台以提升其数字能力。在COMPUTEX大会上,我们宣布与富士康及台湾政府合作建设台湾首个AI工厂。上周我在瑞典,启动该国首个国家级AI基础设施。日本、韩国、印度、加拿大、法国、英国、德国、意大利、西班牙等国家也在加快建设国家级AI工厂,以赋能初创企业、产业与社会。主权AI(Sovereign AI)正成为NVIDIA的新增长引擎。
Toshiya, back to you. Thank you.
Toshiya,交还给你,谢谢。
Toshiya Hari
Operator, we will now open the call for questions. Would you please poll for questions?
操作员,现在我们进入问答环节。请您为我们收集问题。
Question-and-Answer Session
问答环节
Operator
Thank you. [Operator Instructions] Your first question comes from the line of Joe Moore with Morgan Stanley. Your line is open.
谢谢。[操作员指示] 第一位提问的是摩根士丹利的Joe Moore。请提问。
Joe Moore
Great. Thank you. You guys have talked about this scaling up of inference around reasoning models for at least a year now. And we've really seen that come to fruition as you talked about. We've heard it from your customers. Can you give us a sense for how much of that demand you're able to serve and give us a sense for maybe how big the inference business is for you guys? And do we need full-on NVL72 rack scale solutions for reasoning inference going forward?
太好了,谢谢。你们已经谈了推理模型在推理推断方面的扩展问题至少一年了。现在正如你们所说的,这些已经开始实现了,我们也从你们的客户那里听到了类似的反馈。你们能否说明一下目前你们能满足多少这方面的需求?能否也谈一下推理业务对你们来说现在有多大?未来我们是否需要完全采用NVL72整机架级别的解决方案来支持推理模型?
Jensen Huang
Well, we would like to serve all of it, and I think we're on track to serve most of it. Grace Blackwell NVLink72 is the ideal engine today, the ideal computer thinking machine, if you will, for reasoning AI. There's a couple of reasons for that.
我们当然希望能满足全部的需求,而且我认为我们正走在满足大部分需求的路上。Grace Blackwell NVLink72目前是最理想的引擎,可以说是为推理AI而生的“计算思维机器”。原因有几个。
The first reason is that the token generation amount, the number of tokens reasoning goes through, is 100 times, 1,000 times more than a one-shot chatbot. It's essentially thinking to itself, breaking down a problem step-by-step. It might be planning multiple paths to an answer. It could be using tools, reading PDFs, reading web pages, watching videos, and then producing a result, an answer. The longer it thinks, the better the answer, the smarter the answer is.
首先,推理模型在推断过程中所生成的Token数量,是传统一次性问答型聊天机器人(chatbot)的100倍、甚至1000倍。推理AI本质上是在“自我思考”,把问题拆解为多个步骤。它可能会规划多条路径以得出答案,调用工具,阅读PDF、网页、视频等信息,然后再给出最终结果。它思考时间越久,答案质量就越高,也越智能。
And so, what we would like to do, and the reason why Grace Blackwell was designed to give such a giant step-up in inference performance, is so that you could do all this and still get a response as quickly as possible. Compared to Hopper, Grace Blackwell delivers some 40 times higher speed and throughput. And so, this is going to be a huge, huge benefit in driving down the cost while improving the quality of response, with excellent quality of service at the same time.
因此,我们设计Grace Blackwell的初衷就是实现推理性能的巨大跃升——让你在完成上述复杂推理任务的同时,仍能获得尽可能快速的响应速度。与Hopper相比,Grace Blackwell在速度与吞吐量方面提升了约40倍。这将在降低成本的同时,极大提升响应质量与服务质量,带来巨大收益。
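The interaction between the two multipliers Jensen quotes can be illustrated with a toy calculation (a hypothetical back-of-the-envelope sketch, not NVIDIA's methodology; the only inputs are the figures quoted on the call: reasoning generates roughly 100x to 1,000x the tokens of a one-shot chatbot, and Grace Blackwell is described as roughly 40x Hopper in speed and throughput):

```python
# Toy model: how much longer a reasoning query takes to serve, relative
# to a one-shot chatbot query on the older architecture (baseline = 1.0).
# Assumes serving time scales linearly with tokens generated and
# inversely with hardware throughput -- a deliberate simplification.

def relative_serving_time(token_multiplier: float, throughput_gain: float) -> float:
    """Serving time relative to a one-shot query on baseline hardware."""
    return token_multiplier / throughput_gain

# A 100x-token reasoning workload on hardware with 40x the throughput
# takes only 2.5x the baseline time, rather than 100x.
print(relative_serving_time(100, 40))    # 2.5
print(relative_serving_time(1000, 40))   # 25.0
```

Under this simplified model, a 40x generational gain turns a 100x heavier workload into something only a few times slower than the old baseline, which is the point Jensen is making about response time.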
So, that's the fundamental reason. That was the core driving reason for Grace Blackwell NVLink72. Of course, in order to do that, we had to reinvent, literally redesign the entire way that these supercomputers are built, and -- but now we're in full production. It's going to be exciting. It's going to be incredibly exciting.
这就是根本原因,也正是我们推出Grace Blackwell NVLink72的核心驱动力。当然,为了实现这个目标,我们必须彻底重构超级计算机的构建方式——从头开始重新设计整个架构。但现在我们已经进入全面量产阶段,令人非常激动,未来也将非常令人期待。
Operator
The next question comes from Vivek Arya with Bank of America Securities. Your line is open.
下一位提问的是美国银行证券公司的Vivek Arya。请提问。
Vivek Arya
Thanks for the question. Just a clarification for Colette first. So, on the China impact, I think previously, it was mentioned at about $15 billion, so you had the $8 billion in Q2. So, is there still some left as a headwind for the remaining quarters? Just, Colette, how to model that?
谢谢你的解答。首先,我想请Colette澄清一个问题。关于中国市场的影响,我记得你们之前提到过大概是150亿美元,现在你们说第二季度有80亿美元的影响。那么是否意味着后续几个季度还有一些影响尚未体现?Colette,你能指导一下我们该如何建模这部分影响吗?
And then a question, Jensen, for you. Back at GTC, you had outlined a path towards almost $1 trillion of AI spending over the next few years. Where are we in that build-out? And do you think it's going to be uniform, that you will see every spender, whether it's CSPs, sovereigns or enterprises, build out, or should we expect some periods of digestion in between? Just what are your customer discussions telling you about how to model growth for next year?
然后Jensen,我也有一个问题。在GTC大会上,你曾提到未来几年AI相关支出可能接近1万亿美元。我们现在处于这个建设周期的哪个阶段?你认为这种支出会是均匀分布的吗?还是会出现云服务商、主权国家、企业等客户群之间的阶段性消化?你和客户的交流是否能给我们一些关于明年增长建模的提示?
Colette Kress
Yes, Vivek. Thanks so much for the question regarding H20. Yes, we recognized $4.6 billion of H20 in Q1. We were unable to ship $2.5 billion, so the total for Q1 would have been about $7 billion.
是的,Vivek,感谢你关于H20的提问。我们在第一季度确认了46亿美元的H20收入,同时有25亿美元由于无法发货未能确认,因此第一季度本来应有的H20订单总量是70亿美元。
When we look at our Q2, our Q2 is going to be meaningfully down in terms of China Data Center revenue. And we had highlighted in terms of the amount of orders that we had planned for H20 in Q2, and that was $8 billion.
对于第二季度,中国数据中心的收入将明显下降。我们此前已指出,第二季度原计划交付的H20订单为80亿美元。
Now, going forward, we did have other orders going forward that we will not be able to fulfill. That is what was incorporated, therefore, in the amount that we wrote down of the $4.5 billion. That write-down was about inventory and purchase commitments, and our purchase commitments were about what we expected regarding the orders that we had received.
展望未来,我们还有一些未交付的订单将无法履行,这部分也被计入了45亿美元的减值中。那笔减值主要针对的是库存和采购承诺,而这些采购承诺正是我们基于已接订单而准备的。
Going forward, though, it's a bigger issue regarding the amount of the market that we will not be able to serve. We assess that TAM to be close to about $50 billion in the future as we don't have a product to enable for China.
但更大的问题是,我们未来将无法服务一个庞大的市场。我们评估的中国AI数据中心相关TAM(总可服务市场)接近500亿美元,而目前我们没有可用于中国的产品来服务这个市场。
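Colette's H20 figures tie together with simple arithmetic (a sketch using only the dollar amounts quoted on the call; the variable names are ours):

```python
# All figures in billions of dollars, as quoted on the call.
recognized_q1 = 4.6   # H20 revenue recognized in Q1
unshipped_q1 = 2.5    # H20 that could not be shipped in Q1
planned_q2 = 8.0      # H20 orders planned for Q2 that will not ship
write_down = 4.5      # charge against inventory and purchase commitments
china_tam = 50.0      # estimated future China TAM NVIDIA cannot serve

# Q1 total had the export controls not occurred: about $7 billion.
q1_would_have_been = recognized_q1 + unshipped_q1
print(round(q1_would_have_been, 1))  # 7.1
```

The $4.6 billion recognized plus the $2.5 billion unshipped is what yields the "about $7 billion" Q1 figure; the $8 billion and $50 billion amounts are forward-looking and separate from that sum.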
Jensen Huang
Vivek, the -- probably the best way to think through it is that AI is several things. Of course, we know that AI is this incredible technology that's going to transform every industry, from, of course, the way we do software to healthcare and financial services to retail to, I guess, every industry, transportation, manufacturing. And we're at the beginning of that.
Vivek,我认为思考这个问题最好的方式,是认识到AI涵盖了多个维度。当然,我们都知道,AI是一项极具颠覆性的技术,它将重塑几乎所有行业,从软件开发到医疗、金融服务、零售,乃至运输、制造业等各行各业。而我们目前还处在这一转型的起点。
But maybe another way to think about that is where do we need intelligence, where do we need digital intelligence? And it's in every country, it's in every industry. And we know because of that, we recognize that AI is also an infrastructure. It's a way of developing a technology -- delivering a technology that requires factories and these factories produce tokens. And they, as I mentioned, are important to every single industry and every single country. And so, on that basis, we're really at the very beginning of it, because the adoption of this technology is really kind of in its early, early stages.
另一种角度是思考:哪里需要智能,哪里就需要数字智能?答案是——每一个国家、每一个行业。正因如此,我们也认识到AI不仅是技术,更是一种基础设施。AI的交付模式就像工厂,它需要实际建造并产出“Token”。这些Token,正如我先前所说,对每一个行业和国家都至关重要。因此,从这个角度看,我们还处在AI作为“基础设施”的非常初期阶段。
Now, we've reached an extraordinary milestone with AIs that are reasoning, are thinking, what people call inference time scaling. Of course, it created a whole new -- we've entered an era where inference is going to be a significant part of the compute workload.
目前我们已经达成了一个非凡的技术里程碑:AI现在具备“推理”与“思考”的能力,也就是所谓的“推理时间扩展(inference time scaling)”。这标志着一个全新时代的开始——推理将成为计算负载的核心组成部分。
But anyhow, it's going to be a new infrastructure, and we're building it out in the cloud. The United States is really the early starter, and AI is most available in U.S. clouds. And this is our largest market, our largest installed base, and we continue to see that happening.
总而言之,AI将成为一种新的基础设施,我们正率先在云端进行建设。美国是最早开始的国家,其云平台目前也最为成熟。美国仍然是我们最大的市场,也是我们基础设施部署最广泛的地区,我们看到这一趋势还在持续发展。
But beyond that, we're going to have to -- we're going to see AI go into enterprise, which is on-prem, because so much of the data is still on-prem. Access control is really important. It's really hard to move all of -- every company's data into the cloud. And so, we're going to move AI into the enterprise.
但这还不够,接下来我们会看到AI进入企业内部,也就是本地部署(on-prem),因为大部分数据仍然在企业内部。本地访问控制非常关键,将所有企业数据全部迁移至云端并不现实。因此,AI必须进入企业本地环境。
And you saw that we announced a couple of really exciting new products, our RTX Pro Enterprise AI server that runs everything enterprise and AI, our DGX Spark and DGX Station, which is designed for developers who want to work on-prem. And so, enterprise AI is just taking off.
你们也看到了,我们宣布了一些令人振奋的新产品,比如RTX Pro企业级AI服务器,它可支持企业与AI所有工作负载;还有为本地部署而设计的DGX Spark与DGX Station,专为希望本地开发AI的工程师而打造。企业级AI正在起飞。
Telcos, today, a lot of the telco infrastructure will be in the future software-defined and built on AI. And so, 6G is going to be built on AI, and that infrastructure needs to be built out. And as said, it's very, very early stages.
对于电信行业,未来大量电信基础设施将是软件定义的,并构建在AI之上。例如6G,将以AI为核心基础构建。这些基础设施尚待建立,目前仍处于非常初期的阶段。
And then, of course, every factory today that makes things will have an AI factory that sits with it. And the AI factory is going to be -- drive creating AI and operating AI for the factory itself, but also to power the products and the things that are made by the factory. So, it's very clear that every car company will have AI factories. And very soon, there'll be robotics companies -- robot companies, and those companies will be also building AIs to drive the robots.
此外,几乎每一家生产型工厂未来都将配备“AI工厂”。这些AI工厂不仅会用于为本身的生产运作提供智能支持,还将用于赋能所生产的产品本身。很明显,每一家汽车公司都将拥有AI工厂。不久之后,机器人公司也将崛起,并为其机器人构建专属AI系统。
And so, we're at the beginning of all of this build-out.
因此,我们才刚刚开始走上这条AI基础设施建设之路。
Operator
The next question comes from CJ Muse with Cantor Fitzgerald. Your line is open.
下一位提问的是来自Cantor Fitzgerald的CJ Muse。请开始提问。
CJ Muse
Yeah, good afternoon. Thank you for taking the question. There have been many large GPU cluster investment announcements in the last month, and you alluded to a few of them with Saudi Arabia, the UAE, and then, also, we heard from Oracle and xAI, just to name a few. So, my question, are there other that have yet to be announced of the same kind of scale and magnitude? And perhaps more importantly, how are these orders impacting your lead times for Blackwell and your current visibility sitting here today almost halfway through 2025?
下午好,谢谢你们回答我的问题。过去一个月,有很多关于大型GPU集群投资的公告,你们也提到了沙特、阿联酋的一些项目,我们也听说了Oracle和xAI的一些计划。我的问题是,还有没有其他规模和体量类似的项目尚未公布?或许更重要的是,这些订单对你们Blackwell的交付周期有何影响?站在2025年年中的位置上,目前你们对未来的供需有怎样的可见性?
Jensen Huang
Well, we have more orders today than we did at the last time I spoke about orders at GTC. However, we're also increasing our supply chain and building out our supply chain. They're doing a fantastic job. We're building it here onshore in the United States, but we're going to keep our supply chain quite busy for several -- many more years coming.
我们目前的订单数量已经超过了我上次在GTC大会上提到的水平。不过,我们也在同步扩大和建设我们的供应链。他们的表现非常出色。我们的供应链正在美国本土进行建设,我们预计未来多年内,它都会持续保持高负荷运转。
And with respect to further announcements, I'm going to be on the road next week through Europe. And just about every country needs to build out AI infrastructure, and umpteen AI factories are being planned. I think in the remarks, Colette mentioned there are some 100 AI factories being built. There's a whole bunch that haven't been announced.
至于是否还会有其他新宣布的项目,我下周将前往欧洲出差。几乎每个国家都在规划建设自己的AI基础设施,许多AI工厂也在筹备中。我记得Colette刚才提到,目前已有大约100个AI工厂在建设中,而实际上还有很多尚未对外公布。
And I think the important concept here, which makes it easier to understand, is that like other technologies that impact literally every single industry, of course, electricity was one and it became infrastructure. Of course, the information infrastructure, which we now know as the Internet affects every single industry, every country, every society.
我认为,帮助大家理解这一趋势的一个重要观念是:就像其他影响所有行业的技术一样——比如电力,最终成为基础设施;又比如信息基础设施,也就是现在的互联网,已经深入每个行业、国家和社会。
Intelligence is surely one of those things. I don't know any company, industry, country who thinks that intelligence is optional. It's essential infrastructure. And so, we've now digitalized intelligence. And so, I think we're clearly in the beginning of the build-out of this infrastructure. And every country will have it, I'm certain of that. Every industry will use it, that I'm certain of.
“智能”也毫无疑问是这类变革性技术之一。我从未听说过有哪家公司、哪一行业、哪个国家会认为“智能”是可选项。它是必要的基础设施。我们现在已将“智能”数字化,所以我认为我们正处于这一基础设施建设的起点。我可以肯定,每一个国家都会拥有自己的AI基础设施,每一个行业也都将应用它。
And what's unique about this infrastructure is that it needs factories. It's a little bit like the energy infrastructure, electricity. It needs factories. We need factories to produce this intelligence, and the intelligence is getting more sophisticated. We were talking about earlier that we had a huge breakthrough in the last couple of years with reasoning AI. And now, there are agents that reason and there are super-agents that use a whole bunch of tools and then there's clusters of super agents where agents are working with agents, solving problems.
而这种基础设施的独特之处在于,它需要“工厂”——有点像能源系统(比如电力),也需要发电厂。我们需要AI工厂来“制造”智能,而且这些智能系统正变得越来越复杂。我们刚才谈到,过去几年在推理AI方面有了巨大突破,现在不仅有“能推理的代理人(agents)”,还有“超级代理人(super-agents)”能调用各种工具,甚至出现多个代理协同工作的“代理群集”,共同解决问题。
And so, you could just imagine, compared to one-shot chatbots and the agents that are now using AI built on these large language models, how much more compute-intensive they really need to be and are. So, I think we're in the beginning of the build-out, and there should be many, many more announcements in the future.
你可以想象,相较于传统的一次性问答机器人,如今这些基于大语言模型构建的AI代理,需要的算力是何等巨大。所以我认为,我们正处于建设的初始阶段,未来还会有很多类似的大型项目宣布。
Operator
Your next question comes from Ben Reitzes with Melius. Your line is open.
下一位提问来自Melius公司的Ben Reitzes。请开始提问。
Ben Reitzes
Yeah, hi. Thanks for the question. I wanted to ask, first to Colette, just a little clarification around the guidance and maybe putting it in a different way. The $8 billion for H20 just seems like it's roughly $3 billion more than most people thought with regard to what you'd be foregoing in the second quarter. So, that would mean that with regard to your guidance, the rest of the business in order to hit $45 billion is doing $2 billion to $3 billion or so better. So, I was wondering if that math made sense to you. And then, in terms of the guidance, that would imply the non-China business is doing a bit better than the Street expected. So, wondering what the primary driver was there in your view.
你好,谢谢解答。我首先想请教Colette,就你们的业绩指引做一个澄清,也可以换个角度来看。你们提到第二季度因为H20受限而损失了80亿美元,这个数字比大多数人此前预期的50亿美元多了大概30亿。那么从你们整体指引的450亿美元来看,这是否意味着其他业务比预期多贡献了20到30亿美元?换句话说,你们在非中国业务上的表现是不是超出了华尔街的预期?你们认为驱动因素主要是什么?
And then, the second part of my question, Jensen, I know you guide one quarter at a time, but with regard to the AI diffusion rule being lifted and this momentum with sovereign, there's been times in your history where you guys have said on calls like this, where you have more conviction and sequential growth throughout the year, et cetera. And given the unleashing of demand with AI diffusion being revoked and the supply chain increasing, does the environment give you more conviction in sequential growth as we go throughout the year? So, first one for Colette and then next one for Jensen. Thanks so much.
第二个问题给Jensen。我知道你们一贯只做季度指引,但随着AI扩散规则的取消,以及你们在主权市场的强劲势头,你们过去也曾在类似电话会议上表示对全年的持续增长更有信心。那么在如今需求被释放、供应链能力又在增强的环境下,你是否对全年持续增长有更强的信心?第一个问题请Colette回答,第二个问题请Jensen,谢谢。
Colette Kress
Thanks, Ben, for the question. When we look at our Q2 guidance and our commentary that we provided, that, had the export controls not occurred, we would have had orders of about $8 billion for H20, that's correct. That was a possibility for what we would have had in our outlook for this quarter in Q2.
谢谢你提问,Ben。关于第二季度的业绩指引和我们的说明,确实,如果没有出口管制,我们原本预计H20的订单大约是80亿美元,这是准确的,也是我们本季度可能会包含在展望中的部分。
So, what we also have talked about here is the growth that we've seen in Blackwell, Blackwell across many of our customers as well as the growth that we continue to have in terms of supply that we need for our customers. So, putting those together, that's where we came through with the guidance that we provided.
同时,我们也提到了Blackwell的增长情况,它在很多客户中的部署正在扩大。此外,我们对客户的供货能力也在同步增长。将这些因素综合考虑,就是我们得出当前450亿美元营收指引的基础。
I'm going to turn the rest over to Jensen to see how he wants to take us.
接下来我将把剩下的问题交给Jensen来回答。
Jensen Huang
Yeah, thanks. Thanks, Ben. I would say compared to the beginning of the year, compared to the GTC timeframe, there are four positive surprises. The first positive surprise is the step-function demand increase of reasoning AI. I think it is fairly clear now that AI is going through an exponential growth, and reasoning AI really busted through concerns about hallucination or its ability to really solve problems. I think a lot of people are crossing that barrier and realizing how incredibly effective agentic AI and reasoning AI are. So, number one is inference reasoning and the exponential demand growth there.
谢谢你,Ben。我想说的是,相比年初或GTC会议时的情况,现在有四个积极的“惊喜”发生了。第一个惊喜,是推理型AI的需求呈现阶跃式增长。现在很明显,AI正处于指数级爆发阶段,而推理型AI已经突破了瓶颈。关于“幻觉”现象和AI解决实际问题能力的担忧,很多人正在跨越这个障碍,意识到代理型AI和推理AI的效果非常惊人。所以第一点,是推理推断需求的指数级增长。
The second one, you mentioned AI diffusion. It's really terrific to see that the AI diffusion rule was rescinded. President Trump wants America to win, and he also realizes that we're not the only country in the race. And he wants the United States to win and recognizes that we have to get the American stack out to the world and have the world build on top of American stacks instead of alternatives.
第二个你刚才也提到了,就是AI扩散规则的取消。这是非常振奋人心的消息。川普总统希望美国赢得这场技术竞赛,他也清楚地认识到,美国并不是唯一参赛的国家。他希望美国保持领先地位,并意识到必须将美国的AI技术栈推向全球,让世界在美国技术基础上进行开发,而不是选择其他替代方案。
And so, AI diffusion happened, the rescinding of it happened at almost precisely the time that countries around the world are awakening to the importance of AI as an infrastructure, not just as a technology of great curiosity and great importance, but infrastructure for their industries and start-ups and society. Just as they had to build out infrastructure for electricity and Internet, you got to build out an infrastructure for AI. I think that, that's an awakening, and that creates a lot of opportunity.
而这项政策的撤销,恰逢其时——全球各国正逐步意识到,AI不仅仅是一个令人好奇且重要的技术,它还是新一代的“基础设施”,将服务于本国的产业、初创企业和社会。正如当年需要建设电力和互联网基础设施一样,现在也必须建设AI基础设施。我认为这是一次“觉醒”,它带来了巨大的发展机遇。
The third is enterprise AI. Agents work -- these agents are really quite successful. Much more than generative AI, agentic AI is game-changing. Agents can understand ambiguous and rather implicit instructions, and are able to problem-solve, use tools, have memory and so on.
第三个积极因素是企业级AI。代理人已经真正开始“工作”,它们取得了非常成功的应用。相比生成式AI,代理型AI才是真正改变格局的力量。这些代理能理解含糊甚至隐含的指令,能解决问题、调用工具、具备记忆功能等等。
And so, I think this is -- enterprise AI is ready to take off. And it's taken us a few years to build a computing system that is able to integrate and run enterprise AI stacks, run enterprise IT stacks but add AI to it. And this is the RTX Pro Enterprise server that we announced at COMPUTEX just last week. And just about every major IT company has joined us, super excited about that.
所以我认为,企业级AI已经准备起飞了。我们花了几年时间打造能够集成AI与企业IT架构的计算平台,也就是我们上周在COMPUTEX发布的RTX Pro Enterprise服务器。几乎所有主要的IT公司都加入了我们,他们对此感到非常兴奋。
And so, computing is one stack, one part of it. But remember, enterprise IT is really three pillars; it's compute, storage, and networking. And we've now put all three of them together finally, and we're going to market with that.
计算只是其中一个部分。请记住,企业IT真正的“三大支柱”是:计算、存储和网络。我们现在已经将三者整合在一起,并将其推向市场。
And then lastly, industrial AI. Remember, one of the implications of the world reordering, if you will, is regions onshoring manufacturing and building plants everywhere. In addition to AI factories, of course, there is new electronics manufacturing and chip manufacturing being built around the world. And all of these new plants and new factories are arriving at exactly the right time, when Omniverse and AI and all the work that we're doing with robotics are emerging. And so, this fourth pillar is quite important.
最后一个因素是工业AI。你可以看到,全球产业链重组带来的一个结果是:各地区在本土建设工厂,推动制造回流。除了AI工厂之外,全球还在建设新的电子制造工厂、芯片工厂。而所有这些新工厂的出现,恰好也是Omniverse、AI和我们在机器人领域工作的最佳落地时机。所以,工业AI是非常关键的“第四个支柱”。
Every factory will have an AI factory associated with it. And in order to create these physical AI systems, you really have to train a vast amount of data. So, back to more data, more training, more AIs to be created, more computers. And so, these four drivers are really kicking into turbocharge.
未来每一个物理工厂都会配套一个AI工厂。而要构建这些“实体AI系统”,你必须训练大量数据。所以我们又回到那个公式:更多数据,更多训练,更多AI模型,也就需要更多计算资源。这四大驱动因素正联手推动整个行业进入“加速模式”。
Operator
Your next question comes from Timothy Arcuri with UBS. Your line is open.
下一位提问的是来自瑞银(UBS)的Timothy Arcuri,请开始提问。
Timothy Arcuri
Thanks a lot. Jensen, I wanted to ask about China. It sounds like the July guidance assumes there's no SKU replacement for the H20, but if the President wants the U.S. to win, it seems like you're going to have to be allowed to ship something into China. So, I guess I had two points on that.
谢谢。Jensen,我想问一下关于中国市场的问题。听起来你们7月的业绩指引假设是H20没有替代产品可供出货。但如果总统希望美国赢得这场技术竞争,那似乎你们总归还是得被允许向中国市场出口某种产品。所以我有两个相关问题:
First of all, have you been approved to ship a new modified version into China? And you're currently building it, but you just can't ship it in fiscal Q2?
第一,你们是否已经获得批准,可以向中国出口某个新的修改版本?是否目前正在生产,但在2026财年第二季度内还不能发货?
And then, you were sort of run rating $7 billion to $8 billion a quarter into China. Can we get back to those sorts of quarterly run rates once you get something that you're allowed to ship back into China? I think we're all trying to figure out how much to add back to our models and when. So, whatever you can say there would be great. Thanks.
第二,在H20之前,你们每季度在中国的收入运行率大概是70到80亿美元。如果未来你们可以向中国重新供货,我们能否恢复到这样的季度节奏?我们现在都在试图判断模型中可以重新加入多少营收,以及何时可以恢复。因此,请尽可能多分享一些信息,谢谢。
Jensen Huang
The President has a plan. He has a vision and I trust him. With respect to our export controls, it's a set of limits. And the new set of limits pretty much make it impossible for us to reduce Hopper any further for any productive use. And so, the new limits, it's kind of the end of the road for Hopper.
总统有他的计划和远见,我信任他的判断。关于出口管制,这是一个“限制条件集”。而新的限制条件几乎使我们无法再对Hopper进行任何进一步削弱,以满足有意义的使用场景。所以,从这个角度看,新的规定基本上意味着Hopper在中国市场的终点。
We have some -- we have limited options. And so, we just -- the key is to understand the limits. The key is to understand the limits and see if we can come up with interesting products that could continue to serve the Chinese market. We don't have anything at the moment, but we're considering it. We're thinking about it. Obviously, the limits are quite stringent at the moment. And we have nothing to announce today. And when the time comes, we'll engage the administration and discuss that.
我们确实还有一些——但非常有限的——选择。因此,关键在于弄清楚这些限制的具体内容,理解清楚之后再判断我们是否有机会设计出某些新产品,能够在合规范围内继续服务中国市场。我们目前还没有任何可以宣布的产品,但我们正在认真考虑。很明显,目前的限制非常严格。今天没有新的消息可以公布,但一旦时机成熟,我们会与政府方面进行沟通并探讨可能的路径。
Operator
Your final question comes from the line of Aaron Rakers with Wells Fargo. Your line is open.
最后一个提问来自富国银行的Aaron Rakers。请开始提问。
Unidentified Analyst
Hi. This is Jake on for Aaron. Thanks for taking the question, and congrats on the great quarter. I was wondering if you could give some additional color around the strength you saw within the networking business, particularly around the adoption of your Ethernet solutions at CSPs as well as any change you're seeing in network attach rates.
你好,我是替Aaron提问的Jake。感谢你们回答问题,也祝贺你们又一个精彩的季度。我想请你们详细谈谈网络业务的增长情况,尤其是云服务提供商(CSP)对你们以太网解决方案的采纳情况,以及你们是否观察到网络绑定率(attach rate)有任何变化。
Jensen Huang
Yeah, thank you for that. We now have three networking platforms, maybe four. The first one is the scale-up platform to turn a computer into a much larger computer. Scaling up is incredibly hard to do. Scaling out is easier, but scaling up is hard. And that platform is called NVLink. NVLink comes with chips and switches and NVLink Spines, and it's really complicated. But anyway, that's our new scale-up platform.
谢谢你的提问。目前我们拥有三种(也许可以说是四种)网络平台。第一种是“纵向扩展”平台(scale-up),用于将一台计算机扩展成一台更大的超级计算机。纵向扩展非常难,相比之下横向扩展更容易。而这个纵向扩展平台就是NVLink。NVLink由芯片、交换机、NVLink Spine等组件构成,整体架构相当复杂。但不管怎样,这是我们的新一代纵向扩展平台。
In addition to InfiniBand, we also have Spectrum-X. We've been fairly consistent that Ethernet was designed for a lot of traffic that are independent, but in the case of AI, you have a lot of computers working together. And the traffic of AI is insanely bursty. Latency matters a lot because the AI is thinking and it wants to get work done as quickly as possible, and you got a whole bunch of nodes working together.
除了InfiniBand之外,我们还有Spectrum-X。我们一直强调,以太网原本是为处理大量“独立型流量”而设计的,但在AI场景下,往往是多个计算节点协同工作,AI工作负载的通信极为“突发性”,并对延迟极其敏感。因为AI模型在“思考”时,需要尽可能快地完成任务,所有节点必须协同配合。
And so, we enhanced Ethernet, added capabilities like extremely low latency, congestion control and adaptive routing, the type of technologies that were available only in InfiniBand, to Ethernet. And as a result, we improved the utilization of Ethernet in these clusters. These clusters are gigantic, and utilization goes from as low as 50% to as high as 85%, 90%.
因此,我们对以太网进行了增强,添加了超低延迟、拥塞控制、自适应路由等功能——这些过去只有在InfiniBand中才具备的技术。结果就是,我们显著提升了这些AI集群中以太网的利用率,从50%提升到85%、甚至90%。
And so, the difference is if you had a cluster that's $10 billion, and you improve its effectiveness by 40%, that's worth $4 billion. It's incredible. And so, Spectrum-X has been really, quite frankly, a home run. And this last quarter, as we said in the prepared remarks, we added two very significant CSPs to the Spectrum-X adoption.
举个例子,如果一个AI集群投资了100亿美元,通过优化以太网性能提升40%效能,就相当于节省或新增了40亿美元价值。这是一个巨大的成果。因此,Spectrum-X可以说是一次“本垒打”。我们在本季度的说明中也提到,新增了两家非常重要的云服务商采用Spectrum-X。
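Jensen's back-of-the-envelope example works out as follows (a sketch assuming, as he does, that the value of a cluster scales linearly with its effective utilization; the function name is ours):

```python
# Toy model of the Spectrum-X value argument: raising effective
# utilization of a fixed-cost cluster "recovers" capacity worth a
# proportional share of the cluster's cost.

def effectiveness_value(cluster_cost_b: float,
                        util_before: float,
                        util_after: float) -> float:
    """Dollar value (in $B) attributable to a utilization improvement,
    assuming value is linear in utilization."""
    return cluster_cost_b * (util_after - util_before)

# A $10B cluster taken from 50% to 90% utilization: a 40-point gain,
# worth roughly $4B of effective capacity.
print(effectiveness_value(10.0, 0.50, 0.90))  # 4.0
```

This is why a networking improvement that never shows up as a line item of its own can still be worth billions at cluster scale.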
And then the last one is BlueField, which is our control plane. The network control plane is used for storage and for security, and for many of these clusters that want to achieve isolation among their users -- multi-tenant clusters that still need extremely high, bare-metal performance. BlueField is ideal for that and is used in a lot of these cases.
最后是BlueField,这是我们的网络控制平面(control plane)平台。BlueField广泛用于存储、安全场景,尤其是在那些需要实现多租户隔离、但又希望维持裸金属级高性能的AI集群中,它是理想的解决方案,在很多实际部署中已经被采用。
And so, we have these four networking platforms that are all growing and we're doing really well. I'm very proud of the team.
所以我们现在有这四大网络平台,它们都在快速增长,表现非常出色。我为我们的团队感到非常骄傲。
Operator
That is all the time we have for questions. Jensen, I will turn the call back to you.
我们今天的问答环节到此结束。Jensen,请您做总结发言。
Jensen Huang
Thank you. This is the start of a powerful new wave of growth. Grace Blackwell is in full production. We're off to the races. We now have multiple significant growth engines. Inference, once a light workload, is surging with revenue-generating AI services. AI is growing faster and will be larger than any platform shift before, including the Internet, mobile and cloud. Blackwell is built to power the full AI life cycle, from training frontier models to running complex inference and reasoning agents at scale. Training demand continues to rise with breakthroughs in post-training techniques like reinforcement learning and synthetic data generation, but inference is exploding. Reasoning AI agents require orders of magnitude more compute.
谢谢大家。我们正站在一波强劲增长浪潮的起点。Grace Blackwell已全面进入量产阶段,我们已全力奔赴未来。如今,我们拥有多个重要的增长引擎。推理,以往被视为轻型负载,如今因收入驱动的AI服务而激增。AI的增长速度远超以往任何平台变革,包括互联网、移动和云计算。Blackwell专为AI全生命周期而生,从训练前沿模型到大规模运行复杂推理与智能代理。训练需求仍在上升,得益于后训练阶段的突破,如强化学习与合成数据生成,但推理需求正呈爆炸式增长。推理型AI代理需要数量级更高的计算资源。
The foundations of our next growth platforms are in place and ready to scale. Sovereign AI, nations are investing in AI infrastructure like they once did for electricity and Internet. Enterprise AI, AI must be deployable on-prem and integrated with existing IT. Our RTX Pro, DGX Spark and DGX Station enterprise AI systems are ready to modernize the $500 billion IT infrastructure on-prem or in the cloud. Every major IT provider is partnering with us. Industrial AI from training to digital twin simulation to deployment, NVIDIA Omniverse and Isaac GR00T are powering next-generation factories and humanoid robotic systems worldwide.
我们的下一波增长平台已完成基础建设,准备大规模扩展。主权AI:各国正如当年投资电力和互联网那样投资AI基础设施。企业级AI:AI必须支持本地部署,并与现有IT系统集成。我们的RTX Pro、DGX Spark和DGX Station企业AI系统,已准备好将5,000亿美元的本地或云端IT基础设施现代化。所有主要IT厂商都与我们合作。工业AI:从模型训练、数字孪生仿真到实际部署,NVIDIA Omniverse和Isaac GR00T正在为全球新一代工厂和人形机器人系统提供支持。
The age of AI is here from AI infrastructures, inference at scale, sovereign AI, enterprise AI, and industrial AI, NVIDIA is ready.
AI时代已经来临——从AI基础设施、大规模推理、主权AI、企业级AI到工业AI,NVIDIA已经准备就绪。
Join us at GTC Paris, our keynote at VivaTech on June 11, talking about quantum GPU computing, robotic factories and robots, and celebrate our partnerships building AI factories across the region. The NVIDIA band will tour France, the U.K., Germany and Belgium.
欢迎大家参加GTC Paris大会,我们将在6月11日VivaTech主旨演讲中,探讨量子GPU计算、机器人工厂和机器人系统,并庆祝我们在欧洲各地建设AI工厂的合作成果。NVIDIA团队还将访问法国、英国、德国与比利时。
Thank you for joining us at the earnings call today. See you in Paris.
感谢大家今天参加我们的财报电话会议。巴黎见。
Operator
This concludes today's conference call. You may now disconnect.
本次电话会议到此结束。您现在可以断线。