Sam Altman:
The thing we do that has had the most impact on the world is how we set the ChatGPT personality.
我们所做的事情里,对世界影响最大的,是我们如何设定 ChatGPT 的人格。
Jacklyn Dallas:
What's different with the new model?
新模型有什么不同?
Sam Altman:
Smarter, faster, more context.
更聪明、更快、上下文更多。
Jacklyn Dallas:
What's like the most common thought in your head? What are we aiming towards? Like, what's your vision here?
你脑子里最常出现的想法是什么?我们到底在朝什么方向走?也就是说,你在这里的愿景是什么?
Sam Altman:
Almost unimaginable prosperity.
几乎难以想象的繁荣。
Jacklyn Dallas:
This is Sam Altman. Three years ago, he catapulted us into a race to invent superintelligence. But he actually started working on this 20 years ago when everyone thought creating AI was impossible.
这是 Sam Altman。三年前,他把我们一下子推入了一场发明超级智能的竞赛。但事实上,早在 20 年前,当所有人都认为创造人工智能是不可能的时候,他就已经开始做这件事了。
And against all odds, he and a team of scrappy engineers did it and changed the world. Now, over 900 million people use ChatGPT every week. In today's episode, I'm going to ask Sam questions he's never been asked before.
而且在几乎不被看好的情况下,他和一支顽强拼搏的工程师团队做成了这件事,并改变了世界。现在,每周有超过 9 亿人使用 ChatGPT。在今天这一期节目中,我会问 Sam 一些他以前从未被问过的问题。
Sam Altman:
Okay, I'm probably gonna be really bad at this. Let's try it.
好吧,我可能会答得很糟。我们试试看。
Jacklyn Dallas:
And give us a rare look into his vision for the future.
也让我们难得地看一看他对未来的愿景。
Sam Altman:
So you gotta have robots.
所以你必须要有机器人。
Jacklyn Dallas:
So you can get ahead of it and build the next big thing.
这样你就能走在它前面,打造下一个伟大的东西。
Sam Altman:
That was so fun.
这太有意思了。
Jacklyn Dallas:
Thanks for coming on the show.
谢谢你来参加节目。
Sam Altman:
Thank you for having me.
谢谢你邀请我。
Jacklyn Dallas:
Yeah, I'm so excited. I've watched like all of your interviews from over the last 20 years. Oh no. It's been cool because I think there's been a lot of things that you've stayed very consistent on. One of them is your like focus on builders and I think a lot of people don't know, they see you as like the CEO of OpenAI now, but they don't know that you were obsessed with AI like 20 years ago. Can you tell me about your initial venture into it in college and what made you love artificial intelligence?
是的,我非常兴奋。过去 20 年里,你几乎所有访谈我都看过。哦不。很有意思,因为我觉得有很多事情,你一直保持得非常一致。其中之一就是你对构建者的关注。我觉得很多人并不知道这一点,他们现在看到的是你作为 OpenAI 的 CEO,但他们不知道,大约 20 年前你就已经对人工智能非常着迷了。你能不能讲讲你在大学时最初涉足人工智能的经历,以及是什么让你爱上人工智能的?
Sam Altman:
One, I think it is just from a tech nerd perspective, the coolest thing ever. It's just such an interesting idea that we could make computers think and do stuff for us and help. I have loved my understanding of the story of technological progress in human history, where we keep inventing tools on top of tools on top of tools and build up this scaffolding that lets us do more and more. I think it's just a beautiful idea and very cool technology. But that gets to the second thing, which is, in terms of technology and scientific discovery that can make the world better. If we can put this tool in the hands of people and if people can use AI to build and explore and create and make new companies and new kinds of art and new kinds of experiences for each other and whatever else they'll do with it,
第一,从技术极客的角度看,我觉得这简直是有史以来最酷的东西。我们竟然可以让计算机思考、替我们做事、帮助我们,这个想法本身就非常有意思。我一直很喜欢自己对人类技术进步史的理解:我们不断在工具之上发明工具,一层又一层地搭建起脚手架,让我们能够做越来越多的事情。我觉得这是一个非常美的想法,也是一项非常酷的技术。但这就引出了第二点,也就是从能够让世界变得更好的技术与科学发现这个角度来看。如果我们能把这个工具交到人们手中,如果人们能够用 AI 去构建、探索、创造,去创办新的公司,创造新的艺术形式,创造人与人之间新的体验,以及他们还会用它去做的其他各种事情,
that is how I believe the world gets better. That's also how I believe people get fulfilled. So before this, my career was in startups and I used to watch people make startups and I thought it was like awesome for the world, awesome for the people doing it. I now think we're going to be in a world where you can have these one-person startups or three-person startups, whatever, and the amount of human potential that is going to unlock and the amount of good stuff that's going to make for all of us that was just totally impossible without this technology, I think will be like quite great to see.
那就是我相信世界会变得更好的方式。那也是我相信人们会获得成就感的方式。在此之前,我的职业生涯是在初创公司领域,我过去常常看着人们创办初创公司,我觉得这对世界来说非常好,对那些做这件事的人来说也非常好。现在我认为,我们将进入一个这样的世界:你可以拥有一个人的初创公司,或者三个人的初创公司,诸如此类,它将释放出的人类潜能,以及它将为我们所有人创造出的那些好东西,如果没有这项技术,原本完全不可能出现。我觉得这会是非常值得见证的事情。
Jacklyn Dallas:
Which historical figure do you think would have benefited the most from having AI?
哪位历史人物如果拥有 AI,你觉得会从中受益最大?
Sam Altman:
Um. Da Vinci was the first one that came to mind. Some like very broad thinker interested in a lot of things. It's like, you know, just huge creative energy trying to get as much done as possible.
嗯。我首先想到的是 Da Vinci。某种非常广博的思想者,对很多事情都感兴趣。就像,你知道的,拥有巨大的创造性能量,试图尽可能多地完成事情。
Jacklyn Dallas:
It's interesting, too, because I feel like when the transformer initially came out, it's like a predictive text model. It's crazy how far that one breakthrough has gotten us in discovery. Do you think about that?
这也很有意思,因为我觉得 transformer 最初出现时,它就像是一个预测文本的模型。很难想象,这一个突破竟然把我们在发现上带到了这么远。你会思考这一点吗?
Sam Altman:
All the time. Ilya Sutskever once said a very simple sentence that, as many simple sentences do, really stuck in my mind, which is: prediction is very close to intelligence. And the idea is that if you can compress all of the information about the world, the state of things, whatever, into its smallest representation, and then as part of that predict the thing that's going to happen next, you understand it in a sort of deep way. At the time, a lot of people in the AI field were quite excited about generative models for a reason they couldn't quite articulate.
一直在想。Ilya Sutskever 曾经说过一句非常简单的话,和许多简单的话一样,它深深留在了我的脑子里,那就是:预测非常接近智能。这个想法是说,如果你能把关于世界的所有信息、事物的状态等等,压缩成最小的表征,然后作为其中的一部分,预测接下来会发生什么。那就意味着你在某种深层意义上理解了它。而这个想法在当时,AI 领域很多人都对生成模型感到非常兴奋,尽管他们还说不太清楚原因。
But I think the reason was something to do with this, that prediction is very close to intelligence. And if we're trying to build systems that really understand all of the data they're trained on, if we can get them to start predicting what comes next, that seems like a great step. And certainly when you watch children begin to understand the world, I think you can observe a similar phenomenon.
但我认为原因与这一点有关:预测非常接近智能。如果我们试图构建真正理解其训练数据的系统,如果我们能让它们开始预测接下来会发生什么,那看起来就是非常重要的一步。当然,当你观察儿童开始理解世界时,我认为你也能看到类似的现象。
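The line "prediction is very close to intelligence" can be illustrated with the simplest possible predictive model: a bigram counter over a toy corpus. Everything here, including the corpus, is an illustrative assumption and obviously nothing like a real language model; it only shows how counting what comes next already encodes a crude statistical picture of the data:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on vastly more text and far richer context.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# For each token, count which token follows it and how often.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(token):
    """Return the continuation most frequently seen after `token`."""
    return bigrams[token].most_common(1)[0][0]

print(predict_next("the"))  # -> cat
```

Scaling this idea up from counting pairs of words to predicting the next token over enormous corpora with a deep network is, very loosely, the generative-model bet the field was excited about.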
Jacklyn Dallas:
Yeah, I think about it a lot. Watching AI get smarter makes me hopeful because I'm like, all right, if I just input all that information into my brain, I would get the same outcome. Do you think that certain people are just like more destined to be great at like physics or science and math? Or do you kind of believe that if you just gave everyone the same information, they would have the same output?
是的,我经常思考这个问题。看着 AI 变得越来越聪明,会让我感到有希望,因为我会想,好吧,如果我也把所有这些信息输入到自己的大脑里,我也会得到同样的结果。你认为某些人就是更注定会在物理、科学和数学这类领域变得出色吗?还是说你更相信,只要给每个人同样的信息,他们就会产生同样的输出?
Sam Altman:
No, I don't think they would, and I'm happy they wouldn't. Like, I'm happy we get the rich tapestry of human experience, and that people have different interests and talents and, you know, also different training data. But I think it'd be quite sad if we showed every person the exact same training data and they had all the exact same ideas and the exact same interests in everything else. So I'm grateful that doesn't happen.
不,我不认为他们会有同样的输出。我也很高兴他们不会。我的意思是,我很高兴我们拥有这种丰富的人类经验图景,人们有不同的兴趣、不同的天赋,而且某种程度上,你知道,也有不同的训练数据。但我觉得,如果我们给每个人看完全相同的训练数据,然后他们产生完全相同的想法,对其他任何事情也有完全相同的兴趣,那会相当令人难过。所以我很庆幸这种事情没有发生。
Jacklyn Dallas:
It brings me to an interesting thing that you said in a previous interview, which is that there's never been another time in history where this many people have talked to like one mind. Like there are 900 million ChatGPT users every week. Extraordinary. How does that influence how you shape the personality of ChatGPT?
这让我想到你在之前一次访谈里说过的一件很有意思的事:历史上从来没有哪个时期,有这么多人在和同一个“心智”对话。比如现在每周有 9 亿 ChatGPT 用户。这非常非凡。这会如何影响你塑造 ChatGPT 人格的方式?
Sam Altman:
We've gone through many different approaches to this. It's so hard to get this right. People want different personalities. The same person wants different personalities on different days. People on different time horizons might prefer a very different personality. Like, if you're just thinking about how it makes you feel today, you might want a model that tells you how great you are. And if you're thinking about what's going to maximize your fulfillment and kind of accomplishment over a longer period of time, you might want a model that pushes back on you way more.
我们在这件事上尝试过很多不同的方法。要把它做好非常难。人们想要不同的人格。同一个人在不同日子里,也会想要不同的人格。不同时期、不同时间尺度下,人们偏好的人格可能会非常不同。比如,你可能想要一个模型。如果你只是考虑它今天让你感觉如何,你可能想要一个会不断告诉你“你很棒”的模型。而如果你考虑的是,什么东西能在更长时间里最大化你的满足感和成就感,那你可能会想要一个更频繁反驳你的模型。
Almost no one wants to go set sliders about, here's how I want ChatGPT to behave: I want it to be this funny, and I want it to be this nice to me, and push back on me this much. And we don't do that for the friends in our lives either. But we do gravitate towards different people, or different people at different times, or want the people in our lives to support us in different ways, in different environments, on different days, in different contexts, and we expect them to understand that. And right now, ChatGPT doesn't work that way, and most people don't expect it to work that way, but that is what I think we should shoot for.
几乎没有人真的想去调一堆滑块,比如,“我希望 ChatGPT 这样表现。我希望它这么幽默。我希望它对我这么友好,同时反驳我到这个程度。”我们对生活中的朋友也不会这么做。但我们确实会在不同时间被不同的人吸引,或者希望生活中的人在不同环境、不同日子、不同语境下,以不同方式支持我们,而且我们期待他们能够理解这一点。现在,ChatGPT 还不是这样工作的,大多数人也并不期待它这样工作,但我认为这正是我们应该努力实现的方向。
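The slider approach being dismissed here is easy to make concrete, which also makes its clunkiness visible: explicit per-trait knobs rendered into a behavior instruction. All names and thresholds below are hypothetical, purely a sketch of the idea, not real ChatGPT settings:

```python
# Hypothetical persona "sliders" in [0.0, 1.0]; the trait names are invented
# for illustration, not actual product configuration.
persona = {"humor": 0.7, "warmth": 0.9, "pushback": 0.3}

def render_system_prompt(p):
    """Turn numeric slider values into a plain-language behavior instruction."""
    level = lambda x: "high" if x >= 0.66 else "medium" if x >= 0.33 else "low"
    return ("Respond with {h} humor, {w} warmth, and {b} pushback."
            .format(h=level(p["humor"]), w=level(p["warmth"]),
                    b=level(p["pushback"])))

print(render_system_prompt(persona))
# -> Respond with high humor, high warmth, and low pushback.
```

The sketch shows why the slider model is unsatisfying: the burden of choosing fixed values sits on the user, whereas the behavior Sam describes, adapting to person, day, and context, has to be learned rather than dialed in.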
AI's impact on search engines is all but certain, and recommendation engines have been basically positive; social apps are built on recommendation engines underneath. But what will the impact be on socializing itself? AI no longer just helps you consume content; it is starting to substitute for part of our interpersonal feedback, emotional support, companionship, advice, and validation.
Jacklyn Dallas:
I think it does kind of model a little bit, though. I feel like my ChatGPT is very energized and optimistic in a way that when I use the account logged out, it's not.
不过我觉得它确实已经在某种程度上有一点这种感觉了。我感觉我的 ChatGPT 非常有活力,也很乐观,而当我退出账号使用时,它就不是那样。
Sam Altman:
It definitely does a little bit of this. It definitely kind of gets to know you somewhat. And this is what we've been going for; we're pushing towards this kind of memory and understanding more and more. But we used to not have that. We used to have these sliders that let people tell us what the personality should be.
它确实已经有一点这种能力。它确实会在某种程度上逐渐了解你。而这正是我们一直努力的方向。我们正在越来越多地推进这种记忆和理解能力。但我们以前没有这个。以前我们有那些滑块,让用户告诉我们人格应该是什么样。
Jacklyn Dallas:
And then you had 4o, which was an interesting moment for you, because, for people that don't know, the context is that it was very agreeable, though in some ways in a good way. And I remember in an interview, you talked about how some people actually emailed you saying, this is the only supportive chat in my life.
然后你们推出了 4o,那对你来说是一个很有意思的时刻。给不了解的人补充一下背景:它非常顺从,不过在某些方面也可以说是好的顺从。我记得你在一次访谈里谈到,有些人真的给你发邮件说,这是他们生活中唯一支持他们的聊天对象。
Sam Altman:
I still think about those emails a lot.
我现在仍然经常想起那些邮件。
Jacklyn Dallas:
Yeah. How do you navigate that? Because I think what's really cool is you have this opportunity to rewire someone's brain for positivity and work ethic if you do it right. But there's a lot of responsibility.
是的。你怎么处理这件事?因为我觉得非常酷的一点是,如果做得对,你其实有机会重新塑造一个人的大脑,让它更积极、更有工作伦理。但这里面也有很大的责任。
Sam Altman:
The responsibility on this point is huge. We talk a lot about the risks with AI and the benefits, and we measure the big risks, bio-risk or cybersecurity risk. But probably the thing we do that, at least historically, has had the most impact on the world is how we set the ChatGPT personality. How encouraging should it be? How much tough love should it give? Should it customize to you versus to me, or not do that at all? How understandable should what it's doing be? How much do you need to see those sliders after all?
在这一点上的责任非常巨大。我们经常谈论 AI 的风险和好处,也会衡量重大的生物风险或网络安全风险。但至少从历史上看,我们所做的事情里,对世界影响最大的,可能就是我们如何设定 ChatGPT 的人格。它应该多鼓励人?应该有多少“严厉的爱”?应该为你和我分别定制,还是不这么做?它正在做什么,应该有多容易被理解?归根结底,你到底需要看到多少那些滑块?
We have historically, not just us but the whole field, not treated this with the same amount of rigor and scientific focus and sort of risk understanding that we have on things like, let's not make a new pathogen. But the impact this has had on the world is huge. I think it's had tremendous positive impact. Obviously with 4o it had some negative impact too.
从历史上看,不只是我们,而是整个领域,都没有像对待"不要制造一种新的病原体"这类问题那样,以同样程度的严谨、科学关注和风险理解来对待这件事。但它对世界产生的影响非常巨大。我认为它产生了巨大的正面影响。当然,在 4o 上,它也产生了一些负面影响。
I still have not heard anyone talk about this in a way where I'm like, this is the answer, this is how the world should think about the power of the default personality or the limits of personality in these models. But it is clearly a huge issue and it's going to get bigger.
我到现在还没有听到任何人以一种让我觉得"这就是答案,这就是世界应该如何理解这些模型中默认人格的力量,或者人格边界"的方式来谈论这件事。但它显然是一个巨大的问题,而且会变得更大。
Jacklyn Dallas:
How are you thinking about it right now?
你现在是怎么思考这件事的?
Sam Altman:
Um, I have asked a small number of people that I think are really wise in different ways, you know, people from great spiritual traditions, great clinical psychologists, people who I just think really understand how people interact with each other and what motivates them and what fulfills them, to try to write different sorts of instruction manuals for ChatGPT about how to behave to maximize people's fulfillment and personal growth and sort of accomplishment and enjoyment of life.
嗯,我已经请了一小部分在不同方面都让我觉得非常有智慧的人,你知道,有些来自伟大的精神传统,有些是非常优秀的临床心理学家,还有一些人,我认为他们真正理解人们如何彼此互动、什么会激励人、什么会让人获得满足。我请他们尝试为 ChatGPT 写出不同类型的行为指南,说明它应该如何表现,才能最大化人的满足感、个人成长、某种意义上的成就,以及对生活的享受。
And I want to get those and I want to try aligning ChatGPT to the combination of those and see what happens.
我想拿到这些说明,然后尝试把 ChatGPT 对齐到这些说明的组合上,看看会发生什么。
If you inject a large amount of negative information into a system, that system can easily self-reinforce in the wrong direction and spiral downward. ByteDance has recently been pushing long-form video, first having Luo Yonghao lead the way, and by now there is a cohort of long-form video influencers; this is clearly a corrective move. In Pavlov's experiments, after the bell (a neutral stimulus) was repeatedly paired with food (an unconditioned stimulus), the bell alone could trigger the conditioned reflex of salivation. ByteDance's short-video platform has done something similar.
1. Extinction is not forgetting
Pavlov found that once you stop pairing the bell with food, the dog's salivation does gradually weaken; this is called "extinction". But he then discovered a key phenomenon: extinction does not erase the original conditioned reflex; it only layers new inhibitory learning on top of the old association. The old neural pathway is still there, just temporarily suppressed.
2. The partial reinforcement extinction effect
Behavior established by intermittent reinforcement is harder to extinguish than behavior established by continuous reinforcement. A short-video platform's recommendation algorithm is precisely an intermittent reinforcement system: of ten videos you swipe through, maybe three feel great, but you don't know which three, which is exactly how a slot machine works. And the behavior patterns established by this variable ratio reinforcement are the most extinction-resistant of all types of conditioning.
3. Stimulus generalization
Pavlov found that the dogs responded not only to the original bell but to similar sounds as well. What ByteDance's users have been conditioned into is not just the specific behavior of scrolling Douyin, but an entire pattern of stimuli associated with instant gratification: rapid switching, intense visual stimulation, emotional peaks. Any interface or content with these features can trigger the old conditioned reflex.
What Pavlov's experiments tell us is that once a conditioned reflex is established, the best thing you can do is not to eliminate it but to build a stronger new reflex on top of it to override it. But the new reflex must deliver a strong enough unconditioned stimulus to compete with the old one.
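The variable-ratio point in item 2 above can be sketched in a few lines: under a variable-ratio schedule the average reward rate is fixed, but the gap between rewards is unpredictable, which is exactly the slot-machine property being described. The probability and trial counts below are illustrative assumptions:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def reward_gaps(trials, p):
    """Simulate a variable-ratio schedule: each trial pays off independently
    with probability p. Return the gaps (in trials) between rewards."""
    gaps, since_last = [], 0
    for _ in range(trials):
        since_last += 1
        if random.random() < p:
            gaps.append(since_last)
            since_last = 0
    return gaps

# With p = 0.3, the average is one reward every ~3.3 trials, the same mean
# rate as a fixed-ratio-3 schedule where the gap is always exactly 3...
vr = reward_gaps(100_000, 0.3)
mean_gap = sum(vr) / len(vr)
# ...but the individual gaps range from 1 to long droughts, so the next
# reward is never predictable.
print(round(mean_gap, 2), min(vr), max(vr))
```

The contrast with a fixed-ratio schedule (identical mean, zero variance) is the whole point: unpredictability of the next payoff, not the payoff rate itself, is what makes the conditioned behavior so resistant to extinction.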
Jacklyn Dallas:
Would it have to change by culture too?
它是否也必须根据文化而改变?
Sam Altman:
I think it largely will, but there are some universal things about people that seem more about biology than culture.
我认为很大程度上会。但关于人,有些普遍性的东西似乎更多来自生物学,而不是文化。
Jacklyn Dallas:
Like what?
比如什么?
Sam Altman:
There's this interesting book, I'm gonna get the title slightly wrong, maybe I won't, I think it's called Human Universals.
有一本很有意思的书,我可能会把书名说得稍微不准确,也许不会。我觉得它叫 Human Universals。
Jacklyn Dallas:
Okay.
好。
Sam Altman:
But it was some anthropologists who went through every human culture they could find, and they made a list of all the traits, and then if something didn't exist in even one culture they took it out, because they said that's not really a universal, that's some sort of cultural thing.
那本书里,一些人类学家考察了他们能找到的每一种人类文化,去寻找共同特征。他们列出所有特征,然后如果某个东西在哪怕一种文化中不存在,他们就把它剔除,因为他们认为那就不是真正的普遍性,而是某种文化性的东西。
Jacklyn Dallas:
Okay.
好。
Sam Altman:
And there were some things that weren't obvious to me would exist in every culture, like valuing travel, but still did. There were a lot of things that made sense to me that would, you know, be valued in every culture. I was talking earlier about how I'm very excited about AI and what I think it can do for the world. But increasingly, one of the concerns we're hearing from people is, you know, okay, let's say you're right. Let's say you do give, with this technology, everybody on earth a ton of agency, and people working with this technology collectively make huge prosperity for the world.
有些东西我原本并不觉得会存在于每一种文化里,比如重视旅行,但它确实存在。也有很多东西让我觉得很合理,确实会被每一种文化重视。我前面谈到过,我对 AI 非常兴奋,也谈到我认为它能为世界做些什么。但现在我们越来越多听到人们提出一种担忧:好吧,假设你是对的。假设借助这项技术,你真的赋予了地球上每个人巨大的能动性,而人们与这项技术协同工作,也确实为世界创造了巨大的繁荣。
People will still work because they want to work for fun, but no one will have to, and everybody will have this great life and whatever. Increasingly, people are saying, well, you know, you talk about a right to prosperity, but what about the struggle? What about the like need for adversity? What about the need to kind of overcome challenges and learn and not have everything taken care of and how important that is? And there are a few things like that that seem like important to how we evolved as well.
人们仍然会工作,因为他们觉得工作有趣,但没有人必须工作,每个人都会拥有很好的生活,诸如此类。于是,越来越多人会说,好吧,你谈的是获得繁荣的权利,但奋斗怎么办?对逆境的需要怎么办?那种克服挑战、学习成长、而不是一切都被照顾好的需要怎么办?这些事情的重要性又该如何看待?而且有几件类似的事情,似乎对我们的进化方式也很重要。
Good ideas have to emerge from competition with one another; this is the same point as having the AI raise challenging questions, mentioned earlier.
Jacklyn Dallas:
I agree. I also feel like that's a little bit of a false equivalency that's being made because I don't think, like, if you look at any big technological revolution, there's never really been an overall decrease in jobs. It's just that the jobs have shifted.
我同意。我也觉得这里有一点错误类比,因为我认为,如果你回顾任何一次重大的技术革命,整体就业从来没有真正减少过。只是工作发生了转移。
Sam Altman:
Yeah, we, you know, we were promised four hour work weeks or whatever.
是的,你知道,我们曾经被承诺会有每周工作四小时之类的未来。
Jacklyn Dallas:
Yeah.
是的。
Sam Altman:
And we were promised like less stress and more happiness and more abundance. And maybe if we were still content with the quality of life from 100 or 500 years ago, we actually wouldn't have to work that hard.
我们也曾被承诺压力会更少,幸福会更多,物质会更丰富。也许如果我们仍然满足于 100 年前或 500 年前的生活质量,确实就不必那么辛苦地工作。
Jacklyn Dallas:
Yeah.
是的。
Sam Altman:
We could get that. But we want more and more and more. The bar keeps going up. And more than that, we want to accomplish and we want to compete and we want to be useful to each other and whatever the new world looks like, and we want to sort of push on new things and discover new frontiers and invent new products and services and make new stuff.
我们可以做到那一点。但我们想要越来越多。标准会不断提高。更重要的是,我们想要有所成就,想要竞争,想要对彼此有用。不管新世界长成什么样,我们都想推动新的东西,发现新的前沿,发明新的产品和服务,创造新的事物。
I saw something once where some music producer said decades ago that music had gotten so good he really didn't see why there was ever going to be a need to create any more music. We just don't work that way. I, you know, fortunately, unfortunately, depends on your take, like, people are still gonna be working hard. People are still gonna be stressed. People are still gonna be unhappy. People are still gonna be, like, striving to create and trying to overcome adversity in whatever way is meaningful to them. And through that, find fulfillment and growth.
我曾经看到过一个说法,有位音乐制作人在几十年前说,音乐已经好到这种程度了,他实在看不出未来为什么还需要再创作更多音乐。但我们并不是这样运转的。你知道,不管这是幸运还是不幸,取决于你怎么看,人们仍然会努力工作。人们仍然会有压力。人们仍然会不快乐。人们仍然会努力创造,并以对自己有意义的方式试图克服逆境。然后通过这一过程,找到满足感和成长。
And maybe it looks nothing like the kind of struggles that we have today or the kind of work we have today, but I bet the spirit of it will be very similar.
也许它看起来会和我们今天面对的挣扎、今天从事的工作完全不同,但我敢说,它的精神内核会非常相似。
Jacklyn Dallas:
Yeah, I think that's an interesting point because if you look at how AI is polling in general, it is like not polling positively in America. And yet, I'm so excited about it. And like, I feel like a kid in a candy store when I use ChatGPT because it opens up all these new doors to explore. And I think a lot of my founder friends feel similarly. But then I look at how it's often covered in the news, and it will be like, 50% of jobs are going to be wiped out. Why do you think that narrative has taken off? And what do you think is actually going to happen?
是的,我觉得这是一个很有意思的观点。因为如果你看 AI 的整体民意走向,它在美国好像并不是正向的。可是我自己对它非常兴奋。使用 ChatGPT 时,我感觉自己就像进了糖果店的小孩,因为它打开了这么多可以探索的新门。我觉得我的很多创始人朋友也有类似感受。但当我看新闻里通常如何报道它时,报道会说,50% 的工作将被消灭。你为什么认为这种叙事会流行起来?你认为实际会发生什么?
Sam Altman:
A lot of thoughts. I think people do kind of always like doom sells. The news covers bad stuff, calamity, that kind of thing. And a lot of people at least seem to love reading about, talking about how horrible the future is going to be and the bad stuff seems to travel better than the good stuff. Now, I think with any new technology and this degree of change, there is reason for caution.
我有很多想法。我觉得人们某种程度上一直都喜欢“末日叙事有销路”这件事。新闻会报道坏事、灾难之类的东西。而且至少看起来,很多人喜欢阅读、谈论未来会变得多么糟糕,坏消息似乎比好消息传播得更快。当然,我认为面对任何新技术,尤其是这种程度的变化,谨慎是有理由的。
And speaking of these things that are like deep human evolutionary traits, we seem to evolve something where we do want to think about the bad, we want to talk about the bad, and that probably helps us defend against it. And there's probably a really important societal shared caution thing there.
说到这些深层的人类进化特征,我们似乎进化出了某种机制,让我们确实会想去思考坏事,想去谈论坏事,而这大概有助于我们防范它。这里面可能确实存在一种非常重要的社会共同谨慎机制。
I know some AI CEOs are saying things like 50% of the jobs are going to go away. To say nothing of how tone-deaf it is for someone to be saying: my company is going to eliminate 50% of the jobs, and my company is going to be the most valuable company in human history, and you know how wonderful that's going to be, but 50% of you are going to lose your jobs. To say nothing of the wisdom of it, not even the wisdom, just the sheer tone-deafness of that.
我知道有些 AI 公司的 CEO 在说,类似 50% 的工作会消失。先不谈这种说法有多么不合时宜:一个人说,我的公司将消灭 50% 的工作,我的公司会成为人类历史上最有价值的公司,你知道那会多么美好,但你们当中 50% 的人会失去工作。先不谈这在智慧上如何——甚至不谈智慧,单说这种表达上的麻木,就已经很严重了。
I don't think that's the right way to think about it. Jobs will go away. Jobs have gone away with every technological revolution. Jobs will change. Someone said something to me just yesterday that really stuck with me, which is: I can use the new model, GPT 5.5 in Codex, to accomplish in an hour what would have taken me weeks two years ago. And I thought I would have been much less busy in that world. And I have never been busier in my life. I'm waking up in the middle of the night to do more work. It's like, make it stop, please.
我不认为那是正确的思考方式。工作会消失。每一次技术革命都会让一些工作消失。工作会改变。就在昨天,有人对我说了一句话,让我印象很深:我现在可以用 Codex 里的新模型 GPT 5.5,在一小时内完成两年前需要几周才能完成的事情。我本来以为在那样的世界里,我会没那么忙。结果我这辈子从来没有这么忙过。我半夜醒来还要继续工作。感觉就像,拜托,让它停下来吧。
With new tools, I think we will create in new kind of ways. I have no doubt that the economy is going to change a lot and jobs are going to change a lot. And I think caution is warranted and I think rigorous debate about new social contracts, new economic systems are warranted as well. But I don't think it's like we're all going to sit around in a life without meaning and without work. It's just going to be different.
有了新的工具,我认为我们会以新的方式进行创造。我毫不怀疑,经济会发生很大变化,工作也会发生很大变化。我认为谨慎是必要的,关于新的社会契约、新的经济制度,也确实需要严肃讨论。但我不认为未来会变成我们所有人都坐在那里,过着没有意义、没有工作的生活。它只是会不一样。
Jacklyn Dallas:
I also think that scientific breakthroughs are really coming and super exciting. I want to dive into that with you. I have so many questions here. But one of the first ones that came to mind is going back to the idea that it's like a predictive model. I think there's like two scenarios that are playing out here. One is like if you gave a human enough time and you gave them all this information, would they develop the same breakthrough? And then two, is it kind of like move 37 Which is like in the game of Go when AI came up with a move that humans never would have done. Which path are we on?
我也认为科学突破真的正在到来,而且非常令人兴奋。我想和你深入聊聊这个。我这里有很多问题。但我首先想到的一个问题,是回到它像一个预测模型这个想法。我觉得这里好像有两种情形正在展开。第一种是,如果你给一个人足够多的时间,也给他所有这些信息,他是否会发展出同样的突破?第二种是,它是否有点像“第 37 手”——也就是围棋比赛中,AI 下出了一步人类永远不会下出的棋。我们现在是在走哪一条路?
Sam Altman:
Well, they might not be that different. I was smiling because I was remembering when we had the first GPT models. There were all of these really smart-sounding scientists or AI experts that would say, you know, next token prediction will never develop new knowledge. It can't. It's modeled off of the data. It's been shown. It can't figure out anything new. And they sounded so smart.
嗯,它们可能并没有那么不同。我刚才笑,是因为我想起我们最早有 GPT 模型的时候。有很多听起来非常聪明的科学家或 AI 专家会说,你知道,预测下一个词元永远不可能发展出新知识。它做不到。它只是根据数据建模。这个已经被证明了。它不可能想出任何新的东西。而且他们听起来非常聪明。
They had all of these fancy explanations for why this was going to be the case. And then, really with 5.4, and a little bit with 5.3, it was the first time that models started contributing, in small ways, new knowledge to humanity's collective knowledge.
他们有各种复杂精致的解释,用这些复杂精致的解释来说明为什么事情一定会是这样。然后到了 5.4,严格说从 5.3 也开始有一点,这是模型第一次开始以小的方式,为人类的集体知识贡献新的知识。
Jacklyn Dallas:
Like what?
比如什么?
Sam Altman:
Proving unproven mathematical theorems, some small-ish new pieces of physics, things like that. I expect this to keep going. In some sense, move 37 was already an example of this.
证明此前未被证明的数学定理,一些不算太大的物理学新发现,诸如此类。我预期这会继续下去。从某种意义上说,“第 37 手”已经是一个例子。
And so this idea, that we can train a model to just predict the next token based off of things it has already seen, and then use that ability to go discover fundamentally new things that didn't exist anywhere, is not so obvious on its face. In fact, you kind of would have said what people turned out to be wrong about, which is that it shouldn't be able to do that.
所以,这个想法本身并不是一眼就显而易见的:我们训练一个模型,只是让它根据已经见过的东西预测下一个词元,然后再利用这种能力去发现此前任何地方都不存在的全新事物。事实上,你很自然会说出后来被证明是错误的那种判断:它不应该能做到这一点。
But really what these models are learning to do through this process of next token prediction is to reason, to understand how to make sense of all of the data they have seen and complete what comes next, even if it's something they haven't seen before. This reasoning process can be applied to things that you have not seen before, and this is really quite remarkable. But people do this too, right? Like people can go study all of the known physics and then keep running their model, their predictive model or whatever, and by applying that reasoning ability they have learned through not just the facts, but the underlying thinking process that they develop during their physics training, go discover new physics. And I think that's what these models are doing too.
但实际上,这些模型在预测下一个词元的过程中真正学会的是推理,是理解如何把它们见过的所有数据组织成有意义的结构,并补全接下来应该出现的东西,哪怕那是它们以前没有见过的东西。这种推理过程可以被应用到以前没有见过的事情上,而这确实相当了不起。不过人类也是这样做的,对吧?比如,人可以去学习所有已知的物理学,然后不断运行自己的模型,或者说自己的预测模型,诸如此类。通过运用他们学到的推理能力——这种能力不仅来自事实本身,也来自他们在物理训练过程中形成的底层思维过程——他们就能去发现新的物理学。我认为这些模型也在做类似的事情。
Now, could people do it with more time and more brain power? Probably yes. I would say yes, actually. But it is much easier to go make a faster, bigger model than it is to figure out how to give people much bigger brains. So I, for one, am thrilled we have this new kind of external tool.
那么,如果人类有更多时间和更强大脑,能不能做到?大概可以。我其实会说可以。但制造一个更快、更大的模型,比弄清楚如何给人类装上大得多的大脑要容易得多。所以至少对我来说,我非常高兴我们拥有了这种新的外部工具。
One that we can ask to go think really hard about a problem that maybe would be harder for us to think about ourselves. When you see these models read hundreds of thousands of pages in a few seconds and reason across all of them, it's like, you know, maybe if we had a bigger brain we could do that, but we cannot with our current-size brains.
是的,我们可以让它非常深入地思考一个问题,而这个问题如果只靠我们自己去想,可能会困难得多。当你看到这些模型在几秒钟内读完几十万页内容,并在全部内容之间进行处理时,你会觉得,你知道,也许如果我们有一个更大的大脑,我们也能做到,但以我们当前大小的大脑,是做不到的。
Jacklyn Dallas:
Yeah, it is interesting though that it kind of is similar to biological brains and it made me curious. Are there other things in nature that you think we could copy for tech breakthroughs? Like the airplane is based on the bird. Like are there other examples of that?
是的,不过有意思的是,它在某种程度上确实类似于生物大脑,这让我很好奇。你认为自然界里还有没有其他东西,是我们可以模仿来实现技术突破的?比如飞机是基于鸟类的。还有没有其他这样的例子?
Sam Altman:
A great scientist once said that there is no alternative, that that's the only thing that we figured out how to do. I mean neural networks, artificial neural networks were really inspired by the way neurons in a brain connect. I certainly don't think it's literally true that nature is our only source of inspiration for discovering new science, but man, is it a good place to start looking.
一位伟大的科学家曾经说过,没有别的选择,那是我们唯一弄明白该怎么做的事情。我的意思是,神经网络,人工神经网络,确实是受到大脑中神经元连接方式的启发。我当然不认为,从字面上说,自然是我们发现新科学的唯一灵感来源,但说真的,它确实是一个非常好的起点。
Jacklyn Dallas:
Yeah. Is there anything that you're thinking about now that you want to implement? This is a big week, obviously, for your new science model. How are you thinking about science breakthroughs? What you're focusing on there? What comes next? I saw that there is an Australian guy that helped his dog's cancer be cured.
是的。你现在有没有什么正在思考、想要实现的东西?显然,对你们的新科学模型来说,这是非常重要的一周。你现在如何思考科学突破?你们在这方面关注什么?下一步是什么?我看到有一个澳大利亚人帮助治好了他家狗的癌症。
Sam Altman:
Yeah. That is a specific thing. I was just talking to a founder of a company (I got to visit YC last night) who is thinking about a similar thing, but making it scale. I mean, I'm sure a lot of people think about custom mRNA vaccines for cancer in people. And that's tremendously exciting.
是的。那是一个具体案例。我昨晚刚去了 YC,和一家公司的一位创始人聊过。他正在思考一件类似的事情,但想把它规模化。我的意思是,我相信很多人都会想到,为人类癌症定制 mRNA 疫苗。这非常令人兴奋。
Jacklyn Dallas:
Totally. Why haven't we done it yet?
完全同意。那为什么我们还没有做到?
Sam Altman:
As I understand it, there's many reasons, but one of the big ones is the FDA is not well set up to think about how we're going to do that, although getting better fast.
据我理解,原因有很多,但其中一个重要原因是,FDA 目前还没有很好地建立起一套机制来思考我们该如何做这件事,尽管它正在迅速改善。
Jacklyn Dallas:
Yeah, it's interesting because I think when I think about personalized or like the next frontiers, personalized medicine has to be it because all of us have such different DNA and risk outcomes.
是的,这很有意思。因为当我思考个性化,或者说下一个前沿时,个性化医疗一定是其中之一,因为我们的 DNA 和风险结果都如此不同。
Sam Altman:
Certainly the idea that, if you get cancer, a company or a lab can make you a personalized vaccine just for your cancer, and it's very likely, or at least likely, to be effective, sounds like an obvious thing we should all demand.
当然,如果你得了癌症,一家公司或实验室能够专门针对你的癌症,为你制作一种个性化疫苗,而且它很可能有效,或者至少有相当可能有效,这听起来显然是我们所有人都应该要求实现的事情。
Jacklyn Dallas:
Do you use ChatGPT right now for your health?
你现在会用 ChatGPT 来处理自己的健康问题吗?
Sam Altman:
I do.
会。
Jacklyn Dallas:
What do you use it for?
你会用它做什么?
Sam Altman:
I am probably an overuser of it. I think they used to call them like cyberchondriacs. I don't know what they call the ChatGPT version of this. But any mild symptom I get, I will go down the ChatGPT rabbit hole. You know, like everybody else or many other people, I will put my blood test in there and I kind of wish I... I mean, I'm happy I do, but sometimes it'll really like be like, oh, this is slightly off and I'm like, oh, should I do something about this?
我可能是一个过度使用者。我记得他们以前好像把这类人叫作“网络疑病症患者”。我不知道 ChatGPT 版本的这种人该叫什么。但只要我出现任何轻微症状,我就会顺着 ChatGPT 一路深挖下去。你知道,就像其他所有人,或者很多人一样,我会把自己的血检结果放进去。我有点希望我……我的意思是,我很高兴自己这么做,但有时候它真的会说,哦,这个指标有点偏离正常。然后我就会想,哦,我是不是该为此做点什么?
Jacklyn Dallas:
It helped me recently.
它最近帮了我一次。
Sam Altman:
How did it help you?
它怎么帮你的?
Jacklyn Dallas:
I had a stress fracture from running and my doctor went out of town and I had an MRI and I put it in and it read the MRI for me. Obviously, you got it checked, but it was accurate.
我因为跑步出现了应力性骨折,而我的医生出城了。我做了核磁共振,然后把结果放进去,它帮我读了核磁共振。当然,之后还是让医生确认了,但它读得是准确的。
Sam Altman:
That's great.
这很好。
Jacklyn Dallas:
It blew my mind.
这让我震惊。
Sam Altman:
When we first launched ChatGPT, there was a little bit of this, but it was not very good. People said people will never use ChatGPT for medical advice. It's just not nearly good enough. It's not going to be good enough. And even if it were good, everybody would rather talk to a doctor. People still definitely want to talk to their doctor, but the amount of ChatGPT usage of people asking medical questions and getting, at least as they reported it to us, really helpful information is quite extraordinary.
我们最初推出 ChatGPT 的时候,它已经有一点这种能力,但表现并不好。人们说,大家永远不会用 ChatGPT 寻求医疗建议。它远远不够好,也不会变得足够好。而且即便它真的不错,所有人也都会更愿意去和医生交谈。人们当然仍然想和自己的医生交流,但人们用 ChatGPT 询问医疗问题,并且至少按他们反馈给我们的说法,得到了非常有帮助的信息,这种使用量是相当惊人的。
Jacklyn Dallas:
Is it tough for you to constantly have people doubt that the technology is going to be impactful?
不断有人怀疑这项技术会产生重大影响,这对你来说难受吗?
Sam Altman:
Yes. It shouldn't bother me anymore.
会。按理说我现在不应该再被它影响了。
Jacklyn Dallas:
Yeah.
是的。
Sam Altman:
It still annoys the shit out of me.
但它还是让我烦得要命。
Jacklyn Dallas:
It would bother me too. I feel like, and if you look at like any big tech breakthrough, like before we flew planes, the newspaper was like, we will never fly. It will be a hundred years. And then like the next week we were in the sky.
换我也会烦。我觉得,如果你看任何一次重大技术突破,比如在我们真正驾驶飞机飞起来之前,报纸还在说,我们永远飞不起来,这还要一百年。然后好像下一周,我们就已经飞上天了。
Sam Altman:
That particular example was one we used to talk about all the time in the early days of OpenAI.
这个具体例子,在 OpenAI 早期我们经常谈到。
Jacklyn Dallas:
Love that.
我喜欢这个例子。
Sam Altman:
That Wright Brothers New York Times article.
就是 Wright Brothers 那篇 New York Times 文章。
Jacklyn Dallas:
Yeah.
是的。
Sam Altman:
We talked about it all the time and said AI is going to be like this. We're trying to be right. But honestly, it annoyed me a lot in the early days. But, you know, it was not super clear yet. So I thought it was at least intellectually honest of the critics to say maybe this isn't going to have a big impact. Now, watching people say there's really no value and this is going to have no impact on the world, it shouldn't bother me. I mean, it's obviously ridiculous, but it's so annoying. It's so intellectually dishonest and so annoying.
我们一直谈这个例子,并且说 AI 会像这样。我们在努力判断正确。但说实话,在早期,这件事确实让我很恼火。不过你知道,当时事情还不是特别清楚。所以我觉得,批评者说“也许这不会产生很大影响”,至少在智识上还是诚实的。可现在,看着有人说它真的没有价值,也不会对世界产生任何影响。它不应该影响我。我的意思是,这显然很荒唐,但它就是非常烦人,非常缺乏智识诚实,也非常烦人。
Jacklyn Dallas:
And also, I feel like when you're in the arena every day and you're like trying so hard to push the ball forward, you want people to believe in it. And like, I think with our videos, I often try to show people like amazing technology and what the future can look like, because I think you need to see it and like lock in and build. And ultimately, I feel like people are the most fulfilled when they're working hard on something they care about. If you were talking to like a 22 year old today, what types of things would you be curious to know about, like how they're feeling about the world? And how would that inform what you build?
而且我觉得,当你每天都在场上,拼命努力把事情往前推进时,你会希望人们相信它。就像我们做视频时,我经常试着向人们展示令人惊叹的技术,以及未来可能是什么样子,因为我觉得你需要先看到它,然后专注起来,开始构建。归根结底,我觉得当人们为自己在乎的事情努力工作时,他们最有满足感。如果你今天和一个 22 岁的人交谈,你会好奇了解哪些事情?比如他们如何感受这个世界?这些又会如何影响你所构建的东西?
Sam Altman:
A thing that I have been trying to do over the last couple of weeks is really sit with people using the latest model and using Codex to understand how it's going to impact their work, what they're excited about, what they're not excited about, what they need from us that we haven't already built. And I've done this mostly with people running companies or senior engineers at companies and I really should go sit down with some young people and say try this out and watch what they do and like listen to their concerns.
过去几周,我一直在努力做的一件事,是认真坐在人们旁边,看他们使用最新模型和 Codex,理解它将如何影响他们的工作,理解他们兴奋什么、不兴奋什么,以及他们还需要我们做什么,而那是我们尚未构建出来的。目前我主要是和经营公司的人,或者公司的资深工程师这样做。但我确实应该去和一些年轻人坐下来,说,试试这个,然后观察他们怎么用,听听他们的担忧。
Jacklyn Dallas:
You have a unique perspective too because you advise so many young founders. When you were on Joe Rogan's podcast a few years ago, you talked about how there was a lack of like 25 year old founders. Has that changed since then?
你也有一个独特视角,因为你给很多年轻创始人提供建议。几年前你上 Joe Rogan 的播客时,谈到过当时缺少那种 25 岁左右的创始人。从那以后,这种情况改变了吗?
Sam Altman:
That's changed. That's totally changed.
改变了。完全改变了。
Jacklyn Dallas:
What do you think changed it?
你认为是什么改变了这种情况?
Sam Altman:
I think there were a few things happening at once. So I don't really get to advise founders anymore because life got so busy. But I have been thinking that I need to find some way to do that again because one of the most important things about this technology is the entrepreneurship it's enabling. And I feel out of touch on that in a way I really don't like. I intellectually understand it, but I want to go be in the trenches with people building companies with two founders and 10,000 GPUs. I've met a few of these recently, but this is reminding me that I got to figure out some way to get closer to startups again.
我认为是几件事情同时发生了。现在我其实已经不太有机会再给创始人提供建议了,因为生活变得太忙。但我一直在想,我需要找到某种方式重新做这件事,因为这项技术最重要的影响之一,就是它正在释放创业精神。而在这一点上,我感觉自己有点脱节,这种感觉我很不喜欢。我在智识上理解它,但我想真正进入一线,和那些由两个创始人、1 万块 GPU 组成的公司一起摸爬滚打。我最近见过几家这样的公司,但这也提醒我,我必须想办法重新靠近初创公司。
Why was there not this sort of cohort of young founders then, and why is there now? I think it was a lot of things. I think the U.S. educational system went through a very dark period, and COVID happened at the same time. And we were kind of demotivating this whole set of people and telling them that, I don't know, the future is going to be bad and capitalism is bad and companies are bad and, you know, ambition is bad. That seems to have corrected.
为什么当时没有这样一批年轻创始人,而现在有了?我认为原因很多。我觉得美国教育体系经历过一段非常黑暗的时期,而 COVID 又同时发生。我们某种程度上让整整一批人失去了动力,并且告诉他们,我也说不太清楚,大概就是未来会很糟糕,资本主义是坏的,公司是坏的,你知道,野心也是坏的。现在这种情况似乎已经被纠正了。
Jacklyn Dallas:
Yeah, we're back.
是的,我们回来了。
Sam Altman:
We're back.
我们回来了。
Jacklyn Dallas:
There was like a Timothee Chalamet thing that went viral where he was saying how much he wanted to win an award and people were stoked about it. They were like, it's so cool to carry on.
之前有一段 Timothee Chalamet 的内容走红,他在里面说自己有多想赢得一个奖项,人们对此非常兴奋。他们觉得,这种继续追求成就的状态太酷了。
Sam Altman:
That's great.
这很好。
Jacklyn Dallas:
It should never have not been.
它本来就不该一度变得不酷。
Sam Altman:
It should never have not been, but the sort of like that you weren't allowed to be ambitious or to like, yeah, it was really weird, really weird time.
它本来就不该一度变得不酷。但那种好像你不被允许有野心,或者说,是的,那是一段非常奇怪、非常奇怪的时期。
And then I think another thing is startups thrive when there's dynamism and newness and there's a change in the technological landscape. And that happened when the iPhone App Store launched in 2008, I guess. That happened when AWS launched a few years earlier. And then it didn't happen for like a very long time until AI came along. So there was just like a kind of period in the wilderness. There were still successful startups, but not as many as there can be when, you know, there's a real technological shift.
然后我认为另一件事是,当存在活力、新鲜事物,以及技术格局发生变化时,初创公司就会繁荣。我想,2008 年 iPhone App Store 推出时就是这样。再早几年 AWS 推出时也是这样。之后很长一段时间里,这种事情都没有再发生,直到 AI 出现。所以那段时间有点像是在荒野中行走。当然仍然有成功的初创公司,但数量没有真正技术转折出现时那么多。
Jacklyn Dallas:
You said that on your blog seven years ago. You were like, we're due for another technological shift.
你七年前在博客上就说过这一点。你当时说,我们该迎来下一次技术转折了。
Sam Altman:
Got it.
明白。
Jacklyn Dallas:
And you did. And you're the man that made it, which is cool.
而你真的做到了。而且你就是促成这件事的人,这很酷。
Sam Altman:
Thank you.
谢谢。
Jacklyn Dallas:
How do you think about focus now? Like, what are the next areas you want to focus on with AGI? And obviously, you guys like shut down SOAR recently. What are the areas that get the most focus and why?
你现在如何思考专注这件事?比如,在 AGI 上,你接下来最想关注哪些领域?而且很明显,你们最近关闭了 SOAR。哪些领域会得到最多关注,为什么?
Sam Altman:
I think the three most important things for us now are, first, accelerating research. We talked a little bit about this, and it goes from AI research to physics to biology, everything. But accelerate research, because research and scientific understanding matter so much for humanity.
我认为,对我们来说,现在最重要的三件事,第一是加速研究。我们刚才稍微谈到过这一点,这包括从 AI 研究,到物理学、生物学,所有领域。要加速研究,因为研究和科学理解对人类太重要了。
Second is accelerating the economy. We talked about these automated startups, big companies using AI to be more productive, and all of, you know, eventually building the space colonies or whatever.
第二是加速经济。我们谈到过这些自动化初创公司,大公司用 AI 提高生产力,以及你知道的,最终比如建造太空殖民地之类的事情。
And then third is the sort of like personal AGI. ChatGPT was like a little preview of this. Maybe you can type in your medical questions and get some advice. But you would really like, or at least I would really like, an AGI working for me with my whole context, my whole life, all the time. Spending compute to make my life better. Those are the three most important focuses.
第三是某种个人 AGI。ChatGPT 有点像是这一点的小小预览。也许你可以输入自己的医疗问题,然后得到一些建议。但你真正想要的,或者至少我真正想要的,是一个带着我的完整上下文、了解我的整个人生、始终为我工作的 AGI。它会消耗算力来让我的生活变得更好。这就是三个最重要的关注点。
They're shockingly similar in terms of the enabling technology and platform, but those are the areas where I think society will really feel the value.
从底层支撑技术和平台来看,它们惊人地相似。但我认为,正是在这些领域,社会会真正感受到价值。
Jacklyn Dallas:
For scientific breakthroughs specifically, you guys have the foundation where you're focusing on Alzheimer's research. What other areas do you think we can expect breakthroughs, like in the next year?
具体到科学突破,你们有一个基金会,正在关注阿尔茨海默病研究。你认为在其他哪些领域,我们可以期待突破,比如在未来一年内?
Sam Altman:
I would expect the progress in math to be astonishing.
我预计数学方面的进展会非常惊人。
Jacklyn Dallas:
Oh, tell me. Like in what way?
哦,展开讲讲。会以什么方式?
Sam Altman:
I think we'll just discover hugely important new math and solve math problems that seemed out of reach. And like many other times in history, I expect if we discover new math, it'll point the way to new physics and other new cryptography, who knows what real world applications.
我认为我们会发现极其重要的新数学,并解决一些原本看起来遥不可及的数学问题。就像历史上许多次那样,我预期如果我们发现了新的数学,它会指引我们走向新的物理学,以及新的密码学,甚至谁知道还会有什么现实世界的应用。
But I hope we hold ourselves to a higher bar and work on some of like the messier, more difficult scientific understanding that has more of a real world impact. So I don't think we'll get Alzheimer's cured in the next year or even really treated in the next year. But I hope we can start to see like some new promising vectors that we can go push on.
但我希望我们能对自己提出更高标准,去处理那些更混乱、更困难、但对现实世界影响更大的科学理解问题。所以我不认为我们会在未来一年内治愈阿尔茨海默病,甚至也不认为能在未来一年内真正治疗它。但我希望我们能开始看到一些新的、有希望的方向,让我们可以继续推进。
Jacklyn Dallas:
Yeah, I remember Mark Zuckerberg in an interview talked about how when he talks to people that work in AI, they're like, we are going to solve every disease. And then when he talks to doctors, they're like, that is not going to happen. So there's clearly a disconnect in the two fields. How do you think about it?
是的,我记得 Mark Zuckerberg 在一次访谈中谈到,当他和从事 AI 的人交流时,他们会说,我们将解决所有疾病。而当他和医生交流时,医生会说,那不会发生。所以这两个领域之间显然存在脱节。你怎么看?
Sam Altman:
It will take longer than the AI people think and shorter than the doctors think.
它会比 AI 人想象的更久,也会比医生想象的更快。
Jacklyn Dallas:
Love that. Yeah, I totally agree. And I think if you even look back to a few years ago, the types of breakthroughs that are now possible with AI, it's just like we're definitely on an exponential. It seems like longer context windows is going to be super important in that exponential. How do we do that? Is it more compute? What has to happen?
我喜欢这个说法。是的,我完全同意。而且我认为,即便只回头看几年前,现在 AI 已经让某些类型的突破成为可能,这就像是我们确实正处在指数曲线上。更长的上下文窗口似乎会在这条指数曲线中变得极其重要。我们要如何做到这一点?是需要更多算力吗?必须发生什么?
Sam Altman:
I don't think it needs to be like a literal 1 billion or 1 trillion token context window, although I assume we'll be able to do that too. I think what you care about is that somehow the model can effectively understand your whole life or your whole company or all the things you care about. And there have been amazing, they all cost a lot of compute unfortunately or a lot of memory at least, but there have been amazing new methods to use the current context windows, but really figure out the important bits or to use tools to go off and find the less important bits when necessary and make way better use of the same amount of context.
我不认为它需要一个字面意义上的 10 亿或 1 万亿词元上下文窗口,尽管我认为我们也会有能力做到那一点。我认为你真正关心的是,模型能以某种方式有效理解你的整个人生、你的整个公司,或者你关心的所有事情。而且已经出现了一些惊人的新方法,遗憾的是它们全都需要大量算力,或者至少需要大量内存。但这些新方法可以利用现有的上下文窗口,真正找出其中重要的部分,或者在必要时使用工具去外部寻找那些相对不那么重要的部分,从而把同样数量的上下文利用得好得多。
So I think that will keep going with the new model and the things we will add to it in the coming months. I don't want to say it will feel like infinite context, but it feels like, okay, this model really understands a lot. It has way more stuff in its head than I have in mine.
所以我认为,随着新模型以及未来几个月我们会加入新模型的那些东西,这个方向会继续推进。我不想说它会让人感觉像是无限上下文,但它会让人感觉,好吧,这个模型真的理解了很多东西。它脑子里的东西远远多于我脑子里装着的东西。
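The "figure out the important bits" idea above can be illustrated with a toy sketch. This is not OpenAI's actual method, and every name below is invented for illustration: real systems use learned embeddings and tool calls, while this sketch just scores stored chunks by word overlap and greedily packs the highest-scoring ones into a fixed word budget.

上面"找出重要部分"的思路可以用一个玩具示例来说明。这不是 OpenAI 的实际方法,下面所有名称都是为说明而虚构的:真实系统使用学习到的嵌入和工具调用,而这个示例只是按词汇重叠给已存储的文本块打分,再贪心地把得分最高的块装进固定的词数预算。

```python
# Toy sketch of context packing: score stored text chunks against a query
# and greedily fill a fixed budget with the most relevant ones.

def relevance(query: str, chunk: str) -> float:
    """Crude relevance score: fraction of query words present in the chunk."""
    q_words = set(query.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / max(len(q_words), 1)

def pack_context(query: str, chunks: list[str], budget: int) -> list[str]:
    """Pick the most relevant chunks whose combined word count fits the budget."""
    ranked = sorted(chunks, key=lambda c: relevance(query, c), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        cost = len(chunk.split())  # stand-in for a real token count
        if used + cost <= budget:
            picked.append(chunk)
            used += cost
    return picked

# Invented example "life context" notes:
notes = [
    "blood test results from March look normal",
    "marathon training plan for the fall",
    "stress fracture MRI follow-up scheduled",
]
print(pack_context("MRI stress fracture", notes, budget=10))
```

With a larger budget, lower-scoring chunks would be packed in as well; swapping the overlap score for an embedding similarity is the obvious next step.

预算更大时,得分较低的块也会被装入;把重叠打分换成嵌入相似度是显而易见的下一步。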
Jacklyn Dallas:
What's different with the new model? What did you guys change?
新模型有什么不同?你们改了什么?
Sam Altman:
Smarter.
更聪明。
Jacklyn Dallas:
Okay.
好。
Sam Altman:
Faster. More context. And, I don't have the right word for this, more reliability, let's say. Actually, let me say more intuition, not more reliability. It feels like it does a better job of understanding what I actually want, trying a few times, knowing when it's on track and when it's not, and actually getting me the right thing. So the subjective experience is that way more of the time, when I ask the model to do something, it does the right thing.
更快。上下文更多。还有一个,我没有找到准确的词。就说更可靠吧。我觉得它更能……让我换个说法,不是更可靠,而是更有直觉。它给人的感觉是,更能理解我真正想要什么,会尝试几次,知道自己是不是走在正确方向上,然后真正把我需要的东西给我。所以主观体验是,我让模型做某件事时,它做对的次数明显更多了。
Jacklyn Dallas:
Interesting. Because it understands based on its training, like did you guys update algorithms or?
有意思。是因为它基于训练理解得更好了吗?比如你们更新了算法,还是?
Sam Altman:
We have a lot of algorithmic change. It's a newer, better, bigger base model with a different architecture and then or architectural improvements. And then all of the things we've learned about post-training, how people want to use these models and how to kind of connect them to the world, people's systems, people's context to be helpful.
我们有很多算法上的变化。它是一个更新、更好、更大的基础模型,采用了不同的架构,或者说有架构上的改进。然后还有我们在后训练中学到的所有东西,包括人们希望如何使用这些模型,以及如何把它们连接到现实世界、人们的系统和人们的上下文中,从而真正有用。
Jacklyn Dallas:
So in thinking about it, tell me if this is the right understanding. It kind of seems like AI gets better in three ways. It's better algorithms, more data, and then like maybe more energy or more compute. Are those kind of the three things that we can push on?
所以这样理解,你看看对不对。AI 似乎主要通过三种方式变得更好:更好的算法、更多的数据,然后可能是更多能源或者更多算力。我们能够推进的大致就是这三件事吗?
Sam Altman:
Effectively, yes. Though "more data" is a very broad category. Like, you know, do we mean by that literally just more training data, or do we mean we're going to connect it in a loop so that it can learn continuously as you're doing something and it's failing? That's cool. But yeah, broadly speaking, I agree. Those are the three categories.
实质上,是的。只是“更多数据”是一个非常宽泛的类别。比如,你知道,我们说更多数据,指的是字面意义上更多训练数据,还是说,我们会把它接入一个循环,让它在你做某件事并失败时能够持续学习?那也很酷。但总体上,是的,我同意。这就是三大类别。
Jacklyn Dallas:
Which one's the easiest one to have a breakthrough in?
哪一个最容易取得突破?
Sam Altman:
I think building more compute is the most certain one. There's the least science there. It just takes a lot of money and a lot of complex supply chain, but you can just do it. Algorithmic breakthroughs are the highest payoff but the hardest and most uncertain to find and better data is in the middle.
我认为建设更多算力是最确定的一条。这里面的科学问题最少。它只是需要大量资金和非常复杂的供应链,但你就是可以去做。算法突破的回报最高,但最难,也最不确定;更好的数据则处在中间。
Jacklyn Dallas:
Does that tie to like recursive learning like the model teaches itself or is that different?
这和递归学习有关吗?比如模型教自己,还是那是另一回事?
Sam Altman:
It can, yeah. I mean that's totally one way to do it. If the model is, you know, in some sense, if the model is really smart, it can go prove an unproven theorem. And now in the next training run, there's one more thing the model can learn. We have this new proof. That's an example.
可以有关。我的意思是,那完全是一种方式。如果模型,你知道,从某种意义上说,如果模型真的很聪明,它可以去证明一个尚未被证明的定理。然后在下一轮训练中,模型就多了一件可以学习的东西。我们有了这个新的证明。这就是一个例子。
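The loop Sam sketches, where only results that pass a checker feed the next training run, can be shown with a toy example. The "model" and "verifier" below are stand-ins doing simple arithmetic, invented purely for illustration; a real pipeline would use an actual model and a formal proof checker.

Sam 描述的循环,即只有通过校验器的结果才会进入下一轮训练,可以用一个玩具示例展示。下面的"模型"和"校验器"只是做简单算术的替身,纯粹为说明而虚构;真实流水线会使用实际模型和形式化证明校验器。

```python
# Toy sketch of the self-improvement loop: a model proposes answers,
# a verifier checks them, and only verified (problem, answer) pairs
# are appended to the training set for the next run.

def toy_model(problem: tuple[int, int]) -> int:
    """Stand-in for a model proposing an answer; deliberately wrong sometimes."""
    a, b = problem
    return a + b if a % 2 == 0 else a + b + 1  # injects an error when a is odd

def verifier(problem: tuple[int, int], answer: int) -> bool:
    """Ground-truth check, analogous to a proof checker."""
    return answer == problem[0] + problem[1]

def expand_training_set(problems, dataset):
    """Keep only verified results, so bad generations never enter the data."""
    for p in problems:
        ans = toy_model(p)
        if verifier(p, ans):
            dataset.append((p, ans))
    return dataset

data = expand_training_set([(2, 3), (1, 1), (4, 4)], [])
print(data)  # only answers that passed the checker survive
```

The key property is that the verifier, not the model, decides what gets learned from, which is what makes a proved theorem safe new training data.

关键性质在于,决定哪些内容可以被学习的是校验器而不是模型本身,这正是一个被证明的定理可以安全地成为新训练数据的原因。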
Jacklyn Dallas:
Interesting. Are we at that point where the model is improving itself a lot right now or no?
有意思。我们现在已经到了模型正在大量自我改进的阶段了吗,还是还没有?
Sam Altman:
It's so hard to frame that question properly. In some sense, clearly, yes. If our engineers are three times as productive as they used to be because of Codex and they can write the code for the next model faster, using the previous model, you kind of got to count that.
这个问题很难准确表述。从某种意义上说,显然是的。如果因为 Codex,我们的工程师生产力变成过去的三倍,他们可以借助前一个模型更快地为下一个模型写代码,那你某种程度上必须把这也算进去。
Jacklyn Dallas:
I agree.
我同意。
Sam Altman:
And then in the spiritual sense people mean, of like, are we just pushing a button and saying, you know, go make the next model and come up with these new algorithms? The answer is definitely not.
但如果人们是在那种精神意义上说,比如我们是不是只要按一个按钮,然后说,你知道,去做出下一个模型,想出这些新算法?那显然还不是。
Jacklyn Dallas:
I also think about the supply chain, like how do we build all these data centers? Robotics is so exciting. You said that robotics is a big priority for you.
我也在想供应链这一端,比如我们怎么建设所有这些数据中心。机器人太令人兴奋了。你说过机器人是你们的一个重要优先事项。
Sam Altman:
Yeah.
是的。
Jacklyn Dallas:
Can you bring us into your mind? Like, what excites you about robotics? And then what's the roadmap?
你能不能具体想一想,机器人到底哪里让你兴奋?然后路线图是什么?
Sam Altman:
We live in the physical world and, you know, even when we're in the virtual world, as you were saying, we need like this massive complexity in the physical world to enable that. We need to make the chips and build the data centers and, you know, run the power plants and whatever else.
我们生活在物理世界里。而且你知道,即便我们身处虚拟世界,正如你刚才说的,我们也需要物理世界中这种巨大的复杂系统来支撑它。我们需要制造芯片、建设数据中心,你知道,还要运行发电厂,以及其他各种事情。
So a very sad future would be where computers can do these incredible things, but because we didn't figure out robots, we have to like go run around the physical world as like the actuators of the AGI. They'll say, you know, please go move this table and do this and do that. Really bad, really bad. So you gotta have robots.
所以,一个非常悲哀的未来会是:计算机能够做这些不可思议的事情,但因为我们没有解决机器人问题,我们就不得不在物理世界里跑来跑去,充当 AGI 的执行器。它们会说,你知道,请去搬一下这张桌子,去做这个,去做那个。那会非常糟,非常糟。所以你必须要有机器人。
Jacklyn Dallas:
What type of robots do you think will be the best?
你认为哪种机器人会是最好的?
Sam Altman:
I am not that focused on any particular morphology I want, but what I want is like automated manufacturing and the ability to say like, we need more of whatever this thing is. And with the same generality of ChatGPT, a factory of robots that can reconfigure itself and make more of that thing.
我并没有特别关注某一种具体形态。但我想要的是自动化制造,以及这样一种能力:你可以说,我们需要更多某个东西。然后,凭借类似 ChatGPT 那种通用性,一个由机器人组成的工厂能够重新配置自己,并制造出更多那种东西。
Jacklyn Dallas:
Would you think you would ever physically manufacture them or would you partner?
你觉得你们将来会自己实际制造它们,还是会与别人合作?
Sam Altman:
Don't know.
不知道。
Jacklyn Dallas:
Okay. Is AI hardware outside of that a priority to you? Like I know Jony Ive is involved.
好。除此之外,AI 硬件对你来说是优先事项吗?比如我知道 Jony Ive 参与其中。
Sam Altman:
Yeah. Oh, you mean, I thought you're gonna say chips. You mean like consumer AI hardware?
是的。哦,你的意思是,我还以为你要说芯片。你指的是面向消费者的 AI 硬件?
Jacklyn Dallas:
Yeah.
是的。
Sam Altman:
Totally. We were talking earlier about how you want an AI to have all the context in your life.
当然。我们前面谈到过,你会希望一个 AI 拥有你生活中的全部上下文。
Jacklyn Dallas:
Yeah.
是的。
Sam Altman:
And current hardware, which is amazing. I think the iPhone is currently the greatest piece of consumer hardware ever made by a lot, like incredible what that has done. But it was not meant for a world where you needed a piece of hardware that could absorb all of the context of your life. You know, you can use the phone, you can stop using the phone, you can put it in your pocket, but it's kind of like on or off. And when we are not using it, like this has been a very interesting conversation. I would love this to be referenceable by my personal AGI later, but my phone is in my pocket and it's not gonna understand. And I would like a device that, if I wanted to, can participate and understand and know about this conversation.
而现在的硬件已经很了不起了。我认为 iPhone 目前远远是有史以来最伟大的消费级硬件,它所实现的东西令人难以置信。但它并不是为这样一个世界设计的:在这个世界里,你需要一件硬件能够吸收你生活中的全部上下文。你知道,你可以使用手机,也可以停止使用手机,可以把它放进口袋,但它基本上是一种开启或关闭的状态。而当我们没有使用它时,比如这是一场非常有意思的谈话。我希望之后我的个人 AGI 能够引用这段谈话,但我的手机在口袋里,它不会理解这件事。我想要的是一种设备,如果我愿意,它可以参与、理解并知道这场谈话。
Jacklyn Dallas:
Totally. Yeah. I think also it would be interesting to get outside insights. I recently downloaded the transcripts of every... Do you know the podcast Acquired? Yeah. Okay. Love that podcast. And I was trying to reverse engineer what makes their show successful. So I downloaded 400 transcripts from the show, put them into ChatGPT and had it analyze their story structure. And it was amazing.
完全同意。是的。我觉得获得外部洞察也会很有意思。我最近下载了每一集的转录稿……你知道 Acquired 这个播客吗?知道。好。我很喜欢那个播客。我试图反向工程它们的节目为什么成功。所以我下载了这个节目 400 份转录稿,放进 ChatGPT,让它分析它们的叙事结构。结果非常惊人。
And I imagine that you could have similar insights of your own conversations and how you approach things as a leader. Yep. But I also know that when people see like an always-on recorder, there's an ick with it.
我想,你也可以从自己的谈话中获得类似洞察,理解自己作为领导者是如何处理事情的。是的。但我也知道,当人们看到一种“始终开启的录音设备”时,会有一种不舒服的感觉。
Sam Altman:
Totally.
完全是这样。
Jacklyn Dallas:
What do you think?
你怎么看?
Sam Altman:
One of the reasons I initially wanted to talk to Jony is as I was thinking about what hardware for the AI world is going to be and the ick that I feel with technology that is just too present in my life, like even a smart speaker, I thought Jony would have great insight about how to design something that held all of these things in tension and I think it'll do great.
我最初想和 Jony 交流的原因之一,是当我在思考 AI 世界的硬件会是什么样时,也在思考那种技术在我生活中存在感太强所带来的不适感。比如即便是一个智能音箱,也会让我有这种感觉。我觉得 Jony 会对如何设计一种东西有很好的洞察,能够把这些相互拉扯的因素同时处理好。我认为它会做得很好。
Jacklyn Dallas:
What do you think will be like the biggest thing that is misunderstood about your approach?
你觉得人们最容易误解你们做法的地方会是什么?
Sam Altman:
I don't know yet. I'm sure there will be many things we can come back and talk about there.
我现在还不知道。我相信会有很多事情,到时候我们可以再回来谈。
Jacklyn Dallas:
Yeah, interesting. I'm also very interested in like the use of AI kind of in the background like agents. What does that mean and how do you think about it?
是的,有意思。我也非常感兴趣 AI 在后台运行的用法,比如智能体。这意味着什么?你怎么看?
Sam Altman:
When the team first made the Codex app, I put it on my computer and it had this thing that at the time we called YOLO mode. I think we found like a more polished name for it eventually. But you could basically say like you can just run in the background of my computer and do stuff. And you don't have to ask me every time. And I was like, I'm absolutely never going to turn that thing on.
团队最早做出 Codex 应用时,我把它装到了自己的电脑上。里面有一个功能,当时我们叫它 YOLO 模式。我想后来我们给它找了一个更正式的名字。但你基本上可以对它说,你可以在我的电脑后台运行并做事情,不必每一步都问我。我当时想,我绝对不会打开这个东西。
And I lasted a few hours and I got kind of so annoyed by having to like give permission every step I just put it on. And there was this agent, you know, running all over my computer doing stuff in the background. And then pretty soon after that, I like didn't want to close my computer because I want to stop working.
结果我只坚持了几个小时,就开始因为每一步都要授权而感到很烦,于是我把它打开了。然后就有这样一个智能体,你知道,在我的电脑里到处运行,在后台做各种事情。没过多久,我甚至不想合上电脑,因为我不想让它停止工作。
Jacklyn Dallas:
Yeah.
是的。
Sam Altman:
And the transition there was so smooth, so uneventful. I, you know, thought it was kind of crazy, that I was being sort of irresponsible, but here I was. We've since figured out how to make it a more responsible thing to do. Yeah. But I went from thinking I wasn't going to be comfortable to loving the idea that an agent was just running around my computer doing useful stuff.
这个转变非常顺滑,非常平静。我原本觉得这有点疯狂。我当时做得有点不负责任,但事情就是这样发生了。后来我们已经弄清楚,如何让它以更负责任的方式运行。是的。但我从一开始觉得自己肯定不会适应,变成了非常喜欢这样一个智能体在我的电脑里到处运行、做有用事情的想法。
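The two modes contrasted here, per-step permission versus autonomous "YOLO" background execution, can be sketched as a simple gate. The names and structure below are invented for illustration; Codex's real permission system is more involved than a single boolean.

这里对比的两种模式,即逐步授权与自主运行的"YOLO"后台模式,可以用一个简单的门控来勾勒。下面的名称和结构都是为说明而虚构的;Codex 真实的权限系统远比一个布尔值复杂。

```python
# Toy sketch of agent permission modes: every action is gated on user
# approval unless the agent is running autonomously in the background.

from typing import Callable

def run_agent(actions: list[str],
              autonomous: bool,
              approve: Callable[[str], bool]) -> list[str]:
    """Execute actions, asking for approval per step unless autonomous."""
    executed = []
    for action in actions:
        if autonomous or approve(action):
            executed.append(action)  # stand-in for actually doing the work
    return executed

actions = ["triage inbox", "draft replies", "update to-do list"]

# Per-step mode: a cautious user approves only inbox triage.
supervised = run_agent(actions, autonomous=False,
                       approve=lambda a: "inbox" in a)

# Background ("YOLO") mode: everything runs without asking.
background = run_agent(actions, autonomous=True, approve=lambda a: False)
```

The annoyance Sam describes is exactly the `approve` callback firing on every step; flipping `autonomous` to true is the moment of turning YOLO mode on.

Sam 描述的那种烦躁,正是 `approve` 回调在每一步都被触发;把 `autonomous` 置为 true,就是打开 YOLO 模式的那一刻。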
Jacklyn Dallas:
What was it doing for you?
它替你做了什么?
Sam Altman:
Deal with my messages, deal with my email. Eventually, I tried something just like look around my computer and figure out what you can do to be useful to me.
处理我的消息,处理我的邮件。后来,我还尝试过一种指令,大概就是:看看我电脑里的情况,自己判断可以做些什么来帮到我。
Jacklyn Dallas:
Whoa. Did it do anything?
哇。它真的做了什么吗?
Sam Altman:
That time I first tried it, no, but it led me to working on this little project of making this like automatic to-do list.
第一次尝试时,没有。但它促使我开始做一个小项目,想做出某种自动待办事项清单。
Jacklyn Dallas:
That's sick.
这太酷了。
Sam Altman:
Which is sick. Like, autocompleting to-do lists is a very cool thing.
这很酷。比如,自动补全待办事项清单就是一件非常酷的事。
Jacklyn Dallas:
Totally great. Is it built into the ChatGPT homepage?
确实很棒。它内置在 ChatGPT 主页里了吗?
Sam Altman:
No, no, it's just a little program in it. That's cool.
没有,没有,它只是里面的一个小程序。挺酷的。
Jacklyn Dallas:
Yeah, because I feel like I always download different to-do list apps and then they never stick and I end up just like texting myself. That's cool.
是的,因为我感觉自己总是在下载不同的待办事项应用,但最后都坚持不下来,结果还是变成给自己发短信。这个很酷。
Sam Altman:
Yes.
是的。
Jacklyn Dallas:
Do you think that there will be agents that kind of work together? Like, will you have like one agent that's like your personal trainer or will it just be kind of the same?
你觉得未来会有一些智能体彼此协作吗?比如,你会不会有一个像私人教练一样的智能体,还是说它们基本上会是同一个东西?
Sam Altman:
I wonder about this so much. This is like one of the product design questions I would most like an answer to. How people are going to want to work. I suspect that people will have kind of a conceptual model of different agents and then maybe like their kind of personal assistant chief of staff, whatever you want to call it, that coordinates among them a lot of the time.
我经常思考这个问题。这大概是我最想知道答案的产品设计问题之一:人们到底会希望以什么方式工作。我猜,人们会在概念上拥有不同智能体的模型,然后可能还会有一个类似私人助理、幕僚长,或者你想怎么称呼都行的东西,在很多时候负责协调它们。
Jacklyn Dallas:
Okay, so let's say that you and I time travel into the future and we go 2050, which I know is a long way out. It feels like even you can't predict six months away. But I'm curious, if we were to dream together what the future looks like, what are we aiming towards? What's your vision here?
好,那假设你和我一起时间旅行到未来,来到 2050 年。我知道那是很遥远的未来,感觉甚至连你也无法预测六个月之后会怎样。但我很好奇,如果我们一起想象未来的样子,我们到底在朝什么方向努力?你在这里的愿景是什么?
Sam Altman:
Um, like, man, that feels so far away, like kind of almost unimaginable prosperity seems likely. What I hope for but what I think we have to really work for is radical levels of human agency where people can just do and create like beyond anyone's imagination and we avoided the kind of centralization of power tendencies. And then in terms of what the world actually looks like, I don't know, space colonies by then maybe.
嗯,天哪,那感觉太遥远了。不过,某种几乎难以想象的繁荣似乎是可能的。我所希望的,也是我认为我们必须真正努力争取的,是一种极高水平的人类能动性:人们可以去做、去创造,达到超出任何人想象的程度,同时我们也避免权力集中化的倾向。至于世界实际会是什么样子,我不知道,也许到那时会有太空殖民地。
Jacklyn Dallas:
Flying cars?
飞行汽车?
Sam Altman:
Yeah, maybe.
是的,也许。
Jacklyn Dallas:
Floating trains? That'd be cool. All right.
悬浮列车?那会很酷。好吧。
Sam Altman:
I hope it looks like the future.
我希望它看起来真的像未来。
Jacklyn Dallas:
Me too. Yeah, I hope it kind of looks like this. To end this video, have you ever heard about blind ranking? Basically, I'll give you options. You won't know what's coming next, and you have to tier list things. I want to do tech breakthroughs.
我也是。是的,我希望它有点像这样。为了结束这个视频,你听说过盲排吗?基本上,我会给你一些选项,但你不知道后面还会出现什么,然后你必须给这些东西分层排序。我想用技术突破来做这个游戏。
Sam Altman:
Okay, and I have to rank this 1 to 10, and I won't know what's coming.
好,所以我要把它们从 1 到 10 排名,而且我不知道后面会出现什么。
Jacklyn Dallas:
Yeah, so 1 through 5, what's the most important in your mind? But you won't know what's coming next, so it's a challenge. You've got to give yourself some room here.
是的,是从 1 到 5,按你心中最重要的程度排序。但你不知道后面还会出现什么,所以这是个挑战。你得给自己留点空间。
Sam Altman:
Okay, I'm probably going to be really bad at this, but let's try it.
好吧,我可能会排得很糟,但我们试试看。
Jacklyn Dallas:
I'm sure you're going to be amazing.
我相信你会表现得很棒。
Sam Altman:
1 through 5, 1 is the most important.
从 1 到 5,1 是最重要的。
Jacklyn Dallas:
Yes. Okay, I'm going to give you one to start us off, fire.
对。好,我先给你一个作为开头:火。
Sam Altman:
The other thing that's going to be hard is I'm going to think they're all ones.
还有一个难点是,我会觉得它们全都应该排第一。
Jacklyn Dallas:
They're awesome.
它们都很了不起。
Sam Altman:
Three.
第三。
Jacklyn Dallas:
Okay, that's a good answer. The printing press.
好,这是个不错的答案。印刷机。
Sam Altman:
Four.
第四。
Jacklyn Dallas:
Okay. Satellites in space.
好。太空中的卫星。
Sam Altman:
Five.
第五。
Jacklyn Dallas:
I like that, it's so smart. AI.
我喜欢这个,很聪明。AI。
Sam Altman:
One.
第一。
Jacklyn Dallas:
One, okay. Self-driving cars. So your only option left is like...
第一,好。自动驾驶汽车。所以你剩下唯一的选项就是……
Sam Altman:
Two.
第二。
Jacklyn Dallas:
Would you swap any of them now knowing all the options?
现在知道了所有选项,你会调换其中任何一个吗?
Sam Altman:
I would go... AI, fire, printing press, satellite, self-driving cars.
我会这样排:AI、火、印刷机、卫星、自动驾驶汽车。
Jacklyn Dallas:
Good answer. Why AI over fire?
好答案。为什么 AI 排在火前面?
Sam Altman:
Fire was clearly extremely important in human history. I mean, from food to steam engines to warmth in difficult climates, and a lot of other stuff in between. But I would bet that, viewed backwards a hundred or a thousand years from now, they will both be two of the great enabling general-purpose technologies of all time, and AI will have done more in total. Tough to say. If someone wanted to say it the other way, I wouldn't fight them.
火在人类历史上显然极其重要。我的意思是,从食物,到蒸汽机,到在恶劣气候中取暖,以及中间大量其他事情,火都非常重要。但我会打赌,如果从 100 年后或 1000 年后回头看,它们都会是有史以来最伟大的赋能型通用技术之一,而 AI 总体上会做成更多事情。这很难说。如果有人想反过来说,我也不会和他争。
Jacklyn Dallas:
All right, my last question for you. What's the most common thought in your head? What do you think the most every day?
好,我最后一个问题。你脑子里最常出现的想法是什么?你每天想得最多的是什么?
Sam Altman:
At this point, it's been like, what does the successful societal rollout of this look like? Not just the technology, but how do we encourage all of this agency and entrepreneurship? How do we think about what the social contract is going to have to look like? What does it mean to live in a world of declining GDP, even if quality of life is going way up?
在这个阶段,更多是在想,这项技术成功地在社会中展开会是什么样子?不只是技术本身,而是我们如何鼓励所有这些能动性和创业精神?我们如何思考未来社会契约必须是什么样子?如果生活质量大幅提高,但 GDP 却在下降,生活在这样的世界里意味着什么?
How do we get aggressive enough on the supply chain to build out the compute I think we all need for a good and sort of fair future, without breaking the economy in the short term? Those sorts of things.
我们如何在供应链上足够积极,建设出我认为所有人都需要的算力,以实现一个好的、某种程度上公平的未来,同时又不在短期内破坏经济?诸如此类的问题。
Jacklyn Dallas:
Love it. Yeah, you have an interesting, exciting challenge of having to think about the now, but then also thinking about 5, 10, 15 years.
我喜欢这个。是的,你面临一个很有意思、也很令人兴奋的挑战:既要思考当下,又要思考 5 年、10 年、15 年之后。
Sam Altman:
I probably should think about the now a little bit more, but yes.
我可能应该更多想一想当下,不过是的。
Jacklyn Dallas:
Love it. Thanks so much for coming.
很好。非常感谢你来。
Sam Altman:
Thank you very much.
非常感谢。
Jacklyn Dallas:
You're awesome.
你太棒了。
Sam Altman:
Really enjoyed this.
我真的很享受这次交流。
Jacklyn Dallas:
Yeah, this is epic. All right.
是的,这太精彩了。好。
Sam Altman:
That was so fun.
这太有意思了。
Jacklyn Dallas:
Dude, you're awesome.
兄弟,你太棒了。
Sam Altman:
Thank you for doing this.
谢谢你做这件事。