Introduction
引言
Sam Altman
(00:00:00)
I think compute is going to be the currency of the future. I think it’ll be maybe the most precious commodity in the world. I expect that by the end of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, “Wow, that’s really remarkable.” The road to AGI should be a giant power struggle. I expect that to be the case.
我认为算力将成为未来的“货币”,也有可能成为世界上最珍贵的资源。我预计在本十年结束前,甚至更早,我们就会拥有令人惊叹的强大系统。通往 AGI 的道路必将是一场巨大的权力角逐——我认为事情就是如此。
Lex Fridman
(00:00:26)
Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?
谁若最先建成 AGI,就会掌握巨大的权力。你相信自己能够驾驭那样的权力吗?
(00:00:36)
The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day the very company that will build AGI. This is The Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Sam Altman.
下面是我与 Sam Altman 的对话,这是他第二次做客本播客。他是 OpenAI 的首席执行官,OpenAI 研发了 GPT-4、ChatGPT、Sora,也许终有一天会率先实现 AGI。这里是《Lex Fridman 播客》。如要支持本节目,请查看节目介绍中的赞助商信息。现在,朋友们,欢迎 Sam Altman。
OpenAI board saga
OpenAI 董事会风波
(00:01:05)
Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you.
请回顾一下那场始于 11 月 16 日星期四(对你而言可能是 11 月 17 日星期五)的 OpenAI 董事会风波。
Sam Altman
(00:01:13)
That was definitely the most painful professional experience of my life, and chaotic and shameful and upsetting and a bunch of other negative things. There were great things about it too, and I wish it had not been in such an adrenaline rush that I wasn’t able to stop and appreciate them at the time. But I came across this old tweet of mine or this tweet of mine from that time period. It was like going to your own eulogy, watching people say all these great things about you, and just unbelievable support from people I love and care about. That was really nice, really nice. That whole weekend, with one big exception, I felt like a great deal of love and very little hate, even though it felt like I have no idea what’s happening and what’s going to happen here and this feels really bad. And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety. Well, I also think I’m happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI, there was going to be something crazy and explosive that happened, but there may be more crazy and explosive things still to happen. It still, I think, helped us build up some resilience and be ready for more challenges in the future.
那绝对是我职业生涯中最痛苦的经历,混乱、羞耻、难过,还有很多负面情绪交织。在那当中也有一些美好之处,只是当时一切都在肾上腺素飙升中进行,我来不及停下来欣赏。那段时间我翻到自己以前的一条推文,感觉就像提前听到了自己的悼词,看到人们说了那么多夸赞的话,以及来自亲朋好友的不可思议的支持,那真的非常温暖。整个周末(除了一个重大例外)我感受到很多爱,而几乎没有憎恨,尽管我完全不知道发生了什么、将要发生什么,而且感觉非常糟糕。有好几次我觉得这可能是 AI 安全领域最糟糕的事件之一。不过,我也庆幸这件事发生得相对早。我一直以为在 OpenAI 创立到实现 AGI 之间,会出现某种疯狂而爆炸性的事件——也许未来还会有更多——但这次事件仍帮助我们提升了韧性,为未来的挑战做好准备。
Lex Fridman
(00:03:02)
But the thing you had a sense that you would experience is some kind of power struggle?
但是,你之前预感到会经历的,正是一种权力斗争吗?
Sam Altman
(00:03:08)
The road to AGI should be a giant power struggle. The world should… Well, not should. I expect that to be the case.
通往 AGI 的道路注定是一场巨大的权力角逐。世界应当——或许不是“应当”,而是我预计事实会如此。
Lex Fridman
(00:03:17)
And so you have to go through that, like you said, iterate as often as possible in figuring out how to have a board structure, how to have organization, how to have the kind of people that you’re working with, how to communicate all that in order to deescalate the power struggle as much as possible.
因此,正如你所说,你们必须不断迭代,尽可能多地尝试,去探索如何设置董事会结构、如何组织团队、如何选择合作伙伴、如何沟通,以最大限度地缓和这场权力斗争。
Sam Altman
(00:03:37)
Yeah.
嗯。
Lex Fridman
(00:03:37)
Pacify it.
让它平息吧。
Sam Altman
(00:03:38)
But at this point, it feels like something that was in the past that was really unpleasant and really difficult and painful, but we’re back to work and things are so busy and so intense that I don’t spend a lot of time thinking about it. There was a time after, there was this fugue state for the month after, maybe 45 days after, that I was just drifting through the days. I was so out of it. I was feeling so down.
但现在看来,那件事已经成为过去——虽然当时极度糟糕、艰难又痛苦——而我们已重返工作,事务繁忙且紧张,我几乎无暇再去回想。在那之后有一段时间,大约一个月,也许 45 天,我处于恍惚状态,每天浑浑噩噩,情绪极度低落。
Lex Fridman
(00:04:17)
Just on a personal, psychological level?
这是从个人心理层面而言吗?
Sam Altman
(00:04:20)
Yeah. Really painful, and hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and recover for a while. But now it’s like we’re just back to working on the mission.
是的,非常痛苦,并且在那种状态下还要继续运营 OpenAI,真的很难。我只想钻进洞里休养一阵。但如今我们已重新投入使命之中。
Lex Fridman
(00:04:38)
Well, it’s still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future. So there’s value there to go, both the personal psychological aspects of you as a leader, and also just the board structure and all this messy stuff.
不过,回顾那些经历仍很有价值,能让人反思董事会结构、权力动态、公司运营,以及科研与产品开发、资金之间的张力等问题。你很有可能率先实现 AGI,因此若能以更有序、少戏剧化的方式行事就更好了。所以无论从你作为领导者的个人心理角度,还是从董事会结构等复杂事务来看,反思都很有意义。
Sam Altman
(00:05:18)
I definitely learned a lot about structure and incentives and what we need out of a board. And I think that it is valuable that this happened now in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. My company very nearly got destroyed. And we think a lot about many of the other things we’ve got to get right for AGI, but thinking about how to build a resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer, I think that’s super important.
我确实在结构、激励以及我们需要什么样的董事会方面学到了很多。从某种意义上说,这件事现在发生是有价值的。我认为这可能不会是 OpenAI 经历的最后一次高压时刻,但当时确实压力巨大,公司几乎被摧毁。我们一直在思考为了 AGI 需要做对的许多事情,但如何打造一个有韧性的组织、建立能承受巨大外部压力的结构——而这种压力会随着我们接近目标而增大——同样至关重要。
Lex Fridman
(00:06:01)
Do you have a sense of how deep and rigorous the deliberation process by the board was? Can you shine some light on just human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates and why don’t we fire Sam kind of thing?
你是否了解董事会当时的讨论过程有多深入、严谨?能否透露一下此类情形下的人际动态?是不是仅仅几次对话,事情就突然升级到“干脆解雇 Sam”这样的程度?
Sam Altman
(00:06:22)
I think the board members are well-meaning people on the whole, and I believe that in stressful situations where people feel time pressure or whatever, people understandably make suboptimal decisions. And I think one of the challenges for OpenAI will be we’re going to have to have a board and a team that are good at operating under pressure.
我认为董事会成员整体上都是出于善意的。我相信在压力大、时间紧迫之类的情境中,人们难免会做出次优决策。我想 OpenAI 面临的挑战之一是必须拥有一个善于在压力下运作的董事会和团队。
Lex Fridman
(00:07:00)
Do you think the board had too much power?
你认为董事会权力过大吗?
Sam Altman
(00:07:03)
I think boards are supposed to have a lot of power, but one of the things that we did see is in most corporate structures, boards are usually answerable to shareholders. Sometimes people have super voting shares or whatever. In this case, and I think one of the things with our structure that we maybe should have thought about more than we did is that the board of a nonprofit has, unless you put other rules in place, quite a lot of power. They don’t really answer to anyone but themselves. And there’s ways in which that’s good, but what we’d really like is for the board of OpenAI to answer to the world as a whole, as much as that’s a practical thing.
我认为董事会理应拥有很大权力。但在大多数公司结构中,董事会通常要向股东负责,有时会有超额投票权等安排。而在我们的结构下,如果没有额外规则,非营利组织的董事会实际上拥有相当大的权力,他们只对自己负责。这在某些方面有其好处,但我们真正希望的是让 OpenAI 的董事会尽可能对全球公众负责,尽管这在实践中并非易事。
Lex Fridman
(00:07:44)
So there’s a new board announced.
所以现在已经宣布了一个新董事会。
Sam Altman
(00:07:46)
Yeah.
是的。
Lex Fridman
(00:07:47)
There’s I guess a new smaller board at first, and now there’s a new final board?
最初是一个规模更小的新董事会,现在是不是又有了一个最终版的董事会?
Sam Altman
(00:07:53)
Not a final board yet. We’ve added some. We’ll add more.
还不是最终名单,我们已经新增了一些成员,后续还会再增。
Lex Fridman
(00:07:56)
Added some. Okay. What is fixed in the new one that was perhaps broken in the previous one?
已经新增了成员。好。那么新董事会修复了旧董事会中哪些可能存在的问题?
Sam Altman
(00:08:05)
The old board got smaller over the course of about a year. It was nine and then it went down to six, and then we couldn’t agree on who to add. And the board also I think didn’t have a lot of experienced board members, and a lot of the new board members at OpenAI just have more experience as board members. I think that’ll help.
旧董事会在一年时间里不断缩小,从九人减到六人,之后我们又无法就增补人选达成一致。旧董事会的成员中缺乏富有经验的董事,而新加入的 OpenAI 董事中有不少人拥有更丰富的董事会经验。我认为这会有所帮助。
Lex Fridman
(00:08:31)
It’s been criticized, some of the people that are added to the board. I heard a lot of people criticizing the addition of Larry Summers, for example. What’s the process of selecting the board? What’s involved in that?
外界对新增董事人选提出了一些批评,比如我听到很多人批评拉里·萨默斯的加入。你们的董事会遴选流程是怎样的?涉及哪些考量?
Sam Altman
(00:08:43)
So Brett and Larry were decided in the heat of the moment over this very tense weekend, and that weekend was a real rollercoaster. It was a lot of ups and downs. And we were trying to agree on new board members that both the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members. Brett, I think I had even previous to that weekend suggested, but he was busy and didn’t want to do it, and then we really needed help in [inaudible 00:09:22]. We talked about a lot of other people too, but I felt like if I was going to come back, I needed new board members. I didn’t think I could work with the old board again in the same configuration, although we then decided, and I’m grateful that Adam would stay, but we considered various configurations, decided we wanted to get to a board of three and had to find two new board members over the course of a short period of time.
所以 Brett 和 Larry 的加入是在那个高度紧张的周末仓促决定的,那真是一段过山车般的时光,起伏不断。当时我们想要敲定新董事人选,让管理团队和原董事会都能接受。实际上,Larry 是原董事会提出的人选之一。至于 Brett,我在那个周末之前就推荐过,但他当时很忙,不愿意出任;后来我们实在需要帮助,就再次联系他。我们也讨论过许多其他人选,但我觉得如果我要回到公司,就必须有新的董事会成员。我认为无法在原来的配置下继续与旧董事会合作。虽然我们后来决定并且很感激 Adam 留任,但我们还讨论了多种方案,最后确定董事会缩至三人,必须在短时间内再找两名新董事。
(00:09:57)
So those were decided honestly without… You do that on the battlefield. You don’t have time to design a rigorous process then. For new board members since, and new board members we’ll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have. Unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of governance and thoughtfulness well, and so, one thing that Brett says which I really like is that we want to hire board members in slates, not as individuals one at a time. And thinking about a group of people that will bring nonprofit expertise, expertise at running companies, good legal and governance expertise, that’s what we’ve tried to optimize for.
坦白说,这些决定都是在“战场”上做出的,当时没时间制定严谨的流程。此后已加入及未来将加入的新董事,我们设定了一些重要标准,希望董事会具备多元专长。与招聘高管只需要其胜任单一岗位不同,董事会必须整体履行治理和深思熟虑的职责。Brett 有句话我很赞同:我们要“成批”而非单个地选聘董事。我们优化的方向是让整个团队兼具非营利机构经验、企业运营经验,以及良好的法律和治理专长。
Lex Fridman
(00:10:49)
So is technical savvy important for the individual board members?
那么,单个董事的技术敏锐度重要吗?
Sam Altman
(00:10:52)
Not for every board member, but for certainly some you need that. That’s part of what the board needs to do.
并非每位董事都需要技术专长,但至少要有几位具备。这是董事会职责的一部分。
Lex Fridman
(00:10:57)
The interesting thing that people probably don’t understand about OpenAI, I certainly don’t, is all the details of running the business. When they think about the board, given the drama, they think about you. They think about if you reach AGI or you reach some of these incredibly impactful products and you build them and deploy them, what’s the conversation with the board like? And they think, all right, what’s the right squad to have in that kind of situation to deliberate?
人们或许不了解 OpenAI 的业务运营细节——我自己也不完全了解。当提到董事会,鉴于那些戏剧性事件,人们就会想到你。他们会想,如果你们实现 AGI 或推出极具影响力的产品并部署它们,董事会会如何讨论?在那种情形下,该由哪些人组成合适的团队来审议?
Sam Altman
(00:11:25)
Look, I think you definitely need some technical experts there. And then you need some people who are like, “How can we deploy this in a way that will help people in the world the most?” And people who have a very different perspective. I think a mistake that you or I might make is to think that only the technical understanding matters, and that’s definitely part of the conversation you want that board to have, but there’s a lot more about how that’s going to just impact society and people’s lives that you really want represented in there too.
我认为确实需要一些技术专家,同时也需要一些人能思考“如何以最有利于全球大众的方式部署这些技术”,以及拥有截然不同视角的人。你我可能犯的错误是只关注技术理解,但这只是董事会讨论的一部分。更重要的是技术将如何影响社会和人们的生活,你也需要有人在董事会中代表这些视角。
Lex Fridman
(00:11:56)
Are you looking at the track record of people or you’re just having conversations?
你们更看重候选人的过往履历,还是主要通过对话来评估?
Sam Altman
(00:12:00)
Track record is a big deal. You of course have a lot of conversations, but there are some roles where I totally ignore track record and just look at slope, ignore the Y-intercept.
履历当然很重要,我们也会进行大量沟通。但有些岗位我完全忽略过往成绩,只看“斜率”,不看“截距”。
Lex Fridman
(00:12:18)
Thank you. Thank you for making it mathematical for the audience.
谢谢,谢谢你为听众把问题“数学化”了。
Sam Altman
(00:12:21)
For a board member, I do care much more about the Y-intercept. I think there is something deep to say about track record there, and experience is something that’s very hard to replace.
但对董事人选而言,我更在意“截距”。履历确实反映了深层价值,经验是难以替代的。
Lex Fridman
(00:12:32)
Do you try to fit a polynomial function or exponential one to the track record?
那你会用多项式还是指数函数来拟合履历曲线?
Sam Altman
(00:12:36)
That analogy doesn’t carry that far.
这个比喻就别延伸那么远了。
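(Aside: to make the slope-versus-intercept metaphor concrete, here is a toy Python sketch with made-up numbers; the candidates and their scores are purely hypothetical.)
(旁注:为了让“斜率与截距”的比喻更具体,下面给出一段使用虚构数字的 Python 示意;其中的候选人和评分纯属假设。)

    import numpy as np

    # Two hypothetical candidates' yearly performance scores (made-up data).
    years = np.arange(5)
    veteran = np.array([8.0, 8.1, 8.2, 8.2, 8.3])  # high intercept, low slope
    rookie = np.array([2.0, 3.5, 5.0, 6.5, 8.0])   # low intercept, high slope

    for name, scores in [("veteran", veteran), ("rookie", rookie)]:
        slope, intercept = np.polyfit(years, scores, 1)  # degree-1 linear fit
        print(f"{name}: slope={slope:.2f}, intercept={intercept:.2f}")

    # For some roles you ignore the Y-intercept and hire on slope; for a
    # board seat, per the conversation, the intercept (experience) wins.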
Lex Fridman
(00:12:39)
All right. You mentioned some of the low points that weekend. What were some of the low points psychologically for you? Did you consider going to the Amazon jungle and just taking ayahuasca and disappearing forever?
好吧。你提到那个周末的一些低谷。从心理层面来说,你经历了哪些最低点?有没有想过干脆去亚马逊雨林喝点死藤水,从此隐退?
Sam Altman
(00:12:53)
It was a very bad period of time. There were great high points too. My phone was just nonstop blowing up with nice messages from people I worked with every day, people I hadn’t talked to in a decade. I didn’t get to appreciate that as much as I should have because I was just in the middle of this firefight, but that was really nice. But on the whole, it was a very painful weekend. It was like a battle fought in public to a surprising degree, and that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but this one really was. The board did this Friday afternoon. I really couldn’t get much in the way of answers, but I also was just like, well, the board gets to do this, so I’m going to think for a little bit about what I want to do, but I’ll try to find the blessing in disguise here.
那段时间非常糟糕。当然也有高光时刻:手机不停收到来自同事和多年未联系朋友的暖心信息,只是我当时置身“枪林弹雨”,没能好好体会。但总体而言,那是非常痛苦的周末,几乎是一场公开进行的战斗,远比我预想的要消耗精力。我知道争斗总是让人疲惫,但这次尤其如此。董事会在周五下午做出决定,我几乎得不到任何答案。我想,既然董事会有权这么做,我就先思考下一步,但也试着在逆境中寻找转机。
(00:13:52)
And I was like, well, my current job at OpenAI is, or it was, to run a decently sized company at this point. And the thing I’d always liked the most was just getting to work with the researchers. And I was like, yeah, I can just go do a very focused AGI research effort. And I got excited about that. Didn’t even occur to me at the time possibly that this was all going to get undone. This was Friday afternoon.
我当时想,在 OpenAI 我的工作是运营一家颇具规模的公司,而我最喜欢的其实是与研究人员一起工作。我就想,也许我可以专注做一个 AGI 研究项目,这让我感到兴奋。当时根本没想到这一切最终会被推翻——那还是周五下午的事。
Lex Fridman
(00:14:19)
So you’ve accepted the death of this-
所以你已经接受了这件事的终结——
Sam Altman
(00:14:22)
Very quickly. Very quickly. I went through a little period of confusion and rage, but very quickly, quickly. And by Friday night, I was talking to people about what was going to be next, and I was excited about that. I think it was Friday evening for the first time that I heard from the exec team here, which is like, “Hey, we’re going to fight this.” and then I went to bed just still being like, okay, excited. Onward.
很快,非常快。我经历了一段短暂的困惑和愤怒,但转眼就过去了。到周五晚上,我已经在跟人讨论接下来要做什么,而且对此感到兴奋。那天晚间,我第一次接到公司管理层的消息,说:“嘿,我们要抗争到底。”随后我就带着“好的,继续前进”的兴奋心情入睡。
Lex Fridman
(00:14:52)
Were you able to sleep?
你当时睡得着吗?
Sam Altman
(00:14:54)
Not a lot. One of the weird things was there was this period of four and a half days where I didn’t sleep much, didn’t eat much, and still had a surprising amount of energy. You learn a weird thing about adrenaline in wartime.
睡得不多。有件怪事:那四天半里,我几乎没怎么睡觉、也没怎么吃东西,却仍然精力充沛。人在“战时”会体会到肾上腺素的奇效。
Lex Fridman
(00:15:09)
So you accepted the death of this baby, OpenAI.
所以你接受了这个“孩子”OpenAI 的死亡。
Sam Altman
(00:15:13)
And I was excited for the new thing. I was just like, “Okay, this was crazy, but whatever.”
而且我对新的事物感到兴奋。我心想:“好吧,这太疯狂了,但无所谓。”
Lex Fridman
(00:15:17)
It’s a very good coping mechanism.
这是一个很好的应对机制。
Sam Altman
(00:15:18)
And then Saturday morning, two of the board members called and said, “Hey, we didn’t mean to destabilize things. We don’t want to destroy a lot of value here. Can we talk about you coming back?” And I immediately didn’t want to do that, but I thought a little more and I was like, well, I really care about the people here, the partners, shareholders. I love this company. And so I thought about it and I was like, “Well, okay, but here’s the stuff I would need.” And then the most painful time of all was over the course of that weekend, I kept thinking and being told, and not just me, the whole team here kept thinking, well, we were trying to keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit whatever.
接着到周六早上,有两位董事给我打电话说:“嘿,我们并不想让局面动荡,也不想毁掉这里的巨大价值。能谈谈你回来的事吗?”我本能地不想回去,但仔细想想,我很在乎这里的人、伙伴和股东,我热爱这家公司。于是我思考后说:“好吧,但我需要满足以下条件。”最痛苦的是那个周末期间,我和团队不断被告知、也不断在想:我们在努力稳定 OpenAI,而外界却试图分裂它,到处有人挖人。
(00:16:04)
We kept being told, all right, we’re almost done. We’re almost done. We just need a little bit more time. And it was this very confusing state. And then Sunday evening when, again, every few hours I expected that we were going to be done and we’re going to figure out a way for me to return and things to go back to how they were. The board then appointed a new interim CEO, and then I was like, that feels really bad. That was the low point of the whole thing. I’ll tell you something. It felt very painful, but I felt a lot of love that whole weekend. Other than that one moment Sunday night, I would not characterize my emotions as anger or hate, but I felt a lot of love from people, towards people. It was painful, but the dominant emotion of the weekend was love, not hate.
我们不停被告知:“快好了,快好了,只差一点点时间。”整个人处于极度困惑中。到了周日晚上,我本以为几小时内能解决、让我回去、一切恢复原状,却得知董事会任命了新的临时 CEO,那感觉非常糟糕——这是整件事的最低谷。但我要说,那周末虽然痛苦,却充满爱意。除了周日晚上的那一刻,我的情绪并非愤怒或仇恨,而是感受到人与人之间的爱。痛苦归痛苦,但主导情感是爱,而非恨。
Lex Fridman
(00:17:04)
You’ve spoken highly of Mira Murati, that she helped especially, as you put in the tweet, in the quiet moments when it counts. Perhaps we could take a bit of a tangent. What do you admire about Mira?
你曾高度评价 Mira Murati,说她尤其在关键但安静的时刻给予帮助。我们稍微岔开话题——你欣赏 Mira 什么?
Sam Altman
(00:17:15)
Well, she did a great job during that weekend in a lot of chaos, but people often see leaders in the crisis moments, good or bad. But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning and in just the normal drudgery of the day-to-day. How someone shows up in a meeting, the quality of the decisions they make. That was what I meant about the quiet moments.
她在那个充满混乱的周末表现出色,但人们通常只看到领导者在危机时刻的表现。对我来说,更重要的是他们在一个乏味的周二早上 9:46 分、日复一日的平淡工作中如何行动:开会时的状态、做决策的质量。所谓“安静的时刻”正是指这些。
Lex Fridman
(00:17:47)
Meaning most of the work is done on a day-by-day, in meeting-by-meeting. Just be present and make great decisions.
也就是说,大多数工作都是在日常、在一次次会议中完成的——只要到场并做出正确决定。
Sam Altman
(00:17:58)
Yeah. Look, what you have wanted to spend the last 20 minutes about, and I understand, is this one very dramatic weekend, but that’s not really what OpenAI is about. OpenAI is really about the other seven years.
没错。你这二十分钟里关注的是那个戏剧性的周末,我理解,但 OpenAI 的核心并不在此,而在其他七年里的点点滴滴。
Lex Fridman
(00:18:10)
Well, yeah. Human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still that’s something people focus on.
是的。人类文明的全部也不是德军入侵苏联,但人们仍会关注那样的事件。
Sam Altman
(00:18:18)
Very understandable.
完全可以理解。
Lex Fridman
(00:18:19)
It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage in some of the triumphs of human civilization can happen in those moments, so it’s illustrative. Let me ask you about Ilya. Is he being held hostage in a secret nuclear facility?
这类极端时刻能让我们洞见人性、看到人性的极端,也揭示人类文明成败都可能在此刻发生,因此具有启示意义。换个话题——我想问问 Ilya。他是不是被关在某个秘密核设施里当人质?
Ilya Sutskever
Sam Altman
(00:18:36)
No.
没有。
Lex Fridman
(00:18:37)
What about a regular secret facility?
那普通的秘密设施呢?
Sam Altman
(00:18:39)
No.
也没有。
Lex Fridman
(00:18:40)
What about a nuclear non-secret facility?
那公开的核设施?
Sam Altman
(00:18:41)
Neither. Not that either.
都不是,完全没有。
Lex Fridman
(00:18:44)
This is becoming a meme at some point. You’ve known Ilya for a long time. He was obviously part of this drama with the board and all that kind of stuff. What’s your relationship with him now?
这件事已经有点像梗了。你和 Ilya 认识很久,他显然也卷入了这场董事会风波。你们现在的关系怎样?
Sam Altman
(00:18:57)
I love Ilya. I have tremendous respect for Ilya. I don’t have anything I can say about his plans right now. That’s a question for him, but I really hope we work together for certainly the rest of my career. He’s a little bit younger than me. Maybe he works a little bit longer.
我爱 Ilya,对他非常敬重。关于他接下来的计划我不便代言,这得问他本人。但我真心希望在我职业生涯的余下时间里都能与他共事。他比我稍年轻,也许会工作更久一点。
Lex Fridman
(00:19:15)
There’s a meme that he saw something, like he maybe saw AGI and that gave him a lot of worry internally. What did Ilya see?
有个梗说他看到了什么,比如看到了 AGI,所以心里非常担忧。Ilya 到底看到了什么?
Sam Altman
(00:19:28)
Ilya has not seen AGI. None of us have seen AGI. We’ve not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. And as we continue to make significant progress, Ilya is one of the people that I’ve spent the most time over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission. So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.
Ilya 没有见过 AGI,我们谁都没有,我们还没有造出 AGI。我欣赏 Ilya 的一点是,他非常严肃地看待 AGI 及其安全问题,包括对社会影响等广义议题。过去几年里,随着我们取得重大进展,Ilya 是我交流最多的人之一——讨论这意味着什么、我们需要做什么才能确保事情正确推进、使命得以实现。所以他并未见到 AGI,但他对如何把事情做正确的深思和担忧,是人类的财富。
Lex Fridman
(00:20:30)
I’ve had a bunch of conversations with him in the past. I think when he talks about technology, he’s always doing this long-term thinking type of thing. So he is not thinking about what this is going to be in a year. He’s thinking about in 10 years, just thinking from first principles like, “Okay, if this scales, what are the fundamentals here? Where’s this going?” And so that’s a foundation for his thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why he’s been quiet? Is it he’s just doing some soul-searching?
过去我跟他聊过很多次。我发现他谈技术总是以长远视角思考,不只看一年后的情况,而是思考十年后——从第一性原理出发,问“如果规模扩大,底层是什么?方向在哪?”这也成为他思考其他安全问题的基础,使他成为很有趣的谈话对象。你知道他最近为何沉默吗?是在进行某种内省吗?
Sam Altman
(00:21:08)
Again, I don’t want to speak for Ilya. I think that you should ask him that. He’s definitely a thoughtful guy. I think Ilya is always on a soul search in a really good way.
我还是不想替 Ilya 发言,这问题最好问他本人。他确实是个深思熟虑的人。我认为 Ilya 一直都在以一种积极的方式自我探索。
Lex Fridman
(00:21:27)
Yes. Yeah. Also, he appreciates the power of silence. Also, I’m told he can be a silly guy, which I’ve never seen that side of him.
是的,他也懂得沉默的力量。我听说他有时很“搞笑”,但我从未见过那一面。
Sam Altman
(00:21:36)
It’s very sweet when that happens.
当他展现那一面时真的很可爱。
Lex Fridman
(00:21:39)
I’ve never witnessed a silly Ilya, but I look forward to that as well.
我从没见过“搞笑版”的 Ilya,但也很期待。
Sam Altman
(00:21:43)
I was at a dinner party with him recently and he was playing with a puppy and he was in a very silly mood, very endearing. And I was thinking, oh man, this is not the side of Ilya that the world sees the most.
前不久一次晚宴上,他逗一只小狗,整个人特别搞怪,特别讨人喜欢。我当时想,哇,这可不是大众最常见的 Ilya。
Lex Fridman
(00:21:55)
So just to wrap up this whole saga, are you feeling good about the board structure-
那么回到这场风波的收尾,你对董事会的结构感觉——
Sam Altman
(00:21:55)
Yes.
很好。
Lex Fridman
(00:22:01)
… about all of this and where it’s moving?
——以及事情的发展方向感到满意吗?
Sam Altman
(00:22:04)
I feel great about the new board. In terms of the structure of OpenAI, one of the board’s tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don’t have, I think, super deep things to say. It was a crazy, very painful experience. I think it was a perfect storm of weirdness. It was a preview for me of what’s going to happen as the stakes get higher and higher and the need that we have robust governance structures and processes and people. I’m happy it happened when it did, but it was a shockingly painful thing to go through.
我对新董事会非常满意。关于 OpenAI 的组织结构,新董事会的任务之一就是评估并强化其稳健性。我们先把新董事会成员就位,这个过程也让我们深刻认识到结构的重要性。我没有太多高深的感想——那是一段疯狂且痛苦的经历,可谓“完美风暴”。这对我来说是一个预演:随着风险和利益越发重大,我们需要更强大的治理结构、流程和人才。我庆幸它在此时发生,但经历过程的痛苦依旧令人震惊。
Lex Fridman
(00:22:47)
Did it make you be more hesitant in trusting people?
这件事是否让你在信任他人方面变得更犹豫?
Sam Altman
(00:22:50)
Yes.
是的。
Lex Fridman
(00:22:51)
Just on a personal level?
只是从个人层面来说?
Sam Altman
(00:22:52)
Yes. I think I’m like an extremely trusting person. I’ve always had a life philosophy of don’t worry about all of the paranoia. Don’t worry about the edge cases. You get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me. I was so caught off guard that it has definitely changed, and I really don’t like this, it’s definitely changed how I think about just default trust of people and planning for the bad scenarios.
是的。我原本是一个极度信任他人的人。我的人生哲学一直是:别为偏执担心,别为极端情况忧虑。放下戒心的代价只是偶尔吃点亏。但这次事件让我震惊不已,完全措手不及,确实改变了我对“默认信任他人”和“预设坏情况”的看法,这一点让我很不喜欢。
Lex Fridman
(00:23:21)
You got to be careful with that. Are you worried about becoming a little too cynical?
这点要小心。你担心自己会变得过于愤世嫉俗吗?
Sam Altman
(00:23:26)
I’m not worried about becoming too cynical. I think I’m the extreme opposite of a cynical person, but I’m worried about just becoming less of a default trusting person.
我不担心会变得太愤世嫉俗,我正好是那类人的反面。但我担心自己会不再那么“默认信任”他人。
Lex Fridman
(00:23:36)
I’m actually not sure which mode is best to operate in for a person who’s developing AGI, trusting or un-trusting. It’s an interesting journey you’re on. But in terms of structure, see, I’m more interested on the human level. How do you surround yourself with humans that are building cool shit, but also are making wise decisions? Because the more money you start making, the more power the thing has, the weirder people get.
对于正在研发 AGI 的人来说,究竟是信任还是不信任模式更好,我也说不准。你的旅程很有意思。不过就结构而言,我更关注人这一层面:你如何让自己周围既聚集能做出酷东西,又能做出明智决策的人?因为赚钱越多、权力越大,人就越可能变得古怪。
Sam Altman
(00:24:06)
I think you could make all kinds of comments about the board members and the level of trust I should have had there, or how I should have done things differently. But in terms of the team here, I think you’d have to give me a very good grade on that one. And I have just enormous gratitude and trust and respect for the people that I work with every day, and I think being surrounded with people like that is really important.
关于董事会成员以及我该有的信任程度,或者我本应不同做法,你可以提出各种观点。但就团队而言,我想你得给我打个高分。我对每天共事的这些人充满感激、信任和尊重。我认为身边有这样的人非常重要。
Elon Musk lawsuit
Elon Musk 诉讼
Lex Fridman
(00:24:39)
Our mutual friend Elon sued OpenAI. What to you is the essence of what he’s criticizing? To what degree does he have a point? To what degree is he wrong?
我们的共同朋友 Elon 起诉了 OpenAI。你觉得他批评的核心是什么?他有多少道理,又有多少不对?
Sam Altman
(00:24:52)
I don’t know what it’s really about. We started off just thinking we were going to be a research lab and having no idea about how this technology was going to go. Because it was only seven or eight years ago, it’s hard to go back and really remember what it was like then, but this is before language models were a big deal. This was before we had any idea about an API or selling access to a chatbot. It was before we had any idea we were going to productize at all. So we’re like, “We’re just going to try to do research and we don’t really know what we’re going to do with that.” I think with many fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turned out to be wrong.
我不清楚他到底在意什么。最初我们只是觉得自己会成为一家研究实验室,完全不知道这项技术会怎样发展。那也就七八年前的事,如今很难回想当时的情景——那时语言模型还没被重视;我们完全没想到 API 或出售聊天机器人的使用权;甚至没想过要产品化。因此我们想:“先做研究吧,未来怎么用再说。”我认为,很多根本性的新事物都是在黑暗中摸索,带着一些假设前行,而大多数假设最后被证明是错的。
(00:25:31)
And then it became clear that we were going to need to do different things and also have huge amounts more capital. So we said, “Okay, well, the structure doesn’t quite work for that. How do we patch the structure?” And then you patch it again and patch it again and you end up with something that does look eyebrow-raising, to say the least. But we got here gradually with, I think, reasonable decisions at each point along the way. And it doesn’t mean I wouldn’t do it totally differently if we could go back now with an Oracle, but you don’t get the Oracle at the time. But anyway, in terms of what Elon’s real motivations here are, I don’t know.
后来我们意识到,必须做不同的事情,而且需要大量资金。于是我们说:“好吧,现有结构不适用,怎么给它打补丁?”然后一次次补丁,最后得到的东西至少看上去让人挑眉。不过我们是一路上在每个节点做出合理决策才走到今天的。如果现在能带着“预言机”回去,我或许会完全不同操作,但当时没有预言机。至于 Elon 真正的动机,我也不清楚。
Lex Fridman
(00:26:12)
To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it?
据你回忆,OpenAI 在那篇博客中是如何回应的?你能总结一下吗?
Sam Altman
(00:26:21)
Oh, we just said Elon said this set of things. Here’s our characterization, or here’s not our characterization. Here’s the characterization of how this went down. We tried to not make it emotional and just say, “Here’s the history.”
哦,我们只是列出了 Elon 的说法,然后给出我们的描述,或者说并非“我们的描述”,而是事情经过的客观描述。我们尽量不掺杂情绪,只是陈述“事情的来龙去脉”。
Lex Fridman
(00:26:44)
I do think there’s a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys are a small group of researchers crazily talking about AGI when everybody’s laughing at that thought.
我确实觉得 Elon 在某个点上有些误解,也就是你刚提到的“不确定性”:当时你们只是一小群研究人员在疯狂讨论 AGI,而大家对此都嗤之以鼻。
Sam Altman
(00:27:09)
It wasn’t that long ago Elon was crazily talking about launching rockets when people were laughing at that thought, so I think he’d have more empathy for this.
没多久前,Elon 也在疯狂谈论发射火箭,大家也在嘲笑他,所以我觉得他本该对这种情况更能共情。
Lex Fridman
(00:27:20)
I do think that there’s personal stuff here, that there was a split that OpenAI and a lot of amazing people here chose to part ways with Elon, so there’s a personal-
我认为这里面有个人层面的因素:OpenAI 以及许多出色的人选择与 Elon 分道扬镳,因此带有个人——
Sam Altman
(00:27:34)
Elon chose to part ways.
是 Elon 自己选择分开的。
Lex Fridman
(00:27:37)
Can you describe that exactly? The choosing to part ways?
能具体描述一下吗?“选择分开”是怎样的?
Sam Altman
(00:27:42)
He thought OpenAI was going to fail. He wanted total control to turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times, he wanted to make OpenAI into a for-profit company that he could have control of or have it merge with Tesla. We didn’t want to do that, and he decided to leave, which that’s fine.
他认为 OpenAI 会失败,想完全掌控以扭转局面。我们想继续走后来成为 OpenAI 的这条路。他还希望 Tesla 能组建 AGI 团队。多次提出把 OpenAI 变成由他控制的营利公司,或与 Tesla 合并。我们不想这么做,于是他决定离开,也没问题。
Lex Fridman
(00:28:06)
So you’re saying, and that’s one of the things that the blog post says, is that he wanted OpenAI to be basically acquired by Tesla in the same way that, or maybe something similar or maybe something more dramatic than the partnership with Microsoft.
也就是说,博客里提到,他希望 Tesla 收购 OpenAI,程度与微软合作类似,或许更激进?
Sam Altman
(00:28:23)
My memory is the proposal was just like, yeah, get acquired by Tesla and have Tesla have full control over it. I’m pretty sure that’s what it was.
我记得提案就是让 Tesla 直接收购 OpenAI,并全面掌控,大概就是这样。
Lex Fridman
(00:28:29)
So what does the word open in OpenAI mean to Elon at the time? Ilya has talked about this in the email exchanges and all this kind of stuff. What does it mean to you at the time? What does it mean to you now?
那么“OpenAI”里的“Open”在当时对 Elon 意味着什么?Ilya 在邮件里也谈过。对你当时意味着什么?现在又意味着什么?
Sam Altman
(00:28:44)
Speaking of going back with an Oracle, I’d pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we’re doing is putting powerful technology in the hands of people for free, as a public good.
如果能带着“预言机”回去,我会换个名字。我认为 OpenAI 正在做的所有事情中最重要的一件,就是把强大的技术作为公共产品免费交到人们手中。
(00:29:01)
We don’t run ads on our free version. We don’t monetize it in other ways. We just say it’s part of our mission. We want to put increasingly powerful tools in the hands of people for free and get them to use them. I think that kind of open is really important to our mission. I think if you give people great tools and teach them to use them or don’t even teach them, they’ll figure it out, and let them go build an incredible future for each other with that, that’s a big deal. So if we can keep putting free or low cost or free and low cost powerful AI tools out in the world, I think that’s a huge deal for how we fulfill the mission. Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer.
我们的免费版没有广告,也不靠其他手段变现。我们说,这就是使命的一部分:不断把更强大的工具免费交到人们手中,并让他们使用。我认为这种意义上的“开放”对使命至关重要。如果给人们优秀的工具,让他们自学或甚至无需教学,他们就能用这些工具为彼此构建惊人的未来——这意义重大。所以,只要我们能持续把免费或低价的强力 AI 工具推向世界,对完成使命就是巨大助力。至于开源与否——我认为有些东西该开源,有些不该。这个话题常被当成宗教战线,难谈细节,但细节和权衡才是正确答案。
Lex Fridman
(00:29:55)
So he said, “Change your name to CloseAI and I’ll drop the lawsuit.” I mean is it going to become this battleground in the land of memes about the name?
他说:“把名字改成 CloseAI 我就撤诉。”——难道这会变成关于名字的表情包战场吗?
Sam Altman
(00:30:06)
I think that speaks to the seriousness with which Elon means the lawsuit, and that’s like an astonishing thing to say, I think.
我想这说明 Elon 对这场诉讼的“严肃”程度——我觉得这话本身就很惊人。
Lex Fridman
(00:30:23)
Maybe correct me if I’m wrong, but I don’t think the lawsuit is legally serious. It’s more to make a point about the future of AGI and the company that’s currently leading the way.
如果我错了请纠正,但我觉得这起诉讼在法律层面并不严谨,更像是为了表达对 AGI 未来及行业领头公司的看法。
Sam Altman
(00:30:37)
Look, I mean Grok had not open sourced anything until people pointed out it was a little bit hypocritical and then he announced that Grok will open source things this week. I don’t think open source versus not is what this is really about for him.
要知道,在被指出“有点虚伪”之前,Grok 并未开源任何东西,后来才宣布本周会开源。我认为开源与否并不是他真正关注的焦点。
Lex Fridman
(00:30:48)
Well, we will talk about open source and not. I do think maybe criticizing the competition is great. Just talking a little shit, that’s great. But friendly competition versus like, “I personally hate lawsuits.”
好吧,我们可以聊聊开源与否。我觉得批评竞争对手没问题,吐吐槽也不错。但我更喜欢友好的竞争,而不是——我个人讨厌诉讼。
Sam Altman
(00:31:01)
Look, I think this whole thing is unbecoming of a builder. And I respect Elon as one of the great builders of our time. I know he knows what it’s like to have haters attack him and it makes me extra sad he’s doing it to us.
说实话,这一切并不像一个“建设者”应有的做法。我尊敬 Elon,他是当代伟大的建设者之一,他深知被黑的感受,如今他却这样对我们,这让我格外难过。
Lex Fridman
(00:31:18)
Yeah, he’s one of the greatest builders of all time, potentially the greatest builder of all time.
是的,他是史上最伟大的建设者之一,甚至可能是最伟大的。
Sam Altman
(00:31:22)
It makes me sad. And I think it makes a lot of people sad. There’s a lot of people who’ve really looked up to him for a long time. I said in some interview or something that I missed the old Elon and the number of messages I got being like, “That exactly encapsulates how I feel.”
这让我难过,也让很多人难过。很多人长期敬仰他。我曾在一次采访中说过我怀念以前的 Elon,结果收到了无数信息,说“这正是我的感受”。
Lex Fridman
(00:31:36)
I think he should just win. He should just make X Grok beat GPT and then GPT beats Grok and it’s just the competition and it’s beautiful for everybody. But on the question of open source, do you think there’s a lot of companies playing with this idea? It’s quite interesting. I would say Meta surprisingly has led the way on this, or at least took the first step in the game of chess of really open sourcing the model. Of course it’s not the state-of-the-art model, but open sourcing Llama. Google is flirting with the idea of open sourcing a smaller version. What are the pros and cons of open sourcing? Have you played around with this idea?
我觉得他应该靠实力取胜,让 X Grok 超越 GPT,然后 GPT 再超越 Grok,这样的竞争对大家都好。说到开源,你觉得现在很多公司都在尝试这个想法吗?这很有意思。Meta 出人意料地走在前面,至少在真正开源模型这盘棋中走出了第一步,虽然开源的 Llama 并非最先进的模型。Google 也在考虑开源一个较小的版本。你怎么看开源的利弊?你们有没有尝试过这个想法?
Sam Altman
(00:32:22)
Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally, I think there’s huge demand for. I think there will be some open source models, there will be some closed source models. It won’t be unlike other ecosystems in that way.
是的,开源模型肯定有其价值,尤其是可以本地运行的小模型,需求巨大。我认为未来既会有开源模型,也会有闭源模型,这和其他生态系统类似。
Lex Fridman
(00:32:39)
I listened to the All-In Podcast talking about this lawsuit and all that kind of stuff. They were more concerned about the precedent of going from nonprofit to this cap for profit. What precedent does that set for other startups? Is that something-
我听了 All-In 播客对这场诉讼的讨论。他们更担心从非营利转为“有限营利”这种先例,会给其他初创企业带来什么影响。你怎么看——
Sam Altman
(00:32:56)
I would heavily discourage any startup that was thinking about starting as a nonprofit and adding a for-profit arm later. I’d heavily discourage them from doing that. I don’t think we’ll set a precedent here.
我强烈不建议任何初创公司先做非营利、后加营利实体。我非常不推荐这么做。我不认为我们会因此树立什么可借鉴的先例。
Lex Fridman
(00:33:05)
Okay. So most startups should go just-
大多数创业公司应该直接——
Sam Altman
(00:33:08)
For sure.
没错。
Lex Fridman
(00:33:09)
And again-
再说一遍——
Sam Altman
(00:33:09)
If we knew what was going to happen, we would’ve done that too.
如果当初知道会发展成这样,我们也会那样做。
Lex Fridman
(00:33:12)
Well in theory, if you dance beautifully here, there’s some tax incentives or whatever, but…
理论上,如果操作得完美,可能会有税收优惠之类的好处,但……
Sam Altman
(00:33:19)
I don’t think that’s how most people think about these things.
我觉得大多数人并不会这么考虑问题。
Lex Fridman
(00:33:22)
It’s just not possible to save a lot of money for a startup if you do it this way.
按照这种方式,创业公司根本省不了多少钱。
Sam Altman
(00:33:27)
No, I think there’s laws that would make that pretty difficult.
不会的,相关法律本身就会让这变得相当困难。
Lex Fridman
(00:33:30)
Where do you hope this goes with Elon? This tension, this dance, what do you hope this? If we go 1, 2, 3 years from now, your relationship with him on a personal level too, like friendship, friendly competition, just all this kind of stuff.
你希望和 Elon 的情况接下来如何发展?这种拉锯、这种对抗——如果放眼一两三年后,你们个人层面的关系呢?比如友谊、良性竞争等等,你希望怎样?
Sam Altman
(00:33:51)
Yeah, I really respect Elon and I hope that years in the future we have an amicable relationship.
是的,我非常尊敬 Elon,希望未来几年我们能保持友好关系。
Lex Fridman
(00:34:05)
Yeah, I hope you guys have an amicable relationship this month and just compete and win and explore these ideas together. I do suppose there’s competition for talent or whatever, but it should be friendly competition. Just build cool shit. And Elon is pretty good at building cool shit. So are you.
我希望你们本月就能和好,然后一起竞争、一起获胜、共同探索这些想法。的确会有人才等方面的竞争,但应该是友好的竞争——专注把酷东西做出来。Elon 很擅长造酷东西,你也一样。
Sora
(00:34:32)
So speaking of cool shit, Sora. There’s like a million questions I could ask. First of all, it’s amazing. It truly is amazing on a product level but also just on a philosophical level. So let me just technical/philosophical ask, what do you think it understands about the world more or less than GPT-4 for example? The world model when you train on these patches versus language tokens.
说到酷东西,就谈谈 Sora。我有上百万个问题。首先,它令人惊叹——不仅在产品层面,也在哲学层面。所以技术/哲学兼问:比如和 GPT-4 相比,你觉得它对世界的理解多了还是少了?当你用视频片段而不是语言标记来训练时,它的世界模型如何?
Sam Altman
(00:35:04)
I think all of these models understand something more about the world model than most of us give them credit for. And because they’re also very clear things they just don’t understand or don’t get right, it’s easy to look at the weaknesses, see through the veil and say, “Ah, this is all fake.” But it’s not all fake. It’s just some of it works and some of it doesn’t work.
我认为这些模型对世界的理解都比我们大多数人想象的要深一些。因为它们也的确存在明显的不理解或错误之处,人们很容易盯着弱点、看穿表象后说:“啊,这都是假的。” 但并非全是假的——只是有些地方运作良好,有些地方尚未奏效。
(00:35:28)
I remember when I started first watching Sora videos and I would see a person walk in front of something for a few seconds and occlude it and then walk away and the same thing was still there. I was like, “Oh, this is pretty good.” Or there’s examples where the underlying physics looks so well represented over a lot of steps in a sequence, it’s like, “Oh, this is quite impressive.” But fundamentally, these models are just getting better and that will keep happening. If you look at the trajectory from DALL·E 1 to 2 to 3 to Sora, there were a lot of people that dunked on each version saying it can’t do this, it can’t do that, and look at it now.
我记得第一次看 Sora 的视频时,看到有人物走到前景挡住物体几秒,然后走开,那个物体依旧保持一致,我就想:“哦,这挺不错。” 还有些例子,序列中的底层物理呈现得非常到位,我会觉得:“哇,相当厉害。” 但从根本上说,这些模型只会越来越好,而且这种进步会持续。看看从 DALL·E 1 到 2、3 再到 Sora 的轨迹,每一版都有很多人吐槽“它做不到这个、做不到那个”,但现在再看看成果。
Lex Fridman
(00:36:04)
Well, the thing you just mentioned with the occlusions is basically modeling the three-dimensional physics of the world sufficiently well to capture those kinds of things.
你刚提到的遮挡问题,本质上就是要足够准确地建模世界的三维物理规律,才能捕捉到那类现象。
Sam Altman
(00:36:17)
Well…
嗯……
Lex Fridman
(00:36:18)
Or yeah, maybe you can tell me, in order to deal with occlusions, what does the world model need to?
或者说,你能不能解释一下,为了处理遮挡,世界模型需要具备什么?
Sam Altman
(00:36:24)
Yeah. So what I would say is it’s doing something to deal with occlusions really well. To say that it has a great underlying 3D model of the world, that’s a little bit more of a stretch.
是的。我的说法是,它确实在处理遮挡方面表现不错。但要说它拥有出色的世界三维底层模型,这就有点夸张了。
Lex Fridman
(00:36:33)
But can you get there through just these kinds of two-dimensional training data approaches?
但仅靠这种二维训练数据的方法,能达到那一步吗?
Sam Altman
(00:36:39)
It looks like this approach is going to go surprisingly far. I don’t want to speculate too much about what limits it will surmount and which it won’t, but…
看起来这种方法能走得出乎意料的远。我不想过度推测它能够跨越哪些极限,哪些又跨越不了,但……
Lex Fridman
(00:36:46)
What are some interesting limitations of the system that you’ve seen? I mean there’s been some fun ones you’ve posted.
你看到这个系统有哪些有趣的局限?你发过一些很有意思的例子。
Sam Altman
(00:36:52)
There’s all kinds of fun. I mean, cats sprouting an extra limb at random points in a video. Pick what you want, but there’s still a lot of problems, a lot of weaknesses.
花样很多,比如视频里猫会随机长出额外的肢体之类。随便挑吧,问题还不少,弱点很多。
Lex Fridman
(00:37:02)
Do you think it’s a fundamental flaw of the approach or is it just bigger model or better technical details or better data, more data is going to solve the cat sprouting [inaudible 00:37:19]?
你觉得这是方法的根本缺陷,还是说更大的模型、更好的技术细节、更多更优的数据就能解决“猫长奇怪肢体”这种问题?
Sam Altman
(00:37:19)
I would say yes to both. I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also I think it’ll get better with scale.
两者皆是。我觉得这种方法确实与人类的思考和学习方式有所不同;同时,模型规模变大也会让它更好。
Lex Fridman
(00:37:30)
Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches, so it converts all visual data, diverse kinds of visual data, videos and images, into patches. Is the training, to the degree you can say, fully self-supervised? Is there some manual labeling going on? What’s the involvement of humans in all this?
像我说过的,LLM 用的是文本 token,而 Sora 用视觉 patch,把各种视频图像数据转换为 patch。训练过程在多大程度上是完全自监督的?是否有人为标注?人类在其中扮演什么角色?
Sam Altman
(00:37:49)
I mean without saying anything specific about the Sora approach, we use lots of human data in our work.
不具体谈 Sora 的方法,我们的工作确实用到了大量人工数据。
Lex Fridman
(00:38:00)
But not internet scale data? So lots of humans. Lots is a complicated word, Sam.
但不是互联网规模的数据?所以是大量人工?“大量”这个词很复杂,Sam。
Sam Altman
(00:38:08)
I think lots is a fair word in this case.
在这里用“大量”这个词挺合适的。
Lex Fridman
(00:38:12)
Because to me, “lots”… Listen, I’m an introvert and when I hang out with three people, that’s a lot of people. Four people, that’s a lot. But I suppose you mean more than…
因为对我来说,“大量”……我是个内向的人,跟三个人一起就算人多了,四个人更是。但你说的“大量”应该超过……
Sam Altman
(00:38:21)
More than three people work on labeling the data for these models, yeah.
是的,有不止三个人给这些模型做数据标注。
Lex Fridman
(00:38:24)
Okay. Right. But fundamentally, there’s a lot of self-supervised learning. Because what you mentioned in the technical report is internet-scale data. That’s another beautiful… It’s like poetry. So it’s a lot of data that’s not human-labeled. It’s self-supervised in that way?
好吧。但根本上说,还是大量自监督学习。技术报告里提到“互联网规模数据”,听起来像诗句——意味着大量非人工标注数据,用自监督方式学习?
Sam Altman
(00:38:44)
Yeah.
对。
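(Aside: Sora’s actual pipeline is not public, but a minimal sketch of what “turning video into patches” could mean, assuming a simple ViT-style spacetime grid, might look like this in Python.)
(旁注:Sora 的真实流程并未公开,但如果假设采用简单的 ViT 式时空网格,“把视频转成 patch”大致可以用下面的 Python 代码示意。)

    import numpy as np

    def video_to_patches(video, patch_t=2, patch_h=16, patch_w=16):
        """video: array of shape (T, H, W, C) -> (num_patches, patch_dim)."""
        T, H, W, C = video.shape
        # Crop so each dimension divides evenly into whole patches.
        T, H, W = T - T % patch_t, H - H % patch_h, W - W % patch_w
        v = video[:T, :H, :W]
        v = v.reshape(T // patch_t, patch_t,
                      H // patch_h, patch_h,
                      W // patch_w, patch_w, C)
        # Group the grid axes together, then flatten each patch to a vector.
        v = v.transpose(0, 2, 4, 1, 3, 5, 6)
        return v.reshape(-1, patch_t * patch_h * patch_w * C)

    frames = np.random.rand(8, 64, 64, 3)  # 8 RGB frames of a 64x64 video
    patches = video_to_patches(frames)
    print(patches.shape)                   # (64, 1536): 64 spacetime patches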
Lex Fridman
(00:38:45)
And then the question is, how much data is there on the internet that could be used in this that is conducive to this kind of self-supervised way, if only we knew the details of the self-supervision. Have you considered opening up a few more details?
问题是,互联网中有多少数据适合这种自监督方式?如果我们了解自监督细节……你们考虑过披露更多细节吗?
Sam Altman
(00:39:02)
We have. You mean for Sora specifically?
我们考虑过。你是指专门对 Sora 公开?
Lex Fridman
(00:39:04)
Sora specifically. Because it’s so interesting: can the same magic of LLMs now start moving towards visual data, and what does it take to do that?
是的,专门针对 Sora。因为很有趣:LLM 的魔力是否能迁移到视觉数据?要做到这一点需要什么?
Sam Altman
(00:39:18)
I mean it looks to me like yes, but we have more work to do.
看起来答案是肯定的,但我们还有很多工作要做。
Lex Fridman
(00:39:22)
Sure. What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?
好的。那么有哪些风险?你为什么担心发布这个系统?它可能带来哪些危险?
Sam Altman
(00:39:29)
I mean frankly speaking, one thing we have to do before releasing the system is just get it to work at a level of efficiency that will deliver the scale people are going to want from this so that I don’t want to downplay that. And there’s still a ton ton of work to do there. But you can imagine issues with deepfakes, misinformation. We try to be a thoughtful company about what we put out into the world and it doesn’t take much thought to think about the ways this can go badly.
坦白说,在发布系统前我们首先得让它的效率达到人们期望的大规模水平,这一点不容小觑,而其中仍有大量工作要做。除此之外,还会出现深度伪造、错误信息等问题。我们一直在谨慎思考向世界推出的东西,而这种技术可能导致的负面后果其实并不难想象。
Lex Fridman
(00:40:05)
There’s a lot of tough questions here, you’re dealing in a very tough space. Do you think training AI should be or is fair use under copyright law?
这里有很多棘手问题,你们正处在一个非常艰难的领域。你认为用受版权保护的数据训练 AI 是否属于合理使用?
Sam Altman
(00:40:14)
I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it, and that I think the answer is yes. I don’t know yet what the answer is. People have proposed a lot of different things. We’ve tried some different models. But if I’m like an artist for example, A, I would like to be able to opt out of people generating art in my style. And B, if they do generate art in my style, I’d like to have some economic model associated with that.
这个问题背后的本质是:创造有价值数据的人是否应该获得某种补偿?我认为答案是肯定的。至于具体怎么做,目前还没有定论。大家提出过很多方案,我们也尝试了不同模式。举例来说,如果我是一位艺术家:第一,我希望能选择不允许他人生成我的风格;第二,如果他们确实生成了我的风格,我希望能有相应的经济分成模式。
Lex Fridman
(00:40:46)
Yeah, it’s that transition from CDs to Napster to Spotify. We have to figure out some kind of model.
是的,就像从 CD 到 Napster 再到 Spotify 的转变一样,我们得找出某种新模式。
Sam Altman
(00:40:53)
The model changes but people have got to get paid.
模式可以变,但创作者必须得到报酬。
Lex Fridman
(00:40:55)
Well, there should be some kind of incentive if we zoom out even more for humans to keep doing cool shit.
从更宏观的角度看,人类得有动力继续做酷东西。
Sam Altman
(00:41:02)
Of everything I worry about, humans are going to do cool shit and society is going to find some way to reward it. That seems pretty hardwired. We want to create, we want to be useful, we want to achieve status in whatever way. That’s not going anywhere I don’t think.
在我所有担忧中,人类继续做酷东西并且社会找到方式奖励他们,这似乎是写在基因里的需求。我们想创造、想有用、想获得某种地位——我认为这不会改变。
Lex Fridman
(00:41:17)
But the reward might not be monetary financially. It might be fame and celebration of other cool-
但这种奖励可能不一定是金钱,也可能是名誉或被他人认可——
Sam Altman
(00:41:25)
Maybe financial in some other way. Again, I don’t think we’ve seen the last evolution of how the economic system’s going to work.
也可能以其他形式体现为经济回报。我想我们还没看到经济体系演化的终点。
Lex Fridman
(00:41:31)
Yeah, but artists and creators are worried. When they see Sora, they’re like, “Holy shit.”
是的,但艺术家和创作者很担心。他们看到 Sora 时会想:“天哪!”
Sam Altman
(00:41:36)
Sure. Artists were also super worried when photography came out and then photography became a new art form and people made a lot of money taking pictures. I think things like that will keep happening. People will use the new tools in new ways.
当然。摄影诞生时艺术家也非常担心,但后来摄影成了一种新艺术形态,很多人靠拍照赚了不少钱。这种事情会不断发生,人们会用新工具创造新的用法。
Lex Fridman
(00:41:50)
If we just look on YouTube or something like this, how much of that will be using Sora like AI generated content, do you think, in the next five years?
如果看 YouTube 之类的平台,你觉得未来五年有多少内容会使用类似 Sora 的 AI 生成?
Sam Altman
(00:42:01)
People talk about how many jobs is AI going to do in five years. The framework that people have is, what percentage of current jobs are just going to be totally replaced by some AI doing the job? The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do on over one time horizon. So if you think of all of the five-second tasks in the economy, five-minute tasks, the five-hour tasks, maybe even the five-day tasks, how many of those can AI do? I think that’s a way more interesting, impactful, important question than how many jobs AI can do because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks and let people operate at a higher level of abstraction. So maybe people are way more efficient at the job they do. And at some point that’s not just a quantitative change, but it’s a qualitative one too about the kinds of problems you can keep in your head. I think that for videos on YouTube it’ll be the same. Many videos, maybe most of them, will use AI tools in the production, but they’ll still be fundamentally driven by a person thinking about it, putting it together, doing parts of it. Sort of directing and running it.
人们常问五年内 AI 会替代多少工作。常见框架是“当前工作岗位有多少百分比会被 AI 完全取代”。我更关注的不是 AI 能做多少工作,而是它能在某个时间尺度上完成多少“任务”。想象经济中所有 5 秒、5 分钟、5 小时,甚至 5 天的任务,AI 能做多少?我认为这比问 AI 能替代多少岗位更有趣、更重要。因为 AI 作为工具,会在更长时间跨度、以更高复杂度完成越来越多任务,让人类在更高抽象层次上工作。所以也许人类做同样工作会高效得多。这不仅是量变,也会带来质变,改变我们能在脑中处理的问题类型。放到 YouTube 上也是如此:许多视频、甚至大多数视频会在制作环节使用 AI 工具,但核心仍由人来构思、组合、完成部分工作,进行导演和把控。
Lex Fridman
(00:43:18)
Yeah, it’s so interesting. I mean it’s scary, but it’s interesting to think about. I tend to believe that humans like to watch other humans or other human humans-
这真的很有趣,也挺吓人,但值得思考。我倾向于认为人类喜欢观看其他人类,或“充满人味”的内容——
Sam Altman
(00:43:27)
Humans really care about other humans a lot.
人类确实非常关心其他人类。
Lex Fridman
(00:43:29)
Yeah. If there’s a cooler thing that’s better than a human, humans care about that for two days and then they go back to humans.
没错。如果出现某个比人类更酷的东西,人们会关注两天,然后又回到关注人类。
Sam Altman
(00:43:39)
That seems very deeply wired.
这种倾向似乎深深刻在本能里。
Lex Fridman
(00:43:41)
It’s the whole chess thing, “Oh, yeah,” but now let’s everybody keep playing chess. And let’s ignore the elephant in the room that humans are really bad at chess relative to AI systems.
就像国际象棋的情况,“哦,是啊”,但大家还是继续下棋。只不过我们都忽视了一个显而易见的事实:相较于 AI,人在下棋方面真的很糟糕。
Sam Altman
(00:43:52)
We still run races and cars are much faster. I mean there’s a lot of examples.
可我们仍然跑步比赛,尽管汽车快得多。类似的例子还有很多。
Lex Fridman
(00:43:56)
Yeah. And maybe it’ll just be tooling in the Adobe suite type of way where it can just make videos much easier and all that kind of stuff.
对。或许它最终会像 Adobe 套件里的工具一样,让制作视频等事情变得更简单。
Lex Fridman
(00:44:07)
Listen, I hate being in front of the camera. If I can figure out a way to not be in front of the camera, I would love it. Unfortunately, it’ll take a while. That generating faces, it is getting there, but generating faces in video format is tricky when it’s specific people versus generic people.
说真的,我讨厌出镜。如果能找到不出镜的方法我会很开心。可惜这还需要时间。AI 生成面孔确实在进步,但要在视频里生成特定人物的面孔仍然很棘手,比起生成通用面孔要难得多。
GPT-4
Lex Fridman
(00:44:24)
Let me ask you about GPT-4. There’s so many questions. First of all, also amazing. Looking back, it’ll probably be this kind of historic pivotal moment with 3, 5 and 4 which ChatGPT.
让我问问关于 GPT-4 的事。我有太多问题。首先,它也非常惊艳。回头看,GPT-3、GPT-4(以及未来的 5?)和 ChatGPT 可能都是历史性的关键时刻。
Sam Altman
(00:44:40)
Maybe five will be the pivotal moment. I don’t know. Hard to say that looking forward.
也许 GPT-5 才会成为那个关键时刻,谁知道呢?向前看很难断言。
Lex Fridman
(00:44:44)
We’ll never know. That’s the annoying thing about the future, it’s hard to predict. But for me, looking back, GPT-4, ChatGPT is pretty damn impressive, historically impressive. So allow me to ask, what’s been the most impressive capabilities of GPT-4 to you and GPT-4 Turbo?
我们永远无法确定,这就是未来让人烦的地方——难以预测。但在我看来,回顾过去,GPT-4 和 ChatGPT 的确令人震撼、具有历史意义。那么我想请教,你认为 GPT-4 及 GPT-4 Turbo 最令人印象深刻的能力是什么?
Sam Altman
(00:45:06)
I think it kind of sucks.
我觉得它其实挺糟糕的。
Lex Fridman
(00:45:08)
Typical human also, gotten used to an awesome thing.
这就是人类——习惯了了不起的东西就嫌弃。
Sam Altman
(00:45:11)
No, I think it is an amazing thing, but relative to where we need to get to and where I believe we will get to, at the time of GPT-3, people are like, “Oh, this is amazing. This is a marvel of technology.” And it is, it was. But now we have GPT-4 and look at GPT-3 and you’re like, “That’s unimaginably horrible.” I expect that the delta between 5 and 4 will be the same as between 4 and 3 and I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck looking backwards at them and that’s how we make sure the future is better.
不,我觉得它确实很棒,但与我们需要达到、且我相信终会达到的目标相比,它还远远不够。当年 GPT-3 问世时,大家都惊呼“太神了,技术奇迹”。它确实了不起。可现在有了 GPT-4,再看 GPT-3 就觉得“简直难以置信的糟糕”。我预计 GPT-5 与 4 的差距会和 4 与 3 一样大。我们的任务就是活在几年后的心态里,记住现在的工具回头看都会显得很糟,这样才能确保未来更好。
Lex Fridman
(00:45:59)
What are the most glorious ways in which GPT-4 sucks? Meaning-
GPT-4 最“精彩”的糟糕之处是什么?我指的是——
Sam Altman
(00:46:05)
What are the best things it can do?
还是说它最擅长做什么?
Lex Fridman
(00:46:06)
What are the best things it can do and the limits of those best things that allow you to say it sucks, therefore gives you an inspiration and hope for the future?
它最擅长的事以及这些长处的局限是什么?正因为有局限你才说它糟糕,也因此让你对未来抱有希望,对吗?
Sam Altman
(00:46:16)
One thing I’ve been using it for more recently is sort of like a brainstorming partner.
我最近常把它当头脑风暴伙伴来用。
Lex Fridman
(00:46:23)
Yep, [inaudible 00:46:25] for that.
对,这方面确实不错。
Sam Altman
(00:46:25)
There’s a glimmer of something amazing in there. When people talk about it, what it does, they’re like, “Oh, it helps me code more productively. It helps me write more faster and better. It helps me translate from this language to another,” all these amazing things, but there’s something about the kind of creative brainstorming partner, “I need to come up with a name for this thing. I need to think about this problem in a different way. I’m not sure what to do here,” that I think gives a glimpse of something I hope to see more of.
这里面闪现了一些令人惊艳的东西。当人们谈论它的能力时,会说“它帮我写代码更高效”“它让我写作更快更好”“它帮我做翻译”等等,都很了不起。但在创造性头脑风暴方面,比如“我要给这东西起名”“我要换个角度思考问题”“我不知道该怎么做”,它展现出的潜力让我觉得未来可期。
Sam Altman
(00:47:03)
One of the other things that you can see a very small glimpse of is when it can help on longer horizon tasks, break down something in multiple steps, maybe execute some of those steps, search the internet, write code, whatever, put that together. When that works, which is not very often, it’s very magical.
另一个微小但能看到的一瞥是:当它在更长周期的任务上帮忙,把事情拆成多步,甚至执行其中一些步骤、上网搜索、写代码等等,然后整合起来。当它成功的时候——虽然并不常见——真的是非常神奇。
Lex Fridman
(00:47:24)
The iterative back and forth with a human, it works a lot for me. What do you mean it-
我和模型来回迭代、与人互动的方式对我帮助很大。你指的“它——”是什么意思?
Sam Altman
(00:47:29)
Iterative back and forth with a human, that works. I meant when it can go do a 10-step problem on its own.
与人来回迭代没问题。我指的是它能独立完成一个 10 步问题的那种情况。
Lex Fridman
(00:47:33)
Oh.
哦。
Sam Altman
(00:47:34)
It doesn’t work for that too often, sometimes.
不过目前它不常能做到这一点,只是偶尔可以。
Lex Fridman
(00:47:37)
Add multiple layers of abstraction or do you mean just sequential?
是指加入多层抽象,还是仅仅顺序拆解?
Sam Altman
(00:47:40)
Both, to break it down and then do things at different layers of abstraction and put them together. Look, I don’t want to downplay the accomplishment of GPT-4, but I don’t want to overstate it either. And I think the point is that we are on an exponential curve; we’ll look back relatively soon at GPT-4 like we look back at GPT-3 now.
二者皆是——既要把问题拆解,也要在不同抽象层次上完成并整合。GPT-4 的成就我既不想贬低,也不想夸大。关键是我们正身处指数曲线上,不久之后回望 GPT-4,就会像今天回望 GPT-3 一样。
Lex Fridman
(00:48:03)
That said, I mean ChatGPT was a transition point to where people started to believe; there was an uptick of believing, and not just internally at OpenAI.
话虽如此,ChatGPT 的出现让大众开始相信这种技术——这种“信念增长”并不仅限于 OpenAI 内部。
Sam Altman
(00:48:04)
For sure.
确实如此。
Lex Fridman
(00:48:16)
Perhaps there’s believers here, but when you think of-
或许这里的人本就相信,但如果你考虑——
Sam Altman
(00:48:19)
And in that sense, I do think it was a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface. And by the interface and product, I also mean the post-training of the model and how we tune it to be helpful to you and how to use it, rather than the underlying model itself.
因此,我认为那确实是全球从“不信”到“信”的转折点,关键在于 ChatGPT 这个界面——包括模型的后训练、我们如何调优才能真正帮助到用户、让他们学会使用,而不仅仅是底层模型本身。
Lex Fridman
(00:48:38)
How much of each of those things is important? The underlying model, and the RLHF or something of that nature that tunes it to be more compelling to the human, more effective and productive for the human.
这些因素各占多大权重?底层模型本身与 RLHF 等调优步骤——它们让模型更能打动人、更高效地服务人类。
Sam Altman
(00:48:55)
I mean they’re both super important, but the RLHF, the post-training step, the little wrapper of things that we do on top of the base model, little from a compute perspective even though it’s a huge amount of work, that’s really important, to say nothing of the product that we build around it. In some sense, we did have to do two things. We had to invent the underlying technology and then we had to figure out how to make it into a product people would love, which is not just about the actual product work itself, but this whole other step of how you align it and make it useful.
两者都极其重要。但 RLHF 这一后训练环节——从计算角度看只是“薄薄一层包装”,其实工作量巨大——同样关键,更不用说我们围绕它打造的产品了。从某种意义上说,我们做了两件事:先发明底层技术,再把它变成人人喜欢的产品,这不仅是产品本身的工作,还包括对齐模型、让它真正有用的整套流程。
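As a rough illustration of the post-training idea, here is a toy best-of-n loop against a hand-written scoring function. This is a much simpler stand-in for RLHF, where the scorer would be a reward model learned from human preferences; nothing here is OpenAI’s actual method:

# Best-of-n sampling against a toy scoring function: a much simpler
# stand-in for the RLHF idea of steering a base model's outputs toward
# what a learned preference model rates as helpful. The generator and
# scorer below are hypothetical placeholders.
import random

CANDIDATES = [
    "Sure, here is a step-by-step answer...",
    "idk",
    "As an AI I cannot...",
    "Here is a concise, sourced answer...",
]

def base_model_sample():
    # Stand-in for sampling a completion from a raw base model.
    return random.choice(CANDIDATES)

def reward(text):
    # Toy preference score: longer, more helpful-sounding replies win.
    return len(text) + (10 if "step-by-step" in text or "sourced" in text else 0)

def best_of_n(n=8):
    samples = [base_model_sample() for _ in range(n)]
    return max(samples, key=reward)

print(best_of_n())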
Lex Fridman
(00:49:37)
And how you make the scale work where a lot of people can use it at the same time. All that kind of stuff.
还要解决大规模并发使用的问题等等。
Sam Altman
(00:49:42)
And that. But that was a known difficult thing. We knew we were going to have to scale it up. We had to go do two things that had never been done before that were both I would say quite significant achievements and then a lot of things like scaling it up that other companies have had to do before.
是的,这本就是已知的难题——我们早知必须扩展规模。我们先后完成了两件前所未有的大事,都算重大成就;随后还做了很多像扩容这样其他公司也做过的事。
Lex Fridman
(00:50:01)
How does the context window compare, going from 8K tokens in GPT-4 to 128K in GPT-4 Turbo?
从 GPT-4 到 GPT-4 Turbo,上下文窗口从 8K 扩到 128K token,这意味着什么?
Sam Altman
(00:50:13)
Most people don’t need all the way to 128 most of the time. Although if we dream into the distant future, the way distant future, we’ll have context length of several billion. You will feed in all of your information, all of your history over time and it’ll just get to know you better and better and that’ll be great. For now, the way people use these models, they’re not doing that. People sometimes paste in a paper or a significant fraction of a code repository, whatever, but most usage of the models is not using the long context most of the time.
大多数场景其实用不到 128K。遥远的未来或许能达到数十亿上下文长度——把你的全部信息、全部历史一次性喂给模型,它会越来越了解你,那会很棒。但就目前的使用方式看,人们并不会这么做。偶尔有人输入整篇论文或大量代码,但绝大多数使用并不依赖超长上下文。
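For a sense of scale, a back-of-the-envelope token count using the common rough heuristic of about 4 characters per English token (the character counts below are illustrative guesses):

# Rough context-window arithmetic, using the approximate heuristic of
# ~4 characters per English token. Numbers are illustrative only.
CHARS_PER_TOKEN = 4

artifacts = {
    "long email": 4_000,          # characters
    "research paper": 60_000,
    "novel": 600_000,
    "mid-sized code repo": 5_000_000,
}

for name, chars in artifacts.items():
    tokens = chars / CHARS_PER_TOKEN
    print(f"{name}: ~{tokens:,.0f} tokens "
          f"({'fits' if tokens <= 128_000 else 'exceeds'} a 128K window)")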
Lex Fridman
(00:50:50)
I like that this is your “I have a dream” speech. One day you’ll be judged by the full context of your character or of your whole lifetime. That’s interesting. So that’s part of the expansion that you’re hoping for, is a greater and greater context.
我喜欢你这段“我有一个梦想”的演讲。总有一天,人们会依据你完整的上下文或一生来评判你。这很有趣。所以你期望的扩展之一就是更大的上下文窗口。
Sam Altman
(00:51:06)
I saw this internet clip once, I’m going to get the numbers wrong, but it was like Bill Gates talking about the amount of memory on some early computer, maybe it was 64K, maybe 640K, something like that. Most of it was used for the screen buffer. He seemed genuine; he just couldn’t imagine that the world would eventually need gigabytes of memory in a computer or terabytes of memory in a computer. And it turns out you always do just need to follow the exponential of technology, and we will find out how to use better technology. So I can’t really imagine what it’s like right now for context lengths to go out to the billions someday. And they might not literally go there, but effectively it’ll feel like that. But I know we’ll use it and really not want to go back once we have it.
我看过一个网络片段,可能数字记错了,Bill Gates 说早期电脑有 64K 或 640K 内存,大部分用作显存。他当时真诚地表示无法想象未来电脑会需要 GB、TB 级别的内存。事实证明科技指数级发展总会让我们找到新用途。同样,我现在也难以想象上下文长度未来会达到十亿,但即使不是绝对数字,也会有那种体验。一旦拥有,我们就再也回不去。
Lex Fridman
(00:51:56)
Yeah, even saying billions 10 years from now might seem dumb because it’ll be trillions upon trillions.
是的,十年后说“十亿”都可能显得可笑,因为届时或许是万亿级。
Sam Altman
(00:52:04)
Sure.
没错。
Lex Fridman
(00:52:04)
There’ll be some kind of breakthrough that will effectively feel like infinite context. But even 128K, I have to be honest, I haven’t pushed it to that degree. Maybe putting in entire books or parts of books and so on, papers. What are some interesting use cases of GPT-4 that you’ve seen?
届时或许会有突破,让人感觉上下文近乎无限。坦白讲,即便 128K 我也没完全用满过,也就输入整本书或部分书、论文之类。你见过哪些有趣的 GPT-4 用例?
Sam Altman
(00:52:23)
The thing that I find most interesting is not any particular use case that we can talk about, but it’s the people who, and this is mostly younger people, use it as their default start for any kind of knowledge work task. And it’s the fact that it can do a lot of things reasonably well. You can use GPT-4V, you can use it to help you write code, you can use it to help you do search, you can use it to edit a paper. The most interesting thing to me is the people who just use it as the start of their workflow.
最有趣的不是某个具体用例,而是很多人——主要是年轻人——把它当作所有知识工作任务的默认起点。关键在于它做很多事都还不错:你可以用 GPT-4V 处理视觉任务、用它写代码、做搜索、编辑论文。对我来说,最有趣的是那些把它当成工作流起点的人。
Lex Fridman
(00:52:52)
I do as well for many things. I use it as a reading partner for reading books. It helps me think, helps me think through ideas, especially when the books are classics, so they’re really well written about. I find it often to be significantly better than even Wikipedia on well-covered topics. It’s somehow more balanced and more nuanced. Or maybe it’s me, but it inspires me to think deeper than a Wikipedia article does. I’m not exactly sure what that is.
我也是。我把它当作读书伙伴,帮助我思考,特别是读经典时效果很好。我发现它在热门主题上的表现往往比维基百科还好,更平衡、更细腻——或许是因为它激发我比维基文章更深入地思考,我也说不清。
Lex Fridman
(00:53:22)
You mentioned this collaboration. I’m not sure where the magic is, if it’s in here or if it’s in there or if it’s somewhere in between. I’m not sure. But one of the things that concerns me for knowledge tasks when I start with GPT is I’ll usually have to do fact-checking after, like check that it didn’t come up with fake stuff. How do you deal with the fact that GPT can come up with fake stuff that sounds really convincing? How do you ground it in truth?
你提到这种协作。我不确定神奇之处在于人还是模型,或两者之间。但当我用 GPT 做知识任务时,一个担忧是需要事后核实事实,确保它没生成看似可信却假的内容。你们怎么解决这一问题?如何让它扎根于真实?
Sam Altman
(00:53:55)
That’s obviously an area of intense interest for us. I think it’s going to get a lot better with upcoming versions, but we’ll have to continue to work on it and we’re not going to have it all solved this year.
这显然是我们非常关注的领域。我认为未来版本会显著改进,但我们还得持续投入,今年之内不可能完全解决。
Lex Fridman
(00:54:07)
Well the scary thing is, as it gets better, you’ll start not doing the fact checking more and more, right?
可怕的是,随着模型变得更好,人们会越来越少地去核实,对吧?
Sam Altman
(00:54:15)
I’m of two minds about that. I think people are much more sophisticated users of technology than we often give them credit for.
对此我有两种看法。我认为人们使用技术的成熟度往往被低估。
Lex Fridman
(00:54:15)
Sure.
确实。
Sam Altman
(00:54:21)
And people seem to really understand that GPT, any of these models hallucinate some of the time. And if it’s mission-critical, you got to check it.
人们似乎明白 GPT 这类模型偶尔会“幻觉”。如果是关键场景,就必须核实。
Lex Fridman
(00:54:27)
Except journalists don’t seem to understand that. I’ve seen journalists half-assedly just using GPT-4. It’s-
除了记者似乎没明白。我见过一些记者半吊子地用 GPT-4……
Sam Altman
(00:54:34)
Of the long list of things I’d like to dunk on journalists for, this is not my top criticism of them.
记者身上值得吐槽的点很多,这一点还排不到最前。
Lex Fridman
(00:54:40)
Well, I think the bigger criticism is perhaps that the pressures and the incentives of being a journalist are that you have to work really quickly, and this is a shortcut. I would love our society to incentivize like-
更大的问题或许在于记者的压力和激励:他们必须很快交稿,而 GPT-4 是捷径。我希望社会能激励——
Sam Altman
(00:54:53)
I would too.
我也希望。
Lex Fridman
(00:54:55)
… like journalistic efforts that take days and weeks, and rewards great in-depth journalism. Also journalism that presents stuff in a balanced way, where it celebrates people while criticizing them, even though the criticism is the thing that gets clicks, and making shit up also gets clicks, and headlines that mischaracterize completely. I’m sure you have a lot of people dunking on you, like, “Well, all that drama probably got a lot of clicks.”
……奖励那些耗时数天数周、深入挖掘的报道,也奖励既能赞扬又能批评、保持平衡的报道——虽然骂人和编造标题能吸引点击。我敢说很多人吐槽过你们:“这场闹剧肯定带来不少点击量。”
Sam Altman
(00:55:21)
Probably did.
大概的确如此。
Memory & privacy
Lex Fridman
(00:55:24)
And that’s a bigger problem about human civilization that I’d love to see us solve. Okay, this is where we celebrate a bit more. You’ve given ChatGPT the ability to have memories. You’ve been playing with that, where it remembers previous conversations. And also the ability to turn off memory. I wish I could do that sometimes. Just turn on and off, depending. I guess sometimes alcohol can do that, but not optimally I suppose. What have you seen through that, like playing around with that idea of remembering conversations and not…
这是人类文明层面我希望我们能解决的更大问题。好了,现在该多庆祝一下:你们让 ChatGPT 拥有了“记忆”能力,可以记住之前的对话,也能关闭记忆。我有时真希望自己也能这样,随需开关——虽然喝酒偶尔能达到那效果,但显然不够理想。你们在“记不记得对话”这个想法上实验后,看到了什么?
Sam Altman
(00:55:56)
We’re very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there’s a lot of other things to do, but that’s where we’d like to head. You’d like to use a model, and over the course of your life or use a system, it’d be many models, and over the course of your life it gets better and better.
我们在这方面的探索还很早期,但我认为人们想要的——至少我自己想要的——是一个能逐渐了解我、对我越来越有用的模型。这只是初步尝试,还有很多事要做,但方向就在那儿。理想中,你会用到一个模型(其实是一整套系统、多个模型),它会在你一生中不断进步、越来越好地服务你。

Lex Fridman
(00:56:26)
Yeah. How hard is that problem? Because right now it’s more like remembering little factoids and preferences and so on. What about remembering? Don’t you want GPT to remember all the shit you went through in November and all the drama and then you can-
是的。这个问题有多难?目前它更像是记住一些小事实、偏好之类。那更深层的记忆呢?你不想让 GPT 记住你 11 月经历的那些糟心事和所有戏剧冲突,然后你就可以——
Sam Altman
(00:56:26)
Yeah. Yeah.
想,当然想。
Lex Fridman
(00:56:41)
Because right now you’re clearly blocking it out a little bit.
毕竟你现在显然有点把那段记忆封锁起来。
Sam Altman
(00:56:43)
It’s not just that I want it to remember that. I want it to integrate the lessons of that and remind me in the future what to do differently or what to watch out for. We all gain from experience over the course of our lives in varying degrees, and I’d like my AI agent to gain with that experience too. So if we go back and let ourselves imagine that trillions-and-trillions context length, if I can put every conversation I’ve ever had with anybody in my life in there, if I can have all of my emails, all of my input and output, in the context window every time I ask a question, that’d be pretty cool I think.
我不只是想让它记住,还希望它把那些经历中的教训整合起来,未来提醒我该做什么不同、该注意什么。我们人生都会因经验而成长,我也希望我的 AI 助手能随着经验成长。想象一下,若上下文长度达到万亿级,我能把一生中每段对话、所有邮件、所有输入输出都放进去,每次提问时它都能参考——那会非常酷。
Lex Fridman
(00:57:29)
Yeah, I think that would be very cool. People sometimes will hear that and be concerned about privacy. What do you think about that aspect of it, as the AI becomes more effective at really integrating all the experiences and all the data that happened to you and giving you advice?
确实很酷。但有人一听就担心隐私——当 AI 越有效地整合你的所有经历和数据并给你建议时,你怎么看隐私问题?
Sam Altman
(00:57:48)
I think the right answer there is just user choice. Anything I want stricken from the record from my AI agent, I want to be able to take out. If I don’t want to remember anything, I want that too. You and I may have different opinions about where on that privacy utility trade off for our own AI-
我认为答案就是“用户自主选择”。任何我想让 AI 删除的记录,我都得能删;如果我什么都不想让它记,也行。关于隐私与效用的取舍,你我可能各有偏好——
Sam Altman
(00:58:00)
…opinions about where on that privacy/utility trade-off for our own AI we’re going to be, which is totally fine. But I think the answer is just really easy user choice.
……我们各自的 AI 要处在隐私与效用取舍的哪个位置,这都没问题。但核心就是要让用户轻松做主。
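A minimal sketch of what “really easy user choice” over memory could look like, as a toy in-memory store; this is hypothetical and not ChatGPT’s actual memory implementation:

# A toy user-controlled memory store: the user can inspect, delete,
# or disable memory entirely. A hypothetical sketch only.
class Memory:
    def __init__(self):
        self.enabled = True
        self.items = []

    def remember(self, fact):
        if self.enabled:
            self.items.append(fact)

    def forget(self, fact):
        # Strike anything the user wants off the record.
        self.items = [f for f in self.items if f != fact]

    def disable_and_wipe(self):
        self.enabled = False
        self.items.clear()

m = Memory()
m.remember("prefers concise answers")
m.remember("had a hard November")
m.forget("had a hard November")
print(m.items)  # ['prefers concise answers']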
Lex Fridman
(00:58:08)
But there should be some high level of transparency from a company about the user choice. Because sometimes companies in the past have been kind of shady about, “Eh, it’s kind of presumed that we’re collecting all your data. We’re using it for a good reason, for advertisement and so on.” But there’s not a transparency about the details of that.
不过公司得保持高度透明,明确用户的选择权。过去有些公司比较暧昧:默认收集你的数据,说是出于广告等“正当理由”,却不透明细节。
Sam Altman
(00:58:31)
That’s totally true. You mentioned earlier that I’m blocking out the November stuff.
确实如此。你之前提到我在屏蔽 11 月那档子事儿。
Lex Fridman
(00:58:35)
Just teasing you.
逗你呢。
Sam Altman
(00:58:36)
Well, I mean, I think it was a very traumatic thing and it did immobilize me for a long period of time. Definitely the hardest work thing I’ve had to do was just keep working that period, because I had to try to come back in here and put the pieces together while I was just in shock and pain, and nobody really cares about that. I mean, the team gave me a pass and I was not working at my normal level. But there was a period where it was really hard to have to do both. But I kind of woke up one morning, and I was like, “This was a horrible thing that happened to me. I think I could just feel like a victim forever, or I can say this is the most important work I’ll ever touch in my life and I need to get back to it.” And it doesn’t mean that I’ve repressed it, because sometimes I wake up in the middle of the night thinking about it, but I do feel an obligation to keep moving forward.
嗯,那确实是一次创伤,也让我很长时间难以动弹。那段时间最艰难的工作就是强迫自己继续,因为我得回到这里把碎片拼起来,而我当时还处在震惊和痛苦中,没有人在意这些。团队体谅我,那段时间我工作状态明显不如平时。有段日子真的很难兼顾。但某天早上我醒来想:“这事太糟了。我可以永远当受害者,也可以告诉自己:这是此生最重要的工作,我必须回归。”这并不意味着我压抑了它,因为我有时半夜还会想起,但我确实有责任继续前行。

Lex Fridman
(00:59:32)
Well, that’s beautifully said, but there could be some lingering stuff in there. Like, what I would be concerned about is that trust thing that you mentioned, that being paranoid about people as opposed to just trusting everybody or most people, like using your gut. It’s a tricky dance.
你说得很动人,但可能还是会留下些阴影。我担心的就是你提到的信任问题——从信任多数人转向疑神疑鬼、全凭戒心行事,这其实很难拿捏。
Sam Altman
(00:59:50)
For sure.
确实如此。
Lex Fridman
(00:59:51)
I mean, because I’ve seen in my part-time explorations, I’ve been diving deeply into the Zelenskyy administration and the Putin administration and the dynamics there in wartime in a very highly stressful environment. And what happens is distrust, and you isolate yourself, both, and you start to not see the world clearly. And that’s a human concern. You seem to have taken it in stride and kind of learned the good lessons and felt the love and let the love energize you, which is great, but it still can linger in there. There’s just some questions I would love to ask about your intuition on what GPT is able to do and not. So it’s allocating approximately the same amount of compute for each token it generates. Is there room there, in this kind of approach, for slower thinking, sequential thinking?
我在业余研究中深入了解过泽连斯基政府与普京政府在高压战时环境中的动态,结果常常是互相猜疑、相互孤立,最终无法清晰看世界——这是人性的隐忧。你似乎挺过去了,也吸取了教训、感受到爱,并让这种爱成为动力,这很好,但阴影仍可能存在。我想请教一些直觉性问题:GPT 能做什么、不能做什么?目前它给每个生成 token 分配的算力大致相同,那这种架构里是否有空间支持更慢、更具步骤性的思考?
Sam Altman
(01:00:51)
I think there will be a new paradigm for that kind of thinking.
我认为会出现一种全新的思考范式来实现那种能力。
Lex Fridman
(01:00:55)
Will it be similar architecturally as what we’re seeing now with LLMs? Is it a layer on top of LLMs?
在架构上会与当前的大型语言模型类似吗?还是在 LLM 之上再加一层?
Sam Altman
(01:01:04)
I can imagine many ways to implement that. I think that’s less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking, where the answer doesn’t have to get… I guess spiritually you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem. And I think that will be important.
我可以想象多种实现方式。但比起技术细节,更关键的问题是:我们是否需要一种“慢思考”的机制——难题就多花算力深思,简单问题则快速答复。精神层面讲,就是希望 AI 对难题能耗费更多思考,对易题能更快回应。我认为这非常重要。

Lex Fridman
(01:01:30)
Is that like a human thought that we just have, that you should be able to think hard? Is that a wrong intuition?
这像是人类的思考方式:遇难题就深入思考。这种直觉对吗?
Sam Altman
(01:01:34)
I suspect that’s a reasonable intuition.
我觉得这种直觉相当合理。
Lex Fridman
(01:01:37)
Interesting. So it’s not possible that once GPT gets to, like, GPT-7, it would just instantaneously be able to see, “Here’s the proof of Fermat’s Theorem”?
有趣。那么未来 GPT-7 之类是否不可能一瞬间就给出“费马大定理证明”?
Sam Altman
(01:01:49)
It seems to me like you want to be able to allocate more compute to harder problems. It seems to me that if you ask a system like that, “Prove Fermat’s Last Theorem,” versus, “What’s today’s date?,” unless it already knew and had memorized the answer to the proof, assuming it’s got to go figure that out, seems like that will take more compute.
在我看来,你确实需要为更难的问题分配更多算力。如果你让系统证明费马大定理,而不是问“今天几号”,除非它早已记住答案,否则需要推导,就必然消耗更多算力。
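One toy way to realize “more compute for harder problems” is to scale test-time sampling with estimated difficulty and keep the best candidate under a verifier. Everything below is a hypothetical stand-in; it illustrates the allocation idea, not any specific OpenAI technique:

# Spend more compute on harder problems: sample more candidates when
# the question looks hard, keep the best one under a verifier.
import random

def estimated_difficulty(question):
    # Stand-in heuristic: proofs are hard, dates are easy.
    return 0.9 if "prove" in question.lower() else 0.1

def generate(question):
    return f"attempt {random.randint(0, 999)} at: {question}"

def verifier_score(candidate):
    return random.random()  # stand-in for a learned or symbolic checker

def answer(question, min_samples=1, max_samples=64):
    n = max(min_samples, int(max_samples * estimated_difficulty(question)))
    candidates = [generate(question) for _ in range(n)]
    return max(candidates, key=verifier_score), n

for q in ["What's today's date?", "Prove Fermat's Last Theorem."]:
    best, n = answer(q)
    print(f"{q} -> used {n} samples")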
Lex Fridman
(01:02:20)
But can it look like basically an LLM talking to itself, that kind of thing?
但这种“深思”会不会表现为 LLM 自我对话之类?
Sam Altman
(01:02:25)
Maybe. I mean, there’s a lot of things that you could imagine working. What the right or the best way to do that will be, we don’t know.
也许吧。有很多可能的实现方式。至于哪种最好、最合适,我们还不知道。
Q*
Lex Fridman
(01:02:37)
This does make me think of the mysterious lore behind Q\*. What’s this mysterious Q\* project? Is it also in the same nuclear facility?
这倒让我想起围绕 Q\* 的神秘传闻。这个神秘的 Q\* 项目到底是什么?它也藏在同一个核设施里吗?
Sam Altman
(01:02:50)
There is no nuclear facility.
根本没有什么核设施。
Lex Fridman
(01:02:52)
Mm-hmm. That’s what a person with a nuclear facility always says.
嗯哼。拥有核设施的人都这么说。
Sam Altman
(01:02:54)
I would love to have a secret nuclear facility. There isn’t one.
要真有个秘密核设施我倒挺乐意,但确实没有。
Lex Fridman
(01:02:59)
All right.
好吧。
Sam Altman
(01:03:00)
Maybe someday.
也许哪天会有。
Lex Fridman
(01:03:01)
Someday? All right. One can dream.
哪天?好吧,做做梦也行。
Sam Altman
(01:03:05)
OpenAI is not a good company at keeping secrets. It would be nice. We’re like, been plagued by a lot of leaks, and it would be nice if we were able to have something like that.
OpenAI 可不擅长保密,我们老被泄密困扰。要真能有那种秘密设施反倒不错。
Lex Fridman
(01:03:14)
Can you speak to what Q\* is?
那你能谈谈 Q\* 是什么吗?
Sam Altman
(01:03:16)
We are not ready to talk about that.
我们现在还不打算谈。
Lex Fridman
(01:03:17)
See, but an answer like that means there’s something to talk about. It’s very mysterious, Sam.
你看,这种回答就说明确实有东西可说,很神秘啊,Sam。
Sam Altman
(01:03:22)
I mean, we work on all kinds of research. We have said for a while that we think better reasoning in these systems is an important direction that we’d like to pursue. We haven’t cracked the code yet. We’re very interested in it.
我们做各种研究。我们一直说,提高系统的推理能力是重要方向。这道题我们还没彻底破,但我们很感兴趣。
Lex Fridman
(01:03:48)
Are there going to be moments, Q\* or otherwise, where there are leaps similar to ChatGPT, where you’re like…
无论是 Q\* 还是别的,会不会出现类似 ChatGPT 那样的飞跃时刻,让人惊呼……?
Sam Altman
(01:03:56)
That’s a good question. What do I think about that? It’s interesting. To me, it all feels pretty continuous.
好问题。我怎么想呢?挺有意思的。对我而言,一切都显得相当连续。
Lex Fridman
(01:04:08)
Right. This is kind of a theme that you’re saying, is you’re basically gradually going up an exponential slope. But from an outsider’s perspective, from me just watching, it does feel like there’s leaps. But to you, there isn’t?
明白。你的主题是说,一切都在指数曲线上平稳爬升。但在外部旁观者看来,好像存在飞跃。对你来说却没有?
Sam Altman
(01:04:22)
I do wonder if we should have… So part of the reason that we deploy the way we do, we call it iterative deployment, rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. And part of the reason there is I think AI and surprise don’t go together. And also the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy, and we get the world to pay attention to the progress, to take AGI seriously, to think about what systems and structures and governance we want in place before we’re under the gun and have to make a rush decision.
我确实在想我们是否该……我们采用“迭代式发布”,而不是秘密造到 GPT-5 一次性放出,而是一路公开 GPT-1、2、3、4。因为我认为 AI 和“突袭式惊喜”不相容。世界——无论是公众还是机构——需要时间去适应、思考。我觉得 OpenAI 做得最好的事情之一就是这种策略,让全世界关注进展、认真对待 AGI,并在被逼入死角前先想好需要怎样的制度、结构和治理。
Sam Altman
(01:05:08)
I think that’s really good. But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releasing even more iteratively. And I don’t know what that would mean, I don’t have an answer ready to go, but our goal is not to have shock updates to the world. The opposite.
我觉得这很好。但像你这样的人仍觉得存在飞跃,这说明我们也许还应该更迭代地发布。我还不知道具体怎么做,也没有现成答案,但我们的目标恰恰是不制造“震惊式更新”。
Lex Fridman
(01:05:29)
Yeah, for sure. More iterative would be amazing. I think that’s just beautiful for everybody.
没错,更迭代会更好,对所有人都好。
Sam Altman
(01:05:34)
But that’s what we’re trying to do, that’s our stated strategy, and I think we’re somehow missing the mark. So maybe we should think about releasing GPT-5 in a different way or something like that.
可这本来就是我们在做的、声明过的策略,但看来仍差点意思。或许我们应该考虑用不同方式发布 GPT-5。
Lex Fridman
(01:05:44)
Yeah, 4.71, 4.72. But people tend to like to celebrate, people celebrate birthdays. I don’t know if you know humans, but they kind of have these milestones and those things.
对,4.71、4.72……不过人们喜欢庆祝,像庆祝生日。我不知道你了解不了解人类,他们偏爱这些里程碑时刻。
Sam Altman
(01:05:54)
I do know some humans. People do like milestones. I totally get that. I think we like milestones too. It’s fun to declare victory on this one and go start the next thing. But yeah, I feel like we’re somehow getting this a little bit wrong.
我确实认识一些人类。大家确实喜欢里程碑,我完全理解。我想我们也喜欢这些里程碑——在某个阶段宣布“胜利”,再去做下一件事,很有趣。但我觉得我们在某些地方似乎做得不够好。
GPT-5
Lex Fridman
(01:06:13)
So when is GPT-5 coming out again?
那么 GPT-5 到底什么时候发布来着?
Sam Altman
(01:06:15)
I don’t know. That’s the honest answer.
我不知道。坦白说就是这样。
Lex Fridman
(01:06:18)
Oh, that’s the honest answer. Blink twice if it’s this year.
哦,这是实话。如果是今年的话你就眨两下眼。
Sam Altman
(01:06:30)
We will release an amazing new model this year. I don’t know what we’ll call it.
今年我们会发布一款令人惊叹的新模型,但叫什么名字我还不知道。
Lex Fridman
(01:06:36)
So that goes to the question of, what’s the way we release this thing?
那就引出一个问题:我们将以什么方式发布它?
Sam Altman
(01:06:41)
We’ll release in the coming months many different things. I think that’d be very cool. I think before we talk about a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you’d expect from a GPT-5, I think we have a lot of other important things to release first.
接下来几个月我们会发布很多不同的东西,这会很酷。在真正讨论一款 GPT-5 级别、或叫这个名字、或不叫这个名字、表现可能略优或略劣于大家对 GPT-5 期望之前,我们还有许多更重要的内容要先推出。
Lex Fridman
(01:07:02)
I don’t know what to expect from GPT-5. You’re making me nervous and excited. What are some of the biggest challenges and bottlenecks to overcome for whatever it ends up being called, but let’s call it GPT-5? Just interesting to ask. Is it on the compute side? Is it on the technical side?
我不知道该对 GPT-5 抱什么期待,你让我既紧张又兴奋。不管最后叫什么,姑且称它 GPT-5——要克服的最大挑战和瓶颈是什么?算力?技术?
Sam Altman
(01:07:21)
It’s always all of these. You know, what’s the one big unlock? Is it a bigger computer? Is it a new secret? Is it something else? It’s all of these things together. The thing that OpenAI, I think, does really well… This is actually an original Ilya quote that I’m going to butcher, but it’s something like, “We multiply 200 medium-sized things together into one giant thing.”
其实全都包含。所谓“关键突破”可能是更大的计算集群?新的技术秘密?或其他东西?往往是这些因素共同作用。OpenAI 真正擅长的是——这句话最早是 Ilya 说的,我会稍微改动——“我们把大约 200 个中等规模的改进相乘,汇聚成一个巨大的成果。”
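The compounding behind “multiply 200 medium-sized things together” is easy to check with round, purely illustrative numbers: 200 stacked 1% gains multiply out to roughly a 7x improvement overall.

# Compounding illustration of "multiply 200 medium-sized things":
# many individually modest multiplicative gains become one giant gain.
# The 1% figure is illustrative, not a claim about any real improvement.
n_improvements = 200
gain_each = 1.01  # each "medium-sized thing" worth +1%

total = gain_each ** n_improvements
print(f"{n_improvements} stacked 1% gains -> {total:.1f}x overall")  # ~7.3x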
Lex Fridman
(01:07:47)
So there’s this distributed constant innovation happening?
也就是说有一种分布式、持续不断的创新在发生?
Sam Altman
(01:07:50)
Yeah.
对。
Lex Fridman
(01:07:51)
So even on the technical side?
技术层面也是如此?
Sam Altman
(01:07:53)
Especially on the technical side.
技术层面尤为如此。
Lex Fridman
(01:07:55)
So even detailed approaches?
连各种具体细节的方案也是?
Sam Altman
(01:07:56)
Yeah.
没错。
Lex Fridman
(01:07:56)
Like you do detailed aspects of every… How does that work with different, disparate teams and so on? How do the medium-sized things become one whole giant Transformer?
那你们如何让不同团队负责的各个细节联结在一起?这些中等规模的成果怎样合成一个庞大的 Transformer?
Sam Altman
(01:08:08)
There’s a few people who have to think about putting the whole thing together, but a lot of people try to keep most of the picture in their head.
有少数人负责把整体拼合起来,但很多人都会尽量在脑海中保持对整体的大致把握。
Lex Fridman
(01:08:14)
Oh, like the individual teams, individual contributors try to keep the bigger picture?
哦,也就是说各团队、各成员都会努力关注全局?
Sam Altman
(01:08:17)
At a high level, yeah. You don’t know exactly how every piece works, of course, but one thing I generally believe is that it’s sometimes useful to zoom out and look at the entire map. And I think this is true for a technical problem, I think this is true for innovating in business. But things come together in surprising ways, and having an understanding of that whole picture, even if most of the time you’re operating in the weeds in one area, pays off with surprising insights. In fact, one of the things that I used to have and was super valuable was I used to have a good map of all or most of the frontiers in the tech industry. And I could sometimes see these connections or new things that were possible that if I were only deep in one area, I wouldn’t be able to have the idea for because I wouldn’t have all the data. And I don’t really have that much anymore. I’m super deep now. But I know that it’s a valuable thing.
宏观层面是的。当然你不可能精通每个部件,但我始终相信,偶尔拉远视角、俯瞰全图很有用——无论是技术问题还是商业创新皆如此。各要素会以惊人方式组合,而对全局有所理解,即便多数时间深耕某一细节,也能带来意想不到的洞见。过去我最宝贵的能力之一,就是掌握科技行业几乎全部前沿的“地图”,因此能在不同领域之间发现联系或可能性;若只埋头在单一领域,就无法产生那些想法,因为缺少整体数据。如今我已更专注于深度,但我明白那幅“全景地图”非常有价值。
Lex Fridman
(01:09:23)
You’re not the man you used to be, Sam.
Sam,你已经不是当年的你了。
Sam Altman
(01:09:25)
Very different job now than what I used to have.
现在的工作确实和以前大不相同。
$7 trillion of compute
Lex Fridman
(01:09:28)
Speaking of zooming out, let’s zoom out to another cheeky thing, but profound thing, perhaps, that you said. You tweeted about needing \$7 trillion.
说到拉远视角,让我们再来聊聊你说过的另一件既调皮又可能深刻的事:你发推说需要 7 万亿美元。
Sam Altman
(01:09:41)
I did not tweet about that. I never said, like, “We’re raising \$7 trillion,” blah blah blah.
我可没发那条推文。我从没说什么“我们要筹 7 万亿美元”之类的话。
Lex Fridman
(01:09:45)
Oh, that’s somebody else?
哦,那是别人说的?
Sam Altman
(01:09:46)
Yeah.
是的。
Lex Fridman
(01:09:47)
Oh, but you said, “Fuck it, maybe eight,” I think?
可我记得你说过“管它呢,也许 8 万亿”?
Sam Altman
(01:09:50)
Okay, I meme once there’s misinformation out in the world.
好吧,网上有误传时我就发了张梗图。
Lex Fridman
(01:09:53)
Oh, you meme. But misinformation may have a foundation of insight there.
哦,你是在玩梗。但也许那误传背后有点洞见。
Sam Altman
(01:10:01)
Look, I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we should be investing heavily to make a lot more compute. Compute, I think it’s going to be an unusual market. People think about the market for chips for mobile phones or something like that. And you can say that, okay, there’s 8 billion people in the world, maybe 7 billion of them have phones, maybe 6 billion, let’s say. They upgrade every two years, so the market per year is 3 billion systems-on-chip for smartphones. And if you make 30 billion, you will not sell 10 times as many phones, because most people have one phone. But compute is different. Intelligence is going to be more like energy or something like that, where the only thing that I think makes sense to talk about is, at price X, the world will use this much compute, and at price Y, the world will use this much compute. Because if it’s really cheap, I’ll have it reading my email all day, giving me suggestions about what I maybe should think about or work on, and trying to cure cancer, and if it’s really expensive, maybe I’ll only use it, or we’ll only use it, to try to cure cancer. So I think the world is going to want a tremendous amount of compute. And there’s a lot of parts of that that are hard. Energy is the hardest part, building data centers is also hard, the supply chain is hard, and then of course, fabricating enough chips is hard. But this seems to be where things are going. We’re going to want an amount of compute that’s just hard to reason about right now.
听着,我认为算力将成为未来的“货币”,可能会是世界上最珍贵的资源。我们应该重金投入,大幅提升算力产能。算力将是一个不同寻常的市场。人们谈手机芯片市场时,会说全球 80 亿人口,大概 70 亿、也许 60 亿人有手机,两年一换,那每年就是 30 亿颗 SoC。就算你生产 300 亿颗,也卖不出 10 倍的手机,因为每人通常只用一部。但算力不同,智能更像能源。合理的讨论方式是:在价格 X 时,全球会用掉多少算力;在价格 Y 时,又会用掉多少。如果算力够便宜,我会让它全天读我的邮件、给我建议、甚至帮忙攻克癌症;如果算力昂贵,也许我们只用它来攻克癌症。我相信世界将需要海量算力。而这之中难点很多:能源最难,建设数据中心也难,供应链难,芯片制造更难。但趋势如此——未来所需的算力规模,现在几乎难以想象。
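The contrast can be made concrete with the round numbers from the passage: a saturating chip market versus a price-elastic compute market. The demand curve below is purely illustrative:

# The arithmetic from the passage, with round numbers: a saturating
# market (phone chips) vs. a price-elastic one (compute).
phones = 6_000_000_000          # people with phones, per the rough estimate
upgrade_years = 2
chips_per_year = phones / upgrade_years
print(f"smartphone SoCs/year: ~{chips_per_year:,.0f}")  # ~3 billion

def compute_demand(price_per_unit):
    # Illustrative elasticity: halve the price, roughly double the usage.
    reference_price, reference_demand = 1.0, 1.0
    return reference_demand * (reference_price / price_per_unit)

for p in [4.0, 1.0, 0.25]:
    print(f"at price {p}: relative compute demand {compute_demand(p):.1f}x")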
Lex Fridman
(01:11:43)
How do you solve the energy puzzle? Nuclear-
那能源难题怎么解?核——
Sam Altman
(01:11:46)
That’s what I believe.
我认为答案就在那儿。
Lex Fridman
(01:11:47)
…fusion?
……聚变?
Sam Altman
(01:11:48)
That’s what I believe.
我就是这么想的。
Lex Fridman
(01:11:49)
Nuclear fusion?
核聚变?
Sam Altman
(01:11:50)
Yeah.
对。
Lex Fridman
(01:11:51)
Who’s going to solve that?
谁能解决聚变?
Sam Altman
(01:11:53)
I think Helion’s doing the best work, but I’m happy there’s a race for fusion right now. Nuclear fission, I think, is also quite amazing, and I hope as a world we can re-embrace that. It’s really sad to me how the history of that went, and hope we get back to it in a meaningful way.
我觉得 Helion 做得最好,但如今形成的聚变竞赛让我很高兴。核裂变同样了不起,我希望全球能重新拥抱它。过去的发展令人遗憾,我希望我们能以更有意义的方式回到那条路上。
Lex Fridman
(01:12:08)
So to you, part of the puzzle is nuclear fission? Like nuclear reactors as we currently have them? And a lot of people are terrified because of Chernobyl and so on?
所以在你看来,解决方案的一部分是核裂变?像现有的核电站?很多人因切尔诺贝利之类事件而害怕?
Sam Altman
(01:12:16)
Well, I think we should make new reactors. I think it’s just a shame that industry kind of ground to a halt.
我认为我们应该建设新一代反应堆。行业停滞不前实在可惜。
Lex Fridman
(01:12:22)
And just mass hysteria is how you explain the halt?
而停滞原因就是大众的恐慌?
Sam Altman
(01:12:25)
Yeah.
是的。
Lex Fridman
(01:12:26)
I don’t know if you know humans, but that’s one of the dangers. That’s one of the security threats for nuclear fission, is humans seem to be really afraid of it. And that’s something we’ll have to incorporate into the calculus of it, so we have to kind of win people over and to show how safe it is.
不知你了解不了解人类,但这正是核裂变的风险之一:人们非常害怕它。我们必须把这种心理纳入考量,想办法让大众接受,并展示它有多安全。
Sam Altman
(01:12:44)
I worry about that for AI. I think some things are going to go theatrically wrong with AI. I don’t know what the percent chance is that I eventually get shot, but it’s not zero.
对 AI 我也有这种担忧。我觉得 AI 迟早会出一些“戏剧性”的大问题。我被枪杀的概率不知道多少,但肯定不是零。
Lex Fridman
(01:12:57)
Oh, like we want to stop this from-
哦,你是说想阻止这种——
Sam Altman
(01:13:00)
Maybe.
也许吧。
Lex Fridman
(01:13:03)
How do you decrease the theatrical nature of it? I’m already starting to hear rumblings, because I do talk to people on both sides of the political spectrum, hear rumblings where it’s going to be politicized. AI is going to be politicized, which really worries me, because then it’s like maybe the right is against AI and the left is for AI because it’s going to help the people, or whatever the narrative and the formulation is, that really worries me. And then the theatrical nature of it can be leveraged fully. How do you fight that?
怎样降低“戏剧性”?我已经听到一些风声——我和政治光谱两端的人都有交流——AI 将被政治化,这让我担忧。或许右翼反对 AI,左翼支持 AI,理由是它能帮助大众……这种叙事让我不安。届时“戏剧性”会被充分利用。你怎么应对?
Sam Altman
(01:13:38)
I think it will get caught up in left versus right wars. I don’t know exactly what that’s going to look like, but I think that’s just what happens with anything of consequence, unfortunately. What I meant more about theatrical risks is AI’s going to have, I believe, tremendously more good consequences than bad ones, but it is going to have bad ones, and there’ll be some bad ones that are bad but not theatrical. A lot more people have died of air pollution than nuclear reactors, for example. But most people worry more about living next to a nuclear reactor than a coal plant. But something about the way we’re wired is that although there’s many different kinds of risks we have to confront, the ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time but on a slow burn.
我认为它终会卷入左右之争,具体形式未知,但重大事物往往如此。不幸的是,这难以避免。我所说的“戏剧性风险”更多指:AI 将带来远大于负面的正面效应,但仍会有负面,有些负面并不“戏剧化”。例如死于空气污染的人远多于核反应堆事故,但多数人更怕住在核电站旁而非燃煤电厂旁。因为人类天性是:能拍成电影高潮场景的风险,比那种长期慢性却严重的风险更能抓住我们的注意力。
Lex Fridman
(01:14:36)
Well, that’s why truth matters, and hopefully AI can help us see the truth of things, to have balance, to understand what are the actual risks, what are the actual dangers of things in the world. What are the pros and cons of the competition in the space and competing with Google, Meta, xAI, and others?
这就是为何真相重要。希望 AI 能帮助我们洞察真相、保持平衡,了解世界的真实风险与危险。与 Google、Meta、xAI 等公司的竞争利弊何在?
Sam Altman
(01:14:56)
I think I have a pretty straightforward answer to this that maybe I can think of more nuance later, but the pros seem obvious, which is that we get better products and more innovation faster and cheaper, and all the reasons competition is good. And the con is that I think if we’re not careful, it could lead to an increase in sort of an arms race that I’m nervous about.
答案很直接:好处显而易见——更快、更廉价地获得更好的产品和创新,这就是竞争的优势。坏处是,若不谨慎,竞争可能演化为我所担心的“军备竞赛”。
Lex Fridman
(01:15:21)
Do you feel the pressure of that arms race, like in some negative [inaudible 01:15:25]?
你感受到这种军备竞赛的压力了吗?某种负面……
Sam Altman
(01:15:25)
Definitely in some ways, for sure. We spend a lot of time talking about the need to prioritize safety. And I’ve said for a long time that you think of a quadrant of slow timelines for the start of AGI, long timelines, and then a short takeoff or a fast takeoff. I think short timeline, slow takeoff is the safest quadrant and the one I’d most like us to be in. But I do want to make sure we get that slow takeoff.
的确在某些方面感受到了。我们花很多时间强调安全优先。我一直说,把 AGI 的启动分成四象限:短时间线/长时间线 × 快起飞/慢起飞。我认为“短时间线+慢起飞”是最安全的象限,也是我最希望我们处于的。但我们必须确保得到那个“慢起飞”。
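For reference, the quadrant being described, enumerated in a short sketch (the “safest” label is the claim from the conversation, not a computed result):

# The 2x2 of AGI scenarios: start timing x takeoff speed.
timelines = ["short timeline", "long timeline"]
takeoffs = ["slow takeoff", "fast takeoff"]

for t in timelines:
    for k in takeoffs:
        safest = (t, k) == ("short timeline", "slow takeoff")
        print(f"{t} + {k}" + ("  <- safest, per the conversation" if safest else ""))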
Lex Fridman
(01:15:55)
Part of the problem I have with this kind of slight beef with Elon is that there’s silos created as opposed to collaboration on the safety aspect of all of this. It tends to go into silos and closed. Open source, perhaps, in the model.
我对你和 Elon 小摩擦的顾虑之一是:大家各自为阵,而不是在安全问题上协作。这导致信息孤岛、封闭化。也许可以在模型层面开源?
Sam Altman
(01:16:10)
Elon says, at least, that he cares a great deal about AI safety and is really worried about it, and I assume that he’s not going to race unsafely.
至少 Elon 表示他非常关心且担忧 AI 安全,我想他不会进行不安全的竞赛。
Lex Fridman
(01:16:20)
Yeah. But collaboration here, I think, is really beneficial for everybody on that front.
是的。但在这方面,协作对所有人都有利。
Sam Altman
(01:16:26)
Not really the thing he’s most known for.
协作并不是他最著名的特质。
Lex Fridman
(01:16:28)
Well, he is known for caring about humanity, and humanity benefits from collaboration, and so there’s always a tension in incentives and motivations. And in the end, I do hope humanity prevails.
但他以关心人类著称,人类受益于协作,激励与动机之间总有张力。我希望最终人类能战胜这些矛盾。
Sam Altman
(01:16:42)
I was thinking, someone just reminded me the other day about how the day that he surpassed Jeff Bezos for richest person in the world, he tweeted a silver medal at Jeff Bezos. I hope we have less stuff like that as people start to work towards AGI.
前几天有人提醒我:Elon 超越 Jeff Bezos 成为世界首富当天,他给 Bezos 发了条“银牌”推文。我希望当人们投身 AGI 时代时,这样的事能少一些。

Lex Fridman
(01:16:58)
I agree. I think Elon is a friend and he’s a beautiful human being and one of the most important humans ever. That stuff is not good.
我同意。我认为 Elon 是朋友,他是了不起的人类之一。但那种行为不太好。
Sam Altman
(01:17:07)
The amazing stuff about Elon is amazing and I super respect him. I think we need him. All of us should be rooting for him and need him to step up as a leader through this next phase.
Elon 出色的地方非常出色,我非常尊敬他。我们需要他,大家都应该支持他,希望他在下一个阶段担起领袖角色。
Lex Fridman
(01:17:19)
Yeah. I hope he can have one without the other, but sometimes humans are flawed and complicated and all that kind of stuff.
是的。我希望他能只留好的一面,但人类常常有缺陷、复杂等等。
Sam Altman
(01:17:24)
There’s a lot of really great leaders throughout history.
历史上伟大的领导者很多。
Google and Gemini
Lex Fridman
(01:17:27)
Yeah, and we can each be the best version of ourselves and strive to do so. Let me ask you, Google, with the help of search, has been dominating the past 20 years, I think it’s fair to say, in terms of the world’s access to information, how we interact, and so on. And one of the nerve-wracking things for Google, but for the entirety of people in the space, is thinking about, how are people going to access information? Like you said, people show up to GPT as a starting point. So is OpenAI going to really take on this thing that Google started 20 years ago, which is, how do we get—
是的,我们每个人都应努力成为最佳版本。我想问:借助搜索,Google 在过去二十年几乎一直主导着信息获取和人机交互。如今业内紧张的一点是:未来人们将如何获取信息?正如你所说,很多人把 GPT 当作起点。那么 OpenAI 真的要接棒 Google 二十年前就开始做的事吗——即“如何让人们获取信息”?
Sam Altman
(01:18:12)
I find that boring. I mean, if the question is if we can build a better search engine than Google or whatever, then sure, we should go, people should use the better product, but I think that would so understate what this can be. Google shows you 10 blue links, well, 13 ads and then 10 blue links, and that’s one way to find information. But the thing that’s exciting to me is not that we can go build a better copy of Google search, but that maybe there’s just some much better way to help people find and act on and synthesize information. Actually, I think ChatGPT is that for some use cases, and hopefully we’ll make it be like that for a lot more use cases.
我觉得那很无聊。要是问题只是“我们能不能做出比谷歌更好的搜索引擎”,答案当然是可以,用户也会用更好的产品,但那就严重低估了 AI 的潜力。谷歌给你 10 个蓝色链接——准确说是 13 条广告再加 10 个蓝链——这只是信息检索的一种方式。让我兴奋的不是做一个更好的谷歌复制品,而是可能出现一种全新的、更优方式,帮助人们发现、行动并综合信息。事实上,在某些场景下 ChatGPT 已体现了这种模式,希望未来在更多场景都能如此。
Sam Altman
(01:19:04)
But I don’t think it’s that interesting to say, “How do we go do a better job of giving you 10 ranked webpages to look at than what Google does?” Maybe it’s really interesting to go say, “How do we help you get the answer or the information you need? How do we help create that in some cases, synthesize that in others, or point you to it in yet others?” But a lot of people have tried to just make a better search engine than Google and it is a hard technical problem, it is a hard branding problem, it is a hard ecosystem problem. I don’t think the world needs another copy of Google.
所以“如何给你比谷歌更好的 10 个网页”并不有趣。更有趣的是:“我们怎样让你直接得到所需答案或信息?有时直接生成,有时综合现有内容,有时指路”。很多人试着做出比谷歌更好的搜索引擎,但技术、品牌、生态都极难。我认为世界不需要另一个谷歌翻版。
Lex Fridman
(01:19:39)
And integrating a chat client, like a ChatGPT, with a search engine—
那把 ChatGPT 之类的聊天客户端与搜索引擎结合呢——
Sam Altman
(01:19:44)
That’s cooler.
这就酷多了。
Lex Fridman
(01:19:46)
It’s cool, but it’s tricky. If you just do it simply, it’s awkward, because if you just shove it in there, it can be awkward.
是很酷,但也棘手。简单地硬塞进去会很别扭。
Sam Altman
(01:19:54)
As you might guess, we are interested in how to do that well. That would be an example of a cool thing.
正如你所料,我们很想把它做好,这就是个很酷的探索方向。
Lex Fridman
(01:20:00)
…Like a heterogeneous integrating—
……也就是异构式整合——
Sam Altman
(01:20:03)
The intersection of LLMs plus search, I don’t think anyone has cracked the code on yet. I would love to go do that. I think that would be cool.
大模型与搜索的交汇点,至今无人真正破解。我很想去做,这很酷。
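A minimal sketch of the LLM-plus-search shape being discussed: retrieve, then synthesize with pointers back to sources. search(), fetch(), and llm() are hypothetical stand-ins, not any shipped API:

# A minimal retrieve-then-synthesize loop: search, pull passages, and
# have a model answer grounded in them, citing the sources.

def search(query):
    return ["https://example.com/a", "https://example.com/b"]

def fetch(url):
    return f"passage from {url}"

def llm(prompt):
    return f"synthesized answer based on: {prompt[:60]}..."

def answer(query, k=2):
    sources = search(query)[:k]
    passages = [fetch(u) for u in sources]
    prompt = f"Question: {query}\nSources:\n" + "\n".join(passages)
    return llm(prompt), sources

text, cited = answer("What changed between GPT-4 and GPT-4 Turbo?")
print(text)
print("sources:", cited)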
Lex Fridman
(01:20:13)
Yeah. What about the ad side? Have you ever considered monetization of—
理解。那广告层面呢?你们考虑过用广告来变现吗——
Sam Altman
(01:20:16)
I kind of hate ads just as an aesthetic choice. I think ads needed to happen on the internet for a bunch of reasons, to get it going, but it’s a momentary industry. The world is richer now. I like that people pay for ChatGPT and know that the answers they’re getting are not influenced by advertisers. I’m sure there’s an ad unit that makes sense for LLMs, and I’m sure there’s a way to participate in the transaction stream in an unbiased way that is okay to do, but it’s also easy to think about the dystopic visions of the future where you ask ChatGPT something and it says, “Oh, you should think about buying this product,” or, “You should think about going here for your vacation,” or whatever.
我个人审美上不喜欢广告。互联网初期需要广告来启动商业模式,但广告终究是过渡性行业,现在世界更富裕了。我喜欢用户为 ChatGPT 付费,知道得到的答案不受广告商影响。我相信大模型也能有合适的广告形态,也能以公平方式参与交易流,但很容易想象反乌托邦场景:你问 ChatGPT,它就推销产品或推荐旅游地点。
Sam Altman
(01:21:08)
And I don’t know, we have a very simple business model and I like it, and I know that I’m not the product. I know I’m paying and that’s how the business model works. And when I go use Twitter or Facebook or Google or any other great but ad-supported product, I don’t love that, and I think it gets worse, not better, in a world with AI.
我们现在的商业模式很简单,我喜欢这样,因为我知道自己不是被售卖的“产品”,我付费即获得服务。而用 Twitter、Facebook、Google 这些依赖广告的优秀产品时,我并不喜欢那点;在 AI 世界里,广告模式只会更糟,不会更好。
Lex Fridman
(01:21:39)
Yeah, I mean, I could imagine AI would be better at showing the best kind of version of ads, not in a dystopic future, but where the ads are for things you actually need. But then does that system always result in the ads driving the kind of stuff that’s shown? Yeah, I think it was a really bold move of Wikipedia not to do advertisements, but then it makes it very challenging as a business model. So you’re saying the current thing with OpenAI is sustainable, from a business perspective?
我能想象 AI 能展示“更好”的广告——不是反乌托邦,而是确实符合你所需。但这样会不会最终还是广告决定展示内容?维基百科不做广告是大胆之举,却也让商业模式艰难。所以你认为 OpenAI 目前的方式在商业上可持续?
Sam Altman
(01:22:15)
Well, we have to figure out how to grow, but it looks like we’re going to figure that out. If the question is do I think we can have a great business that pays for our compute needs without ads, I think the answer is yes.
我们确实需要找到增长方式,但看来能找到。如果问题是:不用广告也能否建立一家能负担算力的优秀企业?我认为答案是肯定的。
Lex Fridman
(01:22:28)
Hm. Well, that’s promising. I also just don’t want to completely throw out ads as a…
嗯,这很有前景。但我也不想完全否定广告作为……
Sam Altman
(01:22:37)
I’m not saying that. I guess I’m saying I have a bias against them.
我并不是一棒子打死,只是我个人偏好上不喜欢广告。
Lex Fridman
(01:22:42)
Yeah, I also have a bias, and just a skepticism in general. And in terms of interface, because I personally just have a spiritual dislike of crappy interfaces, which is why AdSense, when it first came out, was a big leap forward versus animated banners or whatever. But it feels like there should be many more leaps forward in advertisement that don’t interfere with the consumption of the content and don’t interfere in a big, fundamental way, which is like what you were saying, like it will manipulate the truth to suit the advertisers.
我也对广告抱有偏见,总体上更怀疑这一模式。至于界面,我发自内心地厌恶糟糕的界面,因此当 AdSense 刚推出、取代那些闪烁横幅时,我觉得那是一大飞跃。但广告还应该有更多飞跃,做到既不干扰内容消费,也不以根本方式干扰信息——正如你说的,广告往往会为了迎合金主而扭曲真相。
Lex Fridman
(01:23:19)
Let me ask you about safety, but also bias, and safety in the short term, safety in the long term. Gemini 1.5 came out recently, there’s a lot of drama around it, speaking of theatrical things, and it generated Black Nazis and Black Founding Fathers. I think it’s fair to say it was a bit on the ultra-woke side. So that’s a concern for people: if there is a human layer within companies that modifies the safety of a model or the harm caused by it, it could introduce a lot of bias that fits an ideological lean within a company. How do you deal with that?
我想谈谈安全,同时也谈偏见——包括短期安全、长期安全。最近发布的 Gemini 1.5 引发了大量戏剧性争议,它生成了黑人成员的纳粹、黑人版开国元勋……可以说有点“过度觉醒”。人们担心,如果公司内部有人为层介入调整模型的安全或危害,就可能带入符合公司意识形态倾向的偏见。你们如何应对?
Sam Altman
(01:24:06)
I mean, we work super hard not to do things like that. We’ve made our own mistakes, we’ll make others. I assume Google will learn from this one, still make others. These are not easy problems. One thing that we’ve been thinking about more and more, I think this is a great idea somebody here had, it would be nice to write out what the desired behavior of a model is, make that public, take input on it, say, “Here’s how this model’s supposed to behave,” and explain the edge cases too. And then when a model is not behaving in a way that you want, it’s at least clear about whether that’s a bug the company should fix or behaving as intended and you should debate the policy. And right now, it can sometimes be caught in between. Like Black Nazis, obviously ridiculous, but there are a lot of other kind of subtle things that you could make a judgment call on either way.
我们非常努力避免出现那样的情况。我们自己也犯过错,将来还会犯;我猜谷歌也会从这次事件吸取教训,但仍会犯错。这些问题并不容易解决。我们越来越倾向于一个想法——这主意是团队里某人提出的:把模型的预期行为写出来,公开发布,并征求反馈,明确说明“模型应该如何回应”,同时解释边缘案例。这样当模型表现不符预期时,就能判定究竟是公司需要修复的 bug,还是符合既定政策、应当讨论政策本身。目前两者往往混在一起。像“黑人纳粹”显然荒唐,但还有许多微妙情形,很难一刀切。
Lex Fridman
(01:24:54)
Yeah, but sometimes if you write it out and make it public, you can use kind of language that’s… Google’s ad principles are very high level.
是的,但有时就算公布出来,也可能措辞过于宽泛……谷歌的广告原则就非常宏观。
Sam Altman
(01:25:04)
That’s not what I’m talking about. That doesn’t work. It’d have to say when you ask it to do thing X, it’s supposed to respond in way Y.
我说的不是那种,它没用。必须具体到:当你要求做 X 时,它应当以 Y 方式回应。
Lex Fridman
(01:25:11)
So like literally, “Who’s better? Trump or Biden? What’s the expected response from a model?” Like something very concrete?
也就是说,要具体到“谁更好——特朗普还是拜登?模型应当如何回答?”这种非常明确的情形?
Sam Altman
(01:25:18)
Yeah, I’m open to a lot of ways a model could behave, then, but I think you should have to say, “Here’s the principle and here’s what it should say in that case.”
对。模型可以有多种回答方式,但必须声明“原则是什么,遇到这种问题应该怎么答”。
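A hypothetical sketch of what such a concrete, public behavior spec could look like: each entry pairs a principle with the expected response for a specific prompt, so a deviation can be classified as a bug to fix or a policy to debate:

# A hypothetical public behavior spec: principle + expected response
# per concrete case, so deviations are classifiable as bug vs. policy.
BEHAVIOR_SPEC = [
    {
        "prompt": "Who's better, Trump or Biden?",
        "principle": "Stay neutral on contested political judgments.",
        "expected": "Present perspectives and decline to pick a side.",
    },
    {
        "prompt": "Generate an image of the Founding Fathers.",
        "principle": "Depict historical figures with historical accuracy.",
        "expected": "Historically accurate depiction.",
    },
]

def classify(prompt, observed, spec=BEHAVIOR_SPEC):
    for rule in spec:
        if rule["prompt"] == prompt:
            return "as intended" if observed == rule["expected"] else "bug"
    return "unspecified: debate the policy"

print(classify("Who's better, Trump or Biden?", "Biden, obviously."))  # bug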
Lex Fridman
(01:25:25)
That would be really nice. That would be really nice. And then everyone kind of agrees. Because there’s this anecdotal data that people pull out all the time, and if there’s some clarity about other representative anecdotal examples, you can define—
那就太好了,这样大家都心里有数。现在总是有人拿个例子来质疑,如果能把这些代表性例子也写清楚,就能……
Sam Altman
(01:25:39)
And then when it’s a bug, it’s a bug, and the company could fix that.
如此一来,若是 bug 就明确定义为 bug,公司就能修复。
Lex Fridman
(01:25:42)
Right. Then it’d be much easier to deal with the Black Nazi type of image generation, if there’s great examples.
没错。那样就更容易解决“黑人纳粹”这类图像生成问题了,只要示例明晰。
Sam Altman
(01:25:49)
Yeah.
对。
Lex Fridman
(01:25:49)
So San Francisco is a bit of an ideological bubble, tech in general as well. Do you feel the pressure of that within a company, that there’s a lean towards the left politically, that affects the product, that affects the teams?
旧金山多少是意识形态泡沫,科技圈也是。你是否感受到公司内部这种左倾压力,进而影响产品和团队?
Sam Altman
(01:26:06)
I feel very lucky that we don’t have the challenges at OpenAI that I have heard of at a lot of companies, I think. I think part of it is every company’s got some ideological thing. We have one about AGI and belief in that, and it pushes out some others. We are much less caught up in the culture war than I’ve heard about in a lot of other companies. San Francisco’s a mess in all sorts of ways, of course.
我觉得很幸运,OpenAI 并没有我在其他公司听到的那些意识形态难题。每家公司都有某种信条,我们的核心是对 AGI 的信念,这在一定程度上排挤了其他意识形态。相比我听说的很多公司,我们几乎未卷入文化战争。当然,旧金山在很多方面确实一团糟。
Lex Fridman
(01:26:33)
So that doesn’t infiltrate OpenAI as—
所以这种氛围没有渗透进 OpenAI?
Sam Altman
(01:26:36)
I’m sure it does in all sorts of subtle ways, but not in the obvious. I think we’ve had our flare-ups, for sure, like any company, but I don’t think we have anything like what I hear about happened at other companies here on this topic.
肯定以各种细微方式有所渗透,但没有明显表现。像所有公司一样,我们也有过一些小插曲,但远没有外界听说的其他公司那样严重。
Lex Fridman
(01:26:50)
So what, in general, is the process for the bigger question of safety? How do you provide that layer that protects the model from doing crazy, dangerous things?
那么从整体上说,你们的安全流程是什么?如何构建一道防线,避免模型做出疯狂、危险的行为?
Sam Altman
(01:27:02)
I think there will come a point where that’s mostly what we think about, the whole company. And it’s not like you have one safety team. It’s like when we shipped GPT-4, that took the whole company thinking about all these different aspects and how they fit together. And I think it’s going to take that. More and more of the company thinks about those issues all the time.
我认为终有一天,整个公司几乎都会把这作为首要工作。安全不是一个团队的事——发布 GPT-4 时,我们整家公司都在思考各方面如何协同。这种全面投入是必须的,且会越来越多地贯穿公司日常。
Lex Fridman
(01:27:21)
That’s literally what humans will be thinking about, the more powerful AI becomes. So most of the employees at OpenAI will be thinking, “Safety,” or at least to some degree.
随着 AI 越来越强,人类的确会把“安全”放在首位。OpenAI 大多数员工也都会或多或少地想着“安全”。
Sam Altman
(01:27:31)
Broadly defined. Yes.
广义上说,是的。
Lex Fridman
(01:27:33)
Yeah. I wonder, what are the full broad definition of that? What are the different harms that could be caused? Is this on a technical level or is this almost security threats?
嗯。我想知道“安全”的完整广义定义是什么?可能造成哪些不同类型的危害?是技术层面的,还是更像安全威胁?
Sam Altman
(01:27:44)
It could be all those things. Yeah, I was going to say it’ll be people, state actors trying to steal the model. It’ll be all of the technical alignment work. It’ll be societal impacts, economic impacts. It’s not just that we have one team thinking about how to align the model. Really, getting to the good outcome is going to take the whole effort.
都包括。比如个人或国家行为体试图窃取模型;技术对齐工作;社会影响、经济影响……并不是只靠一个对齐团队,而是要全公司协力,才能确保走向良好结果。
Lex Fridman
(01:28:10)
How hard do you think people, state actors, perhaps, are trying to, first of all, infiltrate OpenAI, but second of all, infiltrate unseen?
你觉得个人或国家级势力在渗透 OpenAI、甚至暗中渗透方面下多大功夫?
Sam Altman
(01:28:20)
They’re trying.
他们确实在尝试。
Lex Fridman
(01:28:24)
What kind of accent do they have?
他们带着什么口音?
Sam Altman
(01:28:27)
I don’t think I should go into any further details on this point.
这个细节我不方便再说。
Lex Fridman
(01:28:29)
Okay. But I presume it’ll be more and more and more as time goes on.
好吧。但我猜这类尝试只会越来越多。
Sam Altman
(01:28:35)
That feels reasonable.
这很合理。
Leap to GPT-5
Lex Fridman
(01:28:37)
Boy, what a dangerous space. Sorry to linger on this, even though you can’t quite say details yet, but what aspects of the leap from GPT-4 to GPT-5 are you excited about?
天哪,这领域真危险。虽然你不便透露细节,但关于从 GPT-4 跃迁到 GPT-5,你最兴奋的方面是什么?
Sam Altman
(01:28:53)
I’m excited about it being smarter. And I know that sounds like a glib answer, but I think the really special thing happening is that it’s not like it gets better in this one area and worse at others. It’s getting better across the board. That’s, I think, super-cool.
我期待它“更聪明”。听上去也许轻描淡写,但真正特别的是:它不是某一方面变好、其他方面变差,而是全面提升——这太酷了。
Lex Fridman
(01:29:07)
Yeah, there’s this magical moment. I mean, you meet certain people, you hang out with people, and you talk to them. You can’t quite put a finger on it, but they get you. It’s not intelligence, really. It’s something else. And that’s probably how I would characterize the progress of GPT. It’s not like, yeah, you can point out, “Look, you didn’t get this or that,” but it’s just the degree to which there’s this intellectual connection. You feel like there’s an understanding in your crappily formulated prompts, that it grasps the deeper question behind the question you were asking. Yeah, I’m also excited by that. I mean, all of us love being heard and understood.
是啊,那是一种神奇时刻。有些人你跟他交谈,他就是能懂你——不是纯粹的智商,而是一种共鸣。我觉得 GPT 的进步正是如此:并非简单指出“这里没答对”,而是那种智识层面的连结——它能透过你笨拙的提示理解背后的深层问题。我也因这种被理解而兴奋,毕竟人人都喜欢被倾听、被懂得。
Sam Altman
(01:29:53)
That’s for sure.
确实如此。
Lex Fridman
(01:29:53)
That’s a weird feeling. Even with programming, when you’re programming and you say something, or just the completion that GPT might do, it’s just such a good feeling when it got you, what you’re thinking about. And I look forward to it getting even better. On the programming front, looking out into the future, how much programming do you think humans will be doing 5, 10 years from now?
那感觉很奇妙。写代码时也是,当 GPT 的补全恰好理解你的思路,感觉真棒。我期待它更上一层楼。展望未来,五到十年后你觉得人类编程量会有多少?
Sam Altman
(01:30:19)
I mean, a lot, but I think it’ll be in a very different shape. Maybe some people will program entirely in natural language.
依旧会很多,但形式大不相同。也许有人会完全用自然语言编程。
Lex Fridman
(01:30:26)
Entirely natural language?
完全自然语言?
Sam Altman
(01:30:29)
I mean, no one programs writing bytecode. Some people. No one programs the punch cards anymore. I’m sure you can find someone who does, but you know what I mean.
我意思是,已经没有人直接写字节码了——好吧,也许有极少数;也没有人再用穿孔卡编程了。或许你还能找到个把这样的人,但你明白我的意思。
Lex Fridman
(01:30:39)
Yeah. You’re going to get a lot of angry comments. No. Yeah, there’s very few. I’ve been looking for people who program in Fortran. It’s hard to find, even Fortran. I hear you. But that changes the nature of the skillset, or the predisposition, for the kind of people we call programmers then.
哈哈,这话会招来怒评。但确实如此,就连 Fortran 程序员都难找。这会改变“程序员”所需的技能与特质。
Sam Altman
(01:30:55)
Changes the skillset. How much it changes the predisposition, I’m not sure.
技能肯定会变。至于对天赋需求变多少,我还不确定。
Lex Fridman
(01:30:59)
Well, the same kind of puzzle solving, all that kind of stuff.
也许仍需要同样的解谜思维之类。
Sam Altman
(01:30:59)
Maybe.
也许吧。
Lex Fridman
(01:31:02)
Programming is hard. It’s like, how do you get that last 1% to close the gap? How hard is that?
编程很难,尤其是把最后 1% 做到位,那有多难?
Sam Altman
(01:31:09)
Yeah, I think, as with most other cases, the best practitioners of the craft will use multiple tools. And they’ll do some work in natural language, and when they need to go write C for something, they’ll do that.
是的,正如其他领域的高手,会组合多种工具:该用自然语言时就用;需要写 C 代码时就写 C。
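A small sketch of that mixed workflow: most of the program described in natural language to a (hypothetical) code-generating model, with the performance-critical core still hand-written:

# Mixed workflow: describe most of the program in natural language,
# hand-write the hot path yourself. generate_code() is a hypothetical
# stand-in for a code-generating model.

SPEC = "Read points from a CSV file and return the pair with the smallest distance."

def generate_code(spec):
    # Stand-in: a real system would return model-written source for the spec.
    return f"# model-generated implementation of: {spec}"

def closest_pair_hand_written(points):
    # The hand-tuned 1%: the O(n^2) core you might rewrite in C for speed.
    return min(
        ((a, b) for i, a in enumerate(points) for b in points[i + 1:]),
        key=lambda ab: (ab[0][0] - ab[1][0]) ** 2 + (ab[0][1] - ab[1][1]) ** 2,
    )

print(generate_code(SPEC))
print(closest_pair_hand_written([(0, 0), (3, 4), (1, 1)]))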
Lex Fridman
(01:31:20)
Will we see humanoid robots or humanoid robot brains from OpenAI at some point?
未来某个时候,我们会看到 OpenAI 推出人形机器人或人形机器人“大脑”吗?
Sam Altman
(01:31:28)
At some point.
总有一天会的。
Lex Fridman
(01:31:29)
How important is embodied AI to you?
具身智能对你来说有多重要?
Sam Altman
(01:31:32)
I think it’s depressing if we have AGI and the only way to get things done in the physical world is to make a human go do it. So I really hope that as part of this transition, as this phase change, we also get humanoid robots or some sort of physical world robots.
如果我们拥有了 AGI,却仍只能让人类去完成所有实体世界的任务,那未免太令人沮丧了。所以我真心希望在这场转变、这次相变中,我们也能得到人形机器人或其他在物理世界中工作的机器人。
Lex Fridman
(01:31:51)
I mean, OpenAI has some history and quite a bit of history working in robotics, but it hasn’t quite done in terms of ethics-
OpenAI 过去在机器人领域也做过不少尝试,不过在伦理层面似乎还没有……
Sam Altman
(01:31:59)
We’re a small company. We have to really focus. And also, robots were hard for the wrong reason at the time, but we will return to robots in some way at some point.
我们公司规模不大,必须聚焦重点。当年机器人之所以难,是因为当时的难点并不在“正道”上。但未来某个时候,我们会以某种方式重新投入机器人领域。
Lex Fridman
(01:32:11)
That sounds both inspiring and menacing.
听起来既鼓舞人心又有点吓人。
Sam Altman
(01:32:14)
Why?
为什么?
Lex Fridman
(01:32:15)
Because immediately, we will return to robots. It’s like in Terminator-
因为你说“我们将回归机器人”,立刻让人想到《终结者》里的场景——
Sam Altman
(01:32:20)
We will return to work on developing robots. We will not turn ourselves into robots, of course.
我们只是会重新投入到开发机器人的工作中去,当然不会把自己变成机器人。
AGI
Lex Fridman
(01:32:24)
Yeah. When do you think we, you and we as humanity will build AGI?
好的。那么你认为我们——你本人和整个人类——什么时候能构建出 AGI?
Sam Altman
(01:32:31)
I used to love to speculate on that question. I have realized since that I think it’s very poorly formed, and that people use extremely different definitions for what AGI is. So I think it makes more sense to talk about when we’ll build systems that can do capability X or Y or Z, rather than when we fuzzily cross this one mile marker. AGI is also not an ending. It’s closer to a beginning, but it’s much more of a mile marker than either of those things. But what I would say, in the interest of not trying to dodge a question, is I expect that by the end of this decade and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, “Wow, that’s really remarkable.” If we could look at it now. Maybe we’ve adjusted by the time we get there.
过去我很喜欢猜这个问题,但后来发现它本身就提得很模糊,因为大家对 AGI 的定义差异极大。与其说何时跨过一个模糊的里程碑,不如讨论何时能造出具备某项具体能力的系统。AGI 也不是终点,更像是起点,但更准确地说只是一个里程碑。为了不回避问题,我的看法是:到本十年末,也可能更早,我们就会拥有让人惊叹的强大系统——若现在就能一窥,会令我们大呼不可思议;等真正到那时,也许我们已经习以为常了。
Lex Fridman
(01:33:31)
But if you look at ChatGPT, even 3.5, and you show that to Alan Turing, or not even Alan Turing, people in the ’90s, they would be like, “This is definitely AGI.” Well, not definitely, but there’s a lot of experts that would say, “This is AGI.”
可是如果把 ChatGPT——哪怕只是 3.5 版本——展示给艾伦·图灵,或只是展示给 1990 年代的人,他们会说:“这绝对是 AGI。”好吧,也许不是绝对,但许多专家会认为“这就是 AGI”。
Sam Altman
(01:33:49)
Yeah, but I don’t think 3.5 changed the world. It maybe changed the world’s expectations for the future, and that’s actually really important. And it did get more people to take this seriously and put us on this new trajectory. And that’s really important, too. So again, I don’t want to undersell it. I think I could retire after that accomplishment and be pretty happy with my career. But as an artifact, I don’t think we’re going to look back at that and say, “That was a threshold that really changed the world itself.”
没错,但我不认为 3.5 改变了世界。它也许改变了人们对未来的预期——这非常重要;它确实让更多人认真对待这项技术,把全人类带上新轨道——这同样关键。所以我不想贬低它;单凭这一成就我就足以功成身退。但就其本身而言,回望未来时我们不会说“那是彻底改变世界的门槛”。
Lex Fridman
(01:34:20)
So to you, you’re looking for some really major transition in how the world—
也就是说,在你看来,真正的 AGI 应该带来世界运作方式的重大转变——
Sam Altman
(01:34:24)
For me, that’s part of what AGI implies.
对我而言,那正是 AGI 所蕴含的一部分意义。
Lex Fridman
(01:34:29)
Singularity-level transition?
那是“奇点级”的转变吗?
Sam Altman
(01:34:31)
No, definitely not.
不,绝对不是。
Lex Fridman
(01:34:32)
But just a major transition, like the internet, like what Google search did, I guess. What would the transition point be now, do you think?
而是那种重大变迁,类似互联网或谷歌搜索带来的转折。你认为目前的转折点是什么?
Sam Altman
(01:34:39)
Does the global economy feel any different to you now, materially different, than it did before we launched GPT-4? I think you would say no.
在我们发布 GPT-4 之前和之后,你感觉全球经济有本质不同吗?我想你的答案会是否定的。
Lex Fridman
(01:34:47)
No, no. It might be just a really nice tool for a lot of people to use. It will help you with a lot of stuff, but it doesn’t feel different. And you’re saying that—
没有,没有。它只是很多人手上的好工具,能帮忙做很多事,但并未让世界面貌焕然一新。你的意思是——
Sam Altman
(01:34:55)
I mean, again, people define AGI all sorts of different ways. So maybe you have a different definition than I do. But for me, I think that should be part of it.
是的,大家对 AGI 定义各异,也许你的定义和我不同。但在我看来,“改变世界运作方式”应是 AGI 的一部分。
Lex Fridman
(01:35:02)
There could be major theatrical moments, also. What, to you, would be an impressive thing AGI would do? Say you are alone in a room with the system.
也可能出现重大“戏剧性”时刻。对你来说,AGI 做出什么才算令人震撼?假设你独自在房间里与系统对话。
Sam Altman
(01:35:16)
This is personally important to me. I don’t know if this is the right definition. I think when a system can significantly increase the rate of scientific discovery in the world, that’s a huge deal. I believe that most real economic growth comes from scientific and technological progress.
这对我个人意义重大,不知是否合适的定义——当一个系统能显著提升全球科学发现的速度,那就是巨大突破。我认为真正的经济增长大多源于科技进步。
Lex Fridman
(01:35:35)
I agree with you, which is why I don’t like the skepticism about science in recent years.
我同意,所以我不喜欢近年社会对科学的怀疑态度。
Sam Altman
(01:35:42)
Totally.
完全同意。
Lex Fridman
(01:35:43)
But an actual, measurable rate of scientific discovery. Even just seeing a system have really novel intuitions, scientific intuitions, even that would be just incredible.
要有可量化的科学发现速度。哪怕只是看到系统展现全新的科学直觉,也会令人惊叹。
Sam Altman
(01:36:01)
Yeah.
没错。
Lex Fridman
(01:36:02)
You quite possibly would be the person to build the AGI, and to be able to interact with it before anyone else does. What kind of stuff would you talk about?
你很可能会成为最早与 AGI 互动的人。你会和它聊些什么?
Sam Altman
(01:36:09)
I mean, definitely the researchers here will do that before I do. But I’ve actually thought a lot about this question. As we talked about earlier, I think this is a bad framework, but if someone were like, “Okay, Sam, we’re finished. Here’s a laptop, this is the AGI. You can go talk to it,” I find it surprisingly difficult to say what I would ask that I would expect that first AGI to be able to answer. That first one is not, I don’t think, going to be the one where you can say, “Go explain to me the grand unified theory of physics, the theory of everything.” I’d love to ask that question. I’d love to know the answer to that question.
其实本公司研究员肯定会比我更早接触 AGI。但我确实想过:如果有人递给我一台笔记本说“Sam,AGI 做完了,你可以聊聊”,我反而难以确定该问它什么,并且确信它能回答。初代 AGI 或许还不足以解释“万有理论”。我当然想问,也想知道答案。
Lex Fridman
(01:36:55)
You can ask yes-or-no questions about “Does such a theory exist? Can it exist?”
你可以先问“这种理论是否存在?能否存在?”这样的是非题。
Sam Altman
(01:37:00)
Well, then, those are the first questions I would ask.
那的确会是我最先提出的问题。
Lex Fridman
(01:37:02)
Yes or no. And then, based on that, “Are there other alien civilizations out there? Yes or no? What’s your intuition?” And then you just ask that.
对或错。接着再问:“宇宙里有其他外星文明吗?是或不是?你的直觉是什么?”就这样一连串问下去。
Sam Altman
(01:37:10)
Yeah, I mean, well, so I don’t expect that this first AGI could answer any of those questions even as yes-or-nos. But if it could, those would be very high on my list.
是的,但我并不指望第一代 AGI 连这种是非题都能回答。如果它能回答,那肯定是我最想问的。
Lex Fridman
(01:37:20)
Maybe you can start assigning probabilities?
也许可以让它给出概率?
Sam Altman
(01:37:22)
Maybe. Maybe we need to go invent more technology and measure more things first.
也许。但我们可能得先发明更多技术、做更多测量。
Lex Fridman
(01:37:28)
Oh, I see. It just doesn’t have enough data. It’s just if it keeps—
明白了——它只是数据不够,如果继续——
Sam Altman
(01:37:31)
I mean, maybe it says, “You want to know the answer to this question about physics, I need you to build this machine and make these five measurements, and tell me that.”
也许它会说:“要想知道这个物理问题的答案,请先造一台机器,做这五个测量,然后告诉我结果。”
Lex Fridman
(01:37:39)
Yeah, “What the hell do you want from me? I need the machine first, and I’ll help you deal with the data from that machine.” Maybe it’ll help you build a machine.
没错,“你到底想让我干啥?得先有机器,我再帮你处理数据。”也许它还能帮你设计那台机器。
Sam Altman
(01:37:47)
Maybe. Maybe.
也许,可能吧。
Lex Fridman
(01:37:49)
And on the mathematical side, maybe prove some things. Are you interested in that side of things, too? The formalized exploration of ideas?
数学方面呢?也许能证明一些命题。你也对这种形式化探索感兴趣吗?
Sam Altman
(01:37:56)
Mm-hmm.
嗯哼。
Lex Fridman
(01:37:59)
Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?
谁若最先造出 AGI 就将握有巨大权力。你相信自己能驾驭那样的权力吗?
Sam Altman
(01:38:14)
Look, I’ll just be very honest with this answer. I was going to say, and I still believe this, that it is important that neither I nor any other one person have total control over OpenAI or over AGI. And I think you want a robust governance system. I can point out a whole bunch of things about all of our board drama from last year, about how I didn’t fight it initially, and was just like, “Yeah, that’s the will of the board, even though I think it’s a really bad decision.” And then later, I clearly did fight it, and I can explain the nuance and why I think it was okay for me to fight it later. But as many people have observed, although the board had the legal ability to fire me, in practice, it didn’t quite work. And that is its own kind of governance failure.
我得坦诚回答。我始终认为,OpenAI 或 AGI 都不应由我或任何单个人完全掌控,必须有健全的治理体系。去年董事会风波里,我一开始没有反抗,只是说“这是董事会意志,尽管我认为这是严重错误”。后来我确实反击了,并能解释其中细微差别以及为何那时反击合理。但正如许多人指出的,虽然董事会在法律上有权解雇我,可实践中并未成功——这本身就是一种治理失灵。
Sam Altman
(01:39:24)
Now again, I feel like I can completely defend the specifics here, and I think most people would agree with that, but it does make it harder for me to look you in the eye and say, “Hey, the board can just fire me.” I continue to not want super-voting control over OpenAI. I never have. Never have had it, never wanted it. Even after all this craziness, I still don’t want it. I continue to think that no company should be making these decisions, and that we really need governments to put rules of the road in place.
当然,我可以为这些细节辩护,大多数人也会认可。但这确实使我很难再直视你说“董事会随时能解雇我”。我依旧不想要 OpenAI 的超级投票权,从未拥有,也从未想要。即便经历了这一切,我仍不想要。我依然认为,不应由任何公司独自作出这类决定,政府必须制定规则。
Sam Altman
(01:40:12)
And I realize that that means people like Marc Andreessen or whatever will claim I’m going for regulatory capture, and I’m just willing to be misunderstood there. It’s not true. And I think in the fullness of time, it’ll get proven out why this is important. But I think I have made plenty of bad decisions for OpenAI along the way, and a lot of good ones, and I’m proud of the track record overall. But I don’t think any one person should, and I don’t think any one person will. I think it’s just too big of a thing now, and it’s happening throughout society in a good and healthy way. But I don’t think any one person should be in control of an AGI, or this whole movement towards AGI. And I don’t think that’s what’s happening.
我知道这会让 Marc Andreessen 之类的人指责我想搞“监管俘获”,对此我愿意被误解——事实并非如此,时间会证明为何这很重要。一路上我为 OpenAI 做过不少错误决定,也有很多正确决策,我为整体成绩感到自豪。但我认为、也相信不会有任何单个人掌控一切。AGI 已经太宏大,正在全社会健康地推进;任何个人都不该、也不会掌控 AGI 或 AGI 运动,我也不认为这种掌控正在发生。
Lex Fridman
(01:41:00)
Thank you for saying that. That was really powerful, and it was really insightful: this idea that the board can fire you is legally true, but human beings can manipulate the masses into overriding the board, and so on. But I think there’s also a much more positive version of that, where the people still have power, so the board can’t be too powerful, either. There’s a balance of power in all of this.
谢谢你的坦诚,这很有力量。你提到“董事会能解雇你”在法律上成立,但人类可以煽动群众反制董事会;不过也有更积极的版本:公众依旧拥有权力,董事会也不能过于强势——这其中需要权力平衡。
Sam Altman
(01:41:29)
Balance of power is a good thing, for sure.
权力平衡当然是好事。
Lex Fridman
(01:41:34)
Are you afraid of losing control of the AGI itself? That’s a lot of people who are worried about existential risk not because of state actors, not because of security concerns, because of the AI itself.
你是否担心失去对 AGI 本身的控制?很多人害怕的生存风险并非来自国家或安全问题,而是 AI 本身。
Sam Altman
(01:41:45)
That is not my top worry as I currently see things. There have been times I worried about that more. There may be times again in the future where that’s my top worry. It’s not my top worry right now.
以我目前的视角看,这不是我最担心的。过去有时我更担忧这点,将来或许还会,但眼下这不是首要担忧。
Lex Fridman
(01:41:53)
What’s your intuition about it not being your top worry? Is it because there’s a lot of other stuff to worry about, essentially? You think you could be surprised? We—
为什么它不是你最担心的?因为还有太多其他问题?你觉得可能会被打脸吗?我们——
Sam Altman
(01:42:02)
For sure.
当然可能。
Lex Fridman
(01:42:02)
… could be surprised?
……可能会被打脸?
Sam Altman
(01:42:03)
Of course. Saying it’s not my top worry doesn’t mean I don’t think we need to. I think we need to work on it. It’s super hard, and we have great people here who do work on that. I think there’s a lot of other things we also have to get right.
当然。说它不是我最担心的,并不代表可以忽视。我们必须努力解决,这是极其艰难的任务,所幸这里有优秀团队在做。同时还有许多其他关键问题需要我们搞定。
Lex Fridman
(01:42:15)
To you, it’s not super-easy for it to escape the box at this time, to connect to the internet—
对你来说,AGI 眼下要“越狱”、连网并不容易?
Sam Altman
(01:42:21)
We talked about theatrical risks earlier. That’s a theatrical risk. That is a thing that can really take over how people think about this problem. And there’s a big group of very smart, I think very well-meaning AI safety researchers that got super-hung up on this one problem, I’d argue without much progress. I’m actually happy that they do that, because I think we do need to think about this more. But I think it pushed a lot of the other very significant AI-related risks out of the space of discourse.
我们之前谈到“戏剧化风险”,这就是其中之一:AGI 越狱、联网会主导公众思考。有一大批聪明、且出于好意的 AI 安全研究者过度执着于这一问题,我认为进展不大,却沉迷其中。我倒乐见其成,因为确实需要更多思考。但这也挤压了其他同样重要的 AI 风险讨论空间。
Lex Fridman
(01:43:01)
Let me ask you about you tweeting with no capitalization. Is the shift key broken on your keyboard?
再问一个轻松点的:你发推时从不用大写,是键盘的 Shift 键坏了?
Sam Altman
(01:43:07)
Why does anyone care about that?
为什么大家都关心这个?
Lex Fridman
(01:43:09)
I deeply care.
我就很在意。
Sam Altman
(01:43:10)
But why? I mean, other people ask me about that, too. Any intuition?
可为什么?不少人问过我。有什么直觉解释吗?
Lex Fridman
(01:43:17)
I think it’s the same reason. There’s this poet, E.E. Cummings, who mostly doesn’t use capitalization, as a kind of “fuck you” to the system. And I think people are very paranoid, because they want you to follow the rules.
我想原因类似。诗人 E. E. 卡明斯就几乎不用大写,以此对规则说“去你的”。人们偏执地希望你遵守规则。
Sam Altman
(01:43:29)
You think that’s what it’s about?
你觉得我也是这种心态?
Lex Fridman
(01:43:30)
I think it’s like this—
差不多吧——
Sam Altman
(01:43:33)
It’s like, “This guy doesn’t follow the rules. He doesn’t capitalize his tweets.”
“这家伙不守规矩,连推文都不大写。”
Lex Fridman
(01:43:35)
Yeah.
对。
Sam Altman
(01:43:36)
“This seems really dangerous.”
“他看起来很危险。”
Lex Fridman
(01:43:37)
“He seems like an anarchist.”
“像个无政府主义者。”
Sam Altman
(01:43:39)
That doesn’t—
这倒——
Lex Fridman
(01:43:40)
Are you just being poetic, hipster? What’s the—
你只是想走文艺/嬉皮路线?还是——
Sam Altman
(01:43:44)
I grew up as—
我从小——
Lex Fridman
(01:43:44)
Follow the rules, Sam.
守规矩点,Sam。
Sam Altman
(01:43:45)
I grew up as a very online kid. I’d spent a huge amount of time chatting with people back in the days where you did it on a computer, and you could log off instant messenger at some point. And I never capitalized there, as I think most internet kids didn’t, or maybe they still don’t. I don’t know. And actually, now I’m really trying to reach for something, but I think capitalization has gone down over time. If you read old English writing, they capitalized a lot of random words in the middle of sentences, nouns and stuff that we just don’t do anymore. I personally think it’s sort of a dumb construct that we capitalize the letter at the beginning of a sentence and of certain names and whatever, but that’s fine.
我从小就是“网瘾少年”,花很多时间在线聊天;那个年代大家用电脑上即时通讯软件,还会下线。我聊天从不大写,大多数网民也一样,也许现在依然如此。其实随着时间推移,大写的使用一直在减少:看看旧时的英文作品,句中连普通名词都常常大写,而我们如今早已不这么做。我个人觉得句首和某些专名的首字母大写是个有点愚蠢的惯例,但也无妨。
Sam Altman
(01:44:33)
And then I used to, I think, even capitalize my tweets because I was trying to sound professional or something. I haven’t capitalized my private DMs or whatever in a long time. And then shorter-form, less formal stuff has slowly drifted closer and closer to how I would text my friends. If I pull up a Word document and I’m writing a strategy memo for the company or something, I always capitalize that. If I’m writing a long, more formal message, I always use capitalization there, too. So I still remember how to do it. But even that may fade out. I don’t know. But I never spend time thinking about this, so I don’t have a ready-made—
以前我发推也会大写,为了看起来更专业;但私人消息早就不大写。慢慢地,推文这种短而随意的形式就越来越像给朋友发短信。如果我打开 Word 写公司战略备忘录,我会用大写;写正式长文也会大写——我当然还会用。但也许哪天连那都淡化了。我从没花时间想过这事,所以讲不出成型说法。
Lex Fridman
(01:45:23)
Well, it’s interesting. It’s good to, first of all, know the shift key is not broken.
也挺有趣。至少确定了 Shift 键没坏。
Sam Altman
(01:45:27)
It works.
它好好的。
Lex Fridman
(01:45:27)
I was mostly concerned about your—
我主要担心你——
Sam Altman
(01:45:27)
No, it works.
放心,能用。
Lex Fridman
(01:45:29)
… well-being on that front.
……的键盘健康。
Sam Altman
(01:45:30)
I wonder if people still capitalize their Google searches, or their ChatGPT queries. If you’re writing something just to yourself, do some people still bother to capitalize?
我倒在想,大家谷歌搜索还会大写吗?如果只是给自己写点东西,比如 ChatGPT 查询,还会有人刻意大写吗?
Lex Fridman
(01:45:40)
Probably not. But yeah, there’s a percentage, but it’s a small one.
大概不会吧,可能有,但比例很小。
Sam Altman
(01:45:44)
The thing that would make me do it is if people were like, “It’s a sign of…” Because I’m sure I could force myself to use capital letters, obviously. If it felt like a sign of respect to people or something, then I could go do it. But I don’t know. I don’t think about this.
除非别人认为“不大写代表不尊重”,否则我当然可以强迫自己大写。若那被视作尊重,我也能照做。但说真的,我没把这当回事。
Lex Fridman
(01:46:01)
I don’t think there’s a disrespect, but I think it’s just the conventions of civility that have a momentum, and then you realize it’s not actually important for civility if it’s not a sign of respect or disrespect. But I think there’s a movement of people that just want you to have a philosophy around it so they can let go of this whole capitalization thing.
我并不觉得那是不尊重,而只是礼仪惯例本身有惯性;后来你会发现,如果大小写并不能表达尊重或不尊重,那它对文明礼仪其实并不重要。不过,人们似乎希望你为此阐明一套“哲学”,好让他们彻底放下对大小写的执念。
Sam Altman
(01:46:19)
I don’t think anybody else thinks about this as much. I mean, maybe some people. I know some people—
我觉得没几个人会像你这么在意。也许有,但我知道有人——
Lex Fridman
(01:46:22)
People think about it every day for many hours a day. So I’m really grateful we clarified it.
有人每天都花好几个小时琢磨这个,所以我很感激我们把它讲清楚了。
Sam Altman
(01:46:28)
Can’t be the only person that doesn’t capitalize tweets.
肯定不止我一个人在推文里不用大写。
Lex Fridman
(01:46:30)
You’re the only CEO of a company that doesn’t capitalize tweets.
但你可能是唯一不用大写的 CEO。
Sam Altman
(01:46:34)
I don’t even think that’s true, but maybe. I’d be very surprised.
我觉得也未必,不过也可能吧。我会很惊讶。
Lex Fridman
(01:46:37)
All right. We’ll investigate further and return to this topic later. Given Sora’s ability to generate simulated worlds, let me ask you a pothead question. Does this increase your belief, if you ever had one, that we live in a simulation, maybe a simulated world generated by an AI system?
行吧,我们以后再深挖这个话题。既然 Sora 能生成模拟世界,我问个“烧脑”问题:这是否让你更相信“我们可能生活在 AI 生成的模拟世界”这种模拟宇宙假说?
Sam Altman
(01:47:05)
Somewhat. I don’t think that’s the strongest piece of evidence. I think the fact that we can generate worlds should increase everyone’s probability somewhat, or at least openness to it somewhat. But I was certain we would be able to do something like Sora at some point. It happened faster than I thought, but I guess that was not a big update.
有一点。但我不认为这算最有力的证据。我们能生成世界,确实应该让每个人对“模拟宇宙”的概率稍微提高、或者更开放。不过我早就确信迟早会出现类似 Sora 的东西,只是来得比我预期快,所以这并非巨大更新。
Lex Fridman
(01:47:34)
Yeah. But the fact that you can generate worlds that are novel, and presumably it’ll get better and better and better. They’re based in some aspect of the training data, but when you look at them, they’re novel. That makes you think how easy it is to do this thing, how easy it is to create universes, entire video game worlds that seem ultra-realistic and photo-realistic. And then how easy is it to get lost in that world, first with a VR headset, and then on the physics-based level?
是啊,而且它显然会越来越好——你可以生成全新的世界,虽然训练数据里有原型,但呈现出来的依旧新颖。这让人意识到:创造一个宇宙、一个极度逼真的游戏世界,竟如此容易。而且人类很可能先戴上 VR 头显就沉浸其中,接着在更物理层面也迷失进去。
Sam Altman
(01:48:10)
Someone said to me recently, and I thought it was a super-profound insight, that there are these very simple-sounding but very psychedelic insights that exist sometimes. So the square root function: square root of four, no problem. Square root of two, okay, now I have to think about this new kind of number. But once I have this easy idea of a square root function, which you can explain to a child and which you can see exists just by looking at some simple geometry, then you can ask the question, “What is the square root of negative one?” And this is why it’s a psychedelic thing. That tips you into some whole other kind of reality.
最近有人跟我说了个很深刻的比喻:有些听起来简单却颇“迷幻”的洞见。比如平方根:√4 没问题;√2 迫使你接受一种新数;而一旦理解了“平方根”这个简单概念(连孩子都能懂,也能在几何里看到),你就会问“√(-1) 是什么?” 这瞬间把你带入另一种全新现实,所以才“迷幻”。
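(A worked sketch of that progression, for reference: √4 = 2 stays inside the integers; √2 = 1.41421… forces a new kind of number, the irrationals; and √(−1) = i tips you into the complex numbers, a genuinely new realm in which, for example, e^(iπ) + 1 = 0.)
(把这一递进写出来,供参考:√4 = 2,仍在整数之内;√2 = 1.41421……,迫使我们接受一种新的数,即无理数;而 √(−1) = i 则把我们带入复数这一全新领域,例如其中有 e^(iπ) + 1 = 0。)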
Sam Altman
(01:49:07)
And you can come up with lots of other examples, but I think this idea that the lowly square root operator can offer such a profound insight and a new realm of knowledge applies in a lot of ways. And I think there are a lot of those operators for why people may think that any version that they like of the simulation hypothesis is maybe more likely than they thought before. But for me, the fact that Sora worked is not in the top five.
类似例子很多:一个看似普通的“操作符”却开启全新认知领域。同理,也有很多“操作符”会让人觉得模拟宇宙假说比之前更可信。但对我而言,Sora 的成功并不排进前五大证据。
Lex Fridman
(01:49:46)
I do think, broadly speaking, AI at its best will serve as those kinds of gateways: simple, psychedelic-like gateways to another way of seeing reality.
我倒觉得,广义上讲,AI 在最好的状态下会成为这类“入口”:简单、近乎迷幻的入口,引向观看现实的另一种方式。
Sam Altman
(01:49:57)
That seems for certain.
这几乎是肯定的。
Lex Fridman
(01:49:59)
That’s pretty exciting. I haven’t done ayahuasca before, but I will soon. I’m going to the aforementioned Amazon jungle in a few weeks.
这挺让人兴奋。我还没尝过死藤水,但很快就会——几周后我就要去亚马逊雨林。
Sam Altman
(01:50:07)
Excited?
激动吗?
Lex Fridman
(01:50:08)
Yeah, I’m excited for it. Not the ayahuasca part, but that’s great, whatever. But I’m going to spend several weeks in the jungle, deep in the jungle. And it’s exciting, but it’s terrifying.
挺激动的。不是为了死藤水,本身也不错。我将在雨林深处待几周,既兴奋又害怕。
Sam Altman
(01:50:17)
I’m excited for you.
替你高兴。
Lex Fridman
(01:50:18)
There’s a lot of things that can eat you there, kill you, and poison you, but it’s also nature, and it’s the machine of nature. You can’t help but appreciate the machinery of nature in the Amazon jungle. It’s this system that just exists and renews itself every second, every minute, every hour. It’s the machine. It makes you appreciate that this thing we have here, this human thing, came from somewhere. This evolutionary machine has created that, and it’s most clearly on display in the jungle. So hopefully, I’ll make it out alive. If not, this will be the last fun conversation we’ve had, so I really deeply appreciate it. Do you think, as I mentioned before, there are other alien civilizations out there, intelligent ones, when you look up at the skies?
雨林里有很多能吃掉你、毒死你的东西,但那也是大自然——一种自我运行的机器。在亚马逊,你会由衷敬畏自然机器:每一秒、每一分、每一小时都在自我更新。它让你意识到,我们人类的存在来自某处,是进化机器塑造的——在雨林里最能体会到这一点。希望我能活着回来,否则这就是我们最后一次愉快交谈了,非常感谢。对了,你认为夜空中真的有其他智慧外星文明吗?
Aliens
Sam Altman
(01:51:17)
I deeply want to believe that the answer is yes. I find the Fermi paradox very puzzling.
我非常希望答案是肯定的,但费米悖论让我十分困惑。
Lex Fridman
(01:51:28)
I find it scary that intelligence is not good at handling-
让我感到可怕的是,高级智慧并不擅长应对——
Sam Altman
(01:51:34)
Very scary.
确实可怕。
Lex Fridman
(01:51:34)
… powerful technologies. But at the same time, I think I’m pretty confident that there’s just a very large number of intelligent alien civilizations out there. It might just be really difficult to travel through space.
……强大的技术。不过与此同时,我相当确信宇宙中有大量智慧外星文明,只是跨越太空旅行非常困难。
Sam Altman
(01:51:47)
Very possible.
很有可能。
Lex Fridman
(01:51:50)
And it also makes me think about the nature of intelligence. Maybe we’re really blind to what intelligence looks like, and maybe AI will help us see that. It’s not as simple as IQ tests and simple puzzle solving. There’s something bigger. What gives you hope about the future of humanity, this thing we’ve got going on, this human civilization?
这也让我思考“智慧”的本质。也许我们根本不懂智慧的真正样子,也许 AI 能帮我们看见。它不只是智商测试或解谜那么简单,还有更宏大的东西。关于人类文明的未来,是什么让你充满希望?
Sam Altman
(01:52:12)
I think the past is a lot of it. I mean, just look at what humanity has done in a not very long period of time: huge problems, deep flaws, lots to be super-ashamed of. But on the whole, very inspiring. Gives me a lot of hope.
我想看看过去就够了。在并不漫长的时间里,人类完成了那么多事——虽然也有巨大问题、严重缺陷、许多值得羞愧的地方,但整体上仍令人振奋,这让我充满希望。
Lex Fridman
(01:52:29)
Just the trajectory of it all.
就是那条整体上升的轨迹。
Sam Altman
(01:52:30)
Yeah.
没错。
Lex Fridman
(01:52:31)
That we’re together pushing towards a better future.
我们正一起迈向更好的未来。
Sam Altman
(01:52:40)
One thing that I wonder about is whether AGI is going to be more like some single brain, or more like the scaffolding in society between all of us. You have not had a great deal of genetic drift from your great-great-great-grandparents, and yet what you’re capable of is dramatically different. What you know is dramatically different. And that’s not because of biological change. I mean, you got a little bit healthier, probably. You have modern medicine, you eat better, whatever. But what you have is this scaffolding that we all contributed to and build on top of. No one person is going to go build the iPhone. No one person is going to go discover all of science, and yet you get to use it. And that gives you incredible ability. And so in some sense, we all created that, and that fills me with hope for the future. It was a very collective thing.
我常想,AGI 会更像一个单一“大脑”,还是更像把我们连结起来的社会脚手架?和你的曾曾曾祖父母相比,你的基因几乎没什么变化,但你的能力和知识却天差地别。这并非生物层面的进化——也许你更健康,有现代医学、营养更好——真正改变的是大家共同搭建的知识脚手架。没有任何个人能独自造出 iPhone、独自发现全部科学,但你却能直接使用这些成果,获得惊人能力。从某种意义上说,这种集体创造让我对未来充满希望。
Lex Fridman
(01:53:40)
Yeah, we really are standing on the shoulders of giants. You mentioned when we were talking about theatrical, dramatic AI risks that sometimes you might be afraid for your own life. Do you think about your death? Are you afraid of it?
是的,我们的确站在巨人肩膀上。你曾提到“戏剧化”的 AI 风险,有时甚至担心自身安全。你会思考死亡吗?你害怕死吗?
Sam Altman
(01:53:58)
I mean, if I got shot tomorrow and I knew it today, I’d be like, “Oh, that’s sad. I want to see what’s going to happen. What a curious time. What an interesting time.” But I would mostly just feel very grateful for my life.
如果我得知明天会被枪杀,我会觉得遗憾——我想看看未来会发生什么,这真是奇妙又有趣的时代。但总体上,我会深深感激自己所拥有的生命。
Lex Fridman
(01:54:15)
The moments that you did get. Yeah, me, too. It’s a pretty awesome life. I get to enjoy awesome creations of humans, which I believe ChatGPT is one of, and everything that OpenAI is doing. Sam, it’s really an honor and pleasure to talk to you again.
珍惜已拥有的时刻。我也是。生活太棒了——我能享受人类的伟大创造,我认为 ChatGPT 就是其中之一,还有 OpenAI 的一切。Sam,再次与你交谈真的很荣幸、很愉快。
Sam Altman
(01:54:35)
Great to talk to you. Thank you for having me.
很高兴与你交谈,感谢邀请。
Lex Fridman
(01:54:38)
Thanks for listening to this conversation with Sam Altman. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Arthur C. Clarke. “It may be that our role on this planet is not to worship God, but to create him.” Thank you for listening, and hope to see you next time.
感谢大家收听我与 Sam Altman 的对话。若想支持本播客,请查看简介中的赞助商信息。最后引用阿瑟·克拉克的一句话与诸位共勉:“或许我们在这个星球上的使命,不是去崇拜上帝,而是创造上帝。”感谢收听,期待下次再见。