Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.
我们的使命是确保通用人工智能——即通常比人类更聪明的AI系统——能造福全人类。
If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.
如果通用人工智能能够成功被创造出来,这项技术将帮助我们提升人类整体水平:增加资源丰裕度、为全球经济注入强劲动力,并推动新的科学知识发现,从而突破可能性的极限。
AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.
通用人工智能有望为每个人赋予令人惊叹的新能力;我们可以想象这样一个世界:人们几乎能在任何认知任务上获得帮助,为人类的灵巧思维和创造力提供巨大的乘数效应。
On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.
另一方面,通用人工智能也面临被滥用、发生严重事故以及造成社会动荡的风险。鉴于通用人工智能潜在的巨大益处,我们并不认为社会可以或应该永远阻止它的发展;相反,社会以及通用人工智能的研发者必须找到正确的发展方式。
Although we cannot predict exactly what will happen, and of course our current progress could hit a wall, we can articulate the principles we care about most:
尽管我们无法准确预测会发生什么,而且我们目前的进展也有可能遭遇瓶颈,但我们可以阐明我们最为关心的原则:
We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity.
我们希望通用人工智能能够让人类在宇宙中最大限度地繁荣发展。我们并不认为未来会是一个不折不扣的乌托邦,但我们希望在其中最大化正面影响、最小化负面影响,让通用人工智能成为人类的“增幅器”。
We want the benefits of, access to, and governance of AGI to be widely and fairly shared.
我们希望通用人工智能的益处、使用渠道以及治理都能得到广泛而公平的共享。
We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.
我们希望能够成功应对巨大的风险。在面对这些风险时,我们也承认,理论上看似正确的事情在实践中往往会以出人意料的方式发展。我们认为必须通过部署功能相对受限的技术版本,不断学习和适应,从而最大程度地减少“只有一次机会把事情做对”的情形。
The short term
短期展望
There are several things we think are important to do now to prepare for AGI.
我们认为,为迎接通用人工智能(AGI),目前有几项工作至关重要。
First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.
首先,随着我们创建出越来越强大的系统,我们希望在现实环境中进行部署并积累运营经验。我们相信,这是在确保通用人工智能安全出现的最佳途径——与其突然进入AGI时代,不如让世界逐步过渡到拥有AGI的状态。我们预计强大的AI将显著加快世界的进步步伐,而循序渐进的调整方式更为适宜。
A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.
循序渐进的过渡为公众、政策制定者和各类机构提供了时间来了解正在发生的变化,并亲身体验这些系统所带来的利弊,从而为调整经济和出台监管提供条件。此外,这种方式也允许社会和AI在相对风险较低的情况下协同进化,让人们有机会共同思考自己所需要的东西。
We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration. Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more. The optimal decisions will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far. This makes planning in a vacuum very difficult.
我们目前认为,要成功应对AI部署所带来的挑战,需要快速学习与谨慎迭代相结合的紧密反馈循环。社会将面临一些重大问题,如允许AI系统执行哪些任务、如何对抗偏见以及如何应对工作被替代等。由于最优决策取决于技术的实际发展路径,加之新兴领域往往无法准确预测未来,这使得人们很难在“真空”环境中制定周全的计划。
Generally speaking, we think more usage of AI in the world will lead to good, and want to promote it (by putting models in our API, open-sourcing them, etc.). We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.
总体而言,我们相信在现实世界中更多地使用AI能带来正面效益,并希望通过在API中提供模型、开放源代码等方式予以推广。我们相信,AI的民主化使用也会带来更多且更优质的研究成果,分散权力,使益处更加广泛,并为更多人贡献新思路创造条件。
As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential(opens in a new window).
随着我们的系统越来越接近通用人工智能,我们对模型的创建和部署将愈发谨慎。我们的决策所需的慎重程度将远高于社会对一般新技术的要求,也可能高于许多用户的期望。AI领域中有些人认为AGI(以及其后继系统)的风险是虚构的;如果他们最终被证明是对的,我们会非常欣喜,但在此之前,我们将把这些风险视为生存威胁般地对待。
At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.
未来某个时刻,部署AGI所带来的收益和风险之间的平衡(例如为恶意行为者提供能力、引发社会或经济动荡、加速不安全的竞争等)可能会发生改变。如果出现这样的情况,我们可能会对持续部署的规划做出重大调整。
“As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models.”
“随着我们的系统越来越接近通用人工智能,我们在模型的创建和部署上变得愈发谨慎。”
Second, we are working towards creating increasingly aligned and steerable models. Our shift from models like the first version of GPT-3 to InstructGPT and ChatGPT(opens in a new window) is an early example of this.
第二,我们正在努力打造与人类价值观更加对齐并可定向的模型。从最初版本的 GPT-3 过渡到 InstructGPT 和 ChatGPT(新窗口打开)就是一个早期示例。
In particular, we think it’s important that society agree on extremely wide bounds of how AI can be used, but that within those bounds, individual users have a lot of discretion. Our eventual hope is that the institutions of the world agree on what these wide bounds should be; in the shorter term we plan to run experiments for external input. The institutions of the world will need to be strengthened with additional capabilities and experience to be prepared for complex decisions about AGI.
尤其值得强调的是,我们认为社会应就 AI 的使用范围达成极其宽泛的共识,但在这些共识所限定的范围内,个人用户应拥有高度的自主权。我们最终希望世界各国的机构能共同商定这些广泛边界的具体内容;在短期内,我们计划开展一些实验以获得外部意见。全球各机构还需要加强相应能力和经验,以应对有关 AGI 的复杂决策。
The “default setting” of our products will likely be quite constrained, but we plan to make it easy for users to change the behavior of the AI they’re using. We believe in empowering individuals to make their own decisions and the inherent power of diversity of ideas.
我们的产品“默认设置”可能会相当受限,但我们计划让用户能够轻松地调整所使用 AI 的行为。我们相信应赋予个人作出自主决定的权利,并相信多元化观点所蕴含的力量。
We will need to develop new alignment techniques as our models become more powerful (and tests to understand when our current techniques are failing). Our plan in the shorter term is to use AI to help humans evaluate the outputs of more complex models and monitor complex systems, and in the longer term to use AI to help us come up with new ideas for better alignment techniques.
随着我们的模型日益强大,我们需要开发新的对齐技术(以及相应的测试手段,以识别当前技术失效的时机)。在短期内,我们计划借助 AI 帮助人类评估更复杂模型的输出并监控复杂系统;从长远来看,我们希望利用 AI 帮助我们提出改进对齐技术的新思路。
Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.
值得注意的是,我们认为 AI 安全与 AI 能力往往需要同步推进。将两者割裂开来是一个错误的二分法;两者在许多方面密切相关。我们最出色的安全研究成果往往来自与最强大的模型共同进行的工作。但与此同时,确保安全进展与能力进展的比率不断提升同样至关重要。
Third, we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access.
第三,我们希望就以下三个关键问题在全球范围内展开讨论:如何对这些系统进行治理,如何公平分配它们所创造的收益,以及如何公平地提供使用途径。
In addition to these three areas, we have attempted to set up our structure in a way that aligns our incentives with a good outcome. We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.
除了上述三个方面,我们还尝试通过组织架构设计来确保我们的激励机制与理想结果相一致。我们的章程中有一条规定,要求我们在 AGI 研发的后期阶段应协助其他机构提高安全性,而不是与之展开竞赛。我们对股东可获得的回报设置了上限,以防止我们出于无限追逐价值的动机,而冒险部署潜在毁灭性危险的技术(当然,这也是与社会共享收益的方式)。我们还设立了一个非营利机构来监管我们,确保我们以造福人类为目标运营(并可凌驾于任何营利性利益之上),包括在必要时为安全而免除对股东的股权义务,以及资助全球最全面的全民基本收入实验。