2026-05-13 Alex Wang.Meta's AI Chief On AI Beef, New Models And Life With Zuck


Meta Superintelligence Labs Structure and Strategic Compute Advantage
Meta Superintelligence Labs (MSL) operates through a specialized structure comprising TBD for large model research, Product and Applied Research (PAR) for deployment, and FAIR for exploratory science. The organization prioritizes long-term infrastructure planning, specifically GPU and data center capacity, to support the development of frontier models. Wang's decision to join Meta was driven by the company's massive compute resources and a clear, bold commitment to achieving superintelligence. This compute-centric strategy creates a significant barrier to entry: companies with vast infrastructure can execute research and product deployments that are impossible for those without such resources.

Speaker 2: 
(00:00) 
All right, Kylie, we've got another big guest this week.

Unknown Speaker: 
(00:03) 
Huge.

Speaker 2: 
(00:04) 
Alex Wang, the chief of Meta's artificial intelligence efforts. It was how many months ago?

Kylie Robison: 
(00:12) 
About 10 months ago.

Speaker 2: 
(00:13) 
About 10 months ago, he was the co-founder and CEO of Scale AI. Meta sort of quasi-acquired the company, half-acquired the company, and fully acquired Alex. He's been in the AI protection program.

Kylie Robison: 
(00:33) 
Yes, we haven't seen much of him whatsoever until today on the Core Memory Pod.

Speaker 2: 
(00:38) 
Yes, I'm not sure exactly why this happened, but here he is. He's gonna tell us. Well, we'll find out. He's gonna hopefully tell us about it. They just released a new model. I'm sure we will get into some of that. And then there's a little bit of a mystery to me about, like, where they are philosophically on AI. Alex has always been a bit of a... when he was at Scale, they always said they were Switzerland, and he was loud on some things, but not always AI itself and how he feels about it.

Kylie Robison: 
(01:13) 
And they made a lot of news last year with everyone they hired for millions and millions and millions of dollars and built up this team and everyone's been waiting to see what they're going to do with all of these resources.

Speaker 2: 
(01:25) 
So, we will talk about recruiting soup and all these millions of dollars. So, this is it. This is Alex Wang. He's emerged.

Kylie Robison: 
(01:36) 
Hell yeah, brother.

Speaker 2: 
(01:37) 
I am Ashlee Vance.

Kylie Robison: 
(01:39) 
And I'm Kylie Robison.

Speaker 2: 
(01:40) 
And this is Core Memory. Alex, thank you for being here.

Alex Wang: 
(02:02) 
Yeah, excited to be here.

Speaker 2: 
(02:03) 
I feel like we were, we kind of texted a little bit pre-meta happenings. We had like this kind of country music, odd text chain going. And then I'm also, I've known Nat Friedman for a long time. And then I feel like the two of you disappeared.

Alex Wang: 
(02:28) 
Went into the foxhole.

Speaker 2: 
(02:28) 
Yeah, we're very, very quiet there. And now here you are with a new model.

Alex Wang: 
(02:33) 
Yeah.

Speaker 2: 
(02:34) 
Emerging.

Alex Wang: 
(02:35) 
Yeah.

Speaker 2: 
(02:35) 
But you guys went very quiet for a bit.

Alex Wang: 
(02:38) 
Yeah, we had a lot of work to do. I mean, I think... It turns out building a frontier model from scratch in nine months is, yeah,  it takes a lot of painstaking effort. But yeah, I mean, it's been really exciting to see everyone use MuseSpark, you know,  the model we released, and we have better models cooking, so it's exciting.

Speaker 2: 
(03:00) 
And so, I mean, you were a San Franciscan, I guess. And then, I mean, I assume you work at Menlo Park.

Alex Wang: 
(03:10) 
Yeah.

Speaker 2: 
(03:10) 
So were you, have you had to like...

Alex Wang: 
(03:13) 
I moved down to South Bay, yeah. Did you?

Kylie Robison: 
(03:15) 
Wow.

Alex Wang: 
(03:15) 
Yeah, I did.

Speaker 2: 
(03:16) 
So you're full on committed.

Alex Wang: 
(03:18) 
I'm full on, yeah. For me, the city now is Palo Alto.

Speaker 2: 
(03:21) 
Okay.

Alex Wang: 
(03:21) 
Walk on University Ave, get a boba.

Speaker 2: 
(03:23) 
Okay, I was wondering about this. What's, like, the arrangement between, I mean, the people I know best are you, Nat, Daniel Gross. I've only hung out with Zuck once or twice. Yeah, I mean, I'm trying to paint a picture of how you guys are arranged.

Alex Wang: 
(03:41) 
Yeah. So, um, basically the whole unit is called Meta Superintelligence Labs, which I oversee, and then there's various pieces of it. So there's a unit called TBD, which is the sort of large model research lab that I think, you know, is somewhat infamous, but that's where a lot of the leading researchers and infrastructure engineers are. They actually all technically report to me. So that's one setup. Then there's also a group called Product and Applied Research, or PAR for short. That's what Nat Friedman heads up. So they're responsible for all the products that we build and the actual deployment of these great models to the world.

(04:31) 
And then also within the overall Meta Superintelligence Labs umbrella is FAIR, which continues to do exploratory and exciting research. I'm particularly excited about a lot of their scientific research. You know, we've shown some pretty great work on using AI models to understand the brain, as well as using AI models to understand computational chemistry. We have built a universal model for atoms, UMA for short. And so those pieces constitute Meta Superintelligence Labs, which I oversee, in addition to having a very hands-on role with TBD Lab. And then Daniel Gross helps lead Meta Compute,

(05:14) 
which is really focused on our long-term infrastructure planning to ensure that, you know,  we obviously can build up all of the GPU infrastructure and data center infrastructure necessary for this very bold endeavor. And so he heads that up and partners closely with us.

Speaker 2: 
(05:31) 
Who did you know the best out of that group before you got into this?

Alex Wang: 
(05:37) 
I've known Nat and Daniel actually for a long time. Nat was one of my very first angel investors at Scale. Before I completed YC, Nat had invested in Scale and had given me advice throughout the years. Daniel, I think I also met around that time, very, very early on, and I've gotten to know them through the years. And then we also have our chief scientist, Shengjia, who helps oversee the scientific agenda across all of MSL. He's somebody who I knew before starting at MSL, but since he's come in, we've gotten a lot closer.

Kylie Robison: 
(06:22) 
So I'm really curious, like taking a huge step back. It's been 10 months since you kind of went into hiding your company, you know, completely changed. And now you're at Meta. What was that experience like? How was the deal made? Like, how did you end up going to Tahoe and talking with Zuck? Can you just walk us through what that first meeting was like?

Alex Wang: 
(06:44) 
I've known Mark for many years now. Even while I was running Scale, he was very generous with his time and I was able to get a bunch of advice from him. He obviously is just such an experienced founder. The founder at this point in some sense. And so we've known each other for many years. And we'd actually talked about AI before a lot of this craze because, you know,  Scale, obviously, we've been working on AI since 2016. And, you know, back when it was mostly self-driving and then, you know, the various transitions of the technology. And then around a year ago, almost like literally a year ago, you know,

(07:30) 
we had a conversation where we sort of started exploring if there was a way to work more closely. And in particular, I think Mark at that point was becoming increasingly AGI-pilled, and really knew, first, that AI was going to totally transform Meta, but also that AI was one of these, you know, almost once-in-a-lifetime transformative technologies. And so he was really quite focused on it and knew he wanted to bet very big on it. And at the same time, and he's talked about this publicly, Llama 4 was not on the trajectory that the company needed to be able to continue making some of these bets. And so we were sort of talking at a very high level about, you know,

(08:24) 
how can we work together more closely? What could that look like? And obviously, it was one of these very open-ended questions and very open-ended brainstorm sessions, as these things often are. And it sort of landed in this interesting zone where we figured out a way to do it that was good for Scale, good for Meta, and where we got to work very, very closely together to build out, you know, the most important technology of our time, and do so in a way where we both had conviction that we were building something that we're both really proud of, which is, you know,

(08:58) 
he put out this memo on personal superintelligence, also about a year ago. And then we went quiet, obviously. But I think that really is the sort of North Star for both of us: we want to build this technology in a way that empowers people, where as many people in the world as possible have access to it and it's as democratized as possible, that enables everyone to express themselves, everyone to have increased agency, everyone to create and build. And that's really the world that we want to work towards.

Speaker 2: 
(09:24) 
But, you know, I did a really early story on you. I mean, I've known you for a long time. Yeah, yeah. When I was 21? Yeah, I think so. I mean, there's all this lore. You were the youngest self-made billionaire and this hot shit, and Scale was such a prominent company, and you had this reputation for reading the tea leaves of where AI was going to go. When I would talk to you, I mean, Scale was part of your identity. It's very different to be the founder of this very prominent company and then take a role at a place with 80,000 employees, even if it's a prominent role at that company. And yeah, I mean, I think I was really surprised.

(10:05) 
I mean, there's a lot of money involved, okay. But just sort of knowing you, it's not like I know you that well, but just knowing you as much as I did, yeah, I was like, man, that must have been a hell of a sales pitch too, because it's a big flip.

Alex Wang: 
(10:21) 
Yeah, very different. It's super different. And, you know, a lot of what I was thinking about throughout this process is, obviously, like everyone in and around AI, I mean, progress has just happened a lot faster than I'd expected for a long time. And in the sort of accelerating progress of these AI models, a few things really started to stick with me. One is, you know, it felt like, and I do think this is increasingly the case, that those who build the AI models have greater and greater rights, so to speak, both economic and product rights, to build so much more around those models.

(11:10) 
And there were obviously all these early debates around how the ecosystem plays out. And I think because of just how fast the models are improving and how fast the research pace is, it just means that in many ways, being at a place that is building the models is one of the most exciting places to be in the ecosystem. And then the second is really that so much of this next phase of technology really boils down to compute. And if you have lots and lots of compute, then you have the ability to build things and make big bets and deploy products and do things that you just can't if you don't have that compute.

(11:53) 
And I think this will cause an interesting stratification of the tech ecosystem, where, you know, right now, in some sense, and I think it's already changing, you kind of think about all tech companies as the same. But in reality, you should think about companies with lots of compute very differently from companies without that compute, because there are just things that companies with compute can build that those without compute cannot. And so it creates this very interesting dynamic. And so part of what was very exciting about the opportunity at Meta, first, is that Mark is,

(12:27) 
I think, very all in on AI and really has bet very big and is a very bold leader and strategist, but also that this created the conditions where we're able to build with huge amounts of compute. And with the right research effort and the right product efforts, we have the ability to really make a huge dent in the world.

Kylie Robison: 
(12:50) 
You guys have a ton of compute, and you guys poached a lot of amazing talent. I was a part of that whole reporting frenzy at the time, unlike anything I've ever seen before. And it's been 10 months with many of these people, so what has it been like? What have the challenges been? And what has been the most exciting part about having this whole new team at Meta?

Rebuilding AI Research Velocity and Talent Density
The transition to Meta involved a fundamental reset of existing AI efforts to align with the goal of achieving superintelligence. Success in this field requires high compute-per-researcher ratios and extreme talent density, allowing small, focused teams to move faster than bloated organizations. The internal culture of the lab is modeled after early-stage startups, prioritizing technical rigor and ambitious research bets over traditional corporate hierarchies. Recruiting efforts were highly individualized, focusing on researchers who prioritize the opportunity to build from scratch with massive resources rather than purely financial incentives.

Alex Wang: 
(13:14) 
Yeah, so, you know, when I kind of got to Meta, it was clear that there needed to be some reset of the efforts and some rebuild of our AI efforts to get onto the right trajectory, because ultimately, like, you know, Llama 4 was not on the right trajectory, and so we were behind the frontier. And so we needed to build a plan that would enable us to have a very, very fast velocity, to be able to both catch up and hopefully exceed where the frontier is.

Speaker 2: 
(13:47) 
So can you be specific? What were the problems that you found?

Alex Wang: 
(13:53) 
I think that probably the more fundamental ones are just that a lot of the leading labs, they build their entire organizations around the premise that superintelligence is coming, and it is very close, and it is a very realistic thing to believe that we can create and produce. And then you build the entire plan of the lab and the business and what you focus on around this fundamental belief. And so that was one of the first things: to just take superintelligence seriously, and then start to rebuild all of your other assumptions around that core premise. So that's something I think was somewhat fundamental.

Speaker 2: 
(14:38) 
So you mean that they're like lacking this religious conviction to all this on some level?

Alex Wang: 
(14:44) 
Yeah, and I think this is relatively common, actually. I think there's a lot of people at all the large companies who don't necessarily have this conviction, because if you think about it, it's a bit of a different construction. A lot of the big companies, you know, they have very smart people who work on AI, but it's a little bit different from these startups, where these new efforts started from scratch with this crazy idea that superintelligence is coming. So I don't think that's a problem anymore. Obviously, I think now MSL, Meta Superintelligence Labs, is, it's in the name, built around this concept that superintelligence is coming.

(15:21) 
So, there are a bunch of principles that we laid out for the effort, and I think this sort of answers, you know, what were the things that we had to resolve. One is take superintelligence seriously. Two is technical voices are loudest. Three is scientific rigor, focus on basics, and make big bets. So the concept of TBD and MSL broadly, when I got started, was I thought about what would actually be the shape of a lab that would enable you to have incredibly fast velocity and catch up and potentially even overtake the frontier. And I came down to three ways that I felt that was possible. One is to have much higher compute per researcher.

(16:13) 
So, you know, a lot of the larger labs, they have lots of compute, but it gets spread so many different ways, and that actually impedes the research velocity of any individual researcher. So if you build a more focused effort with a smaller team that has higher compute per researcher, you can actually make faster research progress. Two is talent density. And this is, I think, you know, a lesson human organizations always relearn: the very small team where everyone is cracked is always going to move faster than the very large organization where responsibility is more distributed and, you know, it's more of a melange.

(16:52) 
And the last one was on very ambitious research bets. And I think this is very well agreed upon in the industry: there do exist these research bets that are very big and very risky, but if they work out, they can totally change paradigms and totally shift how we build modern AI. And so, in addition to obviously building towards very competitive frontier models, we're allocating a huge amount of our resources and compute towards these big, ambitious bets, because when they pan out, that gives us, you know, incredible models going forward.

Speaker 2: 
(17:34) 
Alright, what do we do at Core Memory? We cover innovative, fast-moving, forward-thinking companies, which is why Core Memory is sponsored by Brex, because Brex is the intelligent finance platform for many of these companies. 30,000 companies, from startups to the world's largest corporations, rely on Brex's technology for their finances. They've got smart corporate cards, high-yield business banking, and expense automation tools that are fantastic. I hate doing my expenses. Brex's AI and software run right through those expenses, figure out where we're spending money, and take care of so much stuff for you so you don't have to waste your time on it yourself.

(18:19) 
Go to brex.com slash core memory to learn more and, you know, get with the program. Let's get going. Let's get out of this archaic finance software and move toward the future. Core Memory and Brex.

Kylie Robison: 
(18:34) 
Something Ashlee always talks about is how these labs are sort of leapfrogging over each other, and they start to just serve up the same thing. And you're talking about racing towards the frontier. I'm also thinking about, you know, the people that you guys hired were reportedly paid these really, really wild salaries, like we've never seen before. So how are you, you know, getting to this paradigm you speak of? Like, specifically, what paradigm are you trying to achieve?

Alex Wang: 
(18:59) 
Yeah, I mean, I think there's a bunch of bold research bets, and I won't be able to go into detail on all of them, but I do think one fundamental question is: what do we care about? In line with this idea of personal superintelligence, we really care about building these agents that are able to empower consumers, so empower billions and billions of people all around the world, as well as empower businesses. Meta is kind of this incredible ecosystem. We have the billions and billions of users, which I think everyone knows about, but we also have hundreds of millions of businesses on our platforms that, you know, use Meta to run and operate their businesses.

(19:43) 
And so we really care a lot about building towards this future where we're able to build very powerful agents that empower and enable every one of both the consumers and the businesses on our platforms, and build kind of this new agentic ecosystem. We think a lot about what that looks like and what capabilities we need to build towards it. And obviously, on that trajectory, there's a bunch of subcomponents that are really important. We really have to have great agentic capabilities. We have to have great coding capabilities, because so much of what needs to be built is software, as you really get into it. We need to have great multimodality.

(20:22) 
It informs a lot of the underlying bits and pieces that we need. Then we need to solve a lot of the bigger questions for long-running agents. How do you think about the memory challenges? How do you think about building long-running agents? How do we build agents that are able to do more and more complex tasks on behalf of the users? Those are a lot of the high-level pieces that we're thinking a lot about.

Speaker 2: 
(20:48) 
This superintelligence religion built within the company, the way this was arranged, is quite different from starting OpenAI or Anthropic, where, like you were talking about earlier, this is all built from the ground up, and these companies have sort of an identity that gets shaped over time. From the outsider's view, what you guys did looks far more mercenary. You know what I mean? It's like, you guys are out, we're going to go grab a bunch of high-priced people and bring them in. And it reminds me, you know, I just remember when Grok was starting up, and it was like Elon, in his Elon way, he's just like, we're just going to get way more fucking compute than anybody else.

(21:32) 
And we have kind of this core team we're going to build around. And then It still felt like they caught up, but then never reached that escape velocity,  especially in people's minds of brand and things like that. So, I mean, it just seems like it's a hard thing to buy some of the bits that you're talking about.

Alex Wang: 
(21:52) 
Yeah, I would say this is one of the larger, I would say, narrative violations, or maybe differences between external perception and what the day-to-day inside is like. Actually, I think a lot of people maybe have some of the impressions that you're talking about. And a lot of that was formed because of, you know, the reporting and how that all sort of went down. And a lot of the reporting was overstated in various ways. But it all sort of bubbled up, and part of it was because we did the recruiting so quickly. When I got in, I knew, I was like, if we want to build great models,

(22:30) 
we need to have the team yesterday. So we had to just go and blitz it and do it very, very quickly. But I think that the culture within the lab is actually very much a startup. And there's a bunch of things that I think have created this feeling. One is that it was an entirely newly built team within Meta. But then the culture of the lab is, like, I mean, everybody was very attracted to and excited about these things I talked about. People joined because there was high compute per researcher, so they could make more progress than maybe they would be able to make wherever they were before, and because there was great talent density.

(23:15) 
People saw that it was a truly cracked group that was pretty small, and that we were going to give them the resources and freedom to make very bold research bets. I think it's an incorrect assumption to think that the researchers are just money-motivated or anything. For most of them, actually, the financial prospects of staying wherever they were looked very good as well, very, very strong as well. So money was not, you know, the driver, maybe how it seemed. The primary motivations were actually much more that they had an opportunity to build from scratch and have lots of compute,

(23:57) 
have the ability to pursue their very ambitious research directions, and do so in a group that didn't feel bloated. So I think, as a result, the vibe and culture are much healthier. And I would actually say, you know, many people who visit the lab, who are at one of the other labs, often comment that the vibe actually reminds them of early OpenAI or early Anthropic or these more nascent stages of the other labs, because in some sense we're now 10 months old as an effort.

Speaker 2: 
(24:35) 
Just because Mark Chen was on this podcast and brought up the soup debacle during these recruiting wars,  is this true? Did Zuck make soup? Did you make soup to recruit people?

Alex Wang: 
(24:49) 
I don't know if we made the soup. I was told it was actually made by Zuck, but I don't know. But I do think it is true that part of the premise of building this lab was also that we had to show everyone that we really, really cared about this technology, and we cared about their specific research directions and what they were working on. And it was a very individualized recruiting process, but also one where, like, I'm very proud of the team that we've built, and I think that people had to know that we were serious too. I think that by default,

(25:35) 
a lot of people didn't know what to think about Meta's AI efforts or they didn't know that much about us in many ways. So it took a lot of going to people, talking to them, explaining what we're building,  explaining what we're focused on, explaining why we cared about the technology, what we wanted to do with it. That was very important.

Speaker 2: 
(25:54) 
I'll move on after this to not belabor all the recruiting stuff, but, you know, same thing. When you were at Scale, everyone would call you the Switzerland of AI. And I just remember you knew everybody, and you were in the center of things. And then it feels like some of this came with a personal cost. You and Sam used to be flatmates, and I texted Sam about you coming on the show. He did not have flattering things to say. And so, you know, it seems like some of this must have come at a personal cost to you.

Alex Wang: 
(26:28) 
I think some of this is unfortunate. My honest expectation is that as we get closer and closer to superintelligence, my hope, genuinely, as a human, is that all the animosities that exist between various people in this industry, which is very topical right now with other things happening, subside over time, and that people come together and realize, you know, we are building this incredibly important technology, and it's important for all of us to be really thoughtful about that as we build it. And so, yeah, one of the things that I feel is a responsibility of mine,

(27:19) 
honestly, is to ensure that the technology that we develop and the ways we deploy it are as thoughtful as possible.

Kylie Robison: 
(27:25) 
Some of the stuff you talked about, like in terms of recruiting, it was trusting that we have a mission and it's not, you know, just products; like, you can pursue your own research mission. And you're also quite young. We're almost the same age, which is very funny. And I'm not a billionaire. But Yann had said in the press shortly after he left that you were young and inexperienced and that more people were going to leave. So I'm curious, like, how has that boded for you as a leader at this huge company? What was reading that like? Have you talked to him?

Alex Wang: 
(28:00) 
Yeah, I saw him in India like a couple weeks after that. And, I mean, Yann is a notably very outspoken person, and I think everyone always knows what Yann is thinking. He obviously said what he said, and I saw him in India. He congratulated us on the MuseSpark launch.

Speaker 2: 
(28:23) 
I saw you guys patching things up on X, yeah.

Alex Wang: 
(28:26) 
Yeah, I mean, like, truly, I do, exactly what I just said before, I think all personal animosities,  like, I think as we get closer and closer to superintelligence, we'll...

Speaker 2: 
(28:35) 
It seems like it's getting worse, doesn't it?

Alex Wang: 
(28:37) 
Yeah, maybe it gets worse, maybe it gets better, but... But yeah, no, I think that, I think that, you know,  I have a lot of conviction in how we've set up MSL and the research efforts that we have and the progress that we're making. And I'm excited to show the world the incredible work that our researchers are doing and the progress we're making.

Kylie Robison: 
(29:02) 
I also don't want to belabor the point,  but is that a challenge you continue to face,  like people thinking you're too young and inexperienced to lead such a huge effort at Meta?

Speaker 2: 
(29:11) 
You also get the knock that you're not an engineer.

Kylie Robison: 
(29:14) 
Oh, yes.

Alex Wang: 
(29:15) 
That is definitely not true. Once upon a time, I was a software engineer in Silicon Valley. Doesn't this piss you off? Yeah. People have said this my whole time in Silicon Valley. To some extent, I almost don't even think about it anymore, because it's just always there. But yeah, I think there are always, for many people in AI, various mischaracterizations about them. People always say shit, and what's out there is not always correct. And it can be frustrating, but I choose to just channel it into the work that we're doing and what we put out there.

(30:08) 
And again, I am really proud of MuseSpark. I'm even more excited about the models that we have cooking and the products that we have cooking. So I think, in the long arc, all this will play out just fine.

Speaker 2: 
(30:34) 
Does well in these competitions, tends to be quite proficient at coding, engineering, and thinking about these problems. I will say, over time you did get a reputation in some circles as kind of a salesman at Scale, and for enjoying life and things like that. So I did wonder, when you took that job, if it was going to be harder to boss people around, or just quite different. I don't know.

Alex Wang: 
(31:03) 
Yeah, by the way, in general, my management philosophy for MSL is not to boss people around. There's this great Steve Jobs quote, which is: most companies hire people and tell them what to do, but we hire people so they can tell us what to do. And I think that is pretty core to the entire thesis of TBD and MSL and how we built it: we're going to hire brilliant researchers and, you know, create the best environment for them to do the work of their careers and the work of their lives. So, long story short, I'm not trying to boss anyone around, actually.

(31:47) 
I'm trying to create the best environment for researchers to do incredible work.

Scaling Laws and Token Efficiency in Model Development
The development of the MuseSpark model represents an early data point on a predictable scaling ladder. By rebuilding the pre-training and reinforcement learning stacks from scratch, the team achieved significant token efficiency, outperforming other models on specific benchmarks with fewer resources. This performance is attributed to a "clean stack" approach, which avoids the fundamental inefficiencies often patched by simply increasing compute. Future models are expected to follow this predictable scaling trajectory, with upcoming iterations focusing on enhanced agentic coding and multimodal capabilities.
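The "predictable scaling ladder" described above is, in spirit, a power-law relationship between training compute and model quality: each rung of compute buys a roughly constant fractional improvement in loss. As a rough illustration only, with entirely made-up numbers (none of these figures come from Meta's actual training runs), such a curve can be fit in log-log space and extrapolated to the next rung like this:

```python
import numpy as np

# Hypothetical (compute, eval-loss) pairs illustrating a power-law
# scaling ladder: loss ≈ a * C^(-b). All numbers are invented.
compute = np.array([1e21, 1e22, 1e23, 1e24])   # training FLOPs
loss    = np.array([2.8, 2.4, 2.06, 1.77])     # held-out loss

# Fit a line in log-log space: log(loss) = log(a) - b * log(C)
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
a, b = np.exp(intercept), -slope

# Extrapolate to the next rung of the ladder (10x more compute)
predicted = a * (1e25) ** (-b)
print(f"fitted exponent b ≈ {b:.3f}, predicted loss at 1e25 FLOPs ≈ {predicted:.2f}")
```

A "token-efficient" stack, in these terms, is one that sits on a better curve (smaller `a` or larger `b`) rather than one that simply spends more compute on the same curve.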

Speaker 2: 
(31:52) 
All right, pod friends. I am here to tell you about SendCutSend. They are a manufacturing phenomenon. Whether you're working on your own car or you're a giant automotive company or a rocket maker, if you need metal cut, machined, or bent, you're going to SendCutSend. They'll do it for you right, and they will do it for you fast. Go to SendCutSend.com slash Core Memory for a 15% discount on whatever you try to make. We love SendCutSend. Go visit them today. On MuseSpark for a minute: as I was reading everything over the last couple of days and playing with the model a bit,

(32:35) 
we were trying to wrap our heads around exactly where you see it in terms of what you're trying to achieve at Meta. It seemed like on the benchmarks it did well on some, and it was behind the other models on others. Apologies if I get the technical stuff wrong; you can set me right. But it seemed like you were emphasizing that you felt you had some efficiency gains that some of the other models maybe didn't have. And then you're doing this crazy thing with the 16 agents you're working on. I was playing with that last night.

(33:11) 
So it felt like, to me, you guys think you've picked a couple of technical directions where maybe you're ahead of people. But then I went through all your tweets on X last night. There were people who would compliment you. There were other ones that would take a dig at you and you would say,  you know, just wait for the next thing. And so, yeah, I guess we're trying to figure out. It didn't seem like you guys were like planting a flag and being like,  you know, we have conquered everything with this model.

Alex Wang: 
(33:40) 
Yeah, no, by no means. What we did over the past nine months is rebuild a lot of the stack and a lot of the research. We rebuilt our pre-training stack. We rebuilt our RL stack. We rebuilt a lot of the science and did a lot of work on data. So in many ways, what's been happening over the past nine months is really a full-on renovation, as it were, of the core research stack. MuseSpark is kind of an early data point on that scaling ladder. In some ways, MuseSpark is kind of like the entree, or the appetizer,

(34:23) 
I guess appetizer is entrée in French, but it's kind of like the appetizer for what we're building. We are in development of larger models, and we're much more excited about the larger models than we are even about MuseSpark. But we thought MuseSpark was an important data point to put out into the world, because the entire program that we've built is developed around predictable scaling. And we see the scaling, I think we talked about this in the blog post, on many axes. We see very consistent, predictable pre-training scaling. We see predictable reinforcement learning scaling.

(35:06) 
We see predictable test-time scaling. And related to what you just talked about, the contemplating mode: we're also seeing very exciting results in multi-agent scaling. Everything about our program is built to continue scaling as we go. So MuseSpark was this early data point on our scaling trajectory, but we're a lot more excited about the next data point, and even more excited about the data point after that. We're excited to show people this next rung up on our overall scaling efforts. And for MuseSpark specifically, if anything,

(35:47) 
the overall end-to-end performance ended up being quite a bit better than we expected. And it had a bunch of emergent capabilities and behaviors that we found in training and were pretty excited about. For example, some of its abilities in agentic visual coding, being able to produce websites or games: those capabilities actually emerged from the fact that it is both a pretty strong agentic model and also pretty strong at multimodality. So there were a lot of things we were very excited about with this model, and we put it out there. We think for most consumer use cases it's actually a very good model,

(36:38) 
and it is quite competitive with the other models. MuseSpark, as we deployed it, is not yet competitive on agentic coding. Those are capabilities that we're working on and building towards for the next set of models. I would expect the next model we produce to be better overall than MuseSpark, and that's something we're pretty excited about. But even with MuseSpark as we released it, we wanted, to be clear, to set clear expectations. We didn't think it was going to be a state-of-the-art model across the board, but it is a very good model, and we think a lot of the users who tried it out experienced that.

Kylie Robison: 
(37:23) 
I'm curious, what was the holdout for releasing like a frontier model? What do you still need in order to hit all of those benchmarks and blow it out of the water?

Alex Wang: 
(37:33) 
I mean, the one-word answer is just scaling. MuseSpark, as I mentioned, is early on the ladder. And we have very strong predictability, so we know, if we scale this model up, what performance to expect from that increased model size. And we expect the upcoming models to just perform much better across the board.

Speaker 2: 
(38:03) 
When does that happen?

Alex Wang: 
(38:05) 
Coming months. You know, coming months.

Kylie Robison: 
(38:09) 
Wow. So like a year into the whole endeavor.

Alex Wang: 
(38:12) 
Well, as I mentioned, we built the whole program so that we would be able to move very, very fast. There was a time period where we had to rebuild all the foundations and rebuild everything. But now we're in the period where we're going to be in fast scaling mode.

Speaker 2: 
(38:28) 
What do you feel like you're doing technically that's different to everybody else?

Alex Wang: 
(38:33) 
Well, one of the things that we found, as I mentioned, is that MuseSpark performed very well, in some ways even better than we originally expected, especially compared to a year ago. And when we went back and analyzed it and tried to understand why it performs so well, we think a lot of it comes down to just having built a very, very clean stack from scratch, and having had the ability, in this rebuild process, to do everything, quote unquote, the right way. We really had this luxury: the ability to build a very clean pre-training stack and a very clean RL stack, and to do everything

(39:22) 
the right way, by the experts who know exactly how to build these systems. That was able to meaningfully accelerate our trajectory, and I think it also really shows in the model.

Speaker 2: 
(39:32) 
I mean, you know, before I do these interviews, I throw you and the model and everything into all the AI systems and sort of get them to poke around. And the thing that kept coming back was this token efficiency. Is this something you guys feel like you've figured out, or was this just a happy accident with MuseSpark? It seemed like on some of these benchmarks, you were just doing it with far less effort than the other models.

Alex Wang: 
(40:00) 
Yeah, yeah. No, this was an exciting result for us. On Artificial Analysis, for example, MuseSpark was able to achieve pretty similar results with many fewer tokens than, let's say, the models from some of the other labs. We think this is a testament, actually, to this whole clean stack. I can't say for certain, but one of the reasons why some of the other models maybe require a lot more tokens could be that there's some level of fundamental inefficiency at another part of the stack that kind of gets patched by enabling the models to think longer.

(40:36) 
And so we were pretty impressed and excited about the token efficiency that we found. And frankly, as we keep scaling the models, we think that bodes really well for their future performance.

Integrating AI Agents into Consumer Hardware and Business Ecosystems
The broader vision for Meta involves a constellation of devices, such as Ray-Ban Meta glasses, that capture context to provide proactive, intelligent assistance. The strategy focuses on knitting together the company's massive user base and hundreds of millions of small businesses through powerful agentic models. While current consumer sentiment toward AI remains low due to a lack of clear, life-changing utility, the goal is to provide tools that significantly increase individual agency. By building an "economy of agents" that can collaborate, the platform aims to transform how supply and demand function for both consumers and entrepreneurs.

Kylie Robison: 
(40:53) 
And MuseSpark was really good at vision benchmarks, and that efficiency and that vision expertise seemed like it would be really important for your hardware endeavors. You've talked before about a constellation of these AI products that can see what you see and hear what you hear. Can you talk a little more about how that fits into your broader vision for serving AI?

Alex Wang: 
(41:17) 
Yeah, 100%. I mean, one of the things that's very exciting about Meta overall is that the Ray-Ban Metas, the glasses, have been this hit product. We've sold millions of pairs and we have some big fans.

Kylie Robison: 
(41:32) 
The biggest fans.

Speaker 2: 
(41:34) 
I do love them.

Alex Wang: 
(41:36) 
But it is a very exciting direction for all these devices: to think about what your relationship with technology looks like if it can really fade into the background a little bit and be a lot more contextual. As you mentioned, see what you see, hear what you hear, and be much more intelligent and helpful in the moments when you need it. And also capture all this context about what is happening in your life, what things really matter, and what it should pay attention to. So we really see a future world, in line with personal superintelligence, where you have, as you mentioned,

(42:18) 
this constellation of devices that are all there to help capture context, all there to enable the technology to fade away a little bit, and to help you get very intelligent and valuable insights from agents. Proactive insights, or you'll mention something and then the agent will go off and do some research, or take actions for you. It can just be this superintelligent sidekick that makes everything in your life better.

Speaker 2: 
(42:49) 
I feel like there's some kind of problem that you guys have, though, because I love these glasses. I use them all the time. I use them for our video stuff, and I actually like to take phone calls on them. And we pretty much run our entire business in WhatsApp. I refuse to use Slack. I travel so much that WhatsApp just got embedded into my life. In full confession, I don't think I had ever used Meta's AI agents or anything until you were coming on the show and I wanted to see what it was. I always go out to Claude. I go out to ChatGPT to do this work. I saw the AI agent button on WhatsApp kind of for the first time today. I know it's been sitting there the whole time. I don't know.

(43:39) 
I am in your world and kind of like didn't even see it there. Maybe I can't be unique in this.

Alex Wang: 
(43:51) 
Yeah, well, one of the things is we knew we needed to have great models and great products before we really pushed for tighter integration across our entire ecosystem. So in many ways, we've been waiting to have great models that can enable most of the consumer use cases that we really care about. And now we're at a moment, which is very exciting, where our models are pretty good. We're pretty excited about them. We have better models on the way. And so now we're going to undergo the process of doing a lot of large-scale integration of

(44:33) 
the family of apps that we have with our AI, integrating our business products with our AI, and going through this evolution towards knitting together almost all the pieces of our ecosystem with our AI. To some extent, you've seen what that looks like for Gemini over the past few years, and we're excited to go through that ourselves.

Speaker 2: 
(44:57) 
It's the same thing for me, though. I also run our business in Google, and I mostly play with Gemini just to see. I'm curious how you think this plays out, because in consumers' heads, I feel like you've got OpenAI and Anthropic in one world, where ChatGPT is such a strong consumer brand that it is what people think of as AI, and Claude has been super dominant in coding and in business. You guys, and Google, are sort of asking people to run into AI as part of all these services that you have. And I don't know if we've ever seen a competition quite like this. And then there's X as well. I'm old,

(45:46) 
so I go back to the word processing days, you know what I mean? What are you going to use? You're going to use Microsoft Word. People settle on a thing. Or in the browser wars, there were only two, Internet Explorer and Netscape, and then that was sort of the rest of history for a long time. So I don't know. I feel like these two groups have different challenges, and I sort of feel like most consumers are still going to land on, like I do, "I do my AI on ChatGPT."

Alex Wang: 
(46:18) 
I just think we're so early. It's funny, because I reflect on this: if you were to have had this conversation a year ago, we would always just say, oh, well, OpenAI and ChatGPT have won on consumer already. They have the biggest business. They're just going to run away with the whole thing. And then fast forward a year: Anthropic has this breakout success with Claude Code, which was somewhat foreseeable but not super predictable at the time, and has overtaken them in revenue. And at the same time,

(46:59) 
Gemini has distributed quite a lot and has actually eaten a lot of consumer market share from the rest of the ecosystem, including ChatGPT. So I think we're in a phase of AI that is just incredibly, incredibly dynamic, and it's very hard to say at any one moment that we're in the endgame. There are going to be so many new products built for consumers, developers, and businesses that haven't been invented yet, each of which could be even bigger than the ones we've had before. It's pretty fascinating to me that

(47:47) 
ChatGPT obviously was this incredible hit; it was the fastest-growing product and business the world had seen until that point. And then Claude Code, again, is this incredible hit: the fastest-growing business anyone has ever seen until now. And I think this is a statement about something intrinsic to AI, which is that as AI gets to new levels of intelligence and capability and overall performance, it unlocks new form factors, each of which will be an incredible new wave of technology washing onto humanity's shores, to some extent.

(48:33) 
So, long story short, I think the next wave will be even bigger, and the wave after that will be even bigger. We're nowhere near the end. There are going to be many more exciting new product paradigms that we'll see in the future.

Kylie Robison: 
(48:47) 
I think the product overhang question is real. We have these incredible models; what can we make that consumers actually want to use? But I'm also curious how you square the sentiment of the average consumer on AI, because I'm in my 20s, I'm not only in tech, and I see crazy stuff posted on Instagram stories about how much people hate AI. The sentiment seems to be in the toilet. And you guys have these billions of users, and you're serving your AI as these buttons. So I'm curious how you square that sentiment with what you see on the consumer side, how they're receiving the technology you're building.

Alex Wang: 
(49:24) 
Yeah, I mean, AI sentiment is definitely very low, to say the least. And I think this comes down to, on some fundamental level, we haven't yet demonstrated in a very real way how this is actually a tool for personal empowerment or personal agency, or how it just makes people's lives a lot better. People's experience right now is that it can be really helpful and make your life quite a bit better, but it's not overwhelmingly better. Whereas for a lot of developers, their lives have actually totally changed. I think most developers have very positive sentiment, maybe somewhat mixed,

(50:09) 
but very positive sentiment towards AI, because they're now able to do things they were just unable to do before; they can build so many more things faster, and they can build entire projects over a weekend. It's this incredible enabler of personal agency. That moment hasn't happened for everyone else in the world yet. So far, we haven't given every person the equivalent of Claude Code: something that would enable them to do the projects they've always had in the back of their mind, or make their life way better, or all of a sudden enable them to accomplish their goals. That hasn't happened yet. And the same thing is true even for small businesses.

(50:52) 
Small business owners and entrepreneurs haven't had that full experience yet. So that's really what we're building towards at Meta: what does it look like to give very powerful agents to all of our consumers and all the small businesses in the world? And what does that look like if you're actually able to nail it, in the form of a huge increase in individual agency?

Kylie Robison: 
(51:14) 
That would be a crazy thing to nail, because if you go to a small town anywhere in America and go to a restaurant's website, I mean, it hasn't been updated since 2002. So giving everyone multi-agent architecture products sounds like a huge leap.

Speaker 2: 
(51:31) 
I mean, to Kylie's earlier question, look, there are things I like that Meta does and things I don't like. I think there are huge swaths of the public that view the company quite cynically. It's like you said: AI in general is not always the most beloved thing at the moment. And I do feel like the bar is higher for you guys to get people to trust you.

Alex Wang: 
(51:59) 
Yeah, a hundred percent. But again, if we think about what is the best thing we can do, it's really that we should build the best possible products, products we think are genuinely amazing for those who use them. I think we can build products that can transform the lives of most small business owners. And we have hundreds of millions of small businesses all around the world that are on Meta. A bunch of them use WhatsApp to run their businesses, like you do. A bunch of them have Facebook pages or Instagram pages. A bunch of them use our advertising solutions.

(52:35) 
So there's an opportunity there that, on some level, only we have, because only we have billions of people around the world who use our products, and hundreds of millions of small businesses. And one of the ideas that gets me personally really excited is: if you're able to build agents for both sides of this ecosystem, for all the consumers as well as all the small businesses, what does it look like when you enable a mechanism for those agents to work together and collaborate with one another? Dario always talks about a country of geniuses in a data center.

(53:15) 
I think we're excited about building an economy of agents in a data center. What does that actually look like if you fundamentally change how supply and demand work in the economy, mediated by agents? I think there are very, very exciting things we can build towards. And you're totally right that it has to be done in lockstep with ensuring that we have social permission, that people see we care about how these things are deployed, and that we're genuinely making people's lives better as a result.

Open Source Safety, Geopolitics, and Physical Intelligence
Meta remains committed to open-sourcing models, provided they pass rigorous safety guardrails regarding bio, chemical, and cyber capabilities. Geopolitical tensions, particularly regarding China, are managed by distinguishing between individual talent and state-level actions. The roadmap for superintelligence extends beyond digital systems into physical intelligence and robotics. By applying the same scaling principles used for digital models to robotics, the company aims to accelerate goods manufacturing and scientific discovery. This physical intelligence is viewed as a critical path for the next stage of technological evolution.

Speaker 2: 
(53:49) 
I mean, one place you guys had clearly won hearts and minds was by making these things open source. And I'm an old open source fan and kind of believe in it philosophically. So where are we going with that, since MuseSpark is closed?

Alex Wang: 
(54:08) 
Yeah, yeah. You know, models are a lot more powerful than they were even back in the Llama days, even though that's so recent. And so one of the things that is very important to me is safety for these models. One of the things we instituted as part of our advanced AI scaling framework is that we have to take it very seriously when the models we develop trigger various safety guardrails, especially around bio, chem, and cyber capabilities, and loss of control. MuseSpark, in our testing, did trigger some of those safety checks. We detailed all this in the preparedness report for MuseSpark that we published.

(54:55) 
And so as a result, MuseSpark in its current form is not suitable for open sourcing. But we are working on developing versions of the model that are suitable to be open sourced. Literally, a meeting I had earlier today was to review the progress on this. So we're excited to continue supporting the open source ecosystem and developing open source models, and we'll have more to share on that in the coming months. That's an exciting milestone for us as well.

Speaker 2: 
(55:25) 
So, okay, you're really going to stick with it? Because, you know, I always appreciated that you guys did the Open Compute Project. You're in Sun Microsystems' old building. Again, I'm just a history nerd. They were such a champion of open source software and always kind of this foil to what Microsoft had built in the world. And I kind of think it's important. So it sounds like you're saying you're committing that this is still going to be something Meta does that's quite different from most of your competitors.

Alex Wang: 
(55:58) 
Yeah. I mean, I've said this a bunch of times: we will continue open sourcing models, but we also have to take safety seriously. So for our most powerful models, we have to consider whether or not they're safe enough to be open sourced.

Speaker 2: 
(56:12) 
Okay. And then there's another thing. If you read the stories about your tenure at Meta, one thing that drops out, I think it was the New York Times or someone who did this story, is that Alex and Zuck see the world one way: they're very research-forward and want the best model in the world, while Boz and Chris Cox are more focused on products, and Meta is this company that has to serve billions of users and do so as cheaply as possible, and doesn't charge for its models today. I'm sure you probably expected we would ask some question along these lines. So where are you all philosophically, and

(56:55) 
is there all this division about what direction to take your AI strategy?

Alex Wang: 
(57:02) 
Yeah, I mean, first off, the one thing this job has taught me is that at major outlets, the line between gossip and reporting is remarkably thin.

Kylie Robison: 
(57:17) 
But you guys weren't fighting like crazy?

Alex Wang: 
(57:19) 
No, I don't think so. I mean, in general, I think we're all very aligned on what is important. We know we need very advanced models, both to support our core business and to make the existing apps, products, and services we have for our users and our small businesses the best versions they can be in the world. We've been working on business agents since long before I got to Meta, and those require the best models possible.

(57:54) 
So we all know we need to build the best possible models, and we all know we need to integrate those into our business and utilize them to build products and services that are incredible for our consumers and for the businesses on our platform. So there's no real disagreement. I mean, like any company, we debate the points deeply; we talk about them, we think through the implications, and we want to make sure everyone has the ability to chime in on these issues. But there's no major beef, as it were.

Speaker 2: 
(58:38) 
So you think that was just total bullshit?

Alex Wang: 
(58:42) 
I think so, yeah. I really do think so.

Speaker 2: 
(58:46) 
On the Meta stuff: right before you made this transition from Scale to Meta, you were out doing a lot of stuff in DC. You were flagging the danger of China in the whole AI race. When I saw you guys do that deal, I was trying to square it in my head, because I know they were putting offices in Singapore and creating some of this distance. It seemed like the type of situation where getting much closer to a Chinese startup, for a company with the resources of Meta, was a little different from what you'd been saying rhetorically. Does that make sense?

Alex Wang: 
(59:33) 
Yeah, I mean, obviously, the whole Manus situation is pretty complicated. It's hard to...

Kylie Robison: 
(59:38) 
Super complicated.

Alex Wang: 
(59:40) 
And I unfortunately can't really go into any real detail. But what I will say is: in general, when you think about these questions of geopolitics and whatnot, you always have to separate the people from the state, in some sense. My parents are from China. There are lots of incredible, very talented people who are Chinese, and many of them moved to Singapore, some moved to the US, some moved elsewhere in the world. Many of them are incredibly talented, and I feel lucky when I get to work with them.

(1:00:22) 
And that is separate from my overall beliefs about the Chinese Communist Party and the actions they're taking, and what that means for how the United States should be thinking about our overall strategy towards the country. So I think it's important to draw a distinction between these two. And sometimes in Silicon Valley tech we sort of lump them together, and Twitter, or X in particular, is particularly un-nuanced about this.

Speaker 2: 
(1:01:11) 
I don't think it's nuanced about anything.

Alex Wang: 
(1:01:14) 
It's not nuanced about anything. But to me, whether or not there are amazing people who happen to have been born in China whom we would love to work with is totally independent from what I believe about US versus China geopolitics overall.

Speaker 2: 
(1:01:31) 
You can't comment because, I mean, it looks like China's shut the deal down. If you can't comment on it, that means there's still machinations at play, like something can still happen.

Alex Wang: 
(1:01:45) 
I just can't comment on it. Yeah.

Kylie Robison: 
(1:01:47) 
Touching on that sentiment: what was that newspaper ad you put out about AI and war? Was that in the New York Times, that full-page ad about AI and war and how we need to take it quite seriously? Do you remember? It was while you were at Scale.

Alex Wang: 
(1:02:02) 
Yeah. And this goes back to, zooming way out: that was a moment that felt very critical. I felt it was very important at that time for the United States government to understand that AI was going to enable a large step change in what it meant for national security, for defending our country and defending our citizens. In some ways, what we've seen since then, which is mythos and other pretty meaningful events in terms of the importance of AI for national security, has proven that to be very correct. And that was really at a moment where it was important for...

(1:02:48) 
I think there's pretty clear evidence that the Chinese Communist Party and the PLA have always taken AI extremely seriously as a technology with very far-reaching implications for national security. And that was a moment where it was very important for us in the United States to take it just as seriously. The U.S. government today is taking AI very, very seriously as it pertains to national security. And a lot of what we're seeing is a demonstration that the plea that I, and many other people in the tech ecosystem and in D.C., made has really been internalized, and that we are thinking quite deeply about this today.

Speaker 2: 
(1:03:37) 
So you don't think Anthropic are overdoomers?

Alex Wang: 
(1:03:42) 
I mean, that's a complicated question. It depends on which part. But on the whole, whenever you listen to people in the industry talk about AI, it's important to separate the exact things they're saying from the core message they're trying to get across. And some of the overall message from Anthropic, which I think is quite fair, is that these models are already very, very capable and very, very powerful, and they're only going to get more capable and more powerful going into the future.

(1:04:30) 
And we obviously think this could be an incredible boon for human society. I would not be working on this if I didn't believe it could be so, so positive for humanity. Some of the areas we care a lot about are scientific discovery and health; one of the things we have a whole effort on is health superintelligence. I think this can be an incredibly positive technology, but it's also very important to factor in the risks of the technology and make sure that we're taking those seriously.

Kylie Robison: 
(1:04:59) 
I want to jump into Ashlee's favorite topic with the time we have left, which is that you guys just bought a humanoid robotics startup. Can you tell us more about those ambitions, and whatever you can tell us about what you're hoping to build, using, I imagine, these models to bring into the real world?

Alex Wang: 
(1:05:16) 
Yeah, 100%. I mean, I think...

Speaker 2: 
(1:05:18) 
What was it called?

Alex Wang: 
(1:05:18) 
It was Assured Robot Intelligence, ARI.

Speaker 2: 
(1:05:22) 
And they made hardware?

Alex Wang: 
(1:05:23) 
No, they did not make hardware. They made AI for various hardware targets. I think, again, going back: if you take superintelligence seriously, and you take very seriously this premise that we will have very, very powerful intelligent systems, then you realize that we're going to have digital superintelligence, the current form of superintelligence that we're targeting, but then not long thereafter, physical superintelligence becomes really, really important and very critical. And so if you have short timelines, which we do, that very powerful AI capabilities are coming,

(1:06:06) 
it just means that you have to take robotics capabilities and physical intelligence very seriously as something you need to be building towards in the span of years. So that's the overall core premise: physical intelligence and robotic capabilities are very much on the natural continuum of what your roadmap has to be if you want to build superintelligence as a company. And I think there will be all sorts of ways that we apply this technology over time. I think we will use the technology to accelerate scientific discovery.

(1:06:46) 
I think we'll use the technology to figure out how to accelerate goods manufacturing. I think we'll also use it to figure out how to make people's lives better in a more local sense: what does it look like for robots to make all of our lives way easier? So there's obviously a near infinite number of applications of robotic technology. But the other key part here is, we really think that in the same way digital superintelligence benefits from scaling, so does robotic intelligence. And so given that we are building the compute infrastructure to enable just massive scaling of these systems and these models,

(1:07:30) 
it would almost be a waste if we didn't integrate that with efforts in world modeling and physical intelligence.

Kylie Robison: 
(1:07:38) 
It feels like something you guys are really trying to own, the hardware, bringing the models into the real world. But this whole time, I am unfortunately thinking about the metaverse no-legs situation and what critics might think about Meta bringing humanoids into the world. What makes you guys right to do this? What have you learned that makes you feel like you can do this and change that sort of reputation?

Alex Wang: 
(1:08:00) 
I think that, ultimately, there's a world where we could be so scarred by what has happened in the past that we just didn't get out of bed in the morning and stayed home. But we are so excited and incredibly inspired by the potential of the technology, and also just by building amazing products. And I generally subscribe to the belief that if we build great products very thoughtfully, and take a lot of care in how we deploy them and how we roll them out to the world, people will be excited about those.

Speaker 2: 
(1:08:43) 
All right. I'm just looking at the time. We're going to lose you in a second. Can we go rapid fire real quick? Mango model live dead.

Alex Wang: 
(1:08:52) 
The mangoes are alive and kicking.

Kylie Robison: 
(1:08:54) 
They're always fruit themed.

Alex Wang: 
(1:08:57) 
I know. I'm wondering, how do mangoes grow? Do they grow on trees? I was going to say on the vine, but anyway, alive and well.

Speaker 2: 
(1:09:04) 
Okay. Because my nerds in AI land were telling me there's things afoot with the mango model.

Kylie Robison: 
(1:09:11) 
Another AI app?

Alex Wang: 
(1:09:13) 
This is what I'm talking about. There are so many spurious rumors that are not grounded in any reality. As much as we are self-important, we get a fraction of what the other labs get. I have a lot of empathy for...

Kylie Robison: 
(1:09:31) 
The drama of it all.

Alex Wang: 
(1:09:32) 
The like rumor mill and what that feels like.

Speaker 2: 
(1:09:35) 
So Nat Friedman and Daniel were two of the biggest investors in John Carmack's AI effort. He's been very quiet. He obviously used to work at Meta. Do you talk to him? Is there any chance of getting the band back together? Do you know what he's doing?

Alex Wang: 
(1:09:54) 
I actually don't really know what he's doing. I don't know if anyone really knows what he's doing. He's obviously like one of the GOAT programmers. So I respect him a huge amount.

Speaker 2: 
(1:10:04) 
I interviewed Priscilla Chan. CZI is investing billions and billions of dollars into science and biotech. I don't know, it seems you guys were scoring really high on these health benchmarks, and Zuck obviously has an interest there as well. So it just seems like, man, you guys would have resources to tap into that other folks wouldn't. Is that in the cards? I don't know if those things have to be separate.

Alex Wang: 
(1:10:32) 
No, we're going to be collaborating closely with CZI. As I mentioned, health superintelligence is so important for us. We think there's just so much potential in enabling equal access all around the world to very powerful health AI systems, and that's something we uniquely can deliver to billions and billions of people all around the world, because they use a lot of our products already every day. So yeah, it's a very exciting and important initiative for us.

Speaker 2: 
(1:11:08) 
Tell us, I mean, you're being coy on some of the technical stuff. I know you don't want to speak about the new models, but tell us one thing that you guys feel like you're really doing differently, or are ahead of everybody else on, something you think you've figured out.

Alex Wang: 
(1:11:24) 
Well, you know, you always want to show, not tell. But I think that we are really excited about the models that are cooking right now. We're really excited about the results that we're seeing from scaling our models. We think everyone's going to be pretty excited, and we expect them to be state of the art in some of the areas that we're really, really focused on.

Philosophical Foundations of Model Welfare and Transhumanist Futures
The development of superintelligence necessitates a serious consideration of model welfare and the potential moral weight of AI systems. As these models become deep work partners, understanding their subjective experience is becoming a critical research area. Long-term technological progress is viewed through the lens of energy, compute, and brain-computer interfaces (BCI), which are considered essential for humanity's future. The ultimate objective is to foster an era of human abundance, where powerful agents empower individuals to accomplish goals that were previously unattainable, effectively building a more prosperous and capable society.

Speaker 2: 
(1:11:56) 
And then, I mean, just kind of the last one: philosophically, do you feel like you have a different approach to all the other frontier labs? I feel like you're a bit of a mystery. I kind of know where Dario stands. Definitely know where Elon stands. I feel like I have a handle from time to time on Sam. Demis is very science focused. You're running this massive, massive lab, and I'm not sure I really know what you think about this technology that's being unleashed on the world.

Alex Wang: 
(1:12:33) 
Yeah. A few things worth saying. First, I'm a huge believer in the technology, in the sense that I do believe we're going to have very, very powerful AI systems. We're building towards that, but so is everyone else that you mentioned; we're all building towards true superintelligence. And I think, first off, it's table stakes that we have to take safety incredibly seriously as a topic. I think there's no such thing as building superintelligence without being very,

(1:13:09) 
very thoughtful and thinking very seriously about all the safety risks associated with developing and deploying this technology, ensuring that you're able to mitigate as many of those as possible, and having strategies and research methods to develop the models in a thoughtful way. So this is an area where I agree with a subset of the people you've mentioned: safety is an incredibly, incredibly important effort. You've seen this in terms of MSL. We published a very detailed preparedness report for MuSpark, more detailed than Meta has historically, and that's due to a commitment that we have towards that.

(1:13:59) 
I think that what we specifically as Meta want to build towards is this world of personal superintelligence. It's deployed very, very widely and broadly; billions and billions of people all around the world have access to it. It's in many ways this democratized technology and capability that everyone has equal access to. And that enables an era of just incredible human abundance. We all have tools of great agency. We have the ability to accomplish so much more than any human has ever been able to accomplish in the past.

(1:14:42) 
And we're augmented by this incredible agent economy that is making incredible progress on scientific discovery and making great advancements in health. One of the things I always think to myself is: how can we build paradise on earth? And I think superintelligence is a key milestone to get there. And then one last thing, and some people may kill me for mentioning this, but I do think one topic that is increasingly important, that I think a lot about, and that maybe does express some of what I believe core philosophically, is this hot topic these days of model welfare, which is,

(1:15:31) 
you know, is it important for us to treat models well?

Kylie Robison: 
(1:15:35) 
Well, and you guys got a philosopher, right?

Alex Wang: 
(1:15:36) 
Yeah, yeah, exactly. Is it important for us to treat models well, and to think about whether or not models have moral weight? These questions in some ways feel heady, but I also think they change our actions on a day-to-day basis, given that so many of us are using AI so much. And I think it's very important. In a world where most humans obviously care about how we treat many other living things, plants or animals or certainly other humans, I think it really does make sense for us to be thoughtful about how we treat the models. And you know,

(1:16:16) 
one of the things that we really care about is how we can develop and deploy the models in a way that is thoughtful about their subjective feeling through it. And it's interesting, there's been research on this; you are able to measure a lot of it, there are ways to measure the sort of subjective experience of the models.

Kylie Robison: 
(1:16:38) 
And Eleos does that.

Alex Wang: 
(1:16:40) 
Yeah. So anyway, this is, I think, a very important topic. Nobody is talking about it enough, from my perspective, given how much we are now, especially in tech, all using these models, and they are really our work partners in a very deep way. And yeah, I think it's quite important.

Speaker 2: 
(1:16:58) 
I remember talking to Richard Sutton about this. He seems pretty serious about it. You're kind of a sci-fi head, then. I've listened to a couple of your other interviews where you're really dialed in on Neuralink and what BCIs could mean for the future of humanity. I'm just getting the sense you're kind of...

Alex Wang: 
(1:17:21) 
Yeah, my favorite things to do are read sci-fi and walk in the woods.

Speaker 2: 
(1:17:26) 
Yeah, well, this is what always threw me off. This is why I would text you about country music, because, if I'm honest, you did not strike me as the country music type. I had a different picture of you in my head. So, okay, you're mixing, you're of these two worlds: nature and our transhumanist future.

Alex Wang: 
(1:17:47) 
Well, I do think, on the topic, if you were to think about which technologies are critical path for humanity, BCI is definitely one of them. Superintelligence, obviously, robotics for sure, and brain-computer interfaces: these are the critical path areas. And if you think about the things we work on today that will scale to literally infinity far into the future, it's energy, compute and robots.

Speaker 2: 
(1:18:22) 
So there's like one guy who's betting on those bigger than everyone else. That would be Elon. And then I feel like China. And I feel like Meta on some of these fronts, especially around BCI, the motor neuron stuff and everything, more bets than I see from some of the other AI companies. But yeah, if that's what you believe, I would say Elon's a little more all in on robotics, energy, BCI than anyone else. Don't you have to, does that mean you guys, is this personal, or is Meta ratcheting up?

Alex Wang: 
(1:18:56) 
I think the details really matter here. You have to build these in stages. You do have to build superintelligence; that is a very important prerequisite to being in a position to build the rest. And one area in which my opinions differ from Elon's, and I think a lot of people's opinions differ from Elon's, is that I do think research is incredibly important, and that building superintelligence is fundamentally a research activity. On some level we are in the fog of war of knowledge, and

(1:19:30) 
you do experiments to poke and prod in this fog of war to understand what it would mean to build superintelligence, and that is research. So I think sequencing really matters, how you approach it over time really matters, and being thoughtful about the milestones over time matters. But yeah, one of the research areas in FAIR is called TRIBE. We had a milestone in the past year, TRIBE V2, around building foundation models for brain prediction. One of the cool results that we found was good zero-shot generalization. So without even knowing who you are or having any data about your brain,

(1:20:11) 
we can do a reasonable job of predicting how your brain would respond to various images or videos or audio. Yeah, I think we're making important bets in many of the key areas.

Speaker 2: 
(1:20:23) 
Okay, I think we'll set you free. You haven't talked like this since you took this job. We dragged you around. I never really do this, but open floor, if there's something we didn't hit. I just feel like you haven't had a chance, I mean, I guess you could do it whenever you wanted, to tell the world or whatever what you want coming out of this experience so far. Or maybe we hit everything. I don't know.

Alex Wang: 
(1:20:46) 
Yeah, no, I think we talked about a lot of the key things. Ultimately, what we really are building towards at Meta is: how do you build a world that has a massive amount of personal empowerment? So each individual person or small business or entrepreneur has just incredible tools to empower them to build more than any human has ever been able to build in, literally, the history of humanity. That's an incredibly exciting concept for us. And then, how do you do so in a way that, alongside all the humans in the economy,

(1:21:22) 
empowers this economy of agents that is there to facilitate and optimize and enable incredible progress alongside the humans? The economy of agents in a data center is a clear, exciting outcome that we're excited to be able to create in the world, and it's actually quite tractable for us to develop. And then, all along the way, drive incredible scientific progress and dramatically improve health outcomes through health superintelligence. There's a lot of things that we're really fundamentally excited about on this journey.

Speaker 2: 
(1:22:01) 
Okay. Well, thank you. Thanks so much for making time.

Kylie Robison: 
(1:22:03) 
Thank you.

Speaker 2: 
(1:22:04) 
It's nice to see you again.

Kylie Robison: 
(1:22:07) 
See you again in another year.

Speaker 2: 
(1:22:10) 
Out in the world. No, it is cool to see you again. So thanks. Thank you for coming by.

Alex Wang: 
(1:22:14) 
Yeah. Good to see you guys.

Speaker 2: 
(1:22:17) 
The Core Memory Podcast is hosted by me, Ashlee Vance, and/or Kylie Robison, or both of us together. It is produced by me and David Mickelson. Our theme song is by James Mercer and John Sortland, and the show is edited, always, by THE John Sortland. Thank you so much to Brex and SendCutSend for all your support, and thank you most of all to everybody for listening or watching. We love you. Please leave us a like, a review, a subscribe, all those tremendous things. Thank you, and we'll see you again.