All-In with Chamath, Jason, Sacks & Friedberg

Anthropic's $30B Ramp, Mythos Doomsday, OpenClaw Ankled, Iran War Ceasefire, Israel's Influence

1d ago · 1:29:18 · 16,519 words

(0:00) Bestie intros: Brad Gerstner joins the show! (4:22) Anthropic blocks Mythos release for security concerns: serious or marketing stunt? (24:07) Are OpenAI and Anthropic trying to kill OpenClaw?...

Transcript


How many PRs do you think are going to get pushed to the core structural internet...

What's the over under number?

Because I'll give you a number. You're going to say zero. My answer to that is, I'll say, like, 10,000. But it's going to be a meaningful thing. And if it prevents your browser history from being released to everybody in the world,

Chamath, that may be something that you're willing to, you know, let 100 days pass on.

I think you got Chamath's attention when you said browser history.

What about the dick pics? He's going to release them himself. All right, everybody. Welcome back to the number one podcast in the world. David Friedberg is out this week.

But in his place: the one, the only, our fifth bestie, Brad Gerstner. I mean, what, you don't put a little namaste into your podcasting anymore? You used to bring it as the gracious moderator, but now...

You know what? These guys beat me up. They beat me up, and they just beat the joy out of me doing this program. It's because you're a Ro Khanna apologist now. No. I will get into it.

Okay. What the fuck? A Ro Khanna apologist? Because I said, like, hey, they've stopped retardomaxing and they've started doing some logical things.

Yeah. Okay. It's great to be here. Great to be here. Good to have you here.

And of course, we have David Sacks back. Everybody wants to hear from David Sacks. We missed you last week, bestie. We didn't beat the joy out of you. We just tried to beat some of the hot air out.

Oh. I mean, any fluff that you put on the show that involves you talking and saying nothing, that's the stuff we've got to trim. Yeah.

If I'm going to run on. Yeah, if I'm going to run on, I will cut it out and then we'll just put a promo in for TheSyndicate.com. Thank you.

Thanks. Also with us: Chamath. How's your retardomaxing going since last week? Did you have a retardomaxing-full weekend?

Did you have a good full weekend of just smoking cigars on the back deck and not ruminating

about all the chaos you've caused in the last 20 years?

I think I've done good in general, and more. Good. Thanks. I'm not. Oh, you have.

But there's been some chaos. I don't know. I don't think about it. You can't. Bro, you can't have ups without downs, man.

It's like, what are you there to do? Just placate everybody and be a loser, or are you there to be a winner? Yes, you're in the arena. But have you stopped going there after realizing what we're debating? What's up with this sudden interest in retardomaxing?

Are you, like, the poster child for retardomaxing? No.

The world finally caught up with me.

That's it. And I've been retardomaxing this whole time. They just didn't have a name for it, guys. Okay. He likes the videos.

Really good. I watched two more of his this week. Take us through it. What's so appealing about not ruminating, smoking a cigar, and just living your life? Because what he says actually works at every level of society and for every sort of thing

that you may want to achieve. Even if you're trying to like climb the rungs, you very quickly learn that the more you want something, the less you're going to get it.

And I think that's like his real message is let go, live life and just try stuff or don't

try stuff. And I think that detachment is really healthy for people. I like it. I like it a lot. Do you know his actual name?

I actually didn't know. Alaisha Long, but Eli, I think, is how he goes by. But he's fantastic. He's got his YouTube channel. Marc Andreessen found him, and he's like, this guy is the modern-day

philosopher. He gives you a roadmap for how to live your life, right? A new-age sage. What's the name of the guy? The character's name from Dune.

I was into girls. I was dating girls. He's the reason, I'll guide you.

Okay. We're ruminating. It's just not worth it. Everybody, just go do stuff.

Stop blabbering in your own head. Just do stuff. Absolutely. All right. Listen.

Speaking of doing stuff: Anthropic is withholding its newest model, Mythos.

I'm using the Greek pronunciation. Its newest model, Mythos, saying it is far too dangerous for any of us to have access to it. According to the company, the model autonomously found thousands of vulnerabilities, including

bugs in every major operating system and web browser. This little study they did included 20 year old exploits that had been missed by security audits for decades. Some examples.

They found a 27-year-old vulnerability in OpenBSD, used in firewalls and critical infrastructure.

They found a 16-year-old bug in FFmpeg that was missed by automated tools after 5 million scans.

The Linux kernel.

All kinds of bugs they found. They released a hype video

explaining why they were not going to share this model.

Here's Dario. Come on the program anytime. It's a side effect of being good at code: it's also good at cyber. The model that we're experimenting with is by and large as good as a professional human at

identifying bugs. It's good for us because we can find more vulnerabilities sooner and we can fix them. It has the ability to chain together vulnerabilities. So what this means is, you find two vulnerabilities, either of which doesn't really get you very much independently.

This model is able to create exploits out of three, four, sometimes five vulnerabilities in sequence, to give you some kind of very sophisticated end outcome. All right, Brad. By the way, that set they're using there...

That's the same room those guys play Dungeons and Dragons in every Sunday. Brad, you're an investor in this company. Is this virtue signaling or is it reality? Is this a good move by them, to not release this model and be thoughtful,

give it to a handful of people, and just find all the bugs it can before releasing it to the public? And we've got a lot more issues to discuss. I actually think they deserve a ton of credit here, and let me walk you through why. The company could have just released Mythos and broken a lot of core things on the internet. Oftentimes in Silicon Valley, we would say, move fast and break things.

In this case, it would mean just releasing the model to move further ahead of your competition. But here, the company realized it would wreak havoc. They ran their own vulnerability testing. They saw that it would allow offensive hacking, allow people to expose browsers and browser history, expose credit cards, you know, on the internet.

So, you know, what I like about this is they didn't need government to hold their hand on this. We have plenty of government regulation. They know it's in the best long-term interest of the company and the industry. You know, so they set up Project Glasswing.

It's an AI-driven, you know, kind of cyber coalition: Apple, Microsoft, Google, Amazon,

JPMorgan, 40 of the most important companies and their goal is very simple.

Let's spend 100 days using advanced AI to find and to fix and to harden these software vulnerabilities before hackers exploit them.

Now, what I think this represents Jason is a threshold that we're crossing.

Mythos, and Spud, which is going to be out from OpenAI any day now and is the first Blackwell-trained model at OpenAI, represent the beginning of what I would call AGI models. These are models with massive step-function improvements in intelligence, and they're just too smart to be released immediately. You know, and by the way, there was nothing that said

that every time you finish a model, you've got to immediately release it GA. So they set up this idea of sandboxing, building defensive alliances, you know, in order to move away from that regime. And it shows, and Sacks and I have talked about this a lot, so I'm interested to hear what he thinks.

It shows you can trust the industry and market forces in coordination with the government. They were talking to the government about this, but they're not relying on some top-down regulation in order to do this. They laid out a blueprint that seems to me very pragmatic that now that we're at this threshold, we're going to sandbox these things.

I think that OpenAI will end up doing the same thing. I think Google will end up doing the same thing. It's an aggressive way to keep the pressure on and win the race at AI, while making the trade-offs to protect safety.

So, you know, I think you're always going to have to make these trade-offs.

I think in this case, it was a great move by Dario and team, and I think they deserve a lot of credit. Sacks, when you look at this: we had Emil Michael on the program a couple of weeks ago, might have been four or five weeks ago, and we had a very thoughtful discussion about, hey, if the government is going to have these tools, you know, and Anthropic wants to withhold

them, and, you know, what is the proper relationship there?

You have to think, and I know you don't speak for all parts of the government,

if you were just going to run through the game theory, they must have gone to the government and said, listen, this thing is so powerful, it can put together two or three hacks to create a novel attack vector, and this is incredibly dangerous. What if China has it? If this thing is as powerful as Dario says it is, then this is an offensive weapon as

well for us to take out, let's just pick a pressing issue, North Korea's ballistic missile program. The way it's being described, this is equivalent to the Manhattan Project, perhaps. So, two-part question for you, Sacks: what are the chances that China already has this and is using it, and do you think Dario is doing the right thing by self-regulating?

I think Anthropic has proven that it's very good at two things.

One is product releases; the second is scaring people. And we've seen a pattern in their previous releases, where at the same time they roll out a new model or a new model card or something like that, they also roll out some study showing really the worst possible implications of where the technology could lead.

We saw this last year, about a year ago: they rolled out this blackmail study where supposedly the new model could blackmail users. There's been a whole bunch of these things. Actually, I went back to Grok and I just asked, hey, give me examples of Anthropic doing this.

It's a pattern. Okay.

These guys, I'm not saying it's not sincere, but they have a proven pattern of using fear as a way to market their new products.

If you think back to, again, my favorite example is this blackmail study where they prompted

the model over 200 times to get the result they wanted, and that result was clearly reverse-engineered, and it got them the headlines they wanted. I would say the proof that it was reverse-engineered is that now, a year later, there's a bunch of open source models out there that have the same level of capability that that Anthropic model had. And have you seen examples of blackmail in the wild?

I don't think so. So in other words, if that study were true in the sense of being a likely outcome of that model, I think you would see examples in the wild of that behavior, and we haven't seen any of that in the past year. Now, let's talk about this specific example with cyber.

Yeah. I actually think that this one is more on the legitimate side.

I mean, look, the reason why I bring this up is, any time Anthropic is scaring people,

you have to ask: is this a tactic, is this part of their Chicken Little routine, or is

it real? Are they crying wolf or not? I actually would give them credit in this case and say this is more on the real side. It just makes sense, right?

So as the coding models become more and more capable, they're more capable of finding bugs. That means they're more capable of finding vulnerabilities, and, like one of their engineers said, that means they're more capable of stringing together multiple vulnerabilities and creating an exploit. So I do think that over, say, the next six months, we're going to have this one-time

period of catching up, where AI-driven cyber is going to be able to detect a whole range of bugs that maybe had been dormant over the past 20 years, across a wide range of systems. And so I do think that there is real risk here, and I do think, therefore, that having this pre-release period makes a lot of sense, where they're giving the capability to all these software companies that have existing code bases to use the tool to detect the vulnerabilities

themselves, so they can patch them before these capabilities are widely available. And by the way, it won't just be Anthropic that makes these capabilities available. We know that the Chinese open source models, like Kimi K2, are about six months behind. So we have a window here of maybe six months where we're still in this pre-release period

where I think companies that have large code bases can get advanced access to this model.

And I guess OpenAI is going to release a similar thing in the next few weeks. I do think that every company or IT department or CSO that is managing code bases should take this seriously and use the next few months to detect any, again, dormant bugs or vulnerabilities and roll out patches. If everybody does their job and reacts the right way, then I do not think it will be the

doomsday scenario that Anthropic is sort of portraying. But that's one of these things where the fear might end up being a good thing, in order to drive the correct behavior. For sure. I also think this is going to work out fine, but you do need everyone to pay attention.

Use the capabilities, fix the bugs, and then we're going to get into a big arms race between AI being used for cyber offense and AI being used for cyber defense. But it'll be a more normal sort of period. Chamath, we have Dario and a number of the participants here taking this super seriously; they're

making a big statement. Sacks had a very nuanced take there, I think.

What's your take on it? How do these companies have it both ways: hey, this shouldn't be regulated; this should be regulated? If this is, in fact, cataclysmic, oh my god, they're going to hack everything, what if the Chinese have this right now?

That would speak to more government coordination, regulation, or some kind of relationship between the CIA, the FBI for domestic stuff, and these companies, because there is a non-zero chance that the Chinese have an equal capability here. We're assuming they're behind, but who knows what they're doing behind closed doors? So what's your take on this: is the boy who cried wolf the real deal

now?

I think it's mostly theater. Okay. In February of 2019, when Dario was still at OpenAI, they did the same thing with GPT-2.

At that time, this 1.5 billion parameter model was supposed to be the end of days. It was supposed to unleash this torrent of spam and misinformation. That was the big bugaboo at the time. And so what happened?

They went through this methodical rollout over six or nine months. They started by releasing the smaller parameter models, and then they scaled up to the big 1.5 billion parameter model. And at the end of it, it was a huge nothingburger. If you actually think that Mythos is capable of doing what they say it can do, two things

are true. One is a very sophisticated hacker can probably do those things right now with Opus. And two, if these exploits are this easy to find, whether you use Opus or whether you

use Mythos, the reality is you'd have to shut down the internet for about five years to

patch them all. So when you see, like, a large multi-trillion-dollar G-SIB bank join this, it's a bit of theater. Why?

What do you think they can actually accomplish in two months?

Do you actually think that, if there are these vulnerabilities, it's all going to get fixed? Let's give them six months. Let's give them nine months. But the reality is that capitalism moves forward. The funding needs move forward.

And the need for these guys to build adoption moves forward. And that's going to supersede all of this. So I do think that Sacks is right, that they have figured out a very clever go-to-market muscle here, a go-to-market motion that activates hyper-attention and hyper-usage. And so I give them tremendous credit, and I'll maintain what I've maintained before.

Anthropic is shooting the lights out right now. This is like Steph Curry going bananas. From everywhere on the court, these guys are hunting threes, and everything's going in. Okay, so huge kudos to Anthropic.

But we've seen it before. We saw it when these folks were the principal architects at OpenAI, and we're now seeing the same playbook here.

I think we'll look back and I think what we'll say are these two things.

One is, if we're really going to patch all these security holes, we need to shut down the internet for some number of years, honestly, literally years.

And the second is, an advanced hacker can probably do this today with Opus if they really

want to. Okay, hey, Brad, I'll get you in here for the last word. I'm going to go with: yeah, maybe they did cry wolf before. But based on what I see with these models advancing, and I'm using them, I'm using a lot of the open source ones right now from China, I think that this is like a code red kind of

moment. This is Defcon. We should be taking this deadly seriously, and I think these companies have got to coordinate with the CIA, and this is equally a defensive and offensive opportunity. Do you think this is the nationalization of AI now?

No, actually, I don't think it should be nationalized, although I did see people sort of insinuating that. I think these companies need to build a working group and coordinate with the CIA. I assume that they're already doing this.

I'm assuming Emil Michael and Trump and everybody have these people in a room, and that

they've given them the Defcon rundown and said, hey, how can our government use this to stop bad actors? And this is already being coordinated with the CIA and the FBI. I am 100% certain of that, that Dario went to them and said, look what we found. This is the real deal.

I'll give you the last word on this, Brad, since you're an investor in both of these. You know them quite well. The Frontier Model Forum, which was put together in '23, is cooperating on anti-adversarial-distillation stuff as we speak, right? They don't want to make it easy. You know, so Google and OpenAI and Anthropic are

coordinating on this stuff. You know, there are times where I've pushed back on Anthropic because I thought it was, you know, perhaps regulatory capture or something else. This is very different in my mind, right? Dario could have easily come out and said, oh my God, we passed

a threshold, we need to have a government moratorium. Remember, even our friend Elon called for a six-month moratorium in 2023 because of civilizational risk. This guy didn't do that. Instead, he said, okay, what should we do?

I'm going to get 40 of the leading companies together. We're going to spend 100 days sandboxing and hardening the systems, and then we're going to keep pushing forward.

What do you honestly think is going to get accomplished in a hundred days?

How many PRs do you think are going to get pushed to the core structural internet in a hundred days? What's the over-under number? You're going to say zero. My answer to that is, I'll say, like, 10,000, but it's going to be a meaningful thing.

But if it prevents your browser history from being released to everybody in the world, Chamath, that may be something that you're willing to, you know, let 100 days pass on. I think you got Chamath's attention when you said browser history. What about the dick pics? Chamath, he's going to release them himself.

Now Chamath's like, hey, Chinese hackers, you want my dick pics? Please put ...

We have to be out there complimenting them when they're doing the right things, relying on

the market rather than running to the nanny state and saying, do more of this. So this to me was just an example of a good balance. I'm sure we're going to have plenty of debates about this in the future. But, you know, this is one I would like to see more of. This is why, to use your word, Jake, I tried to have a more nuanced take: it's because we

have no choice but to take this seriously. Whether it's total theater, whether it's fear mongering and they do have a pattern around this, we can't take the risk, right? And it does logically make sense that as these models become more and more capable of coding, they're going to get better at cyber and there's going to be that one time period where

you're moving from pre-AI to post-AI, and you need to patch for that.

So my guess is we're going to see a lot of patches over the next few months. I think that will resolve the problem. I think this is a case where I'm going to give them the benefit of the doubt. I have criticized them in the past; I think that blackmail study was embarrassing

to the level of being a hoax.

But I think in this case, I'm going to give them credit and say that I think it's legit. So it's not an Anthropic hoax; this could be legit. You know, looking at this, we have no choice but to treat it that way.

Of course. Yeah. I mean, two things could be true at the same time, Sacks: they could have used this tactic before, and it could be performative, like the video with the dramatic music in the background.

It does have a little bit of drama to it, and the way they presented it is very dramatic. But it does make logical sense that the one company that made the bet on code bigger than anybody else would be the one to discover this quickest. And you know, a hundred days, that's a pretty big advantage over the hackers.

Okay. I think one more point there.

The most important thing that people haven't talked about here is that the amount of code being

pushed right now because of these tools is 10x, 100x in most organizations. So we need to have this type of security embedded in these new coding tools, to do it in real time. That's the opportunity. There should be real-time correcting of this.

If this is real, they picked the wrong companies. Meaning, there are energy companies, folks that control nuclear reactors. There are airplane companies that are flying hundreds of thousands of people in what are essentially manufactured missiles full of flammable gas going 500 miles an hour. None of those companies were included in this.

And so I think if you really thought that this was end of days, at a minimum we can

agree, maybe we should have expanded the circle a touch. Well, maybe those are customers of the ones that are included here. Anyway, this is a really important story. We'll obviously track it in the coming weeks to see what turns out to be reality. And Dario, do come on the program at some point. Brad, will you get Dario to come

on the program? I've invited him, like, three times. I've got his phone number. He's ghosted me. I don't know why he's ignoring you.

I literally got an introduction from, like, one of the number one venture capitalists in the world, who is on the cap table very early. He just won't respond. I don't know why. I would tell you, Dario's podcast with Dwarkesh, who I think is an excellent podcaster,

I've listened to that three or four times, taking notes every time. It is a really exceptional piece of work. But I will ping him. All right, let's keep moving. We've got a lot on the docket. You guys may once again be tarred by your affiliation with us.

For you, I mean, I don't care. Literally, I've got friends on both sides of the aisle. You have frenemies. You do. Even J-Cal. Even J-Cal has friends everywhere.

Let me ask a broader question here while we're on the topic of Anthropic. There was a really interesting story, or tweet, I guess you could say, by the founder of OpenClaw, Peter.

Yeah. What's his name? Peter Steinberg. Steinberg. Yeah.

A renowned coder created OpenClaw, which is kind of the thing that launched the whole agent era, you could say. In any event, he said that Anthropic was cutting off his access to the subscription he'd bought. Is that the next topic? This is on the docket.

It's a little bit nuanced. Everybody using OpenClaw would take their $200-a-month subscription to Anthropic, which was priced on, essentially, an average of how many tokens people were using. OpenClaw is very verbose, and those people are at 100x the usage of the average subscriber.

So they said, you can't use your $200 plan; you have to use the API.

You move from the $200-a-month plan to the API and pay for your token use. So they essentially ankled OpenClaw, and then 10 days later or less, they released, or announced, their new agent technology, which is, according to them, a safer, better version of OpenClaw.

Hey, all's fair in love and war, and they have basically aimed a huge cannon in the direction

of OpenClaw. Well, can you explain that exactly? So I think you're right that they systematically copied, feature by feature, OpenClaw,

incorporated that into Claude, and then the coup de grâce was basically cutting off OpenClaw's

oxygen. Can you just explain exactly what they did? Okay, very simply: when you buy a subscription to these services, they have blended your usage across many users. So nine out of 10 users use less than the tokens

they're paying for, and the top 10% use much more. When OpenClaw became a phenomenon, the number one open source project in history on GitHub, with all of this usage, people went crazy, and you heard me talking about how crazy I went for it. Those people with the $200 subscriptions were using $2,000, $20,000 worth of tokens.

So they said: you can no longer use your subscription, either your professional or enterprise subscription at $200, and plug that into your OpenClaw. You now have to go to the API and pay per usage. Now, will Anthropic's own agent harness be part of the bundled flat rate?

You can assume that that's what they'll do, which, if you were thinking on an antitrust level,

might be token dumping or price dumping.

I'm not saying, like, I'm accusing them. No, it's like bundling, isn't it? Well, price dumping or bundling: when you price something under the market price, in antitrust that would be price dumping, right? And if you were to bundle, it would be like the bundling issue.

Critically important: you can still use OpenClaw via the Claude API. And every company has a right to set the price for its products. It's just that under their current regime, they were selling $10,000 worth of tokens for $200 via OpenClaw, because these were such power users. Now they're just saying, we have to price this rationally, but we're happy to have you guys

use the API. OK, but Brad, when you use the OpenClaw competitor that Anthropic now offers, are they subsidizing that? Are you paying for it? I don't know yet, because it's in closed beta.

So in other words, what I'm saying is: if they charge API usage for their own first-party

agent harness or system, then that would be apples to apples. But if they end up charging the bundled flat rate, let's say, for their stuff, and then charge the metered rate for third-party stuff, you could make a bundling argument. Sure, sure. And you could say it's anti-competitive, assuming that Anthropic has dominant market share

in coding, which I think most people would say they do at this point.

And assuming that it's the same product. I mean, the reason most enterprises will probably use the Anthropic version of this agentic product is because it meets all of your security parameters, right? So Altimeter runs a lot of stuff on Anthropic. They're already integrated with our data warehouse, our data lake, things of that nature.

So just letting OpenClaw loose on the Altimeter data set would not be wise. And so it's a fundamentally different product. No, I get that. And I think that Anthropic has a huge advantage in, let's say, cloning OpenClaw and just building it into Claude.

I'm not denying that. To me, that would be the reason why they don't need to do price discrimination: there's already a very good reason to use, let's call it, the bundled offering on a feature basis. But the question I'm specifically asking is whether they're giving themselves a price advantage.

Because I think you're just giving the most generous interpretation. You're taking a more cynical one. I'm with you, Sacks. I'm 100% on the cynical side. OpenClaw is so powerful,

It's got so much momentum that not only is Anthropic trying to ankle it.

I believe when Sam Altman bought it, and he didn't buy OpenClaw itself,

he acqui-hired Peter, I believe it was to subvert the open source project and to get Peter's next set of genius ideas inside of OpenAI, as opposed to letting him take them elsewhere. People are going to say I'm a conspiracy theorist. But this is the number one focus.

And let me just give you a list of who is trying to kill OpenClaw, slash, compete with them. Obviously you have Anthropic, but also Perplexity's computer launch. It's awesome. I've been using it.

Anthropic has this Claude managed-agents product. They dropped that on Wednesday, April 8th, yesterday; today's Thursday when we tape, and you guys listen on Fridays. And then you have the Hermes agent that was released on February 25th. That's also open source and very good.

So that's in the open source camp. Alibaba is coming out with one that's going to be based on their Qwen model. And you have Elon, who said he's got something called Grok Computer coming out of Macrohard, which is a play on words on Microsoft. In addition to that, Amazon and Apple are preparing new releases of their retardmaxing

assistants, Alexa and Siri, that will be less retarded in this new version. And then nothing out of Satya and Microsoft yet. So the number one goal, I believe, in the large language frontier model space

is to kill this open source product.

No, I mean, come on. Why are they building multi-functioning agents that can

move from answering questions to actually doing something for you?

Like, you've got to do that, because that's what consumers and enterprises want.

It doesn't mean that it's about killing OpenClaw; it's just the obvious thing. They have the right to do it. But this is a giant movement to stop it, because this is the equivalent of having an open source, Android-like player in the market. And that could be incredibly disruptive.

I believe open source is going to win the day on large language models and take 90% of the token usage. And I think the entire frontier model space could be undercut by open source. And I think they realize that SLMs, the small language models that are verticalized now, that will run on, you know, desktops and laptops and are even starting to run on

phones, are their biggest competitive threat. And I hope it happens, with all due respect to your investments, Brad. I think this technology and the interface are, you know, table stakes.

But I think it's imperative that at the agent level, which is essentially your entire life, you don't give that to Anthropic, you don't give that to OpenAI. That's your entire business, your entire life. It is foolish for you, Brad, to give your entire business and all the knowledge you have to Anthropic through that, unless you're just doing it to boost your investment in those companies.

But I would be very concerned, if I were you, about putting all of the knowledge that you earned over a lifetime into any of these large language models. All right, Jake, let me ask you a question. Thank you for that impassioned monologue. You're interrupting my TED Talk. You're interrupting my TED Talk. Yes, thank you for that TED Talk. I have a yes/no question for you.

Do you believe that Anthropic has dominant market share in coding right now?

Yes, no. No. In coding? Yes. Just the lead.

I think it's a trillion-dollar market and these guys have less than 10% of it today, so it's hard to make that case. What share of the coding tokens do you think Anthropic is providing the market right now? Greater than 50%.

That's called dominant market share. I don't know about that. 50% of the market? You've got to look at what that is. You've got to look at the TAMs. All right.

So I'll be the tiebreaker before we move on to the next slide. I'm not saying it's a permanent condition. Okay. If you're telling me that today Anthropic is delivering over half of the coding tokens, that's clearly a dominant position in the market for coding.

It's an early market. It could change. If I were representing them, David, I would say: nine months ago, everybody counted us, you know, out of the game. We were going to be destroyed by OpenAI in three months.

Now people are saying we have a dominant market position. This is the fastest-changing, most competitive market in the world.

I think you'd be very hard-pressed to walk into, you know, some district court and make the case that these guys have somehow already formed a monopoly against Amazon, Google, Microsoft, OpenAI, et cetera. Well, I'm not saying it's already a permanent monopoly, but I am just asking about market share. And I think you guys all agree on the market share.

Let's get to market share. They probably have 50 to 60% market share, because I think Codex is actually quite broadly used as well. But that belies the more important point, which is that AI-enabled coding, I think, is still 5% of the broad market. So it's kind of a nothing burger.

Yes, they're leading, but they're leading in something that isn't that big yet. Now you would say, how could it not be big? And what I would say is, because most of the stuff that's being written is still blank-sheet, de novo code. And I think the ugly truth is, I don't care what model you have.

No, the long-horizon ability for any of these models to actually build enterprise-grade software is still shit. And that's the actual lived experience, not for me, but when I call on our customers, half-a-trillion-dollar banks, $100 billion insurance companies.

None of these guys are like, wow, it just works out of the box. It doesn't work. So most of it is still hand-tuned. So until I can honestly tell you that we can point a model at this with the right guardrails, which I can't today, what I would say is: it's a small market that will become large

as these models become better. But we are in a world where we have 50 years of accumulated tech debt. And I suspect when you enumerate the number of lines that represents, it's hundreds of trillions of lines of pretty marginal, mediocre-to-bad code. On top of that, we have all these legacy languages.

I'll tell you, one of our customers has to go and get 60-year-old pensioners to come into the office to interpret COBOL. No, I'm not joking. This is all FORTRAN, and this is a $100-billion-a-year revenue company.

That's how they solve these problems.

It's not like the frontier models just solve it. So I would just keep in mind that of the tech debt in the world that exists, 99% of it is still poorly addressed by these models. We are untying this Gordian knot, and it's going to take decades to do it right.

So, all the breathlessness about all this other stuff, I really think it's not where the money is; it's not the big-TAM stuff. And you can tell me, oh yeah, it's going to be the future.

And I would say: tell that to this business with $100 billion a year of revenue and 50 million billing relationships, that all of a sudden you're going to OpenClaw your way to a solution. It's bullshit. Not to say that you can't have a great copilot, and not to say you can't do some useful stuff and, you know, have a good knowledge base.

I like that too. But the core thing that your lived experience sits on today is a massive pile of tech debt that will get very slowly replaced. And that's just the reality of life. And there are competitors that are extremely disruptive. I'll tell you about one. We talked about Bittensor, TAO, on this program a couple of weeks ago when we had the Jensen interview; you brought it up, actually, to me. There's a project, subnet 62, called Ridges AI. And what they're doing is a competitor that is not only open source, but anybody can contribute to it.

They spent about a million dollars in TAO rewards, and in 45 days they hit 80% of what Claude 4 is, and they did that in under 45 days. The way that works is they give rewards to people who, and they can do this anonymously, make that coding product, which rivals Codex or Claude Code, better. That flywheel is racing right now, with participation, in the same way Bitcoin's is. So you're going to see a lot of these open-source and crypto open-source combinations.

And anybody who's not investigated this, I highly recommend you investigate it. I do think you're right about one specific thing. I would put at zero, literally probability zero, the chance of any important company worth anything more than a dollar outsourcing their production code to an open-source project. That'll never happen.

However, what will happen, though, is this: look at the cost of training this 10-trillion-parameter model on Blackwell, and look to the future, let's just say in six or nine months, when a 15- or 20-trillion-parameter model is going to get trained on Vera Rubin. I think, Jason, you are right. And I have zero, just to be clear, I have no investments in this at all. I'm just observing, because another project other than Bittensor that someone brought up to me is Venice. The concept of open-source training and orchestration is a hugely disruptive idea, which is the completely orthogonal attack vector to this idea that you have to raise tens and tens of billions of dollars to train your models. Because if the capital markets run out of $10 and $20 billion checks to give people, the only solution is to be totally decentralized. So I tend to agree with you, Jason, that there is going to be, at some point, a very successful open-source project for pre-training.

Absolutely. But will there ever be an open-source way where a real company that has any skin in the game says, here, guys, re-engineer my code base as an open-source project? Never going to happen.

Yeah, and I think the coding tools as well. And if you look at the history of open source, Brad, you actually, I think, had a lot of bets in this space: Linux, Kubernetes, Apache, Postgres, Terraform. These open-source projects are deep inside of enterprises. Deep. And whoever was sitting here 15, 20 years ago, the same argument was made.

Nobody will ever adopt these inside the enterprise; you've got to go with Oracle, whatever. And fair enough, many people do.

But I think this is the $20 subscription to do this versus $200.

It's starting to take hold inside of startups. And that's where I always look, at the tip of the spear; startups love to, you know, use open-source products. I think this could be the next big bang. But listen, I invest in things that have a 90% chance of going to zero.

So do your own research; no crying in the casino. Can I just make a few points? So, just quickly. Number one is, with respect to this market for code, for code tokens, whatever you want to call it: it might be 5% today, meaning 5% of code is AI-generated versus human-generated. I think it's going to 95%. I mean, I'd bet any amount of money on that. The only question is when, probably over the next few years. That's point number one. Point number two is, it's possible that if you're the early leader in coding as an AI model

company, let's say you have 50% to 60% market share, you have the most developers using it. Therefore, you have the most access to code bases. You might get the most training tokens. There is a potential flywheel there, where you can see the early market leader consolidating its lead, because it's generating the most code tokens and getting access to the most code. Now, I'm not saying for sure that's going to happen; it's possible that the other guys catch up.

But I think there is a possibility of a flywheel there, and strong, I guess you'd call it, data-scale effects, things like that. So I do believe that the market for code tokens could be monopolized. Third, Anthropic's revenue run rate, from what I can tell and what's been publicly released, is the fastest-growing revenue run rate at scale that I think we've ever seen. Which is perfect for the next story.

Okay, maybe pull up the tweets, but this thing is ramping at a rate we've never seen before. Yeah, we'll get to that in a second, but one last final point: I think it's pretty clear that where we go from here is agents, and coding gives you a huge step up on agents, because, you know, one of the main things agents need to do is write code to enable them to complete tasks, correct?

And so, if it is the case that coding is the huge market that's going to be dominated by one or two companies, and that then leads to another huge market, which is agents, my point is just this: I think all these companies need to behave in a very clean way and not engage in tactics that later the government might say, you know what, that was anti-competitive. Everyone should just, I think, play fair, not engage in discrimination against other people's products, and engage in fair pricing. I'm not accusing anyone of breaking any of the rules, but what I'm saying is that eventually the government is going to look at this market with the benefit of 20/20 hindsight, and I think everyone should just, basically, you know, keep their noses clean and run a tight ship. A tight ship is right. I think that's an excellent point. Let's talk about the revenue ramp of Anthropic. This is just unprecedented.

Anthropic's revenue run rate has topped $30 billion, with a B. Early 2023, they turned on revenue; they started charging for API access. End of 2024, they're at a billion-dollar run rate. February 2025, they launched Claude Code. That was the starter's pistol. Mid-2025, a $4 billion run rate. End of 2025, a $9 billion run rate. Then, just a couple of months later in April, a $30 billion run rate. Yes, that's right, it more than tripled.

And the way they did this: enterprise customers are a major part of this spend. Dario announced a couple of months ago that there are over a thousand enterprises paying over one million dollars annually. This is truly mind-boggling when you think about it, because those are the most coveted customers in the world. These are the big fish. When people are running enterprise software companies, they dream of these; Slack dreamed of getting these million-dollar customers, Salesforce dreams of getting these million-dollar customers. Brad, you're an investor. I guess Sam famously, on BG2, asked you to sell your OpenAI stock back to him; you didn't, you demurred. But you're an investor in both. How shocking is it to you to place both of those bets, and then see one of them come from so far behind? You know, ChatGPT has 900 million users.

I don't know if they've passed a billion officially yet, but they are the verb, right? They're the Uber, they're the Xerox, they're the Polaroid of AI. But they didn't go after the enterprise. Dario made that bet. And Dario, who was a co-founder of OpenAI, left, and according to the New Yorker story that came out from Ronan Farrow this week, he basically left because of his disgust at working with Sam Altman. Your thoughts? You know, before we go down the OpenAI rabbit hole, Brad, let's just really contextualize what's going on here. You know, I have this additional chart; you showed one. You know, they added four billion

of annualized run rate in January, 7 billion in February, and 10 or 11 billion of annualized run rate in March. Just to put it in perspective, that's Databricks plus Palantir combined, added in a single month, right? So we started the year with everybody wringing their hands, including, you know, Gurley and others, saying we're in a big bubble, asking whether the AI revenues would show up to justify all of this investment. And bam, you have the largest revenue explosion in the history of technology. So the company's plan was to end the year at about a $30 billion exit run rate. They got there by the end of March, right? And I suspect it's continuing in April.

So you have to ask what's going on and what's the big so what?

The first thing, for me, is that model and product capability just hit this threshold we talked about earlier: near-AGI, whatever the hell you want to call it. And everybody, like Altimeter, said, damn, this is so good, I have to have it. This is no longer about my IT budget.

This is about labor augmentation and labor replacement. And by the way, Cowork is growing even faster than Claude Code at the same stage of development.

What it showed is that we have a near-infinite TAM.

It turns out that the TAM for intelligence is radically different from anything we've seen before.

And I think the best example of this, right, is millions of self-interested parties: consumers, enterprises, a thousand of them now over a million dollars, right? It's not that there was some great go-to-market at Anthropic, that all of a sudden, you know, they snuck up and blew everybody away. No, it was companies demanding the product. They're getting throttled on the product.

Why? Because it's so good. It makes them better at their business. We are all self-interested actors. And a million of those people are all making the same decision.

There's a huge tell here: the TAM is as big as Dario and Sam and others have been saying. We knew intelligence was going to scale on the exponential. The question was whether revenue would scale on the exponential. And that's what we're seeing.

And remember, they're doing this with only one and a half to two gigawatts of compute, right? These guys are massively compute-constrained. They're each going to be adding three gigawatts of compute this year, so that will unlock more; they would be growing even faster but for that.

And then, Jason, to your point about the open-source models: we all want to be a part of the solution. I've talked to a lot of big companies; 65 to 70% of their token consumption is open-source models, these cheap Chinese and other tokens.

So these revenue ramps are happening while the world is already using open source. This is not frontier-only. This is frontier plus open source. We're going to see massive token optimization over the course of the year. But what happens with this Jevons paradox is that the unit cost of intelligence is plummeting.

Not the cost of tokens; the unit cost of intelligence is plummeting, because the capabilities of these models are so much better. I look at what it does for Altimeter day in and day out. I talked to a major company yesterday.

They're on a run rate to do $100 million of token consumption this year, on about five billion dollars in opex. They think that we're now nearing peak employment in their company, but that their token, their intelligence consumption, okay, let's not call it token consumption, right? Because tokens may go up a lot, but their intelligence consumption is going to go up a lot.

So I would leave you with this. We're early, to Chamath's point. We have low penetration of the Global 2000. We have low penetration of the use cases. We have low penetration within the use cases that they're already using. And the models are only getting better.

So I think when you look out toward the end of the year, I would not be shocked if you see Anthropic exiting this year at 80 to a hundred billion dollars in revenue. And by the way, they'd be doing it at the same time as OpenAI, who is also on the wave; they'll be releasing an incredible model imminently.

They're going to be on that wave, and you're going to see an inflection in their revenues as well. To Chamath's point, question one has been answered: the question of, hey, does this stuff actually have utility? That went from a question mark to an exclamation point. Of course it's got utility. People are getting value from it. It might be variable; some people get more value than others. Number two, the revenue ramp was a big question.

Now that's turned into an exclamation point. The final piece of the puzzle, which you've brought up many times: can this be profitable? These companies are burning through a large amount of cash. So what is your take on when these companies can get out of the J-curve? We talked about this, I think, three episodes ago; I estimated we're going to be looking at $400 to $500 billion in investment into these data centers at a minimum. And then they have to climb out of that to get to profitability. So what are your thoughts on these becoming profitable companies?

Do you remember that investor who published this list, Jason, of the terms you talk about when the one term you can't talk about is profit? It's a list where it's like: if you can't talk about free cash flow, you talk about EBITDA. When you can't talk about EBITDA, you talk about margin. When you can't talk about that, you talk about revenue.

And then when you can't talk about revenue, you talk about gross revenue. Bookings. So you can kind of figure out, I think, where we are in any part of any cycle just by indexing into what everybody talks about. I think where we are is between gross revenue and net revenue.

That's where the discussion is. Okay. There was another article, I think today, and I think maybe it was The Information, that tried to categorize and distinguish that Anthropic presents gross, OpenAI presents net; they're different.

We don't know what the various take rates are. So they're saying that there's a difference. Whether it's true or not, there's been no clarity provided by these companies. So at a minimum you have this confusion, where there's the breathless talk, and then there are people who don't even know the difference between actual recognized revenue and run-rate revenue.

And on the bottom, I mean, we're definitely there, okay?

We can quibble about the details, but we are not at the place where people are like, oh, here's your steady-state, you know, free cash flow margin, and here's your EBITDA.

We're years from then. They're going to have token-maxing EBITDA. Like community-adjusted EBITDA at WeWork. The thing that we need to understand is how gross-margin-negative this revenue growth is. We don't know that.

And at least we don't as outsiders. Brad might know. Brad may know. I would tell you, think about this: what are their big cost inputs? The number one cost input is the cost of compute, right?

I just told you they only have a gigawatt and a half of compute, and they have that gigawatt and a half of compute whether they have a billion in revenue or whether they have 80 billion in revenue. So you might actually expect to see these companies' gross margins exploding higher.

Like the fastest increase in gross margins I've probably ever seen in any technology company. So this is not gross-margin-negative, you're saying? No, definitely not gross-margin-negative. And what I would tell you, it must be hugely profitable. Well, you may see what I call accidental profitability: they may not be able to spend this revenue fast enough on compute. And remember, it's only 2,500 people. Google crossed this revenue threshold when they had 120,000 people. These guys have 2,500 people.

So the only thing you could really spend money on, right, is compute, and they can't stand up the compute fast enough. But I want to be careful; none of this is proven, to be honest. Because if you were on the threshold of 90%-plus gross margin, and I'm not saying it's there, I'm not saying it's 90%-plus, I'm just saying it's gone from meaningfully negative 18 months ago to, you know, very, very positive.

I've seen rumors or something. So that's 10%? That's what you're saying. The trend is there. Let me just say this.

I think if you're an incumbent, you want the cost of compute to go down. I think if you're not an incumbent, and specifically I mean Meta, Google, and SpaceX: those three, well, sorry, Meta and Google have a fortress balance sheet, and I think by the end of June SpaceX will also have a fortress balance sheet.

What they will want to do is make this a compute problem, because they will control the conditions on the field. You already see this today. Yeah. Meta's models today, the general reviews are: it's okay.

But the one thing that people say is it's incredibly performant. The model quality is okay, but the performance is great, which speaks to Meta's huge advantage: they have a massive compute infrastructure. So if you're not OpenAI and Anthropic, you'll want to make this a capital problem, because then you can win it. If you're Anthropic and OpenAI, you want this thing to be as efficient as possible. I think where we are is very much the early innings. And we're bumbling around talking about gross margins and, you know, revenues; we are not at profitability.

And what was true for Facebook and what was true for Google was that, irrespective of when they got to a billion, those f*ckers were profitable by year three.

And they never looked back.

I was there. I remember. It was glorious. The cost of building AI, I totally stipulate, is radically higher than the cost of building retrieval at Google, right?

Like, it's just a fundamentally more expensive problem. But I will tell you, there's a lot of FUD out there about negative gross margins. I mean, Jason, you started this segment by saying they're burning through large amounts of cash. I think people are going to be shocked at the burn, how low the burn levels are, at these companies. Yes. At Anthropic or OpenAI? And I would say at OpenAI as well. Like, if they do, you know, $50 billion this year, just look at the number of people they have relative to that revenue; the burn is pretty low. And inference cost has plummeted; inference cost is down by 90% year over year.

And so, just finally, I want to respond to this point about gross versus net, this tweet that Chamath was referencing. Okay. There's a certain percentage, a smallish percentage, of Anthropic's revenue, right, that they distribute through the hyperscalers. And like a lot of arrangements, whether it's Snowflake or Databricks or others, you pay a commission, right, on that. I will just tell you, you're talking a single-digit percentage of the total revenue of these companies. The gross-versus-net thing isn't what's being reported; the apples-to-apples comparison is pretty easy.

And if you want to be conservative on it, take down Anthropic's revenue by, you know, five to ten percent. Which, you know, again, I think it's better to gross up OpenAI's revenue. But any way you do it, I just think it's a distraction from what's really going on here.

So, what are your thoughts on this massive revenue ramp? Yeah. I mean, I want to go back to a point that Brad made that I think was just really important, and I want to underline it: consider where we were at the beginning of the year.

What everybody was saying is that AI was a big bubble.

And the evidence they would point to was the fact that hundreds of billions of dollars were going into capex that needed to be spent on these data centers, and there was no evidence of significant revenue to justify that spend. Where was the ROI? By the way, as an aside, the same doomers who were saying that AI was in a bubble

were also the ones who were saying that AI was so powerful it's going to put us all out of work.

And it's going to take over from humanity. I mean, in other words, they couldn't decide if AI was too powerful or not powerful enough. But putting aside that contradiction, they clearly were making the case that AI was this big bubble, that there'd be no payoff or justification for this massive capex being spent.

And I think we're starting to see it here. There is justification for it. We're seeing it in just this one vertical of AI, which is coding. We're seeing the fastest revenue growth in history. It's utterly unprecedented.

And this is just one category or vertical of AI. We know that agents are coming next, and the enterprise adoption of that is going to be absolutely massive. So I guess what I'm saying is that this is early proof of, I think, the thing that makes Silicon Valley special, which is that we're willing to basically bet on things that, just intuitively, on a gut level, we know are the next big thing. We're not that spreadsheet-driven, actually. Silicon Valley believes that if you build it, they will come, and it is willing to finance that build-out. And that's basically what's been happening.

Again, just the top four hyperscalers: $350 billion of expected capex this year. On its way, I think, to what Jensen said: one trillion by 2030. So Silicon Valley, whether it's big companies, whether it's founders, they're always willing to bet on the next big thing.

They're not like Wall Street. They don't need, you know, a spreadsheet to tell them where to go. They know where the technology is going, and they make their bets based on that. And I think there is going to be a big payoff for this. And I think the thing that's going to make our economy, and the United States in general, remain extremely dynamic and in the lead is that we are willing to make those kinds of bets. And I think it's going to pay off big time. Yeah, clearly. Hey, Brad, you didn't answer my question about the vibes over at OpenAI versus

Claude. OpenAI is, I wouldn't say reeling, but there's a lot of hand-wringing going on. A lot of employees leaving, a lot of people who are wondering, like, is our consumer-first strategy the winning strategy? They shut down Sora, they're, you know, unwinding the Disney deal and really trying to get the company focused.

And it's kind of like, I mean, listen, the New Yorker story was a bit of a rehash. I don't think we have to go into the blow-by-blow, because we covered it here years ago.

But the truth is, a lot of the great founders, co-founders of OpenAI, and a lot of the great contributors are now at Anthropic and other large language model companies. And in the secondary market, OpenAI is trading lower than its last valuation.

And Anthropic is trading significantly above the $380 billion.

So maybe talk a little bit about this competition, this Microsoft versus Apple, this Google versus Facebook. Well, let's start with immense credit where credit is due. Anthropic was literally counted out of the game last year, right? And here they come over the last 12 months, and they've kicked OpenAI's ass over the last 90 days, right? And what did Anthropic do? Anthropic made choices. No multimodal, no video, no hardware, no chips, no building data centers. They said, we're just going to focus on coding and Cowork.

We think that is the path to AGI and ASI. They executed their butts off. They took the lead: 2,500 people, tight, pulling on the oar in the same direction.

But I think you would be seriously foolish to count out OpenAI, right?

And I think we're at peak OpenAI FUD. And I'll tell you, it starts with great researchers and great models. And I think when you see the Spud model they're about ready to release, I think it's going to be an excellent model that shows they're firmly on the wave. If you look at what's going on with Codex: incredible ramp on Codex, fastest-ramping model with 5.4. I think 5.5, or Spud, whatever we're going to call it, is going to be an even faster ramp. Have you seen Spud? Have you used it? Have you gotten a preview?

People are using Spud, right? So it is being previewed. And so you're talking to people who've used it; what are they telling you? They're telling us that it's an incredible model, on par with Mythos, right? And that it's a very usable model in terms of how it's packaged.

I will say, back to David's point, and this is the most important point I think anybody can take away here: this is not zero-sum. The TAM of intelligence is dramatically larger than any TAM we've ever seen in our investing careers over the last two decades, right?

And if you're on the wave, which OpenAI is, you are going to be selling into the world's biggest TAM. They are going to build a very big company. I'm a buyer of the shares today, notwithstanding all of the vibes that you describe.

I think these companies are firmly on the wave.

They are jarred. They are sitting there saying, what did we do wrong, and how do we get our mojo back? They want to compete. It is embarrassing to the people on the research team and the product team over there. So I'm not saying there's not a real awakening occurring there.

But I think that's the case. And by the way, to Chamath's point, do not count out Meta, right? I think Meta is absolutely in this game. Google is absolutely in this game. Elon is absolutely in this game.

And he's going to have some stuff dropping shortly that's going to be very impressive. And if you're on Team America, look at the fact that we have five frontier models competing against each other, and David made sure they weren't throttled by excessive government regulation. With Mythos coming out, it's a self-imposed safe harbor, you know, to harden our systems. It wasn't a call for moratoriums or getting the government involved. We have the type of competition that's causing us to accelerate our lead against the rest of the world. We can't take our eye off the prize.

We've got to stop adversarial distillation. And we need to make sure that we're distributing our products around the world. But I view this as really good for Team America. Well said. And here is your Polymarket: IPOs before 2027. Obviously, SpaceX at 95%, Cerebras at 94%.

And hey, number five on this list: a 51% chance that Anthropic goes public before the end of the year, a 44% chance that OpenAI comes out before then. All right. Here is the closing market cap for Anthropic on Polymarket, only $158,000 in volume. So, Chamath, when you put in your 400K, you're going to really tilt this market.

A 78% chance that it's above 600 billion, a 19% chance that it doesn't go out.

So it's looking like this will be a decent investment for you, Brad. What valuation did you get into anthropic at?

We first invested, I believe, at the $130 or $150 billion round.

So this would be a 5x for Altimeter? Congratulations. I mean, listen. Again, there are lots of people who were there before us, and who are on the board, and who are going to do better than that.
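[Editor's aside: a quick back-of-envelope check on the multiple being discussed, using only the figures quoted above, a $130-150 billion entry valuation and Polymarket's $600 billion market-cap threshold. This is an illustrative sketch, not part of the conversation.]

```python
# Back-of-envelope check on the implied return multiple.
# Figures are the ones quoted in the conversation: Altimeter's entry
# at a $130-150B valuation, and Polymarket's $600B market-cap threshold.
entry_valuations = [130e9, 150e9]
target_cap = 600e9
multiples = [round(target_cap / v, 1) for v in entry_valuations]
print(multiples)  # [4.6, 4.0] -> roughly a 4-5x, closer to "5x" than "7x"
```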

Would you have put it in at 50? No. We've got billions in both companies. Billions in both companies. Oh, Lord. Yeah, I think there's this existential thing going on in venture today.

And David could talk about this as well. I mean, people are extraordinarily nervous. Look at the IGV software index, down 30% year to date, down 5% today. All software stocks plummeting, right? Venture capitalists are terrified to invest money in anything other than these frontier

models and things like SpaceX or military modernization. Finding something that's out of harm's way of AI, right, where you can count on the terminal value, to Chamath's insights over the last few weeks, is very difficult to do. That's why you see this crowding. So we've taken a barbell approach, right?

We've got a lot in what we think are the most important companies that are on the frontier.

And then we're betting with really small teams that we think have very defensible businesses in a world of, you know, AGI. But it's true. What happens to all these enterprise software companies? Do they become PE takeouts, do they get consolidated, or do they just have to adopt these AI technologies and solve this problem of, hey, the frontier model is just going to solve for whatever these niche software companies do?

I think the market is probably being a little too pessimistic with respect to at least some

of the software companies. Obviously, there are going to be big differences in the quality of the moats of these companies. And so look, software is going to be a lot cheaper and easier to generate, but I'm not sure that was the competitive advantage of a lot of these companies. So there's probably a little bit of the baby being thrown out with the bathwater right

now, and there probably are some value buys in enterprise software. I think the interesting question here, and we've been talking about this for a couple of years on the pod, is just where you see the AI value capture being in terms of the layer of the stack. Remember, where we started, it was really just the chip layer of the stack where all

the value capture was.

It was basically Nvidia; it was the first company to be worth multiple trillions of dollars

because of AI. And for a while, it looked like that's where all the value capture was going to be, because OpenAI, for example, was losing so much money, and the model layer wasn't on the radar as much. Now we're seeing, wait a second, it's not just the chip companies, it's also the hyperscalers that are benefiting, and now we're seeing at the model layer, it looks like

Anthropic and OpenAI, they're all going to be huge beneficiaries.

I think the next question is, at the application layer of the stack, will all that value capture just get eaten by the model companies, or are there applications that get turbocharged?

I guess you could say that Palantir is already one of them, right?

It's an application company that's been turbocharged by these model capabilities. Who else will be a big beneficiary? Again, is it all going to be at the model layer, or will you see an explosion of value at the application layer? I'm hoping, obviously, that at all layers of the stack you see beneficiaries. But to me, that's a really interesting question right now.

Yeah, what happens to Salesforce, HubSpot, you know, Oracle, right down the line? David, Chamath, your thoughts here on the layers and where the value is captured? It's too early to tell. And energy you can kind of put into the data center bucket as well, but that's obviously been a clear winner.

A little housekeeping here. Liquidity: put a little graphic in here, producer Nick, got it at that. It is sold out. There's a waitlist of hundreds of people, but it is what it is, folks. If you snooze, you lose.

And top-tier speakers are coming; it's going to be great. We'll get an update from Chamath, but I think, Brad, you're going to be joining us again. Yes, for Liquidity. I have an update. That's probably not your headline, though. I'm probably not your headline.

No, but you always score so high. At every event you've spoken at, you've been either number one or two. I don't think you've ever dropped to three.

Go ahead, Chamath. Make your announcement here. Da-da-da-da. Nat sent me an article from Wikipedia about penis lengths, you guys, while we were doing the breaking news, showing me that I'm in the large category, top 5%. She highlighted it.

Top 5%. Okay. Is that with Nano Banana? Oh, with that. She just texted, dummy, it's clogged.

My apologies. Oh. All right. This is why Chamath isn't afraid of the cyberattacks: because nothing's going to come out that's more embarrassing than what he says himself on the phone.

It's like, guys, I got that. So I saw the agenda for this thing. It's incredible.

Congrats to you guys. I mean, just the polish of the event and the app, all the poker, all the dining experiences: this is five-star all the way. Looks really cool. It's at that level because Chamath was, dare I say, belligerent in his demands.

He said, this has to be six-star or I will not show up, J-Cal. I said, okay, boss, get to work. And Chamath, what do you got? And no mids. This is all elite.

And for the hundreds of people who are on the waitlist, I am sorry, but we have a capacity issue. We'll try to get you in next year. But Chamath, give us some updates here. Do you have any updates you want to share? Because you are running programming for Liquidity 2026.

Look, it's going really well. Really excited to hear all of these great folks speak.

I think the next two we'll release today: Brad Gerstner and Thomas Laffont of Coatue.

Coatue? That's great. We also have, I think, three people confirmed for their best-ideas pitch.

Really interesting folks, they each run between one and six or seven billion.

Awesome. Superstar investors. There's a new segment here. It's great. So right now, we have Bill Ackman.

We have Andrej Karpathy. We have Dan Loeb. We have Thomas Laffont. We have Brad Gerstner. We have Sarah Friar.

And more to come. We will announce more of them. There might be one or two surprises. J-Cal's got a couple of surprises. We don't announce all the speakers.

J-Cal's got a couple of surprises coming. And if you didn't get into Liquidity, apologies, you're on the waitlist. We are also going to be hosting the fifth annual All-In Summit in Los Angeles, September 13th to the 15th. Absolutely.

You should actually come to that: all-in.com/nations. You should actually come to that. I've been advised that I can attend for business; I can be in the state for business reasons. Okay.

There you go. Then we'll see you at Liquidity and the Summit. Perfect. That's big news. Now we've just got a bunch of Sacks stans who are going to be racing to get in. And this is what happens every year behind the scenes: Sacks at the last minute says, oh, I have four speakers and I have 72 people who need tickets. And then the whole team has to do a fire drill 48 hours before the event. Okay.

Here we go, guys.

We're going to go to the third rail here.

We've got to catch up on the Iran war. Here's the latest: two weeks into a ceasefire, which started just two days ago at the taping of this, VP J.D. Vance, friend of the pod, and some special consultants, Witkoff and friend of the pod Jared Kushner, are headed to Islamabad, the capital of Pakistan, for talks this very weekend.

So while you're listening to this episode, they are going to be working on the peace deal. Easter Sunday, Trump posted a Truth stating: open the fucking strait, you crazy bastards, or you're going to be living in hell, just watch, praise be to Allah. On Tuesday morning, Trump posted another threat on social media: a whole civilization will die today, never to be brought back again.

I don't want that to happen, but it probably will. The posts were obviously discussed a lot over the last week. He gave them an 8 p.m. deadline; at 6:30 p.m. the president announced on Truth Social that they had agreed. President Trump had agreed to a two-week ceasefire if Iran opens the strait. He also said, hey, listen, we've got the strait.

We're going to be a toll booth, and we'll take the majority of the toll and we'll split it with Iran. Here's the quote: we received a 10-point proposal from Iran and we believe it is a workable basis on which to negotiate. And apparently Netanyahu took the ceasefire

to mean level Lebanon, dropping 160 bombs in 10 minutes yesterday. Sacks, you were out last week; everybody wants to know your position on the war, so I'll hand it off to you. What are your thoughts on the two-week ceasefire and everything that's occurred up until this point? Well, look, I have to preface what I'm about to say, which is I'm not part of the foreign policy

team at the White House, and the last time I commented on the war on this show, it somehow made international headlines that "Trump adviser says X, Y, Z," and I'm not a Trump adviser on this issue.

I think that'd be a fair headline to write if it was a technology issue, but this

is not. So, whatever I say is just my personal opinion, but then the media is going to somehow portray it or attribute it to the White House to try and create an issue out of it. So I feel like I'm limited in what I can say, except to say that I think it's terrific that we have the ceasefire, and I think it's great that there's going to be this meeting

in Islamabad to hammer it out. And I think what the president's accomplished so far with the ceasefire is a great thing, because what happens with these wars is they take on a life of their own, meaning they tend to go up the escalation ladder, right? A lot of podcasts are discussing this so-called escalation trap, and supposedly

there are stages to this based on historical patterns. So I think it's actually very hard to pull out of these things, and I give the president tremendous credit for negotiating the ceasefire that we've achieved so far, and then sending the team to fully work this out.

Actually, my first trip to the Middle East was when you and I went, maybe four years ago; thank you for taking me. What is your take on where we're at here? I think we just wrapped up week six of this, and we're going into week seven. First, on March 4th, I tweeted the Trump doctrine on Iran: massively destroy all military capabilities, kill the people building lethal weapons to use against us, and get out. Reserve the right to do it again if needed; zero effort to build in Madisonian democracy. Iran's going to have to build what comes next.

And I think that's what the market has said, right? If you look back at last year on tariffs, Jason, the top-to-bottom drawdown was about 15% on the S&P, and the Nasdaq was down 22%. The drawdown in this period over Iran was only about 5 to 7% on the S&P and Nasdaq. So the market has said, listen, we trust Trump at his word; he said he's not going

to get into an entangled war here. I think he terrifies the hell out of people with his tweets about destroying civilization and all this other stuff. But I think people, even though they don't like to hear it, they've resolved for themselves that when he says he's going to get out, he will, in fact, get out.

Of course, there is a lot of hand-wringing, but if you look at the markets today, we basically

bounced all the way back to where we were pre-Iran on both the S&P and the Nasdaq, if, in fact, we land the plane, if JD lands the plane. By the way, on Lebanon, yes, they were bombing yesterday, but Netanyahu has now said that you're going to have direct government talks between Israel and Lebanon. So if we land the plane on these two things, I think it's off to the races in the market.

And by the way, while everybody's focused on Iran, stay tuned.

I think we're getting close to a deal on Ukraine, Russia, right?

Venezuela seems to be going very well. I think there's also going to be news on Cuba. You could envision a world. There's risk to the downside, certainly, I will stipulate, but you also have to pay attention to the risk to the upside.

If you land the plane on those things, heading into America's 250th on July 4th, the market could really take off. All right. Well, let's maybe up-level this a little bit and talk about why we're in this war to begin with.

And that's the big discussion among both sides of the aisle. On Tuesday, the New York Times dropped an inside-the-room piece on how President Trump made the decision. According to this report, if it's true, and some people don't subscribe to the New York Times anymore, or think

it's fake news, but how Trump decided to basically follow Netanyahu into this war: on February

11th, Netanyahu met with Trump at the White House, where he gave him a four-part pitch on attacking Iran. J.D. Vance, according to the story, if it's true, disclaimer, disclaimer, warned Trump that the war could cause regional chaos and break apart the Trump 2.0 coalition

we talked about here.

The big tent, and that's turned out actually to be true. There's been a bunch of hand-wringing from Megyn Kelly, Tucker Carlson, right on down the line. Rubio was anti-regime change, but he was largely ambivalent, according to the story, about the bombing campaign. Susie Wiles, chief of staff, said she had concerns about gas prices

before the midterms; pretty good advice there. And General Dan Caine, Chairman of the Joint Chiefs of Staff, said this of Netanyahu's pitch, quote: "Sir, this is, in my experience, standard operating procedure for the Israelis.

They oversell and their plans are not always well developed.

They know they need us, and that's why they're hard-selling.

If you put this together with Rubio's walked-back comments at the start of the war. This is a quote from Rubio: we knew there was going to be an Israeli action; we knew that would precipitate an attack against American forces, and that's why we did it. I had Josh Shapiro on the All-In Interview show, and he talked a lot about this.

There is a big underpinning here, Chamath, that United States foreign policy is being driven by Netanyahu. Every Jewish American person I've talked to feels Netanyahu is not doing Jewish Americans, the Jewish diaspora, any favors here by his approach to these wars. What are your thoughts on why we got into this and how we get out of it?

I mean, the person that decides is the President of the United States. So a foreign leader isn't getting to call the shots in the United States. I think, very practically speaking, the markets are effectively pricing in that this was a small blip. Whatever people think, that's just what the best prediction market that we have is telling us.

I think that's important to acknowledge that we're probably in the end game here.

And the second thing to acknowledge is, if I was Israel, I would really be concerned that

unless I help find an off-ramp quickly, the risk that Israel loses America as a predictably steadfast ally goes up. And I think that's problematic for Israel, far more than it's problematic for the United States. So all of that kind of tells me that we will find an off-ramp: A, because I think economically it makes sense,

and B, geopolitically, I think Israel will want to make sure that this doesn't burn a longstanding relationship. Yeah, that seems to me to be the major issue here: Americans basically do not want to be in this war. Americans do not want their foreign policy being influenced to the extent they believe it is.

So I'm not putting my belief in here.

Just, Americans believe we are being dragged into this by Israel, and that Israel, or Netanyahu specifically, has far too much influence. And then there's the anti-Semitism that's occurring here. Josh Shapiro gave me a lot of pushback on this, but all the Jewish Americans I talk to say Netanyahu, with his actions in Gaza, Lebanon, Iran, has gone too far, and it's causing the anti-Semitism we're experiencing

today. So you can make your own decisions about that. Any final thoughts here, Brad, on American foreign policy being influenced too much by Israel? I'm not going to get into a discussion about that. I mean, listen, kind of like Sacks said earlier, I think that we will ultimately be judged by the outcomes, right? And everybody is an armchair pundit today on the approach that we're taking in these two different places. I think we could be on the verge of a massive transformation of the Gulf states. You went there with me, Jason: the Saudis, Qataris, Kuwaitis, Emiratis. I've talked to a lot of them this week.

I think they're very hopeful and optimistic. I think you could bring Iran into the fold. But listen, I'm an optimist on all of this stuff. I just want to remind people: doing nothing in Iran had tremendous risks. Doing nothing in Venezuela had tremendous risks.

So it's not as though this was something that wasn't well calculated. But I think we have to let the cards be played and then let history be the judge. There's risk in both directions, but I'm going to remain optimistic. Sacks, you actually said in the Gaza situation we should have a wide berth for criticism of Israel and Netanyahu. What are your thoughts on this belief here in the United States

now, in this discussion, that Israel's having far too much influence over United States foreign policy? Well, I noticed in my feed today that Naftali Bennett, who is a major Israeli politician, he was a former prime minister, tweeted polling that showed that Israel was becoming very

unpopular in the US and he was expressing concern about that and expressing the need to

address that or fix that.

So I think you're seeing Israeli politicians raising that as an issue.

I think that's probably a good thing. There it is. And it's really cool. X actually automatically translates things from foreign languages, in this case Hebrew, and puts it in your feed.

So yeah, here's Naftali Bennett, former prime minister, saying: this is the situation, there's a lot of work ahead of us to fix everything. Now, obviously this is not Netanyahu, this is one of his political opponents, but yeah, I mean, this is something for Israel to consider and think about.

And I think that they would improve their popularity if they got behind the ceasefire.

And I have no indication that they won't, but that would certainly be a good place to start. Just as an aside, this auto-translate feature has done more for understanding across borders than anything I've ever seen, and it is the most impressive tech feature I've seen released in years, putting AI and large language models aside. For people who don't know what's happening:

because of Grok being really good at doing auto-translate, they've taken pockets of the best of what's happening in Japan, what's happening in Israel, what's happening in France, and they're surfacing it, auto-translated. Then when you reply as an American to somebody in Japan, they see it auto-translated as well, which has led to people who don't speak the same language engaging on X in a very nuanced, fun, interesting

way. And that as a truth mechanism is just absolutely extraordinary. I think this is going to have such a profound effect. Maybe Elon and the X team should get, like, a Nobel Peace Prize for this. I think it's going to change... I mean, I would hate to be hyperbolic, but have you been

using this feature, Chamath? Has it been coming up in your feed, and which languages are coming up in your feed right now? English. Okay.

So you're not part of the translation then. Brad, has this hit your feed yet, and which regions

are you seeing it in? I definitely see it on the Middle East stuff, and I've seen it on Chinese, I've seen it on Japanese. Super helpful. Let me tell you, based Japanese X is on a whole other level. Wow, man, based Japanese makes Alex Jones seem tame.

They're like, look at this group of people, insert whatever group of immigrants you like. And they're like, this is unacceptable behavior, this is not Japanese culture. These people need to get the hell out of Japan. It is wild, folks. And if you don't have an X account, you are missing out.

Go to X.com and sign up for this reason alone. Think about the velocity: journalists are not even taking the time to translate and cover what's going on in those areas, and this is happening automatically in real time. So you start thinking about what happened in Ukraine: if you had people in Russia and

Ukraine doing this and having conversations with each other, it would be wild.

You're such a good hype man; the problem is you hype buttered bread the same way

you hype a nuclear reactor. So it's really hard to tell when you're really hyped, because your level of excitement, the intonation, is exactly the same. Yo, man, there's nothing better than a slice of great toast. I mean, this is, in a way, like sliced bread: it's very simple.

But it is so powerful in the experience. Well, it is true: X is better today than it's ever been. And remember, they have 70% fewer employees than they had the day Elon walked into the building. And there was a debate about this; I remember everybody saying, oh, it's going to tip over.

Oh, it's going to be a crappy experience. Oh, it's going to go down. The fact of the matter is, here we are a few years later, 70% fewer employees. And every other company in Silicon Valley is looking at that. I think for a lot of these tech companies, we've hit peak employment.

We're going to create a tremendous number of new jobs. But for the existing jobs, these companies are all realizing they can do more with less.

Nikita Bier just tweeted that they're about to go ham on these bot accounts that

auto-reply. Yes, those literally ruin my feed. That's why I went to subscriber mode in my replies, and it's worked out great. Yeah, no, shout out to him, and to Chris Sacca, who was in tears at what happened to Twitter. You're going to be okay, Chris.

Sacca, you're okay. No more tears. You only let subscribers respond to your tweets? I do 50/50; sometimes I'll just let it rip and get chaos. And then other times... I have 2,000 paid subscribers, and I give all the money to charity, like

30 grand a year.

And it's just wonderful to get to know the same 2,000 people out of my million followers.

It's kind of like having this little subset. Sometimes I'm like, I don't have time to deal with 100 or 200 or 300 replies. I hadn't realized you have a million followers. That's incredible.

I mean, it's just... I mean, you have 2 million. I think Sacks must have a million, right? You have a million, right? Only a million. How many do you have now?

You're getting popular. You've got a couple of million. You got a couple of million. What's yours, at your Altcap handle, ALTCAP? I have 1.4 million.

What do you got, Jacob?

So I was surprised too. I think I'm like 1.5 million.

You just want to call me by my real name, Jason.

I know a guy who can find out. You're at 1.1, yeah. I made it to 1.4. I don't know how that happened exactly. It's just having the number one podcast in the world.

Another amazing episode of the number one podcast. And Chamath has 2 million.

But that's only because he has just incredible moments of engaging with his haters. Oh my god, the replies that Chamath sometimes drops are so great. I love Chamath. I love Chamath. I love Chamath.

And then you had somebody who was like, oh my god, I was in the casino and you told me to bet black. So I bet black and I lost my money, and so you're responsible. And then he paid for the kids' college. He has two young girls. And so I funded their college accounts.

I thought that was hilarious. Obviously I'm very happy for him and his two daughters. I'm even more happy about how much it'll anger all these other goofball dorks living in their moms' basements. Yes.

Who literally take no responsibility for their lives?

And they should enjoy those Hot Pockets, by the way, those folks in their moms' basements. The Hot Pockets and the fish sticks are ready. Yeah. And you get one more hour of Xbox from mom.

All right.

Listen, we missed you, Freeberg, but this is the best episode in three years, and we will see you all at the Liquidity Summit, except for the 400 people on the waitlist who are not going to get in. We got an email from the guys at Athena, because, oh my god, they're going to hire like 500 new Athena assistants.

Yes. They had a thousand people sign up after last week when we mentioned how much we love Athena. That's amazing. And those are like 500 hardworking men and women working in the Philippines.

Sign up. Sign up. Great jobs. Sacks.

I'm going to get you a couple of Athena assistants as a birthday present.

That's what I'm going to get you.

You're going to love this. Ah. Athena assistants are the best. Congratulations to my friends over there. All right.

Everybody. We'll see you next time. Love you boys. On Broadway tonight. Favorite.

See ya. Okay. Love you. Bye. Bye.

Bye. Bye.

[MUSIC PLAYING]

Rain Man, David Sacks.

And it said, we open sourced it to the fans, and they've just gone crazy with it.

Love you, besties.

I'm going all in.

Besties are gone.

That's my dog taking a notice in your driveway.

Oh, man. We should all just get a room and just have one big huge orgy, because it's like this sexual tension that we just need to release somehow.

We need to get merch.

I'm going all in.

[MUSIC PLAYING]
