I'm here at Nvidia's annual GTC conference, and I'm going to interview... stick with us. Our episode is sponsored by the New York Stock Exchange.
Are you looking to change the world and raise capital? Do it at the NYSE. The NYSE is a modern marketplace and a massive platform built for scale and long-term impact. So, if you're building for the future, the NYSE is where it happens. One of the great companies of the AI era is, of course, CoreWeave; they're building massive infrastructure for the hyperscalers.
Michael Intrator, welcome to the program. The original hyperscaler: you guys got in very early and secured GPUs; you were very early to this trend. How did you get to it so early, and how did you build out this, I guess at the time, neocloud? Yeah, so we didn't really start it as a neocloud. I was running an algorithmic hedge fund focused on natural gas.
When you build an algorithmic hedge fund, once the algorithms are built, you're really just monitoring them and testing different pieces, but there's also a lot of downtime. We got super interested in crypto, and, you know, we're pretty nerdy, we like to dig under the hood, and we started to get interested in the security layer. We looked at Bitcoin, and the mining for Bitcoin, and we didn't like it. We just thought that there's some brilliant engineer who built the ASIC, and they're probably going to be better at running it than we are. So we really began to focus on the GPUs, mostly because you can mine Ethereum with them, but you could also do all these other things. So right from the start, we looked at the compute as an option: the ability to deploy our computing power to different use cases.
And so, you know, we began the company in 2017 and spent the first three years or so mining crypto. We went through a couple of crypto winters, but because we had come from the hedge fund world, we have real chops in risk management, in how we think about capital and risk exposure and allocation, and we were really careful around that right from the start. So we weathered the crypto winters really well, began to scale the company, and immediately started to look for other use cases for this compute, because crypto was pretty volatile. Yeah, and crypto was a question mark at that time. Absolutely. Bitcoin was speculative, and there were many other speculative projects. The only other people using this type of hardware were quants and, of course, researchers. So a
good way to think about it is like the progression of products that we kind of started to work
on. First was crypto, but we immediately moved from crypto to CGI rendering, and we built products that allowed folks to animate and render images, you know, kind of what makes the movies cool, right? Yeah. And we started to work on that, and then we moved to batch computing and started to look at medical research and different ways of using the compute to drive science, and we just kept moving up the stack in terms of the complexity of how GPUs could be used.
And ultimately, in, call it, 2020, 2021, we started to really try to figure out how you can use GPUs for neural networks. That was not something we knew how to do, so we actually went out and bought a bunch of A100s and donated them to the group working on EleutherAI's open-source project, with the thought that since we're donating the compute, these guys can't really get pissed at us if we're not very good at it; they can't complain about the SLA. And that worked out really well. They kept telling us, you need more of this, you've got to work on this, and that began to give us a real understanding of what was necessary to run parallelized computing at scale. I kind of feel like buying those initial GPUs was the tuition we paid to learn how to run this business. And then one of the interesting things is that all of those guys went back to their day jobs, because they were all volunteers working on this, like-minded scientists, and when they got back to their day jobs, they were all like, I want that infrastructure; it's built the right way; that's the way researchers are going to want to use it. And that launched our business. It's an amazing story. And so you went from crypto to these
researchers in academia and deep research. What's the next card to turn over?
Yeah. What became very clear to us very early on was that the scaling laws were going to drive this, and remember, this is back in 2020, 2021, before the ChatGPT moment occurred. We began to understand that compute de-commoditizes at scale, right? Anybody can run a GPU, but can you run a cluster that's large enough to train a model that can change the world? That's a different question. So we really began to think about how you go about scaling up your delivery of this compute to larger and larger clients. That was the next card to turn: there's a component of this that is going to lean on our ability to access capital, to be able to deliver our solution to the broadest possible audience, to the most sophisticated consumers of this compute. Yeah, and that was really the next card, thinking about it as a business rather than as an engineering project, to deliver the infrastructure and the software and everything in between. When you're thinking about what we do, we kind of live above the Nvidia GPUs but below the models. Yeah. And everything in there,
all the software, the integration of software and operations and observability, and all the
things that you need to build a cloud that's purpose-built for this one specific use case, right? So we don't do everything. We really focus on one use case; we're not trying to do web servers. Yeah, you've got AWS for that. You know what, they do a great job. It was a brilliant solution to a problem. We just looked at it and said, there's a new problem, so let's go look at this problem and come up with a solution: a way to deliver compute that solves it. And when did the language model folks start
dialing and calling you for capacity? Yeah. Well, our first language model was really EleutherAI. Yeah. But our first large commercial one was Inflection, and so we worked with Mustafa and Inflection. And then we really diversified from there, into the hyperscalers, into OpenAI, across the foundation models, and just kept scaling and scaling with the belief in, once again, the de-commoditization of compute, the ability to deliver a solution. And the solution is building supercomputers that can change the world. That's really what we began to focus on. That was the lead-in to training. And now the world has gone through this moment where we've moved from research into productization. It's beginning to work its way in from the fringe of organizations into the core of what they do. And you can see that every day in the amount of inference compute that is being
driven through our infrastructure layer, which is just massive. People are consuming it, not just building models but deploying them and utilizing them. I always think of inference as the monetization of the investment in artificial intelligence. So we see our compute being used to stand up the massive scale of inference that's hitting it every day. And, you know, inference is when you ask the model a question and it comes back with an answer; that's an inference. Or when you ask the model to go do something; that's inference too. Right. And that's actually where you have the opportunity to really drive value outside of the model itself,
but into the real world. And that's really exciting for us. That's what we like to watch; that's what I like to watch in terms of gauging the health. Which chips are those? So really, we are the tip of the spear in bringing Nvidia's new architectures into commercial production at scale. Yeah. We were the first ones to bring the H100s at scale. We were the first ones to bring the H200s at scale. First ones with the GB200s. And now you've got the GB300s. And one of the things that's amazing.
And really fascinating for us is that people use the bleeding-edge GPUs to train models as the new architectures come out. Then they take those GPUs and move them into different experiments, and over time they move them into inference, and they keep them in inference for a very, very long time. What is the shelf life of an A100 right now? Yeah, that's been a big debate, I think, for your company, for Microsoft, and I guess Michael Burry, who you must have known when you were a quant, saying, oh my god, the sky is falling on the whole industry. And then we all know in the industry that people don't just throw this hardware away; they find uses for it. The street finds its own use for technology. So what's the reality of the lifespan of these things? My take on the GPU depreciation debate is that it's nonsense, right? It's a debate that is being brought to the forefront by some traders that have a short position in the stock and are trying to talk it down. Look,
here's what we know, right? When we buy infrastructure, we're a success-based company, right?
We're a small company on a relative basis compared to the enormous companies that we're competing with. Our clients come to us and they buy compute for five years, for six years; our average contract is five years. So any commentary, by anyone inside or outside the industry, that this stuff becomes obsolete in 16 months or whatever: nonsense. It doesn't in any way match up with the facts on the ground. The facts on the ground are that they're buying it for five years, right? And my approach to this has always been, if people are willing to pay me for it, it still has value. Correct. Pretty simple way of approaching it. We use a six-year depreciation. We believe the GPUs will last in excess of six years, but we felt that was a fair and reasonable approach to a technology cycle that's moving at this velocity. The
A100s, the Amperes: the price has actually appreciated through this year. Why so? I think it's because, as more installed capacity becomes available, you have new companies coming into existence with new use cases, with different-size models, trying to build new commercial ventures, companies that maybe were blocked out of the H100 itself and never had an opportunity to run on it. I mean, to make a very simple example for the audience,
when you trade in your iPhone after three or four years, you're like, who's going to use an iPhone 12? And it's like, have you been to South America or Africa, where you go to the store and buy an iPhone 12 or a Pixel 7 and it costs $50? There's still great life left in it. Absolutely. And so look, we find these amazing use cases, new companies that have come into existence, or existing companies that have integrated new models into their workflow, that are able to use the Amperes. And so they keep buying any GPUs that we have available. And once again, the concept that a GPU is no longer relevant or commercially viable after 16 or 18 months or two years; it just doesn't hold up. I think sometimes people get caught up in Moore's law, or in just how fast our industry is growing, and in the fact that there's so much at stake that big companies are demanding the most recent products. That doesn't mean the lifespan has gotten shorter. It means the opportunity, the surface area of the opportunity, has gotten much larger. Yeah. One of the things is, the industry has gotten so much attention for the unprecedented scale of capital
that is coming to bear on this. Yeah. And because of that, there tends to be an incredible focus on the companies that are building on the most advanced chipsets. And the truth of the matter is, even within those companies, the older chips have a long tail of useful life: to provide inference horsepower, to work on other experiments, to do less bleeding-edge activity that still needs to be done. And yeah, I mean, rendering comes to mind as well. Or, you know, we're making images on Nano Banana; there will be a use for it. There is a moment in time where maybe the compute-to-power ratio doesn't make sense. My expectation is that obsolescence will be defined by the moment in time when the power in my data center can be repurposed for a higher margin than the existing infrastructure provides. And, like I said, I fully expect this infrastructure to last in excess of six years, but the standard in the space has really been six years, with one exception, which is Amazon. That seems like the right schedule. I'm not making it up; that's what everybody's using. Yeah. And the energy cost is the opportunity cost, because, hey, we need that space.
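To make the accounting concrete, here is a minimal sketch of a six-year straight-line schedule like the one he describes, together with his power-repurposing obsolescence test. All of the dollar figures and margins are hypothetical, invented purely for illustration; they are not CoreWeave's actual numbers.

```python
# Hypothetical illustration of a six-year straight-line depreciation
# schedule, plus the obsolescence test described in the conversation:
# hardware is obsolete when the same data-center power would earn a
# higher margin running newer infrastructure.

GPU_COST = 30_000          # assumed purchase price per GPU, USD (made up)
SCHEDULE_YEARS = 6         # the six-year schedule mentioned above

def book_value(age_years: float) -> float:
    """Straight-line: book value declines linearly to zero at year six."""
    remaining_fraction = max(0.0, 1 - age_years / SCHEDULE_YEARS)
    return GPU_COST * remaining_fraction

def should_repurpose(margin_existing: float, margin_replacement: float) -> bool:
    """Retire gear only when replacement hardware earns more per megawatt."""
    return margin_replacement > margin_existing

print(book_value(3.0))             # halfway through the schedule: 15000.0
print(should_repurpose(1.0, 1.5))  # True: newer gear earns more per MW
```

Note that book value hitting zero at year six is an accounting event, not a physical one; the point in the conversation is that the hardware keeps earning revenue well past it.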
And you might re-sell that hardware to somebody else who wants it, a hobbyist or something. Yeah. Or it could be sent someplace else where they have more capacity and can repurpose it there. But I kind of feel like we'll deal with that part of the business when we get there. What I know right now is that it is extraordinarily profitable, very accretive to my company, to keep running the infrastructure that's been up on these long-term contracts. And as it rolls off, after it's been in use for five years, as it becomes available, I am still able to sell it at a higher price than it was at a year ago. There's competition now. When you were buying these from Jensen back in the day, you could buy them and have them shipped, I would assume, within 30 days or less. Nowadays, what's the wait like, even for you, a loyal old customer? And is there a bit of a battle, is there politics to who gets the servers? I see some very big names talking about how they've got to get an allocation. Is it still a little bit crazy? What's it like to be in that
category, having to buy something everybody wants? Look, I think it is an affirmation of the business that we're in, right? The fact that we are attracting competitors means that the business is healthy, and there are a lot of people trying to deliver this service, because there's a need for this infrastructure, a need to integrate the infrastructure into the software layers, to deliver it to artificial intelligence, either at the model level, or at the inference level, or at the application level, or whatever level of the five-layer cake that Jensen's focused on. The fact that there are more people coming into this doesn't discourage me. Yeah. As far as getting access to the GPUs, we show up like everybody else: we'd like to buy, here's a PO, we're ready to pay. What's the wait time like, and is it really competitive? Because I talked to Jensen about this. I said, how do you manage all these big egos and names and companies trying to buy stuff? And he said, well, they order it, and we give it to them in the order in which they order it. Is it really like that? It really is, right? He doesn't want to be in the position of playing favorites or running auctions. That just seems like a bad place to be with your clients. Yeah, you'd imagine that would be crazy. I'm not sure that would be good for the long-term business. No. You might get some sovereigns coming in and saying, I'll pay double. Yeah, they do that with Ferraris too sometimes. These are the Ferraris of computing, in a way. Yeah. Our approach is to work with clients across the entire space, to find opportunities, really interesting companies, that can fit into our contracting requirements and the way we're going to structure the debt that we require in order
to go out and build infrastructure at this scale. And how does all that debt work? That is something you guys specialize in: corporate debt. I'm in the venture business, and people are like, why should I be in venture when corporate debt pays so well? Corporate paper is so good. I'm curious how this fits in, and what interest rate people are paying on, you know, a billion dollars in infrastructure. Yeah. So CoreWeave has really been the innovator around a lot of the financing engines that have come to bear on this. We did the first GPU-backed loans, and I think it's important, so I'm going to try to explain this in a way people can understand. What we do is we go out and we find a client. Let's use Microsoft; you brought them up before, right? Microsoft comes to us and says, we'd like to buy some compute, and we say, okay, great, we're going to sign a contract. Once I have a contract in hand, I create something (it's not a particularly creative name) called the box, right? And what I do with the box is I take my contract with Microsoft and I put it in the box. I go to Jensen and I buy
the GPUs; I put them in the box. I take my data center contract; I put it in the box. And now the box governs cashflow, and it has a waterfall of cashflow that comes into it and goes out of it. The way it works is, I build the compute, I deliver the compute to Microsoft, and they pay the box. They don't pay me, right? The money goes into the box, and the first thing it does is pay the data center. It pays the power bill. It pays the interest and the principal. And whatever's left flows back to us, right? And so it is an incredibly well-structured, time-tested, pressure-tested vehicle to be able to borrow money against client paper and all of the other collateral,
which is why CoreWeave, a company that many people had never heard of, was able to go out and raise $35 billion in 18 months to build infrastructure at scale. But what's important to understand is that the economics in this box are such that within two and a half years of a five-year deal, we have paid for everything. The principal has been paid off. The interest has been paid off. The return into the box is such that we are able to generate returns to our company at the box level, which gives the most sophisticated lenders in the world, whether it's banks or private equity funds or whoever, confidence that they're going to be able to achieve the one rule of lending, which is: give me my money back. Yes. And so it's better
when that happens. So they look at those boxes and they're like, wow, we're really confident we're going to get our money back; maybe we want 10 boxes. That's correct. And if any one box goes upside down, you can deal with it and it's not as acute. That's correct. And they don't cross-pollinate; they don't cause contagion across the boxes. They're all independent and discrete. That's one. And number two is, as you do this and you show the lenders how this financing mechanism works, they continue to lend you money at progressively lower rates. And so when you think about our cost of capital over the last two years, we have dropped it by 600 basis points. Wow, that's enormous, right? And so you're seeing a company that is driving its cost of capital down towards where the hyperscalers borrow, which will enable us to be competitive with them over time. And we have been extremely militant and diligent about feeding, watering, and caring for those boxes, so that we continue to have access to the capital markets in a way that allows us to build and drive our business. Meaning you have to say no, you have to say no to some people who want to be in the box.
Yeah. So we look at some deals and we're just like, you know, they want to buy GPUs for a year, and I look at it and say, that's not a deal I can do, because it's too short for me to amortize the expenses. And so I won't do it, right? They can go to another provider who has extra capacity and maybe wants to take that risk on. Absolutely. But our business is really built around the risk management of getting to scale, because in my mind, during this period of disequilibrium, this period where there are not enough GPUs in the world to provide the compute for all of the different use cases in artificial intelligence, the important part for me and for my company is to get enormously large, so that we can drive down our cost of capital, and so that we have information flow coming in from all different parts of the market, large language models, high-speed trading, search, all of these things, and they're feeding
that information back into us, letting us know what the next product we need to build is, or where they need help scaling, or what type of compute they need. All of that information flow is incredibly valuable to us. What can you tell us about demand? There have been reports that, hey, maybe the Oracle Stargate thing with OpenAI has been downsized, or maybe not. And then other folks: Microsoft is going big, Google is going big, Meta is going big, and those people obviously have massive cash flow. Apple seems to be MIA; they don't seem to want to play. You've named a lot of really big companies with really big balance sheets that have the capacity to drive a lot of demand. Look, I have been truly steadfast on this for years now. For four years, the depth of demand for the service we provide has been relentless, and it overwhelms the global capacity of the world to deliver enough compute for all of the demand for artificial intelligence to be sated. And we have been relentless about that. It's like Knicks tickets during the Patrick Ewing era. Yeah, they had up to 50,000 people on the wait list. So if magically the wait list went away, if the constraint went away and we just had a large amount of GPUs available, a lot of energy available, a lot of data centers available, how much capacity would all of a sudden come out of this, or would be deployed?
I should say. So, remember how we build our business through this box, and it's a five-year box.
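The box mechanics he walks through (the client pays the box, the box pays its obligations in priority order, and only the residual reaches the operator) amount to a standard cashflow waterfall. Here is a toy sketch with invented dollar amounts; the real structures are of course far more detailed.

```python
# A toy sketch of the "box" cashflow waterfall: the client pays the box,
# the box pays the data center, the power bill, then debt interest and
# principal, and whatever is left flows back to the equity holder.
# All dollar amounts are hypothetical.

def run_waterfall(client_payment: float, obligations: list) -> float:
    """Pay each obligation in priority order; return the equity residual."""
    remaining = client_payment
    for amount in obligations:
        paid = min(remaining, amount)   # can't pay more than what's left
        remaining -= paid
    return remaining

# One hypothetical monthly period for a single box:
residual = run_waterfall(
    client_payment=10_000_000,
    obligations=[
        2_000_000,   # data center rent
        1_000_000,   # power bill
        1_500_000,   # debt interest
        3_000_000,   # debt principal
    ],
)
print(residual)  # 2500000 flows back to the operator
```

Because each box is financed independently, a shortfall in one box never touches another, which is the no-contagion property he says lenders are pricing into progressively lower rates.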
So if we had an air pocket, if demand were suddenly to disappear because of a breakthrough, because of a war, or anything, right? The why, from a risk management perspective, does not matter. You have to prepare your company for the what-if. Yeah. And so, by entering into these long-term contracts, with counterparties that have large balance sheets, we are protecting ourselves and our lenders. Yeah. So that we are confident, and they are confident (and you can see how confident they are by the rate they're charging us continuing to decline), that they're ultimately going to get their money back. And that is the one rule of lending. Yeah. And so, just in terms of the capacity: if you had an unconstrained Nvidia, if Jensen says, hey, order as many as you want, what would happen? So, it's also important to understand the constraints aren't just GPUs, right? It's electricity, it's power, it's data centers, it's memory, it's storage, it's networking, it's optics, all of the things. And there are various throttles that will limit things. Is memory the throttle right now? Oh yeah, it is. Oh yeah, it is. Well, how did memory
become the throttle? Memory has historically been a cyclical business, right? We have seen these waves of demand driving up the cost of memory, and then it collapses, and then it drives up again. It's a boom-and-bust business, cyclical in its nature, because the fabs are so capital intensive: people invest in the fabs, build a ton of capacity, and then overbuild if there's any type of turndown. We've seen that cycle again and again. What's happening right now is the confluence of two things. One is, with all the demand for artificial intelligence and the corresponding demand for compute and the ancillary services around the GPU, demand is through the roof. That's number one. Number two is that there was probably an investment cycle that needed to happen back in 2023, one that would have brought on the necessary fab capacity to serve this demand. Got it. But just with energy, it's
impossible to predict what just happened. And now people are chasing energy; the data centers are going where the energy is. So it's not based on real estate; it's based on where the energy is, where there's some wind. And anytime you have a very capital-intensive business like building fabs, you will get this boom-and-bust cycle, just like in energy. They overbuild. Yeah. And then, you know, fiber. Yeah. I mean, there are a lot of examples of that. In a way, when you look at it, it's a beautiful aspect of capitalism that we have a boom-bust cycle and that we're able to weather it, right? If you think about capitalism from first principles, something like that happens: if we have too much fiber, it creates an opportunity for Google, or the next person, to buy it all up. A boom-bust cycle does a lot of things. It clears out the underbrush. Yeah. The strongest companies survive and take advantage of it. And it sows the seeds of future business. The other thing it does is put that infrastructure into the ground. You put the
fiber into the ground, and it became the backbone of how we watch movies every day, how we communicate, how we hop on a Zoom, and, you know, COVID and all of these things were based on that infrastructure being available to be consumed. Yeah, people don't recognize this fact. The premise of YouTube, from the founders (I knew Chad Hurley and his partners), was that they basically had the realization, looking at the curve: storage costs are coming down so quickly that we could offer free unlimited uploads, and bandwidth is coming down, so I guess we don't have to charge people for sharing a video online. Before that, if your video went viral, people would have their minds blown, but your server would turn off and it would say, this person needs to pay their bill, because they were getting charged for the bandwidth going out. Yes. And, you know, the business models change. Yeah, and like you said, Moore's law; certainly Jensen will talk about the fact that what is going on within accelerated compute dwarfs Moore's law, right? And all of that is going to lead to more opportunity to build more companies that do things like YouTube did, which really changed the world.
Yeah. I mean, the concept that, I don't know if it was a million hours being uploaded every hour or every minute, but at some point Susan Wojcicki released how much was being uploaded every minute, and it made no logical sense, until you realize, well, there are two or three billion people on the service, and one percent of them upload, or point one percent upload. It's a big denominator. Like, I was sitting on a panel with Sarah Friar, CFO of OpenAI, and every once in a while she puts out really interesting information. She was talking about the cost of a million tokens when GPT-3 came out, and it was 32 dollars and change. And now a million tokens costs nine cents. Right? That's roughly a 350x decline. And so you just see the incredible power of how the capital markets, how capitalism, is fueling engineering. And it's become recursive now too. I mean, these models, if you say to the model, hey, make yourself more efficient, spend less money
and lower the cost of tokens, it'd be like, okay, Captain. Yeah. I don't know if you saw Karpathy's posts last weekend, but it's like now even civilians who've never worked on a language model, who don't know computer science, are like, I'm going to try to do something recursive this weekend. You know, it's one of the things I talk to other founders about: when you think about some of the things AI does, it's lowering the barrier to operations. So if you have a good idea or a great idea, you can open up your model and tell it what you want. You can vibe-code with it. You can do all kinds of different things and create things that never existed before. That's amazing, right? That's bringing down this incredible barrier that kept human creativity contained. And now all of a sudden there's this whole new vector of, you know, medical research, or different approaches to baseball cards, or whatever you want. If you've got a great idea, a new creative idea, that's the valuable kernel right now that allows you to build new things and create new things. And I just think that's incredibly exciting. You're bringing to the minds of eight billion people a tool that allows them to overcome what was insurmountable for humanity forever. Yeah. It's a bright new future. Michael, I appreciate you sharing the information and the vision with us. I am really delighted to have Aravind Srinivas on the program. Thank you for having me, Jason. It's so great. I want to go through three stages in which I fell in love with your product. The first phase was that I could go and pick my language model. If I wanted to use OpenAI,
if I wanted to use Claude, whatever it was, that was like a real unlock for me. And on the sidebar, I noticed you had done essentially what Yahoo did in the early days: finance, sports. When I pulled my Knicks up, it gave me a live version of that. When I pulled my stocks up, it summarized the news in real time. And I was like, wow, this execution is great. And I kind of made you my front door to different models, and it made it easier. Then you came out with the
Comet browser. And I was like, holy cow, I can give this a series of instructions: go to my LinkedIn, find everybody from this company, put them into a Google Sheet, and boom, you were the first out of the gate with that. And then just in the last couple of weeks, I had been claw-pilled and using OpenClaw, but you came out with Computer. And I started using Computer, and boy, it's good. It's a really strong start. It allows me to do repetitive tasks. Very similar in some ways to Cowork from Claude, or
basically an engineer or developer using it. So are these the evolution of the company, and should I think about it that way? But how do you look at Perplexity now? You have a very loyal fan base.
You're making a lot of money. I don't know if you disclose it, but I think it's hundreds of
millions to billions. You can tell us. But what is Perplexity in the face of, wow, Claude's having a great run, OpenAI's still doing strong, Grok doing very well, Gemini coming on strong? There's like six or seven of you. And you just happen to be one of my top two right now. Thank you.
So tell me. First of all, thank you. Thank you so much. Perplexity has always been built for
people who are always looking for the extra edge, the curious people. So it's very natural that you are one of our power users. One common theme for us for the last three and a half years
is accuracy. If you
want to give somebody answers, accuracy is essential for building trust. Only then is the
user going to ask the next set of questions. It turns out it was a great idea to give AI access to the internet to be accurate. So that's the Perplexity Ask product. It turns out it's a great idea for AI to have full access to a browser, so that it can be accurate when you task it to go do something that you would do yourself on a browser. That gave you Comet. Now the last phase: it turns out it's a great idea for AI to be given full access to a computer.
So that it can do whatever you do on a computer on its own, essentially becoming the computer itself, an orchestra of everything AI can do today. Every single capability each individual AI
model has, be it GPT or Claude or Gemini or anything else, an orchestra of all those capabilities:
that's what Perplexity Computer is. And all these sub-agents that are running inside Computer
are the musicians. The models are essentially the instruments, and there are like hundreds of models out there, each having their own specialization: some are good at coding, some are good at writing, some are good at multimodal visual synthesis, image generation, video generation, audio. But what matters is the end output, the music you play. That's the work AI gets done for you, and that's what Perplexity Computer is. AI itself is the computer now. It still lives inside of a browser.
Have you considered giving it desktop root access? It feels like the next place this is going, but
that comes with a lot of security issues, a lot of trust issues. As you mentioned, trust is
paramount; getting the right answer is what builds it, but so does not getting hacked and not having it delete your files. So how do you think about root access to my Windows machine? Obviously iOS
won't let you, but an Android phone would. So do you have that in the works?
Yes, so we now have something called Personal Computer, Perplexity Personal Computer. That's essentially going to take all the trust and reliability and the server-side execution of Perplexity Computer, but synchronize it with your local computer so that you can use it from your phone. And we're going to do this with the Mac Mini, where you synchronize your computer with the Mac Mini. So that becomes your local server. All the agent orchestration that has to do with your local
private data will run on that local orchestration loop, that runtime, with the Mac Mini. Not on your servers, not in Perplexity's cloud. Exactly. Yeah. It could still ping frontier models if it needs to, with your permission, but it will be orchestrating everything on your local hardware. Yeah. And if it needs to run on the server-side hardware, if you don't want very complicated, long-running tasks to be running on your local hardware,
you can delegate it to run on your server-side computer, which is again only accessible to you and you alone. So that's going to bring the perfect hybrid of trust between local and server side. And you'll make it easy to do. It'll just be abstracted. Exactly. It's all one executable. Boom. It's like OpenClaw for dummies. Right. Nobody needs to learn how to use it. Nobody needs to manage API keys. Nobody needs to
manage separate billing across like hundreds of different services, figure out what you can give access to and not give access to. We take care of that. So there's a Steve Jobs way of doing it, you know, an end-to-end integration. And how do you think about local models? I have started running Kimi 2.5 on a Mac Studio. It's not as good as Claude or Gemini or Grok, but it can probably get about 80% of the way there for free. Yeah. Essentially. Yeah. And so that's quite compelling considering
some of my other bills; Claude is starting to get expensive. So do you have one of those? Have you started testing on your local Mac Studio? I assume you have a Mac Studio and you're doing this yourself. Yeah. And I don't know if you saw, Dell and Nvidia announced a giant workstation. Is it a 3,800? Something like that. It's 750 gigs of RAM. So what do you think about the desktop going back to workstation/server status? I think it's very promising. My prediction is
it'll initially start off as a sub-agent. So whatever you need to go through, like your tax returns, your personal
photos, your emails, your calendar, all that stuff, those local apps, personal notes, very personal notes: you could make sure that the models that access those tokens would be running on your local hardware if you want to, if you're that privacy conscious. And more complicated stuff accesses data that's already on the server side. For example, your Google Calendar, yeah, your Gmail. That's personal
data still.
Your Google Workspace connector, that could run on the server side, because the data is on the server anyway; it's not even living on your device. So that sort of hybrid orchestration is very much where things are headed. I don't think it's a dichotomy between fully local versus fully server. It's all about choice. And anyway, when you're on your phone, you don't actually care which server that workload is running on, because it's not going to be able to run on your phone anyway.
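The hybrid local/server split described here could be sketched as a small routing policy. This is a hypothetical illustration; `Task` and `route` are invented names, not Perplexity's actual implementation:

```python
# Hypothetical sketch of hybrid local/server task routing, not
# Perplexity's real logic. The idea: privacy-sensitive work stays on
# local hardware (e.g. a Mac Mini), while long-running jobs can be
# delegated to server-side compute with the user's permission.

from dataclasses import dataclass

@dataclass
class Task:
    touches_private_data: bool  # local files, personal notes, photos
    long_running: bool          # multi-hour orchestration loops
    user_allows_server: bool    # user's delegation preference

def route(task: Task) -> str:
    """Return where the task's orchestration loop should run."""
    if task.touches_private_data and not task.user_allows_server:
        return "local"   # privacy-conscious: never leaves the Mac Mini
    if task.long_running and task.user_allows_server:
        return "server"  # heavy jobs delegated, still private to the user
    return "local"       # default: keep it on local hardware

print(route(Task(True, False, False)))  # local
print(route(Task(False, True, True)))   # server
```

The point of the sketch is only that the routing decision is a simple, user-controlled policy sitting above the models, not a property of the models themselves.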
The chips need to exist, whether on a Mac Studio or a Mac Mini, or on the server, or this new
Dell that's coming out. And I really think the idea of spending $10,000 on a powerful desktop
will appeal to people if it lowers their $500-a-month Claude bill. Yes. This is an incredible
savings. Plus, you get the benefit of privacy and not educating the language models on your personal data. Yes. And it's going to be like buying a refrigerator or your internet modem; the cost for these will eventually go down. Yeah. But it's not going to feel like you're wasting your money. Every home has a lot of other sensors that run your home. That'll also be part of this orchestration loop. Yeah. So that's where it gets exciting. Because
now you can just dictate something to your phone, and that can control your entire home. So that's the dream that everybody has, and all that orchestration loop can run on your local hardware. No problem. And I'm curious what you think of the operating system. What's eventually going to be the operating system of this workstation? AI is the operating system. Earlier, in the traditional operating system, you executed things programmatically. Now you start with objectives,
not specific instructions. Right. You come up with a high-level objective: go build this website for me that takes all the transcripts of All-In podcasts and tracks the stock price just before the podcast and after. Yeah. And chart it for the Mag Seven. Yeah. And chart it over time. That's an objective. But underneath, it's running a file system, a code sandbox, access to the internet.
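The runtime just described, a file system plus a code sandbox with tool access, can be sketched minimally. The whitelist and the `run_tool` function are illustrative assumptions, not Perplexity's or any vendor's real interface:

```python
# Illustrative sandbox sketch: the agent may only invoke a whitelist
# of command-line tools, mirroring the "file system + code sandbox +
# tools" runtime described above. Not any vendor's real API.

import subprocess

ALLOWED_TOOLS = {"ls", "cat", "grep", "echo"}

def run_tool(cmd: list[str]) -> str:
    """Run a whitelisted CLI tool inside the sandbox and return stdout."""
    if cmd[0] not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {cmd[0]!r} is not allowed in this sandbox")
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout

print(run_tool(["echo", "hello"]).strip())  # hello
```

Whitelisting rather than blacklisting is the usual sandbox posture: anything not explicitly allowed is refused, which is what makes giving an AI "access to a computer" tolerable from a trust standpoint.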
It has its own HTML tools, and so on. So I think that's basically where, you know,
models, systems, files, and connectors are all coming together. You would think of that as an OS, except you're operating at an abstraction above that, where you're thinking in terms of objectives. Yeah. And does it need to eventually become its own operating system in your mind? It could be; people could think about it like, yeah, I have my Perplexity Computer running all the time. It essentially runs on Linux machines right now, and every server-side
computer is a Linux machine. Yeah. So I think Marc Andreessen said this right after our release:
it turns out Linux computers were the right idea. Desktop Linux computers are finally going to work.
Yeah. I mean, they're stable. Yeah. They're customizable. Exactly. And you're not at the mercy of Apple's desire to contain the experience, or Microsoft's surface area for hackers. Exactly.
You build something rock solid, and it does feel like Linux might actually become
the eventual winner. It may not need to have a front end. Right. You could access the Linux machine from your phone. Right. It could be running iOS or Android; it doesn't matter. Right. The actual valuable runtime is running on Linux on the server. You've done great as a consumer company. A lot of love there. Now I'm starting to see corporations starting to engage with Computer. In fact, you'll be happy to know this: last week, I took two people in my back office and I
said, stop working in OpenClaw. Your job is to do the back-office automation at our venture firm using only Perplexity, and they were on Perplexity Computer. And they were like, oh, okay. It doesn't talk well in Slack. It doesn't have an agent in Slack. I was like, it will. I'm going to see Arvind; I'll talk to him about that. So we need a really strong Slack connector. It's already out. It is? Okay. Great. Computer exists as a Slack bot right now
that you can add to your Slack workspace on the enterprise plan. And our entire company works like that. People are talking more to Computer on Slack. In our first volley, we were sending reports in, but it wasn't interactive. That's perfect. So now you've got your company going
in two different directions. This incredible consumer run. You have how many people are using the
product every month? Several tens of millions. So tens of millions of people; that's very similar to the trajectory of the Google and Yahoo consumer business. Now you've got corporate. How are you doing on the corporate side? Thousands of companies. It's the fastest-growing business for us. Ah, it's growing faster than the consumer in revenue. And things like Computer unlock
entirely new possibilities. You have
Max customers who are on the highest tier of enterprise. Explain what that is. What does it cost?
$200 a month per person? So there are two tiers. One is Enterprise Pro, which is $40 a month,
and there's Enterprise Max, which is $400 a month. And with Computer, after you run out of your credits, you pay for the tokens; you pay for the usage. Are you making money on the $400-a-month, $5,000-a-year one at this point in time? It's so crazy. One thing Perplexity has: unlike certain other AI companies, every bit of revenue Perplexity makes has positive gross margins. Got it. Because we're not just selling tokens. Right.
Most of our revenue is recurring because people are paying a subscription fee. And because we route through multiple different models, we're very efficient in terms of how we spend on the tokens. Because we have all this advantage with RAG and orchestration and search, we don't actually need to blow up the context window of the models. Yeah. As a result of that, we have positive gross margins
on all the revenue, every single penny we make. We make profits on that. The overall
company is still yet to be profitable, but we're working towards that. You've had the opportunity to exit. Lots of rumors: Apple and other people were like, hey, this is a great team. How many people on the team now? About 400. Yeah. You've got a very coveted team. You obviously understand consumer; you obviously understand business. It's a product-driven organization. Reports are that you declined. But the world's getting hyper-competitive here. How do you keep up as a 400-person organization
when you've got Sam Altman over here raising a hundred billion dollars? You know, and then you have Elon putting data centers in space and merging with SpaceX and Twitter. You have Google with unlimited resources, Amazon getting in the game, and obviously Gemini, a very strong product, and Google, really good at consumer. I think we'd all agree Facebook and Meta haven't figured it out yet, except maybe for serving us better ads; they haven't figured out the consumer case yet,
but they'll copy it. They always do. How do you look at the playing field, given the degree
of difficulty? This isn't playing checkers; this is like playing against the 10 best chess players
in the world. That's what you have to do every day. So how do you think about it? Long-term, an
independent company? Do you think you'll need to join forces at some point? Well, why didn't you take the deal? The deals you got offered were incredible. So one advantage we have that all these companies you mentioned don't have is the multi-model orchestration. We're like Switzerland. We don't have to have one horse in the race. If GPT wins, Gemini wins, Claude wins, Llama wins, it doesn't matter to us. Or even open-source models, Kimi. No problem.
You have them on the service. DeepSeek and Kimi. We have Kimi. We have Nemotron, and we have a lot of usage of Qwen. Alibaba's Qwen. Yeah, silently under the hood. So for us, there's that advantage of being able to take the best in each model and give the user the orchestra of everything they can do. I don't think any of the companies you mentioned can do that. Right, nor would they. Nor would they. It makes no sense for them. It would be an admission that all the data centers and
capex they've built out still couldn't produce the best model. And Dario, the CEO of Anthropic, said recently in an interview that models are specializing. Toward the beginning of last year, people thought models were going to commoditize, but by the end of last year, models had started specializing. Even within coding, Claude Code and Codex have very different capabilities. Our iOS engineers love using Codex. Our backend engineers love using Claude Code. Yeah. So even within
a specialization like coding, models have their own unique specialties. And there are many other use cases outside coding where different models are good at different things. Which means the orchestra conductor, who has no one horse in the race, can win by providing a
unique value and service to the customer that each of these amazing names that you mentioned
cannot. And so you're buying tokens wholesale from them, and then you charge customers for it? Or how do you think of it? We're going to take care of all their orchestration. Yeah. So you don't have to manage tokens across different models. Because I authenticated a couple of my different accounts, my pro accounts, into Perplexity. But does it? I don't have enough knowledge to know if you're abstracting that so people can just search across them as part of their
Perplexity subscription. Am I bundling subscriptions from other AIs? No. Yeah. We just ping the models directly. Got it. What you get with us is the Perplexity orchestration. Got it. The harness. Right. So when models are kind of specializing, there's bigger value in the one who
knows how to build a great harness that can take the best of each model. Does it auto-route
today, or do you still have the dropdown? Somebody's got to pick. It definitely auto-routes the best
model for each prompt. But we also give users the flexibility to pick whatever model you want. What do you think of doing the same query across multiple models? I've seen a bunch of startups hack this together. We built something called Model Council. Yeah. So that's one of the modes in Perplexity. I saw Jensen saying in one of his interviews that he puts the same prompt into five different AIs and sees what each of them says. Yes.
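The Model Council idea, one prompt fanned out to several models with the answers compared, can be sketched like this. `call_model` returns canned answers as a stand-in for real provider APIs, and the word-overlap comparison is a deliberately naive proxy for the agree/disagree analysis described:

```python
# Hypothetical sketch of a "model council": same prompt to several
# models, then a naive comparison of where the answers agree.
# call_model is canned; a real version would hit each provider's API
# and likely use a judge model instead of word overlap.

def call_model(name: str, prompt: str) -> str:
    canned = {
        "model_a": "Paris is the capital of France.",
        "model_b": "The capital of France is Paris.",
        "model_c": "It is Paris.",
    }
    return canned[name]

def council(prompt: str, models: list[str]) -> dict:
    answers = {m: call_model(m, prompt) for m in models}
    words = [set(a.lower().replace(".", "").split()) for a in answers.values()]
    agreed = set.intersection(*words)  # terms every model's answer shares
    return {"answers": answers, "agreed_terms": agreed}

out = council("What is the capital of France?", ["model_a", "model_b", "model_c"])
print("paris" in out["agreed_terms"])  # True
```

The fan-out is the easy part; the value described in the interview is in the comparison step, surfacing exactly where the answers agree, disagree, and differ in nuance.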
Everybody does that. Yeah. But then you still have to apply your biological compute to read every answer. Yeah. Then figure out where they differ. Like talking to five lawyers about your trust, or five different doctors, trying to figure it out. Exactly. So the Model
Council is a feature we built where it won't just give you the answers of each model, but it will
tell you exactly where they agree, where they disagree, and where the nuances are. And that's in the interface? Yeah. I didn't know that was there. It's there. I mean, you release product at a pretty
great cadence. Yes. Where did you learn that? And what's your philosophy of shipping product?
Our philosophy is that speed is our moat. You know, again, one of the things that big companies cannot do is move at the speed we do, serve customers at that speed and quality. It's very hard to maintain quality, speed, and trust at the same time. Yeah. Like Apple takes a long time to ship anything. Right. They're very worried about people not trusting them. Yeah. And so some companies are bureaucratic and they just take forever to ship something. They don't maintain
what they ship. They may make a big deal about it at an event, but nobody even knows how to go and
use that feature. Yeah. They get abandoned. Exactly. So Perplexity has those advantages of being very small. And towards the end of last year, we found that AI coding tools have made it much faster for us to ship things. And honestly, that's one of the reasons we built Computer: because now even non-engineers are shipping code here by just pinging a Slack bot and asking it to fix bugs. Yeah. So the iteration has just been exponential. The moment I became
claw-pilled was when I was working with it. And I was like, hey, I want to build my network. I know these 20 people in Japan; I had dinner with them during my recent trip. I want to know who they know. So check out LinkedIn and other things, see who they're associated with, and make me a mind map of it. And then on the next trip, I want to meet with the next circle of, you know, those connections. So I started asking, and okay, I got the results. That was great.
And it said, where do you want me to put them? And I was like, well, where can you put them?
And it said, well, I can put it in a Google Sheet. I can put it in a Notion table. I can put it here. I can give you a PDF. I can give you a CSV file. Or I could write you a CRM. And I was like, yeah, sure, make me a CRM system. And it made me a CRM system. And I think maybe one out of a thousand people working with AI have had that experience. Yeah. Maybe it's one in 10,000 where your agent says, I'll make you bespoke software.
Yeah. Have you had that yet? And do you see that as part of Computer, that when a person needs a spreadsheet, you don't launch Excel or Google Sheets, you just pop up a spreadsheet? Yeah. Well, we have a board meeting tomorrow. Okay, I'll come. And so I'll pitch it to the board. Sure. Computer made the memo. Oh, wow. Yeah. And we had a partner meeting to pitch a partnership idea. And earlier, we would have had a design team do the whole deck. Yeah.
Computer just one-shotted it. I had a press briefing with a bunch of journalists. My comms person, sorry about that. Oh, brutal. And then my comms person would usually give me a memo: what did they say? Yeah. Computer one-shotted it. So it's crazy. It's crazy. The context is so good because the memory's getting better. Yeah. So it's like, I know that journalist from last time. Yeah. For the board meeting, it has all the previous decks. Yes.
When did that happen? I think it happened with Opus 4.5. The Anthropic one? Yeah.
4.5. That was an inflection point, when models started being amazingly good at orchestration and reasoning and tool calls. And Claude Code brought in this new idea: an AI where everything can happen inside a sandbox, a console, a terminal with access to tools, where tools are just command-line tools. Yeah. They don't even need to have a graphical user interface. So when you did that and organized around files and sub-agents and skills and CLIs, the models
started becoming very good at handling the context. So the context window was no longer a problem. It just put whatever was necessary into the context whenever it wanted to, and dumped it when it wanted to. Yeah. And that made it suddenly so good at doing very long orchestration tasks. You know, it's pretty crazy. I have every episode of This Week in Startups, all the transcripts.
Then, all of All-In:
I wanted it to download every All-In podcast since the beginning, and I wanted it to take every mention of all the public companies mentioned during the episodes. Yes. I wanted a histogram of the counts, and I also wanted it to chart them across time. And then I wanted to analyze the impact on the stock price and the sentiment of what we said. Exactly. And it did. It clearly showed it. Are we moving stocks? Around Google: stock going up? Yes. Prior to that, you
guys were talking a lot about Google. Yes. And clearly I made a bet publicly on the pod
and said I am buying a bunch of Google, because I believe even though they're behind, it's because
they're too precious. You were kind of mentioning a company that might be too precious at times; it doesn't release. Yeah. I was like, that's that company. They need to release more. Yeah. And I told Sergey, I was like, give us the good stuff. Yeah. No, he's already given us the good stuff. It literally gives you the timestamps of every single mention. And then I can go click on it and actually hear the exact moment. Yeah. Sweet. Yeah. So that's when I was like, damn, I would have
had somebody do this as a week-long project. It would have been 10 hours a week of a researcher's time. I'm experiencing the same thing. When I do research notes, I've created my own mega prompt. Yeah. And it will go and tell me where you worked before, who's in your circle, who your competitors are, who your friends are, blah, blah, blah. And then it goes and finds, I try to find old podcasts, that's one of my secrets. If you're an interviewer watching: I try to find what the
person was talking about five years ago, 10 years ago, and even over 10 years ago. And I've gone into interviews now with Michael Dell and talked about things he was talking about in the 90s. Yep. And it finds me some ancient stuff. Like, you would pay a researcher-producer, you know, $70,000 a year, $80,000 a year to do this, and they would have done a third of the job in 10 times the time. Yeah. It's really gotten weird just in the last six months. Yeah.
What the next six months looks like, the dream of what we're going to try to do, is help businesses run as autonomously as possible. You know, everybody talks about this AI that's going
to create this one-person, one-billion-dollar company. Some people say it's already happened because
people are paying researchers like a billion dollars, but that's not really moving GDP by a billion. It's not
truly creating new value. So the best way to do that is to actually help a small business. People
who would otherwise drive Uber for extra income could buy a Mac Mini, set up Perplexity Personal Computer, run their business on that, or run it on the server, doesn't matter. And actually make real money. Yeah. Hundreds of thousands or even millions a year. And grow it. Have Computer go run your ad campaigns on Instagram or Google. I mean, integrate with SEM and SEO tools, find new users, integrate Stripe, charge them,
ship new features, have your own Intercom integration for customer support. And have this all working while you can be sipping wine in Napa. That's the dream. It feels awesome to say; everybody thinks it's already there. It's not there yet. Someone has to do that hard work. Yeah. That's what we want to do. Yeah. It's a great vision, because when I watched startups 20 years ago, there were so many checkboxes they had to do. I have to find
an office space. I've got to put up a bunch of servers. I've got to hire an HR firm. I've got to hire a PR person, all this stuff. And now I talk to young founders. They've got a three-person team. They've come out of a16z, my program Launch Accelerator, whatever it is, Y Combinator. And I'm like,
okay, you've raised a half million, you've raised a million, who are you hiring? And they're like,
I don't know if we need to hire anybody. I'm like, if you could hire somebody, who would you hire? They're like, well, I do my own HR, I have this partner. And I'm like, how do you do hiring anyway? And they're like, well, I put out an ad, and then it sorts and ranks the candidates. And then it emails the top 10 and asks them a bunch of questions. And then I meet
with the last two. And I'm like, that's what a recruiter did. The entire recruiting
job has been abstracted. And a tool like Computer is going to make that easy and fast work to do. A lot of connectors, a lot of specific workflows. People don't want to learn how to write, you know, essay-long prompts. It needs to be quick and fast and autonomous. You just set it up and you're done. And if you have an idea, you can turn it into a business and start making money.
Yeah, it's an incredible future. And it feels like it's right here. How do you think
about job displacement? Because you're actually making the tool that enables people to be a solo entrepreneur and get to a million in revenue. But it's also the same tool that doesn't require them to hire. And we've had this debate a million times on the podcast. I'm wondering if you, like me,
have moments where you're like, oh my god, this is really terrifying, people
are going to lose their jobs really fast. Yeah. And then, oh my god, you can learn any skill you want, and all the things that were hard are now easy. Yeah. I go back and forth. I'm 70, 80 percent super positive about this. But I do worry, like 20 percent of the time. I'm a little
worried. Yeah. Where do you sit? I mean, America has always been about
entrepreneurship, right? Like, yeah. We've been about trying to build new things, discover new things,
go explore. This whole thing where Henry Ford came and built factories and brought in jobs and
things like that put people into a box. But I think the reality is most people don't enjoy the jobs they're doing. They hate them, exactly. So there's suddenly a new possibility, a new opportunity to go use these tools, learn them, and start your own mini business. And if it pays for your needs for multiple years and lets you have a high-quality life and good work-life balance and a true feeling of agency and ownership and passion to get your ideas out
there, I think that, even if there is temporary job displacement to deal with, is a far more glorious future, and it's what we should look forward to. I think you're exactly right. There will be some displacement, but there are also going to be so many opportunities opening up. And it requires the individual to not be passive, exactly. They have to be rugged individualists. They have to be resilient. Yeah. And they have to be resourceful. And I think once you start playing with these tools,
that's what happens. Exactly. They all of a sudden bring out the best in you if you truly
are in a good space. Yeah. Yeah. And then today, Comet for iOS is out. Yeah. I'm a Comet superfan. I recommend it to everybody. You were nice enough, I emailed you, I was like, can you send me some licenses? You don't remember. You sent me a bunch of licenses. I said everybody put
this on, because it was $300 a month when you first came out with the Comet browser. Now it's free,
I think, for all users. Highly recommended, and I highly recommend getting a pro account. It's only 20 bucks a month to get into Perplexity Pro, which is a joke. So you can get on board for nothing, less than a dollar a day. But what does iOS allow me to do? And how does it connect to Computer? Is that some other thing? Yeah, with Claude Code and Computer, there's not a good enough integration with this mobile device. Yeah. Yeah. So Computer is already in the Perplexity app. So you can just talk to
Computer and start using it. Yes. Comet's uniqueness for Perplexity, the company and the strategy, is the fact that you can control the browser. So the browser also becomes a tool for Computer. Yeah. Just like Google Workspace and all these other things. Until the whole world is organized around CLIs and tools, yeah, there are still a lot of tasks we have to do manually on the web. On the browser you
open tabs, fill out forms, click on things, upload stuff. If you want to automate all that stuff,
you need a browser. You need an AI that can natively control the browser. So that is Comet. And that's why, no matter how many other tools exist in the market, like OpenClaw or Cowork, yeah, executing tasks on a browser on the server side, along with all the other things, is something uniquely Perplexity. Yeah. My dream is that you'll create an Android app that roots my Android phone. Yeah. And that you just take over and see everything, because one of the blockers I have
now is that some of the websites have gotten a little prickly. Yeah. I don't want to mention too many, but Reddit, LinkedIn. Yeah. And I am a great Reddit user, I'm a great LinkedIn supporter, but sometimes I need to get my InMail from LinkedIn, and I just need to, you know, find seven people at a company. Is there going to be a solution between the LinkedIns and Reddits of the world and the Claudes and Perplexities? Yeah. Where is
that solution going? You don't have to speak about any specific ones unless you want to. Yeah. But it feels like there's got to be a solution, and I'm willing to pay for it as a user. I'm willing to pay Reddit to allow my bot to show up and behave properly. Yeah. Well, I cannot speak about any particular company, but yeah, we are happy to work with anyone, right? So I think with Comet, our idea is to give people the flexibility to set things up on their own. Yeah. And any
official APIs that anyone's willing to offer, we're always happy to put that as part of Computer.
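The kind of scoped, official agent access being discussed here could take the shape of a grant object that permits only certain actions under a quota. All names are hypothetical; no site offers exactly this API today:

```python
# Hypothetical sketch of a scoped agent grant: a site allows an agent
# read-only actions under a daily quota. Illustrative only; not a
# real Reddit or LinkedIn API.

from collections import defaultdict

class AgentGrant:
    def __init__(self, actions: frozenset = frozenset({"read", "summarize"}),
                 daily_limit: int = 100):
        self.actions = actions            # what the agent may do
        self.daily_limit = daily_limit    # how often, per day
        self.used = defaultdict(int)      # calls used per day

    def allow(self, action: str, day: str) -> bool:
        """Permit the action only if it's in scope and under quota."""
        if action not in self.actions:          # e.g. "post", "vote" denied
            return False
        if self.used[day] >= self.daily_limit:  # daily quota exhausted
            return False
        self.used[day] += 1
        return True

grant = AgentGrant(daily_limit=2)
print(grant.allow("read", "day1"))  # True
print(grant.allow("post", "day1"))  # False (write actions out of scope)
print(grant.allow("read", "day1"))  # True
print(grant.allow("read", "day1"))  # False (quota of 2 exhausted)
```

The design choice is that the site, not the agent vendor, defines both the action scope and the quota, which is what would make the access "well-behaved" rather than scraping.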
Here's what I think should happen. Let me see if you agree. And this is for Steve Huffman at Reddit. I go on Reddit. I get a pro account for 20 bucks a month. And when I do that, I can authenticate whatever tool I want to do a series of well-behaved things a certain number of times a day. Yeah. So it's not unlimited. I'm not going to scrape the whole site, but I would like it
to just let Perplexity or Computer go and tell me, hey, what are people saying
in the startups and All-In subreddits? Summarize it for me, so I get the customer feedback. And I would
literally name my agent. And I would say, it won't post on my behalf, it won't vote on my behalf; it just needs to do a couple of little read-only things. This would be an easy solution. Or LinkedIn: I already pay LinkedIn like 50 bucks a month. They should just let the $50-a-month plan work with Computer. Yeah, absolutely. I mean, okay. This is for Satya Nadella: let LinkedIn work with Perplexity and the other players, and we'll pay
you extra. Perfect. It's a revenue stream. Don't you think API access for customers is a revenue stream? I think so. I think I think fundamentally giving users a choice and setting it up as a windwind for both the business and the user. Yeah. It's very the world should head to. Yeah. And I would say the same thing applies to any any website in the world. Like if you want an AI to
use it on your behalf, it should be okay, because that's what the user wants. I mean, I have a paid New York Times subscription, so let me go in there and do, you know, whatever, 100 searches a day, a week, a month, whatever they choose, but that would make the subscription that much more sticky. Exactly. All right, Aravind, love the product. Anybody at home, it's just tremendous. Go try Comet and get the Comet browser. It has changed my business for the last two years. Love the product. And we'll have you back soon when you launch your operating system and come out with your own server and desktop. But the browser is the focus. Yeah. Yes. All right. Thank you. Great seeing you.
Thank you. We have an amazing guest, Arthur Mensch here, the CEO of Mistral AI. How are you doing, sir?
Great. Thank you. And so you're here at Nvidia's big conference with a big announcement: you're going to be working with Nvidia to build models and open-source them. What is the big announcement here?
Well, we're announcing that we are going to be training the next generation of frontier models with Nvidia. It extends something that we've been doing before with Nvidia, with Mistral NeMo, something we did like 18 months ago. And the point for us is really to be able to produce the best open-source models out there, so that we can then use those assets and specialize them, through products like Forge, for the customers we work with: customizing the models for the enterprise, in engineering, in physics, in science, and making them better at languages when we work with
governments, etc. And Mistral is obviously based in France; you're the leading AI company there. What's it like running the company and building a large language model in Europe? Obviously there are regulations and all kinds of considerations. Privacy: the French are known for protecting privacy; in the United States, we're known for taking it away. How is the landscape there? And what do you have to deal with there that maybe you wouldn't have to deal with in America? What are the pros and the
cons? I'd say first, we have 25% of our business in the US, and 25% of our researchers are actually here. So I actually spend a lot of time here, as well as in France, as well as in the UK and Singapore, where we also are. So of course they're different markets. They're markets where language is a topic that matters much more, where manufacturing is a bigger piece of the cake than it is here. And I'd say our strength has been to also work with European companies that are a bit lagging behind and that want to adopt the technology to leap forward. And we've been able to do that through forward-deployed engineering engagements for our Forge product and for our Studio product, which lets you deploy agents that do end-to-end automation. But on top of that, the thing that we have announced today, like Forge, is something that is actually being used today with customers in the US, because they come to us with needs for post-training, for making models specifically good at financial services. And what's happening is that we have this product and we can bring the models to specialize
them as well. And so is your belief that specialized, verticalized models (healthcare, finance, engineering, different verticals) will win the day, or will a general model that does everything win the day? Well, you need general-purpose models to do the orchestration parts, etc. But at some point, enterprises sit on a lot of intellectual property, on a lot of signals coming from physical systems, from factories, from tools. And it's actually not trivial to connect those systems, to connect that data, to models that are closed source. If you have open models, you can actually add new parameters; you can do a lot of deeper things that you cannot do with closed models. You can also, and that's something that we do, work not only on the model side but also on the orchestration side: we sit with subject matter experts to understand their needs, and we build business applications that are fully bespoke to their needs by modifying the models, but also modifying the harness on top, etc. So we believe that, eventually, building on open-source technology is a way to save
cost; it's a way to have better control, because you can see the thing, you can run it on your own hardware if you want, you can deploy it on the edge if you want. And eventually, from a customization perspective, and from the perspective of leveraging the decades of IP that you've been accruing in financial services or in heavy manufacturing, companies like ASML, for instance, do benefit from working with us, because we take their data and we build models that are specifically good for their whole process. And this training data, using experts to come in and refine a model: most people don't know this business that well, but it has become a very large part of the industry. Obviously, Scale AI was doing it; they went to Meta and lost a lot of their customer base, who didn't want to send their data, I guess, over to Meta. We're investors in a company called Micro1 that's doing pretty well in this space, and there are other folks doing it. Explain to the audience what you're doing specifically for companies and how this training works in a verticalized way. And then how you silo that data, because if you're working with one customer in aerospace or fintech, they might have a need set, but they may not want that training to go
to a competitor. I can give a few examples. I think, overall, the data segregation is super important. And the way we have solved that is through a portable platform. So our technology is a set of services, a set of training tools, a set of data-processing tools that we can take and then put on the infrastructure of our customers. So suddenly, from an IT perspective, when we talk to the CIOs, they realize that from a security perspective there is no data flow coming back to Mistral, because everything stays there. Now,
the way we then use that technology that has been deployed is that we're going to be working with, say, the team that is doing image scanning and defect detection. And we're going to be sending forward-deployed engineers, scientists with PhDs who know how to train models. And they spend some time with the subject matter experts, who can explain how an image is being inspected, how you detect defects, etc. And based on that, we're going to work out what kind of data needs to be used to train the model that is going to solve the task itself. And so we send the technology, and typically we send a few scientists, because you do need that expertise transfer and that knowledge transfer between our teams and the vertical experts. And then we make sure that, eventually, our team no longer needs to be there to retrain the models, to get more data access, etc. So that combination of data segregation, expertise transfer, and knowledge transfer is the one thing that makes us quite unique and allows us to serve the most critical use cases, the most critical processes, in industries that actually need to take the
best of that and put it into models for it to work. Yeah. It seems that once the entire open web, what was available, legally, gray market, etc. (I won't have you comment on that controversy), well, we've kind of exhausted what's in the open crawl. Yeah, we have. And it's time to actually either make synthetic data or actually use experts. Do you believe in synthetic data? And where does that work and where does it fall short? We use synthetic data as a way to warm up the models. It's a way to actually be quite efficient at the beginning. If you have a large model and you want to train a small model, then you will use your large model to produce a lot of synthetic data at the beginning. But eventually you do need to have human signal. And the human signal is something that is always a bit costly to acquire, because you need to talk to the experts, and they need to give feedback to the machines. So at the beginning, synthetic data allows you to do the compression, to further compress the models, but at the end you do need to go and get data that is produced by humans. So yeah, it's mostly an efficient way of training models: you have bigger models that are used as teachers for smaller models, but it's not enough. And so you also need human signal. Arthur, we've seen an incredible
explosion. We're sitting here on A.O. 52, the year of our Lord OpenClaw, 52 days after OpenClaw. When you first saw OpenClaw and saw the reaction of hackers, founders, startups, CEOs, the amount of energy, and it racing to the top of GitHub with the most stars and all these contributors, what did that say to you as an executive in a space that has been grinding on this for many years? What does that OpenClaw moment mean?
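(A brief aside on the teacher-student setup Arthur described just above, where a big model's soft outputs act as synthetic training signal to compress a smaller one. This is, conceptually, a small loss function; the sketch below is purely illustrative and is not Mistral's actual training code.)

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax, numerically stabilized by subtracting the max.
    scaled = [z / T for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The teacher's soft probabilities are the 'synthetic' targets for the
    smaller student; the T^2 factor keeps the loss on the same scale as
    a hard-label loss.
    """
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return kl * T * T

teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))           # perfect student: loss is 0.0
print(distillation_loss(teacher, [-1.0, 0.5, 2.0]))  # mismatched student: loss > 0
```

In practice the student is then trained by gradient descent on a mix of this loss and ordinary cross-entropy on human-produced data, which is the "human signal" he says you still need at the end.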
Well, it resonated a lot with what we are doing with our customers, because pretty quickly enterprises realized that if they wanted to make real gains with generative artificial intelligence, they would need to automate full processes. And to automate a full process as an enterprise, well, you can use OpenClaw, but it's actually not really enough, because you have governance problems: you can't observe the process that is running, and you can't control it. In many cases, when you run a KYC process, if you're HSBC, for instance, one of our customers, you will want to have deterministic gates that are going to always do the same
thing, in a way that is observable, and where you can guarantee to the CEO that it's going to go through these gates. And that's not something that OpenClaw is providing, because it doesn't have the kind of primitives that you need to work on collective productivity, observable productivity, and to work on mission-critical systems. On the other hand, the autonomy it gives, the autonomy it brings to individuals who are just hacking things together, is a way to also show enterprises that if you set up the right control plane, if you set up the right sandboxes, if you connect the right data sources, if you make sure that your access controls are well respected, then you can actually unleash the power of agents doing things for your employees. And that's going to work, on the right platform, because otherwise you will not be at ease when you're sleeping.
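The control plane Arthur describes (sandboxed agents whose access to each data source is checked against policy before any call goes through, with every attempt observable) reduces to a small pattern. A toy sketch; every name here is invented for illustration and is not Mistral's product:

```python
from dataclasses import dataclass, field

@dataclass
class ControlPlane:
    """Toy policy layer: which roles may read which data sources."""
    policy: dict = field(default_factory=dict)  # role -> set of allowed sources

    def allow(self, role, source):
        self.policy.setdefault(role, set()).add(source)

    def check(self, role, source):
        return source in self.policy.get(role, set())

@dataclass
class Agent:
    name: str
    role: str
    plane: ControlPlane
    audit_log: list = field(default_factory=list)  # observability: every attempt recorded

    def read(self, source):
        permitted = self.plane.check(self.role, source)
        self.audit_log.append((self.name, source, permitted))
        if not permitted:
            raise PermissionError(f"{self.role} cannot access {source}")
        return f"contents of {source}"

plane = ControlPlane()
plane.allow("engineering", "wiki")          # engineering may read the wiki...
agent = Agent("helper-1", "engineering", plane)
assert agent.read("wiki") == "contents of wiki"
try:
    agent.read("compensation")              # ...but comp data is walled off
except PermissionError:
    pass
assert agent.audit_log == [("helper-1", "wiki", True),
                           ("helper-1", "compensation", False)]
```

The point is that the deny decision and the audit trail live outside the agent, so the "deterministic gates" he mentions hold no matter what the model tries to do.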
It is definitely something you have to be thoughtful about. When I installed it, I gave my agent root access to my Google Docs and my G Suite, my Notion, my Zoom, my Google Calendar, everything. And then I realized, wow, with my enterprise edition of Gmail, essentially, I can just summarize, for my entire 21-person investment company, every conversation going on in Gmail, and then correlate it with every conversation in Slack. And then I realized, oh my gosh, there are compensation discussions going on. There's a person on a PIP, a performance improvement plan, or something like that. I have to make sure nobody else can access this, because the power comes from giving it access to data, but with great power comes
great responsibility, and I think people are learning that in real time. Yeah, it's a big problem, because enterprise data is not a single thing that you want to put into a single system that is going to be accessible by everyone. And so you need to have this layer that actually understands what is in the data. You need to have a semantics of what can actually be exposed to HR, or what can be exposed to engineering. And typically, compensation is one of those things: you want to make sure that the compensation data does not flow back to all of the enterprise, because you're going to have a lot of problems if that's the case. And so what you actually need, and which is hard to do, is what we call the context join: a mapping of where the data sits that comes with a certain amount of metadata telling you that this data is not accessible to this part of the company. And if someone in engineering asks for something related to comp, the system is going to tell you: look, you actually cannot access that data. So that's why it's actually hard: you need to rethink entirely the way your IT systems are connected. And at some point you also need to think about your management, because if you're connecting agents together with your data sources, your information flow is completely different from what it used to be. And suddenly, maybe you don't need that manager whose only purpose was to take information from the bottom and push it up, etc. So there are some IT problems to solve: you need the right primitives, you need sandboxes, you need RBAC, role-based access control, and these kinds of things. And you have change management to do: you need to rethink your entire customer service department, because suddenly you don't need that much information to be operated by humans. All right. You've got a flight to catch. It is so great to see you; continued success with Mistral. Thank you very much. Cheers. I'm really lucky to have
Daniel Roberts here. He's the co-CEO and co-founder, along with his brother, of IREN. They are a publicly traded company; they started in BTC. Welcome to the All-In interview program. Thanks, Jason, pleasure to be here. Yeah. And so you started in Sydney, you and your brother, what was it, seven or eight years ago, and you got in early on Bitcoin, and all these Bitcoin miners wanted to have data centers. Yeah, that's directionally right. So the thesis we saw was this explosion of the
digital world, the growth in the online world, and at some point the real world was going to struggle to keep up. So we set about building out large-scale data centers. Yes, the first use case was Bitcoin mining. But as we said to our seed investors: use that to bootstrap the platform, generate cash flow, and layer in higher and better use cases over time as they emerge. Here we are today with AI, and we are swapping out all the Bitcoin for AI chips. When did you first start seeing the demand in the company shift from, hey, we're Bitcoin miners, we need some H100s, whatever it is, to, hey, we're this nonprofit OpenAI, hey, we're this research lab, we need some AI compute?
When did that start hitting? Look, we had a bit of a false dawn, I would say, back in 2020: we signed an MOU with Dell to start bringing on customers in compute. But in hindsight, it was too early. So we went back to Bitcoin and kept bootstrapping the platform. But I would say about two years ago it started, and month by month the demand just continues to escalate. And you were in so early that when you were looking at data center space in the United States, you were one of one looking at the space, or one of two or three people looking at the space.
Yeah. So we actually develop the data centers ourselves. We go and find the land, we go and get the permits, we go and apply for grid connections. And we were doing it at a scale that just amazed people at the time. Like, 750 megawatts at our flagship Texas site four years ago; it was unheard of. In the middle of the desert, we're building these big data centers, and the traditional data center industry is going, what are you guys doing? We believed in the future of digitization, of high-performance computing, and obviously today it's paying dividends. Yeah. I don't think anybody could have predicted, when ChatGPT came out,
and OpenClaw more recently, as turning points. And then Microsoft, Google, and everybody embracing this. And that's your big partner, Microsoft. Yes, Microsoft is one of our early partners. We signed a $9.7 billion contract with them late last year. But as I was explaining to you before the show, that's five percent of our capacity. So, wow, things are busy at the moment. Yeah.
And when you do these buildouts, the big conversation today is no longer the number of GPUs you're putting in. It's just power; power is the constraint today. Yeah, look, for many in the industry it is. But for us, because we started eight years ago tying up all this land and power, it's not. We've got four and a half gigawatts; for context, that's almost as much power annually as the Bay Area uses in its entirety. Wow, it's huge. So for us, the hurdle, the constraint, is really time to compute. And that's emerging across the industry as well. And time to compute means
trades, people coming to West Texas, living in a trailer that you set up, to then break ground on a data center, build foundations, build water-cooling systems. This is hard manual labor going on. Yeah, exactly. And this is the whole real-world challenge of responding to these digital exponential demand curves; they're unconstrained by the real world in terms of their appetite, and it just compounds.
You need thousands of people out in these locations that haven't supported that before. You put stress on supply chains; we're seeing what's happening with memory. Every aspect of it. So it's just permanent whack-a-mole, permanently putting out fires, to try and bring this compute online. And you get to spend time there. What's it like when you set up a town, when you bring a thousand or two thousand people to what's a pretty remote small town? I'm assuming that when you bring a thousand, there might only be 500 living there right now. So what are those towns like? It sounds to me like
something out of like the gold mining era when people first, you know, went and were prospectors. Yeah, prospecting town pretty much. I mean, the barbecue's great. That was a draw card.
But apart from that, look, we've always had a policy of hiring locally and supporting the local community. This year we're hitting a million dollars in community grants, cumulatively. That's things like local playgrounds, supporting the fire departments. And we will hire locally; once we can't find that trade locally, we expand the radius by 20 miles and hire out of that, and so on and so on. That's very thoughtful. Yeah. And these folks are coming, say an electrician or a construction worker; they're coming having built houses or, you know, maybe corporate offices. And now they
come for a tour of duty here, and the salaries go up massively, but they've got to leave their family for a three-month tour or something. Yeah, yes and no, because typically where we locate is where there's heavy electrical infrastructure. And where there's heavy electrical infrastructure is typically where old manufacturing and industry has closed down. So we go in, leverage that sunk capex, rehire, retrain local workforces, and bring a new industry to town in these data centers. Has that workforce now been completely depleted, and do we need to train another generation, a younger generation, to be the generation-two build and really embrace the trades? A hundred percent. We're partnering with universities, trade colleges, absolutely. And compare a trade school to a college: people are getting degrees in philosophy and English literature, going 50K a year into debt, 200K in debt total. What's the starting salary for a
tradesperson working on a data center doing electrical or construction or HVAC? What's the ballpark range? Oh, look, I don't know the specifics. They are going up; the prices are going up, and it depends on the level. But yes, there is a rush, and good money. I'm hearing 150 to like 300K; am I in the ballpark? The lower end, directionally, is about right. Yeah. I mean, it's incredible when you think about it. There's concern about AI taking jobs, and then on this other side of the ledger we can't find enough talent to service it. Talk to me about
energy sources and how you think about that. The administration kind of started with clean, beautiful coal; by year two, they're like, all sources matter, nuclear included. Obviously gas is plentiful in that area, we've obviously got a lot of oil, and people don't know this about Texas: it's the number one source of solar installations in the United States. Talk to us about energy. So, an awful lot of this comes down to sustainability. We have used 100% renewable energy since inception. What, 100? Wait, how is that possible? We use hydro in British Columbia; we use wind and solar in West Texas. In West Texas, where we're located, there's around 45 to 50 gigawatts of wind and solar. Yeah. The transmission line
to export that down to the load centers in Dallas and Houston is 12 gigawatts. So you go and locate at the source of low-cost excess renewable energy, monetise it into this digital commodity, and export it at the speed of light, to Tokyo even. Great arbitrage. And the wind is producing a lot, but it's harder to get it from those areas to where people are willing to pay for it. People don't understand how big West Texas is; it is an incredible amount of land. And you're coming from Australia, which is also like that on its west side; people don't understand exactly how much pure, undeveloped land there is. Yeah, so much land. And the issue is distance: you've got to spend billions of dollars on transmission connection infrastructure to move that power to where people actually want it. You can build wind farms, you can build solar farms, but if you build it in the desert and no one can use it, then what's the point? So the whole opportunity for our
industry is to go to the source of that power and monetise it. So the data centers follow the wind turbines, the solar installations. How do you think about batteries, and are you able to put those online? Because obviously you're going to have periods where it's not a windy day in Texas. We have very few days where it's overcast, so that problem's pretty much solved. But you're going to have 50 days
where the sun's not beating down. So how do you deal with the demand and softening that duck curve?
We don't need to; the utility does that on our behalf. And this is why these green connections are so scarce, so hard to get, and so highly valued: because once you get that green connection, the utility underwrites all of that variability. They guarantee you 24/7 reliable power. Got it. So on their side, they're figuring it out; if something goes down, they can fall back. Even though you're 100% committed to renewables, if they needed to fall back to gas or whatever, they have that ability out there. So you have that as a backup. A lot of talk, or a debate: are we getting ahead of our skis? Are people slowing down? There was some talk about the OpenAI project maybe downscaling a little bit. Is OpenAI a partner as well, or can you not comment? Can't comment. Okay, so we'll read into that whatever we want. But are there pockets where people are saying, hey, let's slow down, or is it still gangbusters?
It's right at one end of the spectrum: it's gangbusters. We cannot meet demand. That's why the whole industry now is focused on time to compute. There are no idle GPUs in the world, and no idle data centers. Yeah. And what's your take on when software, and this is a big discussion from Jensen himself during his two-and-a-half-hour keynote yesterday (we're sitting here Wednesday, I think it is; the keynote was Tuesday), he was talking about, hey, software is going to, you know, lower the cost of tokens 50x. And then you have transport also
contributing to that. When do you think the curve goes from parabolic to simply growing at a ridiculous level? Is there a slowdown coming, or how are you planning for the future? Look, I think it's actually the opposite; I think it feeds on itself. I'll give you one example. You go into ChatGPT today and you generate an image. You tweak the prompt. It's like the dial-up internet days, right? It takes minutes. You know, I'd better get this prompt right. Yeah. Finally, two minutes later, it comes. Now, if we 10x the amount of compute available, which is an enormous task from where we are today, and those images take five to ten seconds, are we going to generate more or fewer images? Oh, many more. This is Jevons paradox. This is the theory of induced traffic: you build a couple more lanes, and people start to think, wow, maybe the distance from Bondi Beach to the central business district in Sydney would be an acceptable commute.
Love the analogy. Yeah. So what do you think, what are you seeing? I mean, we're here at Nvidia. Obviously they make the leading-edge chips; they just launched Rubin, so now you've got, you know, two of the leading-edge chips coming out of the same company. But custom silicon is becoming a big discussion. Has that started to land in the data centers yet? Obviously Google, I don't know if they're a customer you can tell us about, but they're making custom silicon. Amazon is making custom silicon. Meta is making custom silicon. Talk to me about that revolution: is it making it to the data center yet? We're seeing those products; they're trying to tie up data center capacity. So, yes, there's multiple silicon looking
for homes. I think it's fair to say Nvidia has a massive head start. The folks who use them, they've incubated the standards that they're setting. So I would say the safest pathway to build out at scale early is to follow the Nvidia roadmap, but absolutely, over time, we are seeing these chips emerge. And in terms of desktop computing, there's this very announcement that Dell and Nvidia are making: a really powerful desktop, 750 gigs of RAM, a lot of power. You're going to be able to run
some local models, open source, with OpenClaw, and open-source models coming from Kimi and a bunch of the labs out in China. The hacker crowd, which I think you started in, like I did, probably in similar time periods, is starting to get really obsessed with having a 10 or 20 thousand dollar desktop setup and running this locally. What do you think of that trend? I'm curious. Yeah, look, the breakthroughs we're seeing in software, and the way it's distributed, empower every man and woman in every house, and their ability to code and use products like OpenClaw, the generation of demand and appetite for compute at the local level all the way through to these mega data centers, it's absolutely real. And as we see the emergence of agents doing more and more, as we see autonomous vehicles and other automation and robotics, it's absolutely going to compound. And what about nuclear? The Trump administration really seemed to flip the switch on a
growing belief that, hey, wait, nuclear is pretty great. It's clean, it's the original renewable in a way, and these new modular reactors have nothing to do with Chernobyl, Fukushima, or Three Mile Island; they're much safer, a completely different architecture. Have those started to land yet? And since you're located, correctly, in the great state of Texas, where I'm from, are you following nuclear? I think you have to. I think the reality is it's going to take a decade, maybe a bit longer, by the time big projects can come into commissioning, but now is the time to start that conversation, put in place policies, mobilise capital, and start that ball rolling. Yeah. Do you have a data center going up near nuclear? No, not at the moment. But you're actively tracking that activity? Because this seems pretty inevitable, yeah? It feels like it. And if that happens, what impact does it have on your
industry? It's obviously happening in China, and people always said the Bitcoin miners were like the canary in the coal mine, near the hydro dams and near the nuclear plants where there was excess capacity. What impact do you think this has, if you could actually have small modular reactors next to data centers? Well, I think it just opens up the market and enhances the U.S.'s competitive advantage in this space. AI is inevitable, robotics is inevitable. The reality is, the correlation between human progress and energy consumption has been really, really high over a very long time period. So if we can find a way to unlock new generation, clean generation like nuclear, and locate it more at the source, and enable more compute on a distributed basis, all those use cases we just discussed become easier, more fluid, faster, and then you get that positive flywheel around Jevons paradox in demand. Talk to me about the architecture today of Ethernet and data moving between data centers and within data centers. That backbone is
going through a paradigm shift as well, yeah? Yeah, it is. Jensen coined the term, the data center is the new computer. Yeah. So you need to step back and say, right, this big building is essentially the old desktop PC we had under our desk at home. Yeah, right, how does that work? So all the cabling, the latency, the number of hops between each GPU, how they talk to each other, the fabric around InfiniBand, Ethernet: it's absolutely critical, because every millisecond matters in terms of the performance of that cluster. Yeah. And what do you think of Elon's vision, obviously a longer-term vision, of putting data centers in space? There are a couple other people working on it as well. Yeah, I mean, it's very hard to argue with Elon; he's been very right on a number of things for a very long time. I think, sitting here today, it feels exceptionally difficult, given the cost of moving things to space, the challenges around
radiation. There are huge energy and engineering challenges, but that's never scared Elon before. Yeah, I'm not really qualified to say. He's inevitably right, but sometimes he's late. He might be late to the party, he might be late to the dinner party, he might show up at dessert, but generally he nails it. How much of an issue is getting the data out of the data center to consumers today? Is that not something people are worried about when you're out in West Texas? All that data fiber, has all that been taken care of, or does that become a leading issue at some point? So this was one of the big myths that we had to bust when we started this business, because everyone said data centers must be located close to
population centers, metropolitan areas; latency is really important. And we said, yeah, that's right, latency is important. But the reality is, in the US, and Texas especially, there is fiber everywhere underneath the ground, lots and lots and lots of it. And when you look at the latency
from our site, in the middle of the desert in West Texas, down to Dallas, the big carrier hotel, it's a six-millisecond round trip. What, six milliseconds? There are a thousand milliseconds in a second. Yeah, and we use six of them, Jason. Yeah, it's not even, it's definitely not material.
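(The quoted figure is plausible from first principles. A quick sanity check, assuming a roughly 560 km one-way fiber route, a hypothetical distance, and light traveling at about two-thirds of c in glass:)

```python
# Sanity check on the quoted ~6 ms round trip from West Texas to Dallas.
C_VACUUM = 299_792_458            # speed of light in vacuum, m/s
FIBER_SPEED = C_VACUUM * 2 / 3    # typical propagation speed in optical fiber
ROUTE_KM = 560                    # assumed one-way fiber distance, km (illustrative)

one_way_s = ROUTE_KM * 1_000 / FIBER_SPEED
round_trip_ms = 2 * one_way_s * 1_000
print(f"round trip ≈ {round_trip_ms:.1f} ms")  # ≈ 5.6 ms, consistent with ~6 ms quoted
```

Real routes add switching hops and rarely run point to point, so the measured six milliseconds sits right where physics says it should.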
Listen, continued success. And you're hiring a lot of people. Yeah, I think we've got 120-odd job advertisements up at the moment. All right, so everybody go to the IREN website. And listen, the company is doing fantastic. Thanks for spending some time with us here at All-In at GTC. Thanks, Jason. Appreciate it.


