The Prof G Pod with Scott Galloway

First Time Founders: Is Cohere the Next AI Powerhouse?

10d ago · 57:09 · 9,767 words

Ed Elson speaks with Nick Frosst, a co-founder of Cohere. They discuss why the company chose an enterprise-only strategy, how he sees the future of AI unfolding, and whether an IPO is on the horizon....

Transcript


Support for today's show comes from Darktrace. Darktrace is the AI cybersecurity that defenders deserve and the one they need to defend beyond. They can stop novel threats before they become breaches, across email, clouds, networks, and more. With the power to see across your entire attack surface, cyber defenders, including IT decision makers, CISOs, and cybersecurity professionals, now have the ability to stop zero days before day zero. The world needs defenders, and defenders need Darktrace. Go to darktrace.com/defenders for more information. Welcome to First Time Founders. I'm Ed Elson.

Artificial intelligence has become one of the most heavily funded sectors in the world.

More than 30 startups have raised over $100 million this year alone.

As AI becomes more embedded in how the world operates, a handful of firms have emerged as the key players behind that transformation. Among them is a company building the kind of AI most people don't see: the AI powering the systems that run businesses and governments. Founded in 2019 by three former Google engineers, this company has focused squarely on the enterprise market, developing large language models for clients like Dell, SAP, and Salesforce. It even recently signed a deal with Canada's government to bring its technology into public operations. Now valued at nearly $7 billion, it has earned a place alongside giants like OpenAI and Anthropic, helping to define what the next era of AI will actually look like. This is my conversation with Nick Frosst, co-founder of Cohere. All right, Nick Frosst. Good to have you on the program. Thanks for having me. So for those who don't

know what Cohere is, I think we should probably just start there. What is Cohere? What does Cohere do? What are you guys building in AI? So we're a foundational model company, and we are uniquely and singularly focused on the enterprise. There are about 10 companies in the world that can make foundational models. Foundational models are the large language models that are largely, these days, synonymous with AI. If somebody's talking about AI, they're probably talking about large language models. There's about 10 companies in the world that can make them. We are unique amongst them

in our singular focus on the enterprise. So we make large language models that are good at the stuff that enterprises need them to be good at. We make them easy to deploy and efficient to deploy for enterprises. We deploy them securely and privately so that we can't see the data that our customers are passing into the model. That allows them to access the truly useful data out there. And we make them easy to work with via an agentic platform. So we do kind of the whole thing in order to get

AI to work at work. So these foundational models, I think most people who are interested in tech

kind of know what they are, but just at a very basic level: the foundational models are the models that all of these AI startups are building off of. If you're building AI or you're building AI products, you need a foundational model, which companies like OpenAI and Anthropic and Cohere, your company, are building. What am I missing? If you talk about AI companies, there's a lot of companies building stuff with AI. Mostly when

people say they're building stuff with AI, they mean they're building stuff with foundational models. These days, there's still a huge amount of work being done on more traditional machine learning, smaller systems, and a lot of the people working on that will still very rightfully call themselves an AI company. But if you are talking to someone and they say, hey, I've got a startup and it's an AI startup or something, chances are they mean they're building

off of a foundational model. They're making a large language model do something useful for their customers. There's a relatively small number of companies that actually make the foundational models, that actually make the large language models that take in a bunch of words and then predict the next word that should come next. Yeah, there's about 10 of those. So there are 10-ish companies

building foundational models, which is basically the backbone of AI, or at least it's one of

I don't know, the vertebrae of AI; maybe we could call the chips the backbone. But what is so striking is there are thousands of AI companies, and so many of these companies are building with AI, and yet there are only 10 companies that are building these foundational models. Why is that? So in short, it's really hard and it's enormously resource intensive. Building large language models is a lot more like building a rocket than it is like building

other computer science projects. It requires a huge number of really smart people who have experience doing it, working in tight unison. There's a whole bunch of things that need to go well in order for it to be successful. There's a whole bunch of experimentation that needs to get done, and there's huge amounts of resources that need to get put into it in order to make the thing work, right? So you have to get a huge amount of

compute, so renting all those chips, you know, that you're talking about. You need to get a huge amount of data, you need to have a huge amount of people helping you create that data, getting data annotators, and you need to have a whole bunch of really smart engineers working together in order to make it go well. And even then it's still challenging. So yes, there's really only about 10 companies in the world that are doing it for that reason. In the same way that there's not that many companies

building rockets either, right? So how did you end up being one of the people who built one of these rockets? Take us back to the beginning. How did you get into this? Before co-founding Cohere with Aidan Gomez and Ivan Zhang, I was a researcher at Google Brain. So I worked with Geoff Hinton for a few years there, working on explainability and adversarial examples and capsule networks

and stuff, which was really fun, and it was there that I met Aidan. Aidan was just finishing up a stint at Google Brain in California, where he worked on the paper "Attention Is All You Need," which introduced the architecture that we still use today. So he helped write that paper in 2017. And, you know, almost 10 years later, we're still using the same architecture. So after he worked on that, when I met him at Google Brain Toronto, he was obviously very excited about the architecture and about what it could do, and he showed it to me and I

thought it was also really really exciting. So, you know, we noticed something about the nature of this new model that created an opportunity and indeed a need for companies to make foundational

models. What we noticed was that for the first time in machine learning's history,

if you wanted to solve a task, like a language task, the best model to solve that task was not a model trained on that task alone. It was a model trained on a whole bunch of tasks. So that was really exciting and that made us realize, hey, there's going to be a need. If companies are going to actually make this stuff useful and get this to work for them, there's going to be a need for companies to create really big and really good foundational models

that other companies can use. So we had that realization in 2020 and we've been delivering on it since then. We've been trying to make language models useful for the enterprise by making them really effective at the things that enterprises care about. You mentioned Geoffrey Hinton, who you studied under, who, for those who don't know, is considered to be the Godfather of AI. Why is he the Godfather

of AI? What did you learn from him? And, I mean, I think people generally recognize him and his name,

maybe if you're into tech, but perhaps they don't if they're not super plugged into what's happening in AI. So what was his role in the story of AI? And what did you learn from him? So I studied with him as an undergrad. I only have an undergrad; I don't have a master's or a PhD or anything. I have an undergrad from U of T, and I did take his course while I was there, and I sat in the front and asked lots of annoying questions. But I really only worked with him

closely when I was at Google. So I was a researcher at Google Brain. I was working at Waterloo for a little bit, and then I found out that he was working in Toronto, and then we started to work together, and then I helped start up the Toronto Brain group with him and worked there for a few years. And it's during those three or four years that I learned most of what I know about research and machine learning and neural nets. I learned

from him. So I learned a huge amount from him, but I don't have a PhD or a master's or anything. As for his contribution, it really can't be overstated. Neural nets have been an idea for a while. People have been thinking about neural net architectures, and in particular Jeff's been thinking about neural net architectures since, like, the mid-'80s. There was a long time where people thought they were not going to work.

And there was this whole wave of, first, perceptrons, and that was just a single-layer neural net.

And people thought that was kind of interesting for a little bit, and then some work came out to show that they had some fundamental flaws, and that really cooled people down on them. People weren't excited about neural nets. And then people started working on multilayer neural nets, or multilayer perceptrons. And that solved some of the critiques, but still people were not excited about it, and they generally thought it was a bad idea, and that if they wanted to build AI, they were much better off doing things like search or symbolic reasoning or things like that. And so very few people worked on it, and largely they thought it was dumb.

So Jeff tirelessly worked on neural nets in the face of general ridicule for decades.

For decades, until around 2011, 2012, when they were finally able to show that neural nets were suddenly the best at image recognition. That was the first thing that they really knocked out of the park. And that was done at U of T with a bunch of other brilliant U of T students. The reason we are

where we are with neural nets in general, which of course is the precursor to transformers, right?

So if you think of it broadly: there's AI as a concept, there's machine learning as one strategy for doing that, there's neural nets as one strategy for machine learning, and there's transformers as one type of neural net. That's kind of where we are. So the neural net part in particular, Jeff can claim a huge amount of responsibility for. And it's really his tenacity, his dedication to continuing to work on it even when everybody else around him was saying, no, this is a bad idea, it's not going to work, that we have to thank for where we are today. So when the history books are written about AI... I mean, AI is having its moment right now. What changed? AI, neural nets, people have been working on this stuff for decades. Jeff Hinton had been working

on it for decades. He makes this breakthrough with image recognition in the early 2010s.

Now it's ubiquitous. Was ChatGPT the breakthrough moment? Like, what will the textbooks tell us about what changed when AI became mainstream? There have been other AI moments. There have been other times when the whole world was really thinking about AI. But this is the first time that I would say it's been this dominant a narrative. It's been the dominant narrative of technology for the past few years, and it's been the dominant narrative of the economy even more so for the past few years. So that's kind of a first. But there have been moments before where people were really excited about AI and thought they were in some kind of AI moment.

You got to separate AI as a property versus any implementation trying to get at that property.

So people have been thinking about artificial intelligence. Like what happens if a machine has intelligence the way a person has intelligence for a really long time? There's a myth that I

cite pretty often that was written around, you know, the 1500s, 1400s, I believe: a Yiddish myth about the Golem, which talks about a rabbi imbuing intelligence into a clay man. And then he asks the Golem to go get fish from the river, and he leaves his house for a little bit, and when he comes back, the house is filled with fish and the river is empty. And it's a joke, right? It's effectively a comedic story told at that moment. And the joke is, oh, intelligence is complicated, and there's nuance in language.

And if we gave an artificial thing language, maybe it wouldn't understand that nuance. That's about a 500-year-old joke. Yes. So people have been thinking about this for a really long time. More recently, you know, after the computer was invented, there was a whole wave of people thinking about that. Alan Turing was thinking about the Turing test, thinking about intelligence. After that, there was search. There was the Deep Blue moment, when the search algorithms beat

Kasparov at chess, and that had a similar moment. So people have been thinking about this all the time. But this is different. This is a different moment, and it's different in its scale. And when people write the history of AI, this is certainly going to be a pivotal moment. And I'm convinced that neural nets are certainly going to be a central component of machine learning and AI going forward. Like, they're so good. They're so fantastic. They do all kinds of things.

There was no other way we could get them to do those things. And transformers in particular, large language models, are very easy to use for the average person. And that is, I think, really why this feels different.

So if you look at the other moments when people were talking about AI, like Deep Blue, let's look at that one as an example, right? You can read tons of articles about people talking about what's happening with the machines: are computers getting as smart as people? They beat the best chess player in the world, like, what's going on? But if you're an average person, you couldn't really interact with that. Maybe if you're good at chess, you could try the chess bots. And that

people did. And actually, you know, chess in some ways is more popular than it has ever been before, and in part that's because you can be at home playing against something better than a grandmaster. So you could interact with it that way. But you couldn't really interact with a search algorithm, like an A-star search algorithm, in any other setting. So your experience of it was pretty limited. Same with machine learning. Like, when we made image recognition,

the best image recognition model, you could search up pictures of, you know, dogs and see all the pictures of dogs you've taken over the years. Like, that's new. That's cool. But that's still directive. That's still, like,

somebody made the model that does the thing. It's telling you how to use it. Transformers are the first

time that any person, without any experience in computer science or AI, can go up to the model, you know, open up a chat window, and ask it to do something. And it'll do it. Or it won't do it, and that'll be interesting in itself. But you could interact with it without it being prescriptive about how you

interact. And that's I think the reason why this is suddenly so much bigger. It's suddenly so much

more interesting, so much more widespread, and why it's become the dominant narrative of tech over the past few years. So, when people write the history of AI... and I want to be clear: I think the history of AI is not done. I don't think transformers are going to get us to artificial general intelligence. I think there's going to be more waves of new, independent, spontaneous inventions. I'm sure that's going to happen. But I'm convinced that the transformer

is going to be a central component of that. And when the history of AI is written 100 years from now, a thousand years from now, this moment will be talked about as relevant and interesting. And a moment when a lot of stuff happened really quickly as a result of the tenacity of a handful of people. Yeah, it's interesting that in a way it was the consumerization that really took things in a

completely different direction, which is almost a testament not necessarily to the underlying

technology, but almost to, like, the productization, and being able to put these kinds of technology into the hands of millions and then eventually hundreds of millions of people. So when you see all of these big tech companies that are spending hundreds of billions of dollars building out their AI capabilities, building out data centers, renting compute, buying chips, and then spending money on models like the ones you've built to build their own products.

Do you think it was sort of a moment where they woke up to what the capabilities and prospects of AI could be because they just saw it a lot? Or was it something else? Was it that the technology changed in a fundamental way? I mean, to what extent was this the narrative suddenly capturing people's imaginations versus something in the technology actually changing, which made Mark Zuckerberg think, "Now we need to get on this"?

I think everything we're experiencing today was largely predictable from around 2019, 2020. Now, that's not a coincidence; that's when I left Google to start Cohere with Aidan and Ivan. The reason I think it was largely predictable around that time is because that's when I predicted it. I'm sure other people predicted it before; I got on board at that time.

At the time, I remember telling people I'm going to leave to go create this foundational model company. I don't think we used the term foundational model. We just said large language model company. We're going to be a large language model company. We're going to make large language models. I remember everybody saying, "Yeah, that's probably a good idea." I don't think anybody was thinking, "Oh, that makes no sense." The question was not whether it would work. The question was like, "Oh, is Google just going to do it? Are the other big companies just going to do it?"

But I think at the time it made sense. Still, it really wasn't popular yet. And when we had conversations for the first few years of Cohere's history, the conversation was:

this is a large language model and here's why we think it can help you. The conversation now is, "Okay, cool. Why your large language model?" Or how can this actually help me get into production? How can I have it access my private data without giving that away? How can I deploy it in a secure and safe way so that I can handle regulated industries? How can I connect it to my specific data in an enterprise? Those are all the

questions we answer now. So it changed a lot. What changed in particular, and a thing I did not predict at the time, was the success of chat fine-tuning. So you've trained this big generic language model. When language models were first created, what they did was they just completed the ends of sentences, because they were trained on the web. So you really can think of it this way: they were calling it a large language model, but at the time it wasn't really a large language model. It was a web text model. Yeah, it's like a Reddit language model. Yeah. So you wrote the first part of a

website, it would write the second part of the website. Not even the HTML, just the text on the website. And you could do a lot of stuff with that, but it was confusing and weird. And then OpenAI and a few other companies, at around the same time, fine-tuned that large language model on chat dialogue.

I was surprised at how efficient that was. Because when you think about it, you're training a model on the entirety of the web, so a huge amount of language, and then you fine-tune it on a relatively small amount of chat. And yet it actually learns how to chat pretty well. So that, I think, is responsible for the difference between 2020 and 2022. It was the data efficiency of chat fine-tuning that allowed the model to meet users where they're at. Users kind of expected chat to work when you told them it was a large language model, and it didn't; it was this weird text thing. So making it work in the way they expected it to work seems to have really woken people up to the effectiveness and the utility of these models.
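The before-and-after Frosst describes can be sketched schematically. This is an illustration of the general idea, not any specific vendor's training format; the turn markers like `<|user|>` are hypothetical stand-ins for whatever template a given model actually uses:

```python
# A raw completion-style training example is just a span of web text the
# model learns to continue.
web_text_example = "The best way to cook rice is to rinse it first, then"

# Chat fine-tuning wraps dialogue in an explicit turn-taking template, so
# the model learns to respond rather than merely continue.
def to_chat_format(user_msg: str, assistant_msg: str) -> str:
    """Wrap one dialogue turn in hypothetical chat-template markers."""
    return f"<|user|>{user_msg}<|assistant|>{assistant_msg}<|end|>"

chat_example = to_chat_format(
    "How should I cook rice?",
    "Rinse it first, then simmer it covered until the water is absorbed.",
)
print(chat_example)
```

The striking point from the interview is how little of this chat-formatted data was needed, relative to the web-scale pretraining corpus, to make the model behave the way users expected.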

Yeah and I'm sure it's also the volume too, the idea that if you keep on chatting with this AI

you're contributing more and more data for it to train itself on. I want to get to the specifics of Cohere in a moment. But it's interesting, you're describing that the model gets better when it's fed large amounts of data and also diverse forms of data. And originally we were kind of just limited to the web, but the web isn't all of life. There's more beyond the web that these models could be trained on.

And so you could say the same thing about these chats. I'm wondering if there are other forms of data that you think will be prevalent for model training in the future. Things in the physical world. I mean, typing words on a keyboard and seeing words on a screen isn't everything, but to AI right now it seems to be close to everything. So are there other forms of data that you think AI will be fed in the future, and that would therefore sort of take us, I guess, on the path to AGI? Let me first talk a little bit about the way we train

these models. Okay. So the first step is to train them on everything on the open web. So you create a dataset of all the text that's available for training from the web, and that turns out to be a huge amount of text. Orders of magnitude more text than you will ever read. Like a thousand people reading 24 hours a day for a thousand years, that volume of text. That's how much text. So the first step is to train on that. Then you make a dataset with people.

So you have people talk to the model. And if the model gives a good response, they say, that's great. If it gives a bad response, they say, that's bad, and they write what the model should have said. If you do that process, you'll create ratings, like, is it a good response or a bad response. And you'll also create what's called supervised fine-tuning data, SFT data. That's like: here's the input to the model, and here's a gold standard of what a person wanted.

They wrote out the sentence; that's what the model should have said. That's called SFT data. So then you train the model on that SFT data. After that, you can do reinforcement learning, which is a type of machine learning invented before transformers, where you're training a model without access to the right answer. The model kind of tries stuff, and then you say, hey, this was better, this was worse, and you update the weights of the model based on

that signal. So then you can do reinforcement learning. Now we do a whole bunch of reinforcement learning with synthetic data. So now we use the model itself to generate data, and then do reinforcement learning on that synthetic data. That's a big component of training now. So there's the data you get from the web, the data you make with people, and then the data you make with the model itself. And all of those are super relevant for making the models that people use today.
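The three kinds of training data he walks through (web text for pretraining, human-written SFT pairs, and preference-rated data for reinforcement learning) can be sketched as toy records. This is a schematic illustration, not Cohere's actual pipeline; the field names and the scalar "update" below are hypothetical stand-ins for what are, in practice, updates to billions of parameters:

```python
# 1. Pretraining data: raw text scraped from the open web.
pretraining_corpus = ["<document scraped from the web>", "..."]

# 2. Supervised fine-tuning (SFT): a prompt paired with a gold-standard
#    response that a human annotator wrote out.
sft_example = {
    "prompt": "Summarize this email in one sentence.",
    "gold_response": "The customer wants a renewal quote by Friday.",
}

# 3. Preference data for reinforcement learning: the model tries
#    responses, and better vs. worse ones are marked, producing a
#    reward signal rather than a single right answer.
preference_example = {
    "prompt": "Summarize this email in one sentence.",
    "chosen": "The customer wants a renewal quote by Friday.",
    "rejected": "Email received.",
}

def rl_style_update(weight: float, reward: float, learning_rate: float = 0.1) -> float:
    """Toy stand-in for an RL step: nudge a single scalar 'weight' in the
    direction of the reward signal. Real training uses policy-gradient
    methods over the full parameter set."""
    return weight + learning_rate * reward

w = 0.0
w = rl_style_update(w, reward=+1.0)  # good response: nudge up
w = rl_style_update(w, reward=-1.0)  # bad response: nudge back down
```

The synthetic-data step he mentions reuses the same loop, except the model itself generates the candidate responses that get scored.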

Your question about models being restricted to the web and missing the stuff in the real world: is that a blocker to AGI? Like, yeah, definitely. That's a blocker to AGI, if when you say AGI, you mean human-like intelligence. We are embodied creatures. We learn our intelligence through interactions with the real world and intervention in the real world. There's lots of interesting psychological work that suggests learning and interaction are super related. So interaction is super important. Is that a blocker to AGI? Like, yeah, definitely. But there's a whole bunch of blockers to AGI, and that's just one of them. And the technology as it exists today is massively impactful, massively useful, absolutely transformative on the nature of computers, and subsequently the nature of work, massively transformative on the economy in general. But I don't think it's AGI, and nor do I think the transformer alone will get us to AGI. I don't look out in the world and say, oh, geez, I wish my computer was a person. I look out in the world and I say, oh, man, there's so much stuff that a computer should be doing and not me. My time should be free to think strategically, to think creatively. There's so much work that a large language model, when connected to the things that I am using, can do for me, and subsequently allow me to do the interesting and the human work. And that's what I want to make. I want to make a technology that does that as well as possible. Do you think that the people building AI, the leaders of the AI industry, Sam Altman probably being the high priest right now, at least, do you think that there's not enough appreciation of that? Do you think people are too obsessed with "we need AGI, we need human-like intelligence"? I just look at the contract

between Microsoft and OpenAI, where basically one of the stipulations in the contract is, you know, the terms will change once we achieve AGI. I mean, there are many questions, like, what does that even mean? But the fact that AGI is sort of the benchmark for everyone. And I'm even asking you, like, how do we get to it? Do you think there's too much obsession with this concept of AGI in the AI industry right now? Yeah. I mean, look, high priest is a good term. A lot of the thought around AGI and discussion around AGI feels religious to me. It's calmed

down a little bit, right? Like, back in 2023, 2024, my views on this were a little heretical. If I said, hey, AGI is probably not around the corner, people would disagree and say, why do you think that? I would get a lot of pushback. I don't get much pushback these days. I'm like, yeah, guys, transformers are incredible. Super awesome. Super good. They can definitely be way better than they are. They need to be deployed correctly. You need lots of stuff in order to get them into production. That's what we focus on. But AGI? Like, no. And everybody's like, yeah, totally, I get it. And if you use a large language model, which everybody does these days, you'll feel that pretty quickly. You'll be like, yeah, they're amazing at these things, and then I ask them some other things and they don't understand at all. It's very different talking to a language model than it is chatting to a person, and people kind of know that when they're grounded

in an environment. The focus on it is, I think, you know, a narrative device more so than it is a scientific belief. We'll be right back. Support for the show comes from LinkedIn. It's a shame when the best B2B marketing gets wasted on the wrong audience. Like, imagine running an ad for cataract surgery on Saturday morning cartoons, or running a promo for this show on a video about Roblox or something. No offense to our general listeners, but that would be a waste of anyone's ad budget. So when you want to reach the right professionals, you can use LinkedIn ads. LinkedIn is

home to a network of over 1 billion professionals and 130 million decision makers, according to their data. That's where it stands apart from other ad buys. You can target buyers by job title, industry, company, role, seniority, skills, and company revenue, all so you can stop wasting budget on the wrong audience. That's why LinkedIn ads boast one of the highest B2B returns on ad spend of all online ad networks. Seriously. All of them. Spend $250 on your first campaign on LinkedIn ads and get a free $250 credit for the next one. Just go to linkedin.com/scot. That's linkedin.com/scot. Terms and conditions apply. We're back with First Time Founders. Let me ask you the question that you said everyone asks about Cohere, which is: what is the difference between Cohere and the other foundational model companies? What is the difference between Cohere and OpenAI,

between Cohere and Anthropic? Those are the two big ones in my head. What is the difference? The big difference is we are not a consumer company; we're only an enterprise company. So you can't pay $20 a month to get access to our tech. We're not trying to build a product that people use in their personal lives. We are instead selling only to large and medium enterprise companies, and we create language models and search models and an agentic framework for using them

that is tailored to the needs of those companies. That strategic difference comes from a philosophical difference, which is a different view on the technology. I don't think the technology is going to get us to AGI, and I don't think the biggest utility of the models is in people's personal lives. I think the biggest utility of these models is in work. Their ability to augment and automate work at a desk, behind a computer, is, I think, where they add the most value. And so we have that different view of the technology, which leads us to think differently about where we can add the most value to the world, and that leads us to being a singularly enterprise-focused company. That is, as mentioned, unique amongst the foundational model players. Just so we can picture what kind of work is being done: what is an example of a use case that an enterprise is adopting, using Cohere as a foundational model? Lots of people will go into work and open up North, which is the name of our agentic platform. So it's like a chat app

with automations, and you can make custom agents and you can share those with other people. But it's a chat app that on its surface you would be familiar with. So they'll go into work, they'll open that up, and they might open up our model and say, hey, somebody emailed me yesterday with a brief for a meeting; read that email, then cross-reference that with our Salesforce data,

and then make a table telling me the state of that customer. That's something that it can do for you. Or they might say, hey, you know, I just got this data room from this company I'm trying to evaluate; read through the data room, do some analysis, come up with a cited and detailed document on how you think that company looks, and then send a Slack message to my co-workers with that PDF. Just looking at where you are in the AI world, you are

automating tasks that are done at businesses and enterprises that as we are all talking about would otherwise be done by humans which introduces the question of is AI going to replace people and this has been a large debate, we're obviously seeing a lot of layoffs in tech right now.

A very charged debate. How do you think about all of this? How do I think about it? Frequently.

Yeah. So there's a lot there. I think there is a huge amount of stuff

that people do that large language models should be doing for them. Large language models will do a better job of it, and the work itself is not very enjoyable. Humans are really good at a lot of stuff that large language models are very bad at, and largely they enjoy the stuff that they're good at and don't enjoy the stuff that large language models are good at. So I think, you know, it's the same as previous industrial revolutions that augmented and automated

a huge amount of stuff that people generally didn't really like doing. And we look back on those periods of time as kind of chaotic, but largely a good idea. Nobody is running around saying, hey, the steam engine was a step in the wrong direction, or hey, the industrial revolution was bad,

we should all still be farmers. I think there's something similar going on with this.

Now, I do think this technology is fundamentally augmentative. Right? For anybody who works behind a computer, I think this technology can automate, I don't know, 20 or 30% of their work. I don't think it can automate 100% of pretty much anybody's work. Huge amounts of the work that we do is not just text on a computer or images on a computer. It's personal, it's understanding the cultural context, it's talking

to people and coordinating and aligning, it's thinking strategically, it's doing all of this stuff. And that's true at every level of an organization. So a lot of people say, oh, this is just going to take out the bottom bit of an organization. What this is actually going to do is improve and increase efficiency and productivity across the entire organization. Is that going to have consequences on the labor market? Yes, absolutely. Just as

the industrial revolution had huge consequences on the labor market. Just as, you know, the widespread adoption of computers had huge influences on the labor market. In our lifetimes, or in my lifetime, we have seen wild changes in the way that work is done as a result of technology. It was not so long ago that every organization had a huge number of people working as typists to type stuff up, because people didn't have computers and that work needed to get done.

That doesn't exist anymore. But the labor market evolved; the labor market figured it out.

All those people are still doing good work, just doing different work. So I think that this

will have similar effects on the labor market to the computer, to the internet, to the industrial revolution. And I think governments and organizations and unions and businesses should be thinking about how to make sure that goes well: how to make sure it largely uplifts people, builds a resilient economy, and allows people to do things they like to do. That's the conversation I'm encouraging everybody to have. What are the

policy decisions that can be made in order to make sure that this is good for all people? But recently, you know, there's been a lot of tech layoffs.

I know people have kind of tried to tie that to AI,

but I think it's more related to the over-hiring that happened during the pandemic than it is to

AI suddenly doing those jobs for those people. Yeah, I think that's kind of borne out

If you look at the data, yeah. So I do think it's going to have consequences on the labor market. I think when history looks back at this, we'll largely say it was a good idea, the same way people say the computer was a good idea, the same way people say the industrial revolution was a good idea. But it is going to be a chaotic moment, and it does require attention. Do you have concerns about what this will do in terms of inequality? I think about the downsides.

I think in the long term, it's, you know, value-creative, which means it's a good thing for

society, in the same way that the steam engine was, the internet was. But the

big concern, which seems super likely to me, is that the value accrues to the people who own the AI. Yes, maybe some of us might be getting some value out of using AI, but we won't be the ones who own it, and it will only make wealth inequality even worse, which could have all of its own impacts. Do you worry about that? I do worry about that. Yeah. Income and wealth inequality

is, I think, one of the most pressing issues. Yeah, I think

it's one of the most pressing issues for the world right now. And I do worry that this technology, similar to other technologies, stands to exacerbate the wealth inequality that was already rising over the past, you know, few decades. I think the correct solution to that is policy. Oftentimes when people think about the economy, they kind of forget that this is a system we create. And it's a system that can be subtly pushed in one direction or another,

and you can add policies, you can change things, in order to make sure that this works for everybody, for all people in your country or whatever organization you're within. I think that's the conversation I want the world to have. And one of the reasons why I'm very vocal about saying, hey, I don't think we're getting to AGI, is that the AGI conversation often distracts from that conversation. Because if you're talking about, oh no, what if we make a digital god and it kills all

people? It's very difficult to have the conversation, hey, do we have the right policies in place to encourage better income distribution such that we don't end up in a bifurcated society, which I don't think anybody wants? What kinds of policies, or what do you think that would

look like? This is my first question and then my second question is, as someone who cares about that,

are you a pariah in the tech industry? Because from my understanding, there is a feeling of: if you're talking about policy and regulation, then you're a Luddite and you're just trying to

hold AI back and you're just scared. So I guess how do you think about those two questions?

Am I a pariah for talking about that stuff? No, but I also don't live in Silicon Valley, right? I live in Toronto. I'm certainly in the tech scene. You know, I talk to other tech people all the time, I talk to, you know, lots of people thinking about this. But would I be a pariah there? I certainly have lots of different views than people hanging out in the remnants of the effective

altruist parties in Silicon Valley, right? I certainly have very different views than the culture that developed there. I mean, am I, I'm certainly not a Luddite, right? I'm certainly not against the creation of technology. But, you know, having been to, wait, what was that town I went to? There's a town in England that was kind of the epicenter of that. I went to a museum there on the Luddites; it was very interesting. I wish I had the name. But I know what you're talking

about. Yeah, you know, a lot of the people at the time were frustrated at the loom for making their economic situation worse. Right. Now, again, we all look back at the automated loom and we think that was a good idea. And the economic situation people live in now is better than the economic situation they were living in during that time. But I'm empathetic. I'm empathetic to someone saying, "Hey,

my economic situation is shitty." And it's shitty at a systematic level, at a population level. Let's figure out how to make that better. Right? So I am empathetic with that. So would I be a pariah? I don't think so. I think actually a lot of people know this. I think if you can go to Silicon Valley and tell somebody, "Hey, you know, income inequality is bad."

It's hard to live in that city and not think that.

Speaking of Cohere: reportedly, Cohere is looking to go public. I don't know if you can

talk about that, but that's what we've been reading. Can you tell us about those plans?

My goal in creating this company was to create something that outlasted us, to create a generational company. That's motivating. That's exciting. That's a fun thing to be part of. That's what we're excited about. I think the right way to do that is to become a public company. I like those mechanisms. I think that's how you build a company that is bigger and longer-lived than you. And that's exciting. I think the tech

we're building speaks for itself and is getting there. I think the customers that we've closed, the relationships that we've built, and the value we've been able to add to our customers is something I'm enormously proud of, something I want to keep doing, and something I think would be best done through a public company. We'll be right back. We're back with First Time Founders. We've discussed the sort of decline of the IPO on our

Markets podcast a little bit. Just the fact that there are fewer public companies in America than ever, and also the fact that so many of these massively transformative companies are taking so long to go public. The idea that OpenAI is only now... I mean, they had the non-profit issues,

but the reality is this company is supposed to go public at a trillion-dollar valuation. That's crazy.

So how do you balance it? It seems as though the startup world is more interested in staying

private as long as possible, or at least that's what the data would tell us, versus going

public. So how did you think about going public? What are the pros, what are the cons, and why now-ish? Well, I've made no promises on timing. Yeah. No promises on timing. But I do think, look, again, I don't know what OpenAI is doing. I look forward to the interesting books that will be written about that company over the next 10 years. I don't know what's going on, and I don't know when they're going to go public. I know

they've announced stuff. I don't know. It's its own beast. It's its own unique and

interesting company. I'm sure lots will be written to describe that story. For us, our business looks a lot different. We're an enterprise company; we don't have a consumer offering. We don't have the same losses that they have. We're not losing money on every customer. Our margins actually look a lot more like SaaS margins. When we work with a customer, we deploy our models into their environment. Right. And that allows the model to

access their private data securely, so that we can't see it. It also makes the nature of our margins completely different. You know, I think that puts us in a very different position than the consumer companies that are out there. And I think that puts us in a position that's a lot more resonant with the public market, a lot more understandable. It looks a lot better. So I do think the right way for us to make sure that this company outlasts us and continues to deliver

is to eventually go public. When? Yeah, I don't know. And there's an interesting thing you're pointing out about there being a smaller number of public companies, about companies staying private for longer. I don't think those are unrelated to the income inequality, wealth inequality stuff that we were talking about earlier. Right. There's an interesting dynamic in the economy going on right now, and all of those things

are kind of related. But for us, like, yeah, that's what we're going for. And I think that's the path

that we're on. You are a Canadian company. You're based in Canada. How do you think about AI as a sort of international geopolitical race? We've got some big AI companies in America, some big AI companies in China. I guess Mistral is another one, in France. And there's us in Canada. And that's right. Yeah. There are four countries in the world that can make this technology. Tell us more about what that means for society. Yeah. That's a strange one.

I think when you consider how difficult it is to make this technology and how resource-intensive it is, it's not super surprising that there aren't that many companies doing it. It's not surprising that there aren't that many countries. But it is a strange reality that, yeah, there are four countries in the world that can build

this tech.

In the same way, you know, I use this analogy of rockets: it's like building rockets. Another

analogy is to say it's like building power plants. It's like building infrastructure. The technology is a lot more like infrastructure than previous computer science efforts. And so I think it's a good idea for countries to have the ability to build infrastructure themselves. It's a good idea for countries to be able to build their own nuclear power plants. That's useful. That sets the country up for success. It's a good idea for them to be able to build their own roads. Infrastructure

for people is good. And it's good for countries to be able to do that from a strategic perspective, from a security perspective, like from an economic perspective, it's generally a good idea.

So I think this technology is important for countries to be able to build. I think there's ways

that countries can work with the providers in order to give that ability to their country. And I think that's something that a lot of the world is seeing right now. You know, we had two decades of the history of tech really being centered on America. And America is a dynamic, fast-moving, ingenious place that is going to continue to be a defining force in this technology, and technology in general. But I do think it's good for the world

to have tech that comes from other places. To have a more distributed view on how technology is developed and what it can do for people. So that's one of the reasons why I'm happy to be building out of Canada. Is that the thing that changes the trajectory of geopolitics? For example, you made the comparisons to other technologies. I think some people would also make the

comparison to the nuclear arms race, not to say that AI is like a nuke. But to say that it was the

belief of nations that this is what will tilt the balance of power across the world. We have to build these things because if we don't, and if Russia or anyone else gets their hands on this transformative technology, it will completely upend the geopolitical structure of Earth. Do you view AI in the same way? Not in the sense that it will be destructive; I'm not making the point that it's a nuke. I'm making a point about the power of it.

Is it a question of whoever builds the AI first and whoever builds the best AI? They will be

the most powerful force in the world. Do you see it that way? No, that's a little extreme.

Yeah, I see it as a strategic imperative for countries to have this technology, to facilitate economic growth, to do stuff. In the same way I see it as imperative for them to build roads, to have great health care, to build nuclear power plants, to build wind, whatever, other pieces of infrastructure. I don't think I would go so far as to say it will be the defining thing. And certainly the nuclear bomb analogy I disagree with, and I know that's often

used when people are talking about AI as an existential threat. But again, because I don't think transformers are going to get us to AGI, I don't think they pose an existential threat. And so I don't think that analogy serves us in conversation. I do think it's important to think about the technology as infrastructure, infrastructure that's good to build, but one piece of infrastructure amongst many that are good to build

and important in this moment, certainly in a dynamic and changing geopolitical time like this. You know, these are unprecedented times, as has been said for the past decade. But yeah, I think the technology will have an impact on that, but I don't think it will be

the defining thing. You are one of the leaders in AI, which is the most important and

transformative technology, certainly of my time. I'm Gen Z; I was not there to see the internet

be created and built. So I think, you know, this is an extremely important moment, not just

for America or Canada, but for the world. Does that weigh on you? What is it like to be a founder at the forefront of this world-changing technology? Yeah, it weighs on me. It's totally a strange place to end up in, and not a place I thought I would. A huge excitement. I love working at Cohere. I love working with all the people at Cohere. And I get really excited and I'm enormously proud and occasionally deeply moved

by the work that we're doing and the group of people that I get to spend time with working on it. It is a complicated emotional experience to think, hey, this technology is, you know, the defining narrative, and we are one of ten companies, in one of four countries in the world,

that are building it.

And that influences how I think about this. I think we're building something beautiful and cool and

can be useful, and it's very meaningful to the world and to the people around me. And that's interesting. There is subsequently a pressure and an intensity that I did not anticipate when we started this company. I don't think anybody did. I think I solve that by staying grounded in

things that have nothing to do with tech sometimes. I think that's an important part of the way

that I live my life: occasionally doing stuff that is completely unrelated to AI, to transformers, to tech itself. And I think that might be why I have pretty different views than the rest of the people in similar positions to me. A lot of young people watch this show. What would be your advice to young people? Not necessarily just founders, but young people in general, perhaps even young people who are concerned about their job prospects, their career

prospects, people who believe that AI could be taking their jobs. I mean, from the guy who's building the AI, what would your advice be to young people? Sure. My advice has been the same for young people for a while. I meet a lot of young people who are anxious about making the right decision: I've got to work on this because that's going to be the right thing, or that's going to be the right thing. My advice has often been,

look, the world's too chaotic for you to predict what's right. Every year you could read an article

of somebody saying the next big job is this and you've got to go into this and they're almost always

wrong. And so it's just too chaotic. You can't predict it. What you should instead do is focus on what you're interested in. And what you can optimize for is your own excitement, your own curiosity, your own interest. And when you're thinking about what career you want to pick or something,

you should first and foremost be like, well, what am I excited about? What am I interested in?

And conditioned on that, your ability to be successful is much higher than conditioned on you choosing what you think is the optimal decision. So I would really encourage young people to follow their curiosity, follow their passion, more than they think, more than following what's optimal. You can't predict it. It's really hard. My other advice: the narrative around the world these days is that it's an unprecedented, chaotic, absolutely

crazy time. I definitely encourage people to learn about history. Just read whatever history, from whatever time. Whatever you find: ancient history, prehistory, Enlightenment history, modern history, whatever. Literally, whenever.

Yes, we live in unprecedented times. Yes, stuff is chaotic right now. And I think

when people read the history of these few decades in the future, they'll read it with curiosity. But there have been a whole lot of crazy times. There's been a whole lot of absolutely nuts stuff that has happened in the history of humanity. And it is calming sometimes to read about those times and understand the good things that happened, the bad things that happened, the way stuff continued in the face of it. I find that grounding. And that grounding is helpful

for keeping you focused on what you're interested in, what you're passionate about, what you're curious about. Nick Frosst is the co-founder of Cohere. Nick, this was great. We really appreciate your time. And you as well. Thanks for the conversation. This episode was produced by Alison Weiss and engineered by Benjamin Spencer. Our research associates are Dan Shallon and Kristen O'Donoghue, and our senior producer

is Claire Miller. Thank you for listening to First Time Founders from Prof G Media. We'll see

you next month with another founder's story.
