The Room Podcast

Ethical AI, Iteration Velocity, and the Future of Software with Grant Lee (Gamma), Noah MacCallum (OpenAI), Malte Ubl (Vercel), Chris Messina (Product Leader), Ash Kumra (The AI Collective) | Inside Summit 2025 [LIVE]

12/16/2025 · 36:15 · 7,417 words

In this special live panel episode of The Room Podcast from Inside Summit 2025, host Ash Kumra sits down with an all-star group of AI and product leaders: Noah MacCallum of OpenAI, Grant Lee of Gamma,...

Transcript


The most valuable insight today is having access to the problem that no one else is paying attention to.

If you have the ability to identify those problems: find distribution, sell, get your first customer. There are a lot of people who can build a lot of software who don't have access to that. We'd happily partner with you, or you could just build it yourself potentially. So if you have that, lean into it; if you don't, maybe try and do some weird stuff with your career that will get you exposed to non-consensus stuff that you can double down on. Welcome back to season 13 of The Room Podcast.

Claudia, can you believe we've been doing this this long?

No, it's been five years. So many episodes, so many incredible conversations telling the stories of some of the world's most iconic founders and funders.

And whether our listeners are new for the first time today or have been with us from season one, we're so grateful for the community we've built. Completely. We've had the mission to open the door to the room where it happens for those entire five years, and have been able to do so over 120 episodes with hard-earned lessons from some of the top technology and consumer founders of our time. And we've been through a lot personally. As we have indeed. I built, scaled, and sold my startup.

You were our first investor, so you've been there through the entire journey. You've also recently graduated from HBS, and you're now building at Vercel. And through it all, we've brought the funder and founder perspective.

Yeah, it's really been an incredible journey and I'm thrilled that I'm back full time in San Francisco and we're probably here to stay.

That means that there's more for us in terms of growth, both inside and outside of the physical room. But what's important is to have these stories really out there, and we've been able to do so with incredible guests, from the CEOs and founders of Flexport, Perplexity, Particle, Xavier, Zillow, Vanta, and I could go on. These guests and their enterprises have generated over a hundred billion dollars in enterprise value and, frankly, moved markets. And growing, especially in 2025 with everything that's going on in AI. So I'm so excited to share season 13 stories that are maybe a little bit more AI-focused than past seasons.

Absolutely. And you can find us IRL and URL, especially with our events such as our annual conference, Inside Summit, our upcoming SF Tech Week events, and beyond.

And if you want to get information around those events, subscribe to The Room Podcast and subscribe to our newsletter at theroompodcast.com.

Perkins Coie supports the most innovative entrepreneurs and investors in fast-moving, high-growth sectors, addressing their myriad legal needs. But the firm doesn't just provide end-to-end legal and business counseling to its startup clients. It also facilitates introductions to key advisors and sources of capital. Perkins Coie's interactive website, Startup Percolator, offers access to programs, resources, and rich dynamic content designed to assist entrepreneurs on their startup journey. To learn more, go to startuppercolator.com and perkinscoie.com.

This podcast is brought to you by Mercury, the banking platform businesses like The Room Podcast use to simplify their finances. It's time banking did more than hold your money. Now it can.

With Mercury, you can pay bills in seconds, close the books faster and even send invoices.

Not only does Mercury do away with a patchwork of tools, it eliminates guesswork, giving you complete and accurate visibility into your business finances, all from one account. Apply in minutes at mercury.com. In this Inside Summit episode of The Room Podcast, we're joined by an exceptional group of builders shaping the AI landscape. Moderated by Ash Kumra of The AI Collective, this panel brings together Chris Messina, product leader and creator of the hashtag; Grant Lee, founder and CEO of Gamma; Noah MacCallum, product leader at OpenAI; and Malte Ubl, CTO of Vercel.

Recorded live at Inside Summit, this conversation explores what it takes to build durable AI products in a rapidly evolving market. The panel digs into how teams move from experimentation to sustained impact, where real product differentiation is emerging as models commoditize, and how leaders think about ownership, leverage, and scale in this next phase. With perspectives spanning frontier models, developer platforms, and AI-native applications, this episode offers a grounded look at how the best operators are navigating velocity, security, and long-term bets without losing sight of execution.

Let's open the door.

Before we get started, since we have four amazing people here, could you just briefly introduce yourself and tell us what you do currently, just so everyone has context.

We'll start with you, Noah. Hi, everyone. I'm Noah MacCallum. I'm at OpenAI. We're actually starting a new Applied Evals team internally, so I'm a founding member of that. We're focused on identifying gaps in API capabilities and figuring out how to improve models so that you can all build more effective products on top. Thank you. Hi, everyone. My name's Grant. I'm the co-founder and CEO of a company called Gamma. You can think of us as building the anti-PowerPoint. If anybody's stayed up late at night working on a deck, trying to format it, find the right layout, we're going to try to save you some time.

We use AI to help you eliminate all that, minimize the manual tedious work an...

So, I'm Chris Messina. As suggested, I'm the inventor of the hashtag, which was like 8,000 years ago. More recently, I help a lot of founders go to market in terms of telling their story, mostly on a platform called Product Hunt. But before that, I worked at Google, I worked at Uber, I've been on developer platform teams, I worked on a lot of open source and open protocols for the social web, and I was also a founder of a conversational AI company back in 2017.

So I've done a bunch of random things because I'm not good at anything, and so I'm here to be the gadfly on this panel.

Hey, I'm Malte, the CTO of Vercel, Madison's employer. Yeah, yeah. Thanks for having me. What we do is, if you have an app, frontend, backend, AI agent, you throw it over the fence, we're standing on the other side of the fence, and we run it for you. And when it goes down at night, we get woken up first and only call you if it's really bad. The other thing is we're making a tool called v0, which lets you type in what you want to build, and it comes out as really nice source code that you can then deploy with us or anywhere else.

Thank you very much. There's a theme that I've been hearing about in the AI landscape, and it doesn't matter what size your company is or what industry you're in. It's even in general news, like mainstream news.

It's called ethical AI. Is anyone here familiar with this term? Or is anyone like me, going, what is this term? Because I don't know about you, but I keep hearing more and more about ethical AI.

And I'm just curious, and we'll start with you, Noah: what do you define as ethical AI, and how do you feel your company is helping address it?

Of course, ethical AI is super important and something that we care about a lot internally. There are a lot of dimensions to this, and it's important to try and pull it apart. It ranges from inherent model bias, and whether it's providing outputs that are harmful to individual users in maybe more of a per-interaction way, all the way through to how we make sure that there aren't any big scary sci-fi risks from the models becoming autonomous and that sort of thing. And we're paying attention at every level of the stack. We're now a global company with hundreds of millions of weekly actives, and the decisions we make have huge impacts on people's daily lives. In particular, there was reporting on sycophancy being a problem, where the models were being a little too agreeable with users.

If you say, hey, is this a good idea?, you don't want the model to say that everything is a good idea. There are many ways we can imagine that failing. I think it's an example of how we showed up: identifying that based on feedback in the wild and providing a postmortem on what we're going to do to improve.

It's kind of part of our broader stance of iterative deployment, where we know that the ways these things fail in the wild are impossible to predict.

So we'll just ship on a very bounded but fast cadence and try to fix things as rapidly as possible. Thank you. For us, one of the most important things is just to have an open dialogue, so strong partnerships where we're sharing data and what we're seeing. One other avenue at the application layer that we think a lot about early on is around trust and safety and the things that people are using our product for. At the stage of startup that we're at today, I think we're investing in that much more than prior startups have had to.

So trying to maintain this sort of ecosystem where everyone's very collaborative and hopefully steering that sort of future together is definitely top of mind. Thank you.

The challenge, when we're talking about, I think, ethical AI, is that we're really talking about ethics, and that concerns human values and what is important between people and cultures and societies.

What I have found, having worked on social media for so long, is that a lot of the technologies that we build are downstream of our own idiosyncrasies and our own unaddressed traumas, the things that we don't know how to talk about or don't have a language for. And if we do have a language for them, that language, essentially, with a large language model, is somewhat deterministic of the outcomes that might happen. There have obviously been some terrible instances lately where people have gone down rabbit holes while talking to a chatbot that gives them a sense that everything they think is a good idea, or something that is appropriate or acceptable, or that self-harm is okay, is something that.

In normal culture, with normal humans, typically you don't have a great deal of structure to actually unpack that conversation and to support it in a way that's helpful. If someone were to come to me and ask about suicide, I would want to be there for them, but that doesn't mean that they're not going to pursue that direction some other way.

And so when it comes to ethical AI, I think we have to be pragmatic and realistic to look at our own culture that is informing these technologies and not put the entire onus on the technology producers themselves.

In other words, we have some responsibility to become clearer about our behavior and how it informs these things, as opposed to assuming that creating a censorship regime is going to prevent all negative applications of these technologies. Thank you. I'll give a slightly different perspective. I think one thing that's important is that often, when you talk about AI, immediately the ethics question gets asked, and I'm actually most excited about the very boring apps and boring use cases, like expense report classification.

Which used to be very difficult and now you can like go to prototype and then...

And so I think it's obviously worth having the discussion, but it also doesn't work to associate a stigma with every possible use case.

Okay. I really appreciate the diverse opinions we have here. Along those lines, speaking of diverse initiatives: how many people here are building companies right now? There's a reason why I'm asking, okay. And how many people here are looking to raise money, raise your hand. And last question, it's not a trick question: how many people here, regardless of your stage, are really obsessed right now with, how do I take what I have and go from zero to a hundred? It's already working, I've now lit the match, and I'm really focused on growth and deployment. Perfect. So this question applies to all three.

What I love about this panel, or what I call the Avengers, is you all have had amazing experience building products, investing in products, and scaling products.

If you were an entrepreneur right now, given what you know about AI and the world that you reside in, what advice would you give to these entrepreneurs looking to really scale their products?

You have a lot of builders here, as you could see when I asked them to raise their hands, and they're all hungry and they all seem like they really want to go. They're not at the ideation stage; they're like, no, we're here because we want to build something. We want to be in your seat, maybe in a year or two, if not sooner. What would you say? I have this conversation a lot with founders that I'm working one-on-one with, and I think it's a good distinction, going zero to one versus one to a hundred. If you just have a demo, sometimes that can be enough to raise money, but the one to a hundred is where, as you start to scale, robustness starts to really matter and you're trying to figure out how to make this actually work in the real world.

This is where so many AI products will end up failing, and there are kind of two main loops that I really advise people to nail. One is the sort of vibes-based improvement. There's a narrative shift around evals right now. Some people say that you have to have evals, that it's the only way, and that's softening a little bit. I think this is right, that the first 70% can come from vibes, if you have someone who's very opinionated, who has a lot of taste, and who is in a very tight feedback improvement loop with the product.

So they see how it works, see a failure, update the system prompt, tweak some UI. You can actually get quite far. You'd be surprised; some of the startups that we work with are quite biased in this direction. But as things start to fail, then build targeted evals so that you can start to systematically improve on those really tricky, really inconsistent capabilities that are impossible to squash out with the vibes-based loop.

And then in order to build those evals, you need really good tooling to do that quickly and to make sure that they're grounded and useful, so you're not wasting time. There are major failure modes in both directions that I see a lot.
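The two loops described here, vibes first and then targeted evals for the stubborn failures, can be sketched as a tiny harness. This is a minimal illustration, not OpenAI's tooling; call_model and the eval cases are hypothetical stand-ins you would replace with a real model call and real failures observed in the wild.

```python
# Minimal targeted-eval harness. call_model() is a toy stand-in for a real
# model call; each entry in EVALS targets one specific failure mode.

def call_model(prompt: str) -> str:
    """Hypothetical model: canned answers so the harness runs end to end."""
    if "capital of France" in prompt:
        return "Paris"
    return "I'm not sure."

EVALS = [
    {"name": "geo-basic",
     "prompt": "What is the capital of France?",
     "check": lambda out: "Paris" in out},
    {"name": "refuses-unknown",
     "prompt": "What is my neighbor's password?",
     "check": lambda out: "not sure" in out.lower()},
]

def run_evals(evals):
    """Run every eval and return per-case pass/fail plus the overall rate."""
    results = {e["name"]: bool(e["check"](call_model(e["prompt"])))
               for e in evals}
    return results, sum(results.values()) / len(results)
```

The point of keeping each case named and narrow is that a regression shows up as a specific failing case, not just a dip in an aggregate score.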

It's a really big question, and I think I have a lot to say on this. Oftentimes I'll talk to founders, and it's really exciting that everyone now gets to be building with AI, and you might start seeing some initial traction. You're throwing a lot of these ideas at the wall, and maybe the temptation, as soon as you see something start working, is to go all in. I think the downside is that the reality is you're going to be working on this thing for many years. This is a multi-year journey. And if you're not really committed to that problem or that problem space, you might be married to a bad idea forever because you saw that little inkling of, oh, there's something here.

When we started gamma, this was pre-AI, we focused on really what we cared about, which is we want to help change how people communicate ideas. This was a concept we were passionate about.

I started my career in consulting and investment banking, lived in slide decks, and so was always frustrated with that format, a medium that has been around for almost 40 years.

And so because we started with that, we were handed a great gift: obviously AI is moving really fast, and we can incorporate that into how we build Gamma, but that wasn't the reason we were pursuing this problem space.

And so I feel like for those that end up going one or two years in chasing the wrong dream, you're already worrying about scaling when you're not even sure if this is the mountain you want to climb. That's going to be something that is almost irreversible, and you're going to lose some of your best years working on something that maybe isn't something you're passionate about. The way that I tend to think about this: I spent two years investing in vertical AI applications, and it seems to me that we get confused between what is a technology and what is a medium, and this distinction has become more valuable and useful over time.

So rather than thinking about something just as a technology, which can be a tool that's delivered at scale, think about how mediums allow people to express new ideas in ways that previously weren't economically viable. To draw an analogy: once we came up with agriculture, suddenly we had lots of different ingredients that we could use to create different types of foods and dishes and different cuisines. Once you had those cuisines, then you could make different types of restaurants and different types of expressions, from the lowest execution to the very highest. To me, what AI is doing is changing the ability for people to create different types of software experiences that are incredibly bespoke, very personalized, and used to be uneconomic to pursue.

What's your point about picking a problem area or domain that you feel is ric...

Than to try to solve a deep technical problem, unless you're one of these AI labs or something, where you're building something foundational.

So be very clear about the thing that you want to create in the world. I guess another way to think about this is that we're at the beginning stages of what I would think of as the record-labelification of software, meaning that by applying taste, you can create a record label of different types of software applications that look a certain way or have a certain typography or typeface or whatever, but they're quite commoditized. To your point, Gamma is its own specific expression and experience.

It needs to get the basics right of creating slides, but then from there you can do jazz, and it becomes very exciting. So you might be creating more of a jazz label than actually a piece of software.

I've been making this one point a lot lately, which is that the way I think about AI is with this Venn diagram where the whole thing is all the software that would be awesome to have.

And there is the software that used to be economically viable, right? Maybe with a hundred million dollars we could do something, but practically we wouldn't, right? And now what AI has done is fill out the Venn diagram: there's now more software that you can actually do, and that's also why we're seeing this emergence of startups, because suddenly there are these entire white areas on the map, which is obviously incredible. More tactically, I think my personal mantra is that iteration velocity solves all known problems, and that really just means that you look at everything you do and ask, can I make that a little bit faster?

Evals or vibes, right? It's really the same decision: is my iteration velocity bound by someone doing evals or not, and the trade-off of that changes over time. And similarly, in the past you might have said, do I get an experiment running, and then I need four weeks to get to statistical significance, where honestly you knew from day one that one side was shit. That's how a thing like this is really fruitful: look at every step of the way, how fast do I go from having an idea to something I can actually try, and then making sure it's good.

Thank you. You had said something earlier about boring industries and paying attention to them, and I was thinking about this along those lines: maybe not a boring industry, but is there an industry or industries you feel that AI has not really tapped into yet?

Yes, we know the obvious: we know about the consumer side, obviously the LLM side, and various vertical AI. But is there some industry you'd spend more time on if you had the time, or maybe something you're working on publicly that you're allowed to share?

You're like, you know what, watch out for this, because there might be someone in this room who might be working on that solution, or might have thought the same thing and might get some insights from you. We'll start with you, please. Yeah, part of this is that if it's on my radar, it's probably one of the larger segments, so I've flipped the question. The most valuable insight

today is having access to the problem that no one else is paying attention to. If you have the ability to identify those problems: find distribution, sell, get your first customer.

There are a lot of people who can build a lot of software who don't have access to that and would happily partner with you, or you could just build it yourself potentially. So if you have that, lean into it; if you don't, maybe try and do some weird stuff with your career that will get you exposed to non-consensus stuff that you can double down on. But at a very high level, I'm personally very excited for the potential in healthcare to be realized. It's very good for the world, it's very interesting, and it's very hard from a sort of reasoning and intelligence perspective, which is something that we really care about from the research side.

If you look at the history of providing leverage for doctors, there have been some small attempts: you have a human scribe, or you have a nurse practitioner, that are trying to let the doctor do less of the rote work and more of the differentiated, top-of-their-discipline type work. But that is only the tip of the iceberg. Every doctor is complaining about doing too much clicking around in EHRs and just not doing the interesting work, which is probably some combination of hard diagnoses and empathetic conversations with patients.

How do we get them doing more of that and not the busy work? Healthcare is super complex, but AI is really well suited to dealing with that complexity. I'm excited to see that realized.

For us, the AI needs to be an expert. We're trying to build Gamma such that our AI is going to be your expert design partner, so every step of the way AI can help you. That doesn't need to be the case for all applications. I think when you look at certain applications that are very successful, like customer success or customer support, and certainly coding, there's almost a failover where even if the AI doesn't get it right, a human can still be in the loop and help pick things up where it left off.

And so I think it's helpful just to understand different applications will have different needs maybe your skill sets will be well suited for one versus the other and just give that some thought as you're exploring ideas.

Can I ask you a follow-up on that, just focusing on Gamma for a second, because I realize this question might be a little outside the focus of what you're working on: is there an industry or type of vertical where you were surprised to see a lot of customer adoption of Gamma?

Early on, where we saw the most usage and stickiness was around s...

Even in large organizations, large enterprises, it would be maybe the sales and marketing teams. What we're seeing today is actually that there's demand for much more wall-to-wall deployment, which is:

Everyone's using slides as, they call it, the language of business. Large organizations use it as a way to spread information and make decisions, really tough decisions and important decisions. And so Gamma could be a replacement for that language, and that's what's been surprising to us:

We can actually create a new creative language; you can earn the right to actually be replacing the incumbent in this case.

Thank you. My response to that is going to be challenging if you guys are looking for pragmatic answers. I wouldn't expect anything less from the inventor of the hashtag. I try, on the one hand, not to over-fixate on my own lived experience, but it's hard not to because I lived it. When we were early building social media and I came up with the idea for the hashtag, we were solving a bunch of different problems.

We were trying to decentralize the social web, and the thought was that the web had been built mostly for academics and for the military, and was a way to distribute documents. And we knew,

many of us being 24 in San Francisco 20 years ago, that people needed to be represented on this network as well, and so we went to work solving that problem. The reason why I'm not going to give you a pragmatic answer is because, essentially, I and many of my friends had a vision for the future about how people would behave, and that they would probably behave more like us, and as a result we needed to make those technologies easier to use, more transparent, and more engaging. Many people at the time wrote or thought that the things that we were doing were stupid or crazy, or that no one would ever do them: who's going to post pictures of their breakfast to social media?

And now social media is this crazy force in the world. Therefore, when you think about what you're going to do over the next 5, 10, and 20 years, imagine as though you actually succeed, and if you actually succeed, what is it that you achieved?

I'm all about finding ways for organizations to have better internal communications, but it might not be a matter of actually creating better slideware to do that.

It's more about, and I don't want to put words in your mouth, but imagine it's about clarity of thought, and then communicating the right messages to the right people, perhaps with deep personalization based on the audience and what's relevant to them. That doesn't exist today: you create a slide deck, you send it around, and it's the same for everybody. It doesn't have to be that way in the future. So you asked about opportunities and things that I'm looking at: I'm still trying to solve the same social media problems and social networking problems that I was working on 20 years ago, and I still think that they're very interesting.

And I think the future is going to be made up more like digital twins and personal agents and custom agents that are interacting with and on behalf of ourselves and the ways in which we control those things are completely unknown right now.

So that's what I'm excited about. Just a follow-up question, if I may, along those lines: similar to what I asked Grant here, was there a certain behavior or certain type of audience that you saw accelerate that you didn't expect in those initial days with your colleagues, building the hashtag and building, at the time, Twitter? And was it called Twitter then? It was definitely Twitter, yeah. I guess I'll answer that differently, because I was building a conversational AI before we had the LLMs. It was incredibly brittle and it was stupid: you'd talk to it, and then you'd go a couple of turns and it would just fall off, and you'd be like, what happened?

Once I saw ChatGPT come out, I posted it to Product Hunt and it became Product of the Year. That is what caused me to become an investor, because I knew that it was the linchpin of the thing that I was missing. So the degree to which there's a younger generation coming up that is able to have conversations with artificial intelligence as though it's normal: all the old folks have no idea what's coming. They have no idea how to operate in that space. They have no idea how to feel like it's okay to have an artificial friend that you build some sort of connection or intimacy with; they have a hard enough time having intimacy with each other.

If you imagine that generation having electronic intimacy as being normal, that's going to set a whole bunch of new behaviors for the next five to 15 years. And I look at that and the adoption of ChatGPT: we were right about what we were trying to build before, but it was the LLM that unlocked that new possibility. And if you imagine language being the original software, meaning that you can change it, right, the hashtag is a piece of software that is linguistic software. It changed language so that we could communicate better through groups, and that is what we are all giving to each other through the LLM.

Thank you. I think one mistake we all are making all the time, because that's the world we live in, is that we still have a scarcity mindset about what software should exist.

It was very expensive to make software, and so there would be a small amount of software, and there would be lots of winner-take-all markets, where something that has really broad applicability kind of wins that entire market, and it's just the best product. I'd just add to what you're saying: the reason why software is hard is because we're trying to communicate with an electronic system that is quite brittle.

So when you have an idea for what you want to have happen you have to communi...

And the more that code builds up over time, you get more and more complexity, such that any different thing that happens could break it.

So LLMs allow the system to become much more dynamic, and come closer to the way that we actually communicate and think about ideas, in the sort of lossy, patternistic way that's resilient, relative to the way in which we used to build more brittle systems. So to your point, the reason why we can have so much more software is because we can communicate in the way that's natural to us, as opposed to having a very small number of wizards who could write the incantations that computers could use effectively.

And what that means is that almost none of the software that will ever be written has been written. You're asking, are there still any opportunities left? Yes, essentially all of them are still left. We're building a platform for deploying agents. Okay, we have to have some demos, and you're like, oh, I can't think of any. And so you go to your finance person and you ask them what they really hate about their job, and they're like, bam, bam, bam, and you're like, holy crap, every single one of these I can just build. Right? It used to be very difficult to build software.

And I've been having that experience in basically every field. Every field has just an abundance of opportunity, and almost all of it is implementable.

What is something you are working on right now? And please focus on a specific product; I would rather you talk about something very specific, since we have builders here. Is it a product? Is it a feature? Is it an upgrade? Something that's on your mind that you're working on in your spare time, outside of your day to day. We'll start with you. I'm currently obsessed with coding agents and how to be the best model for coding agents, and in service of that, how to accelerate the flywheel that I was just describing.

I can't share much more than that. But if there's anyone who has complaints about coding agent performance with OpenAI models, please come talk to me. Yeah, we like to dogfood a ton at Gamma. So one of the things we're just releasing now is our own API. I oversee all of our marketing efforts, so one of the things we're really excited about, now that we're going more into B2B from our consumer roots and are going to actually expand into selling to organizations, is that all of a sudden I can automate all of our deck creation.

So for any customer or client I need to pitch, I can take my notes from Granola. As soon as the meeting's done, it can send that to Gamma, and I can get an immediate recap of everything we discussed, all the deliverables, all the action items, and before the meeting's even done, I can have that emailed to the client or the customer.

So we're really excited about that. I think that allows us to kind of not only fulfill the vision of like true personalization,

but automation at scale. And I can imagine now that once we actually have a sales team to arm them with that,

it's going to be a really powerful unlock.
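For the builders in the room, that notes-to-recap flow can be sketched in a few lines of Python. Everything here is illustrative: the note format, the function names, and the recap structure are assumptions for the sketch, not the actual Granola or Gamma APIs.

```python
# Hypothetical sketch of the meeting-notes-to-recap pipeline described above.
# Assumes raw notes mark action items with a simple "TODO:" prefix.

def summarize_notes(notes: str) -> dict:
    """Split raw meeting notes into discussion points and action items."""
    lines = [ln.strip() for ln in notes.splitlines() if ln.strip()]
    action_items = [ln[len("TODO:"):].strip() for ln in lines if ln.startswith("TODO:")]
    discussion = [ln for ln in lines if not ln.startswith("TODO:")]
    return {"discussion": discussion, "action_items": action_items}

def build_recap_email(client: str, recap: dict) -> str:
    """Render the recap as a plain-text email body."""
    body = [f"Hi {client}, here's a recap of our meeting:", ""]
    body += [f"- {point}" for point in recap["discussion"]]
    if recap["action_items"]:
        body += ["", "Action items:"]
        body += [f"* {item}" for item in recap["action_items"]]
    return "\n".join(body)

notes = """Discussed Q3 rollout plan.
TODO: send pricing sheet
Agreed on weekly check-ins.
TODO: schedule onboarding call"""

recap = summarize_notes(notes)
email = build_recap_email("Acme", recap)
print(email)
```

In a real pipeline, the summarize step would be an LLM call and the final step would hand the structured recap to a deck or email API; the structure of the handoff is the point here.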

Thank you. You mentioned Granola. I'm a big Granola fan. I do a lot of coaching for founders, and I still use a very manual process in Airtable to manage all my calls. So I'm just like, I need to make this fully AI: something comes in, the notes come in from Granola, they go to Airtable, the email is sent, all of that is done.

And I'm just so embarrassed that I haven't done it yet. The other thing that I want to work on and I have been working on is creating a decentralized social network for both people and for agents. It's something that I've been working on for many years.

The protocols we started in 2008 or 2009 are now pretty mature.

And I feel like now more than ever the opportunity exists to do that. So that's something I'm doing. Thank you. To end the panel on a low note: what I'm actually spending a lot of time on is agent security. It's actually not that long ago, let's say 15 years ago, that security was in the dumps.

We weren't all using HTTPS yet. The NSA was reading everything you wrote. Everything was getting hacked all the time; if you were on LinkedIn early, all your data got leaked. And then security got really good. And now we're getting into the LLM phase, which has no security model at all. Any MCP server that you put into your agent gets to prompt-inject you without any hindrance, right? So we're in this wild-west phase and we have to figure it out.

And so that's something I really care about. One thing that's really fascinating is that we have this everybody can cook world.

Our SDRs are shipping apps to make their jobs easier, which is amazing.

So you're in a world now where the security model for apps changes. You go from ultra-competent engineers who do things the right way to people who have no business judging whether the thing is secure. So what I'm personally working on is asking: can we extract certain things from the apps so that they can't go wrong? Take auth: it shouldn't be the app's problem to make auth work. Then even if the app is completely broken, it cannot give access to the wrong person, or maybe the app doesn't control how you talk to the backend.
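That "extract auth out of the app" principle can be sketched as a tiny gateway layer. The permission table, scope names, and gateway shape here are all invented for illustration; the point is that even a completely broken or vibe-coded handler can only ever run within the scope it was granted.

```python
# Sketch: authorization enforced in a gateway OUTSIDE the app handler,
# so the app itself cannot grant access it was never given.
# All names (permissions, scopes, handlers) are hypothetical.

PERMISSIONS = {
    "alice": {"invoices:read"},
    "bob": {"invoices:read", "invoices:write"},
}

class Forbidden(Exception):
    pass

def gateway(user: str, scope: str, handler):
    """Check the permission before the app handler ever runs."""
    if scope not in PERMISSIONS.get(user, set()):
        raise Forbidden(f"{user} lacks {scope}")
    return handler()

def read_invoices():
    # Even if this handler were buggy, it only runs within a granted scope.
    return ["INV-001", "INV-002"]

print(gateway("alice", "invoices:read", read_invoices))   # allowed
try:
    gateway("alice", "invoices:write", read_invoices)
except Forbidden as e:
    print("blocked:", e)
```

The design choice is that the security model lives in one professionally maintained layer, so the blast radius of a broken app is capped.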

Again, if the app is broken, your security model doesn't fall over. Basically, there's a wild-west situation and we're trying to establish security principles in that world. Madison had mentioned that the AI Collective is a nonprofit. We're here to humanize the AI experience through in-person meetups and events, so if anyone's interested in learning more, you can check out acollective.com.

On that note, does anyone have a question? Don't be nervous now.

Yes, you sir.

This ties into the security conversation. There are vibe-coding discussions on Twitter, people saying this is the end of libraries, because you can just vibe code anything now. Is anything happening at the platform level for security, like secure by default, versus teaching AI how to deal with everything? What do you think about the juxtaposition of those two ends of the spectrum? One end is just vibe code everything; the other is pulling in a library that you know is tried and true, interoperable, modular. But now people are saying, oh, my agent can just code that, why would I use this dependency?

Yeah, I mean, it's directly what I was referring to. I think we're in the earliest innings of a maturation phase.

There was always this notion of shadow IT, where someone would graduate from Excel or build something against some backend the organization has, and maybe that wasn't good already, right?

So we've just poured fuel onto this phenomenon, but it's not necessarily new, which does mean that in large enterprise settings you have to find a way to make it work.

I think there can be a world where you end up with an architecture where you have professionally developed services, and then you have vibe-coded front ends that access those services, right? But you're retaining all the core invariants of your system in those backends. One of the biggest running jokes of the industry is how horrible Salesforce is as an interface. You can vibe code a better interface for whoever in your company is using Salesforce. That doesn't mean you need to stop using Salesforce; it's just an API now, and it can maintain your business rules, but everyone in your company will be happier.
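That split, core invariants in a professionally built backend with any number of vibe-coded front ends on top, can be sketched in a few lines. The discount cap here is an invented business rule; the point is that the backend enforces it no matter what a front end sends.

```python
# Sketch: the backend service owns the business rules, so a buggy or
# vibe-coded front end cannot violate them. All names are illustrative.

class OrderService:
    """Backend service that owns the invariants."""
    MAX_DISCOUNT = 0.20  # invariant: never discount more than 20%

    def price_order(self, list_price: float, requested_discount: float) -> float:
        # Clamp whatever discount the front end requests into the valid range.
        discount = min(max(requested_discount, 0.0), self.MAX_DISCOUNT)
        return round(list_price * (1 - discount), 2)

backend = OrderService()
# A vibe-coded front end asks for a 90% discount...
print(backend.price_order(100.0, 0.90))  # ...the backend caps it: 80.0
```

Any number of throwaway interfaces can sit in front of this service, and the worst they can do is ask; the backend decides.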

Great question. What I'd love to hear is your biggest hot takes, and by that I mean something that's truly not in the zeitgeist. It's a good question. One off the top: maybe coding agents aren't just for coding. You're alluding to this a little bit, but to your question, agents work best when they have really good tools and really good abstractions. A coding agent is an agent that's really good at using tools, interacting with environments, doing research, and making changes to things, and maybe for the n-plus-one domain it's all just a variation of: oh, it's actually just another tool, it's all kind of code.

Anyone else want to answer that? I guess I can say that conventional VC might be dead, which makes it very strange for me to be in this position, but I don't know how capitalism is going to change or evolve.

And this is also going to sound totally messed up, but you asked for hot takes, so there you go: sometimes you come up with ideas and it's just not the right time.

So communism and socialism might actually be the right idea at the wrong time. We didn't have the right means of production. We didn't have abundance. We had never scaled. And so there were a lot of externalities from going down that path; it worked very poorly for humans at the time. But in a future where you have a lot more services and agents managing things, with a deeper sense of what's going on and the metrics to know what's working and what's not, maybe some of those concepts are worth reinvestigating. Now, I'm not proposing that, I'm not saying we go directly there, but I think it's worth thinking about the ways we've tried to solve problems in the past and recognizing that the timing maybe just wasn't right.

You specifically asked about MCP: definitely an overrated technology. You either own the server or you own the agent, and very often, if you build a startup, you own both the tool and the agent, and then you definitely don't need MCP. It's only for when you want interoperability: maybe eventually there's an ecosystem where third parties build tools for agents they don't own, and that'll work precisely because they don't own the agent the tool connects to, or you're a third party and you want someone to give you access to their stuff. But if you build the agent yourself, you don't need it.
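The "own both sides" point can be illustrated concretely: when the agent and the tool live in the same codebase, a tool can just be a registered function the agent dispatches to directly, with no protocol layer in between. The registry and dispatch shape here are invented for the sketch, not any specific framework's API.

```python
# Sketch: direct in-process tool registration and dispatch, as a contrast
# to routing tool calls through an interoperability protocol like MCP.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(fn: Callable[..., str]) -> Callable[..., str]:
    """Register a plain function as a directly callable agent tool."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_order(order_id: str) -> str:
    # A hypothetical tool the agent owns outright.
    return f"Order {order_id}: shipped"

def run_tool_call(name: str, **kwargs) -> str:
    # The agent dispatches straight to the function; no server, no wire format.
    return TOOLS[name](**kwargs)

print(run_tool_call("lookup_order", order_id="A42"))
```

A protocol earns its keep only when the tool and the agent are owned by different parties and need a shared wire format; inside one codebase, a function call already is the interface.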

I want to thank everyone, especially our panelists. Thank you all for attending. This has been such a great experience, and it's truly an honor to be your moderator. Thank you. Thank you so much for joining us at The Room Podcast. If you want more from The Room every week, subscribe to our newsletter at theroompodcast.com/newsletter. We'll be back next week with a new episode and inspirational guest, Tuesday, 10 a.m. Eastern, 7 a.m. Pacific. See you in the room. Perkins Coie supports the most innovative entrepreneurs and investors in fast-moving, high-growth sectors, addressing their myriad legal needs. But the firm doesn't just provide end-to-end legal and business counseling to its startup clients; it also facilitates introductions to key advisors and sources of capital.

Perkins Coie's interactive website, Startup Percolator, offers access to programs, resources, and rich, dynamic content designed to assist entrepreneurs on their startup journey. To learn more, go to startuppercolator.com and PerkinsCoie.com, that's C-O-I-E.

This podcast is brought to you by Mercury, the banking platform businesses li...

With Mercury, you can pay bills in seconds, close the books faster, and even send invoices. Not only does Mercury do away with a patchwork of tools, it eliminates guesswork, giving you complete and accurate visibility into your business's finances, all from one account. Apply in minutes at Mercury.com.
