Welcome to the Vergecast, the flagship podcast of pointing an LLM at just a b...
to see what happens.
“I'm your friend David Pierce and I am sitting here getting ready for the next season”
of Version History. Version History, if you don't know, is our tech rewatch show about the most interesting good and bad products in history. It's a very fun show, and for this season I have had to do research that has taken me down rabbit holes about Apple history, like deep into Apple history, and into the history of the monopoly
that AT&T had for decades over the phone business in the United States. And that in particular is a story I just frankly knew nothing about. And I found myself reading a bunch of tech history books which is delightful.
First of all, I should read more books; we should probably all read more books.
At this moment in time my information system is just insane. I'm on social media, I'm scrolling through apps, I'm on Reddit. I probably read more words than I ever have but it's this like discombobulated galaxy of just stuff all the time and to sit down and just open up a book and stare at it for three hours has been like genuinely cathartic in some really interesting ways.
So all of this is to say: go books is the official stance of the Vergecast in 2026. But that's not what we're here to talk about on this episode. We're going to do two things on this episode. We're going to talk actually a bunch about AI. The first thing we're going to do is talk to Boris Cherny, who created Claude Code at
Anthropic. Claude Code came out a year ago today, Tuesday, February 24th, as you're hearing this.
And I think it's kind of become the single most important AI product out there.
So we're going to talk to Boris about where it came from, what happened at the end of last year that really made it take off, and where all of this goes from here. There's also a bunch of like product support questions that I'm going to make him answer, because I can, because he's coming on the podcast. After that, the Verge's Hayden Field is going to come on and talk to us about how to think about your own interactions with AI.
Particularly as it pertains to data privacy and security. We talked a bunch about this stuff with OpenClaw and notebook a couple of weeks ago. But I really want to get into this idea of, like, if I'm going to turn one of these things loose on my computer to build software and interact with my apps:
“How do I think about that as a person in the world with data and privacy and secrets?”
Reckoning with that feels important. We're going to talk about it. We also have a really fun hotline question about gadget buying in the year 2026 and why it's about to be so complicated. All of that is coming up in just a second. But I have a chapter of this Macintosh book to finish, Insanely Great by Steven Levy.
Highly recommend. And I have to go get Claude Code to finish something before Boris gets here. This is the Vergecast. We'll be right back. Support for the show comes from L'Oréal Groupe, the global beauty leader, defining the future of beauty through science and technology. L'Oréal Groupe.
Create the beauty that moves the world. Support for today's show comes from Darktrace. Darktrace is the cybersecurity defenders deserve and the one they need to defend beyond. Darktrace is AI cybersecurity that can stop novel threats before they become breaches, across email, cloud, networks and more.
With the power to see across your entire attack surface, cybersecurity defenders such as IT decision makers, CISOs, and cybersecurity professionals now have the ability to stop zero-days before day zero. The world needs defenders. Defenders need Darktrace. Visit darktrace.com/defenders for more information.
Support for the show comes from public, the investing platform for those who take it seriously. On public, you can build a multi-asset portfolio of stocks, bonds, and options, and now generated assets, which allow you to turn any idea into an investible index with AI. Go to public.com/podcast and earn an uncapped 1% bonus when you transfer your portfolio. That's public.com/podcast.
Paid for by Public Investing. Brokerage services by Open to the Public Investing, Inc., member FINRA and SIPC. Advisory services by Public Advisors LLC, an SEC-registered advisor. Generated Assets is an interactive analysis tool. Output is for informational purposes only and is not an investment recommendation or advice.
Complete disclosures available at public.com/disclosures. All right, we're back. So, for all intents and purposes, we're about a year into the vibe coding experience.
“And I think vibe coding to me is the most interesting piece of the AI equation right now.”
I'm continually skeptical of the idea that chat bots are the future of anything. I think there's a lot of interesting technology in a lot of these LLMs.
I think agents are a cool idea whose time has not yet come and maybe never will.
But the idea that you can use AI to write good code is just true. That thing has found product market fit.
All of the external questions about the way that these models are trained and...
that they consume, all of that is real. But the idea that you can just write code by prompting is here and it is real and it is powerful. Let me just give you one example in my own life. So, I am constantly switching productivity apps, which means I have a bunch of notes in like 10 different apps.
This is a terrible system because I can never find anything, but I take notes on meetings.
I have like interview transcripts. I have all kinds of stuff just sort of scattered around. And over the last couple of days, I've been using Claude Code to pull all of that data out of all of these different apps, put it all into one place in this app, Obsidian, and then actually structure it in a way that makes sense.
So, without any manual labor or moving stuff around or messy copying and pasting on my own, I have just been able to tell Claude Code, in this case Cowork, which is a version of Claude Code, where my stuff is and just have it go do the busy work for me. That's powerful and meaningful and a big deal, and is a thing that would have taken me a much, much longer amount of time to actually do.
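(As an aside: if you want to try this kind of consolidation yourself, Claude Code can be run non-interactively from the terminal with its `-p`/`--print` flag. A rough sketch, where the export paths and vault location are made-up placeholders you'd swap for your own:)

```shell
# Hypothetical one-shot prompt; the paths below are placeholders, not real exports.
# -p / --print runs Claude Code once, non-interactively, and prints the result.
claude -p "Read every note under ~/exports/notion and ~/exports/apple-notes, \
deduplicate them, and file them into my Obsidian vault at ~/Obsidian/Main, \
organized by topic."
```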
And that's just the tip of the iceberg of what tools like Claude Code promise. So, Claude Code launched, like I said earlier, a year ago today, Tuesday, as you're hearing this. And this felt like a good moment, for a variety of reasons, to check in on where we are with Claude Code in particular, but also with this idea of giving everyone the tools to write software
in general. But Boris Cherny, who created Claude Code at Anthropic, by accident is probably too strong, but certainly not imagining that it would become what it has. He and I talked about what vibe coding means, where it's going to go from here, whether or not there is a future of something like Claude Code that is actually useful and usable
for most people, and how we're supposed to feel about the end of people writing code at all. It's an interesting conversation. I really enjoyed it, learned a lot about how to think about
Claude Code and other things like it in my own life. I think you'll enjoy it too.
Let's get into it.
Boris Cherny, welcome to the Vergecast.
Yeah, thanks for having me. You've talked a lot about kind of the history of Claude Code, and where it came from, and how you made it, and now that it's a year old, I think the thing I'm particularly curious to talk about is your relationship with coding now. One of the things I saw in all of those interviews I've been watching is everybody does
the YouTube thing where they grab the splashy quote at the beginning and do it as sort of the cold open, and then it gets into the interview, and over and over, it's you saying, I don't write any code anymore, Claude Code does 100% of my coding. And this is a big revelatory statement to have made, and I want to get into what that actually looks like. But over the course of the last year or so, as you've been building it, have you undergone
basically a complete re-identification of what it means to be a coder and developer at this point? It's surprising how little of a change it's actually felt like as someone that writes code.
I think part of it might be that in some ways engineers are used to change, because our
tech stack is changing all the time. There's always a new technology, there's always a new framework
and a new language. It's just kind of part of the job, always re-learning. It's like every three years there's a new stack and a new language that's popular, and so we're just used to kind of figuring it out and learning the latest thing. In some ways, it's felt like a big jump, because the big change over the last year is I don't
work with source code anymore. I don't look at the code of the program as much as I used to. I don't write any of it anymore, and that's been kind of a big change from back when we released Claude Code originally in February. It was "Sonnet 3.5 new" or "3.5," actually, a couple of the terrible names we gave that
model. I think it was 3.5 new. We should have called it like 3.6 or something. Yeah, AI model names: not famously great in the industry right now. It's not our strong suit, not our strong suit.
So we released it, and back then, you know, Claude Code was writing maybe 10% of my code.
When we released Sonnet 4 and Opus 4 in May, I think that jumped for me to like 30%
or something. And it kind of crept up over time. But back in November, when we launched Opus 4.5, that's when it just suddenly jumped for me, from like 50% to 100%, and that was actually very sudden, but it also just felt very natural. What does that change look like?
You just wake up one day and realize, "Oh, this thing has stopped making mistakes. I don't need to do it anymore"? Yeah. As an engineer, the way that you would code maybe, I don't know, middle of last year, is you kind of use an agent as the first pass.
But then the code isn't perfect, there's a bunch of stuff that doesn't work. So then I have to go in and assess the code, I have to open it in a text editor to make some final changes to it. And what I realized around Opus 4.5 is, one, Opus is now testing my code.
This is kind of cool, like, you know, it's running the tests, but it's also
able to open the browser, and it's able to kind of verify that, you know, the website works correctly. It can click around, and if something is off by a few pixels, it'll kind of move it over and fix it. And then the second thing is the code is just really good. So I don't have to open a text editor anymore, I don't have to fiddle with it by hand.
And that's actually kind of nice, because that means I can move on to the next thing and just write a little bit more code, a little faster. It really does feel like that Claude moment sort of happened overnight. It was like, everybody went home for the holidays, got bored, used Claude Code, and went, "Oh, my God."
And we were sort of off and running. But it seems like you as the person who pays incredibly close attention to it all the time, also had that bigger kind of overnight shift in how you think about it. Was it just big new model, all of a sudden, had this new capability that no one was expecting
it to do this well? Like, what accounts for that bigger change, that quickly?
For the longest time, coding has been a thing that we just want the model to be really good at, because, you know, it's essentially the road to safe AGI. This model is going to be very intelligent, at some point it's going to be superintelligent. Our job at Anthropic is to make sure that goes well, and that it's done in a safe way, so the model doesn't do bad stuff, and, you know, it's aligned with the interests
of the users, and of humanity broadly. And the model is software, and the way that it interacts with the world is the tools and the software that it writes. And so for us, for the longest time, we've had this belief that the way to safe AGI is through coding, and then kind of tool use, some computer use, this kind of increasing capability
to interact with the world, but it's always mediated through code.
When you do model training, you try a lot of stuff. There's a lot of experiments, there's a lot of new ideas that people are trying all the time. Most of the time it just doesn't work, but sometimes it does. And, you know, for Opus 4.5, the direction was kind of set early on, because we knew where
we wanted to be headed, but it just turned out that a bunch of good ideas worked, and there was just a big step change. It was just as surprising for me as it was for everyone else. One of the things I have been trying to figure out, and one of the things we've talked a lot about on this show, and at the Verge in general, is ultimately who the end user of something
like Claude Code is. And I think right now it's fairly clear, right, especially for a product in the terminal: it is a developer product for developers. Is that fair to say right now? We designed it as a developer product for developers, but even from the earliest days, all sorts of non-developers started using it, and this was just a crazy surprise, but also, you
know, the best possible thing that you can see in a product is people want to use it so much, they jump through hoops to use it. Yeah, that is definitely a thing we've seen with a lot of these tools. I mean, I was playing around with OpenClaw and some of this stuff like that, and the
amount of work you have to do as just a normal layperson to get some of these things up
and running is pretty remarkable, and yet people are willing to do it.
I suspect there are a lot of people who had never heard of their terminal until Claude Code
started to happen in their lives. Yeah, that's right. And now all the biggest companies in the world use Claude Code: Spotify, Shopify, Netflix, Nvidia, Snowflake, Salesforce. Everyone uses Claude Code. The small startups use it, but also I think what we're starting
to hear is, even at these bigger companies, a lot of people that are not engineers are using Claude Code. I think Ramp just wrote about this pretty recently, that they have a bunch of product managers, data scientists, a lot of people using it. So even at these biggest companies, this is kind of what we're seeing. And this was also, by the way, the reason we launched Cowork: we saw people using Claude Code for things
that are not coding, and we're like, all right, I think we can do better than a terminal for you. And so we built a thing that we think they would actually want to use. And this is the thing we're still learning about, and we're seeing how people actually use it.
Yeah, so talk to me about that early signal a little bit, when you started to see people who are not developers, who are not traditionally people who would be in an IDE and thinking about code and thinking about the terminal, start to use this product. I remember walking through the office, and Brandon, who's our data scientist, was using Claude Code in a terminal to do data analysis, and he had like little charts in the terminal and
stuff, and I was like, this is just crazy. There's no way this is the best way to do it.
And he was like, "Well, it's great." And the next day, he had like three Claude Codes running at the same time doing data analysis in parallel, and then all of the data scientists started using it. But I actually still didn't really get it, because I thought there was something weird about maybe people that work at Anthropic, maybe they were very early adopters, more willing
to try these new tools, because, you know, engineers are always the early adopters,
and they try things, and then eventually everyone else tries the thing. But I think by the time that, I think now like half of our sales team uses Claude Code every week, I think when that started happening, that's when I really started to get it, that this is a product that's not just for engineers, and we've got to make that easier.
Yeah, I would think that realization would lead you in one of two directions.
One is to say, okay, actually we're giving people access to a developer tool, and maybe we should keep doing it in a developer way, right? Maybe having people understand what
the terminal is, and what their computer is, is not the worst thing in the world, and if people are willing
to go through these hoops to do this thing, maybe we're onto something, maybe we don't need to sort of radically rethink the UI, because people are figuring it out. Or you look at that and say, we need to radically rethink the UI, because these people are having to jump through these crazy hoops just to do the work that they want to do. Do you have a stance on which one of those is the right reaction?
Yeah, so I mean, we started in a terminal, but pretty quickly we started experimenting with other form factors too. So we have IDE extensions for VS Code, Cursor, the JetBrains IDEs; we have iOS and Android apps. I actually do probably a third of my coding on the iOS app nowadays. Really?
I never would have predicted that, but that's where we are.
We have a web surface, there's a desktop app. So, you know, the same desktop app that has Cowork also has Claude Code in it, so you can use the exact same Claude Code. So we're just always experimenting with this, but yeah, the surface is just a little bit different for different kinds of users.
So Cowork under the hood is just Claude Code. It's the same Agent SDK, you know, it's the same exact agent that's running everywhere. But for people that aren't engineers, we want it to be a little less footgunny. Like, we don't want people to mess up their system and things like this. So we actually ship a whole virtual machine.
We have deletion protection built in; there's a whole bunch of things that we built for less technical users that engineers would actually find kind of annoying and wouldn't want in the way. For engineers, there's something a little bit different about the tool, because engineers love to customize everything.
If you talk to two engineers, they're going to use their tools totally differently. There are no two engineers that have the same setup. And so the way that we build Claude Code, across every surface, across terminal, IDE, desktop, everything, is we want it to be the single most customizable dev tool that anyone has used. So it's very, very comfortable: you can hold it however you want, you can customize
it however you want, there are hundreds of ways to configure it. And what's also kind of cool is, because Claude Code is Claude Code, you can just ask Claude Code to configure it for you. So you can just be like, you know, change the theme, or change a setting, and it can just do that for you.
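(Claude Code also exposes ordinary configuration commands for this kind of thing; as a small example, assuming the `claude` CLI is installed, changing the theme by hand looks something like:)

```shell
# Set the global theme directly, without asking Claude to do it for you.
# `claude config` is the Claude Code CLI's configuration subcommand;
# -g applies the setting globally rather than per-project.
claude config set -g theme dark
```

Or, as Cherny says, you can skip all of that and just type "change the theme to dark" into Claude Code itself.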
See, this is one of the things I have really enjoyed about my own experience with Claude Code: there's so much of it that is sort of relearning what's possible in certain ways. The idea of asking Claude Code to reskin itself because I don't like the color scheme
just never occurred to me.
I don't like the color scheme and I would like a different one, but it literally had just never occurred to me to ask this thing that is writing code for me to write that bit of code for me. And I feel like this is kind of why I'm curious about your own relationship with writing code.
Is it just, yes, there are certain things you have to do? But I feel like, I don't know,
I would think of learning a new coding language as like learning how to play a new kind of instrument, right, where a lot of the behavior is the same. I am just pointed in new directions, with new details and new systems to figure out. But this is like, you know, you used to play the violin and now you're on the soccer team. It's just a completely different way of thinking about how to use your body.
Do you know what I mean? Yeah, yeah, yeah. The way I would think about it is, you used to play the violin and now you're conducting the orchestra. Hmm, okay.
That's a great way to think about it. But it's also, yeah, I mean, the hardest thing for me is just changing expectations every
time a model comes out. It's just so quick, you know. This thing that just never
would have worked with Sonnet 4 or Sonnet 3.7, with Sonnet 4.6 it just works. And I just have to constantly rewire this. All the stuff that I would have thought, you know, didn't work, I just assume it'll work at some point. Do you have like a list somewhere of all the things that are broken that you try every
time a new model comes out, and just check some things off the list?
Essentially, anything that I do by hand. Oh, interesting, okay. Yeah. So, for example, Sonnet 4, Opus 4, and even 4.5 were okay at this; before that, the models were not great at it. We have a feedback group, we have this Slack channel where all the Anthropic employees give feedback about
Claude Code, and we have a lot of external feedback channels for customers, and GitHub, and things like this. And before, the model was not very good at looking at the feedback channel and deciding what to do and what to fix. But now, actually, a lot of the code that we ship for Claude Code, you know, Claude
Code is 100% written by Claude Code at this point. But also, I would say maybe 20%, 30% of that is Claude Code just looking at the feedback group, figuring out the kinds of things people are reporting, and then automatically fixing it. And this kind of productivity just would not have been possible with older models.
But with Opus 4.5 and Opus 4.6, it actually just started working. Yeah. What have you learned with Cowork in particular? Because I would assume, like you said, there's a different set of people coming to Cowork than to Claude Code, with a different set of expectations and a different set of knowledge.
Are they using it kind of radically differently and making you rethink this whole system
all over again?
You know, the most surprising thing: at this point, with Claude Code, there's some study that
it writes like 4% of all the commits in the world, you know, of all the code in the world.
I think the number is actually quite a bit higher than that, because it's not including private
code, and also our growth hasn't slowed since that study; it's actually going up even faster than before. So I think it's actually quite a bit higher. In the early days, though, Claude Code did not grow very fast. It was not a hit originally.
It took a few months to catch on, because it was just such a new idea. Cowork, on the other hand, has been a hit immediately. As soon as we launched it, it's just been, you know, exponential since, and this is what we like to see, because we also, we think in exponentials. So I think the biggest thing that's been surprising is just how quickly it's been growing,
how quickly people have figured out how to use this. Why do you think that is? What do you feel like you got right about Cowork? There was just pent-up demand, I think. That was the biggest thing: demand for something a little more understandable. Yeah, more understandable.
Like, you saw these people on Twitter using Claude Code for everything: recovering corrupted photos off of a hard drive, someone used it to recover wedding photos. Pietro, I think, who actually used to work at Anthropic, used it for, I think it was genome analysis: he got his genome sequenced and then used Claude Code to look at, you know, specific sequences and stuff.
Claude Code: not intended for medical advice, but they're using it for MRIs. So I think this pent-up demand is the single greatest thing that you can see in a product, because it just means people are knocking down the door, and they're jumping through hoops for, you know, this terminal thing that wasn't really designed for this. Yeah.
So it was pretty obvious, I think, that it would be a hit. One of the most interesting things about Cowork in particular, to me, has been that the product itself is really focused on, uh, sort of busy work, I guess, is the way I would put it. Like, you open it up and one of the first things it offers is to organize your screenshots, right?
Where it's not, it's not build a dashboard of your entire life.
Like, one of the jokes I always make on the show is that everybody looks at AI tools
and the first thing they say is, I want to build a daily planner, because all of my information is everywhere. This is like the first idea everybody has about what to build with AI: just
a thing to tell me what matters in my life. Um, but I think the real truth of like software
forever is that this stuff all starts by just sort of solving relatively straightforward, relatively simple problems for people. Like, I need to do math, and so spreadsheets succeed, right? Like, this is what it is. And I think, to me, one of the most eye-opening things about Cowork was it just has a bunch of ideas of little things it can do for me that would take
me a long time to do on my own, that aren't hard. This is a tool that will automate away a bunch of my busy work on my computer. And my sense is that is the kind of thing that just every single person in the world resonates with, in a way that strikes me as very powerful. And it's not as open-ended as, you can build any kind of software you can imagine, or you can talk to this chatbot about
anything. It's: organize your screenshots. And I think that is like a surprisingly powerful
bit of product to put in there like that. Yeah, absolutely. And if you want to build something, you just, you know, hit the code tab in the desktop
app and you can go build whatever. But if you want to organize your desktop,
like, I actually use Cowork for a lot of stuff. I used it to pay a parking ticket the other day. I was up in Seattle, we went clamming, and I used it to purchase a clamming license. That was pretty awesome: I just did something else and it navigated this actually kind of annoying government website to do it. Someone on the team is using it to pay their taxes right now, so, also not financial advice, but it's actually
quite useful for all these different kinds of stuff. This is one of the things that's also kind of hard to explain to people. People ask, what do I use it for, and my answer is, well, kind of everything. All the stuff you didn't want to do by hand, it can just do, so you can do the stuff you actually want to do.
Yeah, so I think, okay, let's talk about doing taxes, which I know is just an example off the top of your head, but I think is a useful sort of middle ground for the kinds of stuff that I think about a lot with AI. Organize my screenshots is relatively low risk, right? Like, it might delete a thing that I didn't want it to delete, but in general, it's just going to put things in places and delete stuff off of my computer
that I don't want, and I think you can get people comfortable with doing things like that on their computers fairly quickly. Have Cowork do my taxes just has naturally more consequences, right? And I think part of, I know a question you get asked a lot, and also a thing that I think is tricky with a lot of these tools, is it's one thing to have it write code that I can then go check. Even if
I don't, right, the responsibility is back on me to check it and make sure that I understand where it is, and code is legible to me as a developer. But if I'm just a person and I'm like, "Cowork, go do my taxes for me."
How much faith is it reasonable or fair or rational to have in Claude Code or a...
go just execute that entirely on my behalf at this point? The tools are not perfect, and it's still early, but they are surprisingly good at things that people often expect they would not be good at, and again, they just improve with every model. For something like taxes, I would definitely, definitely double check it.
And actually, the thing that I would do is say: do the taxes, but then triple check your results. It'll do that work for you, too. And then by the time you check it, there's a very high chance it's just going to be pretty good. Yeah, actually, to your point, you can have it test itself.
That actually, I think there's something very powerful about that too. But part of
the reason I bring up taxes is because the last innovation in tax software was that it will scan your W-2s for you. I remember this being a very big deal in my life, that I didn't have to type it all out. I could just upload the PDF of my W-2, and it would just pull in all the information.
And I remember for a minute it was like, okay, you have to check that, because the scanning system is imperfect, the software won't get it exactly right. But now, like, I don't remember the last time I double checked the numbers. You just upload the W-2, it shows up in the field, and you move on with your life. And I wonder, it feels like we are just barreling towards that with all of these tools
too. It's like, there's going to be a beat of, I mean, I guess it's like your experience with Claude Code. There's going to be a beat of, I need to check its work, and then a beat of, well, spot check it. And then we get to, I'm just not worried about it anymore, and that's the right end state.
It just doesn't feel like we're quite there yet. Right, right. I mean, it's two things that happen at the same time: the model gets better and the product gets better, and then, as users, we get more comfortable with this thing, and both things kind of happen
at the same time. But before we released Cowork, I was using it to do all of our project management for the team, when I was first testing it out, and I still actually use it for this every week. So we have a spreadsheet of kind of all the things the team is working on, and we ask the team
to just fill out their status every week. So just say, is it on track? Is it off track? And I just have Cowork ping people on Slack if they haven't filled it out. So all I do is, I'm like, hey Cowork, open the spreadsheet, and then for anyone that
hasn't filled it out, message them on Slack. It'll just do it perfectly. There's actually one person whose name, for some reason, it can't figure out on Slack, so I have to do that one. But otherwise, it just does it.
And I was actually kind of taken aback, because I didn't even realize that it would be able to do this.
So I was just like, experiment with this, double check it until you're comfortable, but I think
we'll be there pretty soon. In a case like that, does it message people on Slack as you, or as like a bot? I asked it to sign its messages as Cowork. That's smart. Okay.
Yeah. And Cowork actually supports this now. In Claude Code we have this idea called CLAUDE.md. It's just a special file that has all the instructions you want Claude to take into account every time.
So Cowork also supports this now. You can just say, whenever you message people on Slack, sign yourself as, you know, Cowork bot or something, and it will just do that. Yeah. That's smart.
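(For context, a CLAUDE.md file is just plain Markdown that Claude Code reads at the start of every session. A minimal sketch, with invented instructions for illustration:)

```markdown
# CLAUDE.md — standing instructions Claude reads every session

- When messaging people on Slack, sign off as "Cowork bot," not as me.
- Before committing any change, run the test suite.
- Keep to the project's existing code style; don't reformat unrelated files.
```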
Yeah. I think there's a little bit of transparency there that I think is interesting. Like, I remember you said in one interview I was watching that Cowork would occasionally,
in the course of doing stuff for you, go and tweet on your behalf, and that that always felt
kind of strange. Yeah. Yeah. Yeah. It's funny actually.
Claude Code does this too pretty consistently now. When I'm, like, debugging something, sometimes Claude will be like, hey, this code is kind of weird, let me look at the history. And when it looks at the history of the code, once in a while it sees a really weird change by someone and it'll message that engineer on Slack just to get context.
It'll wait on the response, and then I've also seen it push back. So, like, the engineer is like, yeah, I did this change for this reason, and then Claude Code is like, well, I don't think that's a very good reason, and I think you actually introduced a bug, so let me go ahead and fix that. How are you thinking about the rest of the UI around this stuff?
I think, like, so much of Claude Code and Cowork is very chat-based still. Does that feel like the right UI to you going forward, or is there more work to do there? We are constantly experimenting with new ideas. I think the UI of the future has not been discovered yet. So we have a lot of experiments in flight.
I would expect that to change; there's going to be a lot of things that we test.
The single most important thing is just seeing what people want.
So I'm on Twitter and Reddit all day, and so is a lot of the team. We just love talking to people, we love getting the feedback, 'cause we have a lot of ideas,
“but the only way to figure out what the right ideas are is to see what people say and to see”
what people enjoy. I agree with you that the UI of the future has not been discovered yet. Do you have a hypothesis at this moment in early 2026 about what it might be? I don't yet. I don't think we've found it, to be honest.
I think there's a lot of ideas around proactivity and Claude jumping in when it knows that you're going to need help, but it's hard to get this boundary right, because you don't
want to end up with something like Clippy.
It speaks to the progression you're talking about a little bit, from playing the violin
to conducting the orchestra. It's just a different set of tools that are available to you
“when that's what you're thinking about, but also, you mentioned going from basically code”
to tool use to computer use. Can you just walk me through what that progression looks like as we go through it? Because I think we've heard a lot about agents, to the point where I think the word agent essentially means nothing. Agent is just, like, magic that happens on your computer, and it's a shorthand for whatever.
But I think you're thinking about this in a much more sort of practical, how-do-we-give-this-thing-more-powers kind of way. Why does it go code, tool use, computer use? Yeah, oh my god, don't get me started. Okay, I will get started. The word agent, actually, everyone just misuses it. It has a really
specific meaning when you talk about AI research, when you talk about engineering. So an agent is an LLM that you talk to, but the LLM can use tools. This is the thing that makes it an agent: it can use tools. And so if you think about it, without tool use, the agent can write code. So let's say you give it a prompt and it can kind of write some HTML or something, and
then as a user you take this and you kind of copy and paste it into, like, an IDE or something like this. So this is just, like, the coding capability. And as the model gets smarter, it gets better and better at working with big codebases.
But there's still kind of this problem that you hit where at some point you just can't give it all the context it needs. But you know, the model actually does know the context that it needs because it's able to search around and it's able to look throughout the entire code base. It's able to look at Slack, it's able to look at like the history of the code.
It's able to do all of this, but it's just too much information, like you wouldn't be able to give it all the information upfront.
And so the answer is tools, you give the model tools and it can use a tool to look at
the code, it can pull in more files, it can look at history, it can do all the stuff.
“And so this is why tool use is important, it's, you know, the same as a person.”
If you don't have tools, you actually can't do a lot, like, just with your hands, right? You need, like, a keyboard, you need shovels, and if you're cooking in the kitchen, you need a whisk. There's just not a lot you can do without them.
So it's kind of the same thing for a model. And then when you think about computer use, there's just, like, a lot of things that are kind of hard to interact with just with tools. So if you think about what you can actually do with a tool on a computer, it's something like MCP or it's an API or it's a command line interface, but not everything has that.
So, you know, like, if you have, I don't know, this clamming thing, I was getting this clamming license, and, you know, there's no API for that, but there's a website. And to use the website, you want the model to be able to use a browser, you want it to be able to use a computer.
And so this is kind of this natural evolution. So you start with coding, then you move onto tools and this is the way to interact with the world, and so you don't have to spoon feed the model context. It can just use the tools to pull in context. And then computers are kind of the last thing, because then the model can just use everything.
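That tool loop, where the model proposes a tool call, the harness runs it, and the result goes back into the model's context until it can answer, can be sketched in a few lines. This is a toy illustration of the general agent pattern, not how Claude Code is actually implemented; the "model" here is a stub function standing in for an LLM.

```python
def run_agent(model, tools, prompt):
    """Minimal agent loop: the model either calls a tool or returns an answer.

    `model` maps (prompt, observations) to either ("tool", name, arg)
    or ("answer", text). Each tool result is fed back as an observation.
    """
    observations = []
    while True:
        action = model(prompt, observations)
        if action[0] == "answer":
            return action[1]
        _, name, arg = action
        observations.append(tools[name](arg))

# Toy "model": reads one file before answering.
def toy_model(prompt, observations):
    if not observations:
        return ("tool", "read_file", "config.txt")
    return ("answer", f"The file says: {observations[0]}")

tools = {"read_file": lambda path: "debug=true"}
print(run_agent(toy_model, tools, "What does config.txt say?"))
```

Real agents differ mainly in scale: the model is an LLM API call, the tool set includes file search, shell, and browser actions, and the loop enforces permissions and context limits.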
Okay. Do you think, as AI continues to grow, and if it takes over all of software and computing the way that a lot of people think it's going to, that the computer use part eventually becomes sort of outdated? Like, if there were enough tools and enough MCP access and enough of the stuff that you're
talking about, is computer use just sort of an elegant hack that gets around the stuff that maybe will exist later, and we won't need it? Early on, in the early days of using the model for coding, people were talking about designing special programming languages to make it so the model can code better. Right.
And I always thought this was kind of silly because the model can just figure it out.
You know, it's not like us, where there's a programmer that likes Python, and there's another one that likes JavaScript and won't touch Python. The model's not like that. It can just write whatever language; it doesn't care.
“So I think it's kind of the same thing here.”
I think over time the model doesn't care. Whatever tools you give it, it will be able to figure them out, and it can use those tools to do things for you. Talk to me about how people should think about their own risk profile in giving access to their data and their computer and their files and their photos and whatever to a system
like Claude Code or Cowork. I think you have, you know, lots of incentives to tell me that it's totally fine, you can have all the stuff on my computer, we're putting the safeguards in. But how should people think about what it means to give Claude Code access to a folder on my computer, even something like that?
Yeah. Totally. So I would think about it on a few levels. And the most basic level is, like, why does Anthropic exist? We exist to make safe AGI. You know, initially we had a bunch of founders that, you know, left a different
AI lab and came and started Anthropic, which people have heard of. Yeah.
I'm sorry.
But this is the reason we exist. And a lot of the core here is safety. You know, security is actually very important
“if you want to get safety right. Privacy is very important”
if you want to get safety right. And all of this stuff we sort of have to do. We're very lucky that we care about safety,
and so does our most important target customer, which is enterprises and companies.
You know, there's a lot of, like, consumers that use Anthropic products. This is awesome, and this is something we love to see. We will build for you. But actually, like, the main market we care about is enterprises and companies.
And we're very lucky, and we picked this market on purpose, because we know enterprises care a ton about safety and security and privacy. And so we build for them. And so, like, if you look at the product, it's actually kind of annoying for me, because
if someone has, like, a Claude Code bug report or something, I literally cannot see your data. So, like, I need you to give me reproduction steps so I can reproduce it. But I literally can't access your data to, you know, see the issue. So there's a lot of controls like that in place.
Also because we care about safety a lot, there's a lot of work that goes into just making
the model inherently more aligned and interpretable.
“And this is just very important, and also very related to this.”
And yeah, I mean, the final thing is there's just a lot of stuff that we build into the product. Like, Cowork can only see the folders that you give it access to. It cannot see anything else on your computer. We put an entire virtual machine in Cowork to make sure there's a really hard security
boundary, you know, so it can't access stuff that you don't give it access to. The biggest thing to worry about is attacks like prompt injection, anything like this that would kind of exfiltrate your data. We have a lot of protections in place for this. And Opus 4.6 is just the most aligned model that we've ever built, for prompt injection
in particular. And there's also a lot of, like, runtime classifiers and kind of safeguards that we put in place for this. But this is the biggest thing that I would think about: as you have Cowork, as you have Claude Code interact with the internet,
just be thoughtful about what websites it's using, and it will ask you for permission. But it's a thing to keep an eye on, because this is not a solved problem yet. It's quite good, but it's not yet solved. That's a good one. All right.
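The permission prompt described there is, at its core, an allowlist with a human fallback. A minimal sketch, assuming a hypothetical set of pre-approved hosts and a callback standing in for the ask-the-user dialog:

```python
# Hypothetical user-approved hosts; a real agent would persist these.
ALLOWED_HOSTS = {"docs.python.org", "github.com"}

def request_permission(host, ask_user):
    """Gate agent web access: auto-allow known hosts, otherwise ask the user."""
    if host in ALLOWED_HOSTS:
        return True
    return ask_user(host)

# Simulated prompts: the "user" approves example.com and rejects evil.test.
decisions = {"example.com": True, "evil.test": False}
ask = lambda host: decisions.get(host, False)

assert request_permission("github.com", ask) is True
assert request_permission("example.com", ask) is True
assert request_permission("evil.test", ask) is False
```

Allowlisting limits where exfiltrated data could go, but as noted above it doesn't stop a prompt-injected model from misusing an already-approved site, which is why runtime classifiers sit alongside it.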
Give me one, like, normal human Cowork activity that lots of people should do, that you've either done or are building or have heard about from people, that not everybody might expect they should go do, and then I'm going to let you go. Oh, normal human. Okay.
One is just, like, responding to email. Just, like, open my Gmail, look at the top three things that I should respond to, draft responses. It can do that quite well. A second one that I do is just, like, canceling subscriptions. So I actually used it to cancel, like, a TV thing that I wasn't watching.
That's the most unbelievably annoying thing to do. I'm going to make Claude Code unsubscribe from all of my email newsletters that I don't want anymore. This is going to work for me. Yeah.
I love this, like, dual track. Like, you can use it to write the emails and also unsubscribe from email.
Yeah, exactly. I just never want to look at my email ever again.
If Claude can make that happen, we will have accomplished something. All right, Boris, thank you so much. I really appreciate you doing this.
Yeah. Thanks, David. We'll be right back. Support for the show comes from Bilt Rewards. Unfortunately, no one will pat you on the back for paying your rent on time every
month. But you can earn rewards on your rent with Bilt, which is like a pat on the back, but for your wallet. It is the loyalty program for renters, and it rewards you monthly with points and exclusive benefits in your neighborhood.
With Bilt, every time you pay your rent, you earn points that you can put towards flights, hotels, Lyft rides, Amazon.com purchases, and so much more. And now it's not just for renters: Bilt members can earn points on mortgage payments too. Being a Bilt member also unlocks exclusive benefits with more than 45,000 restaurants, fitness
studios, pharmacies, and other neighborhood partners. Join the loyalty program for renters at joinbilt.com/verge, that's j-o-i-n-b-i-l-t.com/verge. Make sure to use our URL so they know we sent you. Support for the show comes from Shopify. Starting a new business can be a lonely endeavor, especially in the beginning.
If you're just starting out, it's more important than ever to make sure you have the right tools at hand. If your business includes e-commerce, a great next step is to try Shopify. Shopify is the commerce platform that millions of businesses around the world rely on to sell their products online.
You can get started with your own design studio with hundreds of ready-to-use templates. Shopify helps you build a beautiful online store that matches your brand's style.
“If you're asking yourself, what if people haven't heard about my brand?”
Shopify helps you find your customers with easy-to-run, email, and social media campaigns.
If you get stuck, Shopify is always around to share advice with their award-winning
24/7 customer support.
“It's time to turn those what ifs into action with Shopify today.”
You could sign up for your $1 per month trial and start selling today at Shopify.com/vergecast.
Go to Shopify.com/vergecast, that's Shopify.com/vergecast. Support for the show comes from Upwork. When you run your own business, the to-do list can feel endless. Well, that might be a sign that it's time to grow your team. And for that, there's Upwork.
Thousands of growing businesses already trust Upwork to hire flexible, high-quality freelance talent for everything from one-off projects to ongoing support. Upwork also cuts down operational hassle by handling things like contracts and payments in one place, so you can spend more time running your business. Upwork is free to use, but if you decide to upgrade to Upwork Business Plus, you'll get
access to the top 1% of talent on Upwork, and with AI-powered shortlisting, you'll get matched to the right freelancer in under six hours. You can visit Upwork.com right now and post your job for free.
That's Upwork.com to connect with top talent ready to help your business grow.
That's U-P-W-O-R-K.com, Upwork.com. All right, we're back. Hayden Field, the Verge's senior AI reporter, is here. Hi, Hayden. Hi.
You were here recently, and we were talking about Notebook and OpenClaw and all of the insane things that you can do on your computer with AI tools and AI agents.
“We talked a little bit about privacy and how to think about whether or not you should engage”
with these tools and install them and what kind of data you should give to them. I've realized I've been having varying levels of existential crises about AI tools, starting with: I had a real experience with OpenClaw, where I downloaded the installer for OpenClaw onto my computer. I have a Mac Mini and I have a MacBook Air, and I was on the MacBook Air, and I downloaded
OpenClaw and I was like, "I'm going to use this, get into it, see what it's like, try the whole thing out." I got literally halfway through the install process and I was like, "This is so stupid. This computer is full of all the information I care about in the world, and all of this stuff that I know about everyone that I know, including, like, important confidential information
as a journalist. Giving this unknowable AI agent access to this is insane." So that's one level. But then even, like, I mostly use Claude for AI stuff, and one thing Claude really wants you to do is connect your Gmail and connect your Google Calendar, and I've had moments of being like, "Is this an irresponsible thing to do? Like, am I being stupid, giving Claude
access to my email?" So what I want to do, as best we can, is just try to think through sort of a how-to-think-about-your-data-and-AI framework. Okay, does that seem reasonable? Perfect.
I've been asking the same questions. Okay. So let's just start kind of big picture. I want to get into some sort of nitty-gritty. I literally want you to tell me if I should give Claude access to my Gmail.
But we'll get to that. You've been reporting on this a lot and talking to experts and trying to think through this for yourself. Do you have kind of big picture guidance on just how people should be thinking about this stuff?
Yes, and this is perfect, because the big picture is the easiest to get at. Okay, it's really a different decision for each person, depending on their risk tolerance and, like, the other ways they live their life.
“So honestly, the big picture is the easiest kind to square, and then everyone can kind of make”
the rules for themselves. But I did a bunch of expert interviews this week just to make sure my instincts are kind of on track with what actual privacy experts and tech leaders are thinking.
And it seemed like they were, in that, basically, it's hard to give people good advice on
this stuff that stays current over time. That's what one of the privacy experts I was talking to said: you know, it's like, every six months things could change, every month, every year. So, you know, as of this moment in time, a lot of people are essentially kind of ignoring the way that they usually, you know, evaluate their risk tolerance, and just kind of adopting
AI tools that are going viral or being talked about a lot, just because of FOMO or, like, you know, the promise of making your life a lot easier. We all as humans want to make our lives easier. You know, one expert I talked to said it was like the siren song, or like teenage brain mode. It's like, you know, you just want to have the short-term gain and make your life
easier. You don't really want to think about the long-term stuff sometimes. That's fine. But teenage brain mode is such a good way to think about it. Yeah, it's like not quite full, like, YOLO mode, like LOL nothing matters.
But it's a little bit like, my brain is just not yet fully developed. Yeah. And he literally told me, it's like, you know, not doing your seatbelt and just being like, oh, it's fine, I'll just drive down the road.
So yeah, exactly.
It's like, you know, basically you need to treat AI tools the exact same as you would
any other service that was requesting a lot of data from you, and maybe even with a sharper
“eye, because these companies are newer, they're less time-tested, and they're also more”
incentivized to move quickly, and they have a little bit less, you know, regulatory framework on them. You know, a lot of times they're voluntarily complying with certain rules, you know, and that's all well and good. But, you know, something that a bunch of experts told me is, yeah, they can change that
at any time. You know, they can kind of, on a dime, shift that voluntary framework anytime, you know, and there's no hammer coming down on them if they do. It's voluntary. So they're doing it as a favor, and, you know, that means that they can shift it at any given
time and change how they treat your data, who they share your data with, how they use
your data to train their own systems or not. And the other thing is, these companies may get bought eventually. You know, one expert I spoke to was like, if you wouldn't feel comfortable with your employer knowing certain things about you five years from now, when OpenAI gets sold and, you know, they're selling off the data to the highest bidder. Again, that was, like, an extreme scenario.
Sure. Sure. He was like, yeah, don't share it. So, you know, that's something to keep in mind too, with, like, sharing health stuff. Like, do you want insurance companies to find out certain things about you and change
your premium?
Again, that's an extreme scenario, hopefully it would never happen, but you never really
know, because all this stuff is so new. So it's like, I'm never going to be like, don't do X, Y, or Z. For you, I can do that, because I know you, but, like, you know, other listeners are going to be like, no, I want to give my health data to ChatGPT, it's helped me so much, like, the medical system is failing me.
That's fine. Yeah, the medical system sucks. So if you do want to find patterns in your health records and you feel comfortable with that level of risk tolerance, okay. Just, if you do that, of course, make sure you're doing it, like, within, like, ChatGPT Health and not, like, just the regular chatbot, but
it's still pretty risky. Yeah. To continue using the teenage analogy, I feel like it's a little bit like the advice you hear a lot of parents give to, like, their teenage children who are sending, let's say, sensitive pictures.
“This idea of, like, you should assume that anything you send to someone or create digitally”
will eventually be public, and that that is your framework: you shouldn't share anything that you wouldn't share with everybody. And I think everybody draws that line really differently, right? And there's kind of no wrong or right place to draw that line. It's a pretty extreme way to draw that line, but given all of the stuff you're saying, we just don't
know now, and it isn't regulated, and it isn't even sort of industry-accepted yet. I think that there are a lot of ways that, you know, we give a lot of information to Google and we give a lot of information to Facebook and whatever. But there are at least now sort of accepted norms in how that data is treated, and there would be real problematic ramifications if that changed.
It doesn't seem like any of that exists in AI right now. And it's also hard because, even if they do have those protections in place, you know, like, most of these companies do retain some semblance of your data. Even if it's, you know, anonymized or the personal stuff is stripped out, they, like, use it to some degree, usually.
And we've seen, in the past, like, 10 years, that it's pretty easy to de-anonymize data. And it's also an imperfect science, what a system knows is sensitive versus not. Like, you know, Margaret Cunningham from Darktrace was telling me it's hard for chatbots to tell the difference between a phone number and a social security number, or, you know, like, a street address and an account number.
So it's tough, because one, even if they're trying their best, the, like, guardrails here are not perfect, and even if they do work great, you can de-anonymize data. You know, I don't know for sure if, you know, like, ChatGPT Health data would be able to be de-anonymized, but I'm just saying, in general, it's a pretty understood rule that anonymization systems for, you know, protecting personal data are very, very imperfect.
So it's like, you know, you just really need to know the risks here before you make a decision. And if you do want to give your data over, that's fine.
“It's just, you need to do it in an informed way without just like, you know, taking off your”
seatbelt and being like, whatever, like, you know, this may come back to bite me in 10 years, but I don't care. If you want to be like that, sure, but do it with an informed take. That's all I'm asking. Yeah, I am very much of the generation that shared every photo that anyone took at
every party in college on Facebook. And then boy did we all learn, several years later, to go back and pretty ruthlessly comb through all of the pictures that we had shared on Facebook.
It feels like this is the generation that's going to go through that exact same thing.
I remember I used to climb on my high school's rooftop with my friends. It was, like, so fun. And we would, like, go up the trellis and just hang out up there.
And I could never share the photos on Facebook because my mom was a teacher at the school.
The building has now been torn down, so I can share that story. But it's like, yeah, I mean, I decided not to share those. But we were just being super willy-nilly about everything, because we grew up in the age of the internet and we wanted to have people writing on our walls and, like, you know, liking our Facebook albums.
So yeah, it's tough.
“Like, I think people should just, you know, kind of apply the same thought process here.”
Um, even though it feels private because it seems like a one-on-one conversation. You know, we've seen, like, ChatGPT logs go public because the links became searchable, you know, and, like, company execs were, like, putting financial data from their company into the system. And then, like, any member of the public could search it. That has since been fixed, but things like that can happen.
And you never know how when or why they're going to happen because this technology is relatively new. Yeah. One thing I see a lot of people wondering about and being fearful of is this idea that these companies are going to use my data to train their models.
And that's both sort of the facts of our interactions, but also, like, like you said, the important financial data that I upload into ChatGPT is going to be used to train the next version of ChatGPT, and that that is a privacy risk or breach in some way. What do you make of that? How should people think about what of their data is being used to train AI models and what
that means privacy-wise? It's hard, because these companies will be very careful with their wording, you know,
so you never really know the full extent of how or why or if they're using
your data to train their models. Like, they will say, for example, ChatGPT Health, they say explicitly your health data will be kept confidential and it won't be used to train their AI models. But does that mean some anonymized, stripped-down version of what you say won't be in some way used to train the models?
We don't know. That's the thing: they don't really go into the how here, and they don't have to, because it's all voluntary. And the other part of it is that can change. You know, they can change their mind at any time. So I would say, you know, if they explicitly say for any given service, we don't use your
data to train our models, you can be pretty sure that they don't, for the most part, at least for that service, but that may change. And you certainly can't be sure for the ones where they don't explicitly say that.
The other thing to keep in mind here is that if it's a free product, you are the product.
So, like, you know, if you're paying for a product, there's less of a chance they're using your data to train, or at least to a lesser extent. But if it's free, like, all bets are kind of off.
“That's what we learned with OpenClaw and what we learned with like a million products”
way before that. But yeah, I would say you're a little bit safer if you're paying. And if you're an enterprise user of something, you're way safer. You know, ChatGPT Health specifically, you're pretty safe, because they're pretty explicit about that stuff. But that's just for now, who knows. You know, Anthropic has a similar product, similarly
compliant, but still, like, you know, they're not bound. So I don't know. It's just a tough thing. You need to kind of operate with a little bit of a grain of salt here. Yeah, that's good.
And I want to read you something that I found that, like, blew my mind. Yeah. And I think it is a good proof of what you're talking about. So this is from Anthropic's terms of service for Claude. It says: we do not train our models on your Gmail or calendar integration data, ensuring
your private information remains private. Simple, straightforward. Right. Again, so much of this comes from: I use Claude. I like the idea of being able to pull information from my Google Drive and my Gmail
into Claude as I look for things. Do I do this?
“Reading this, looking this up, that sentence makes perfect sense, right?”
We do not train our models. Here's the next sentence. Note: if you are using our consumer products, e.g., Claude Free, Pro, and Max (when using Claude Code with those accounts), and that was a double parenthetical I just read to you, by the way,
and you have chosen to allow us to use your chats and coding sessions for model training, then any content you copy-paste from your Gmail or calendar, or Claude responses which include specific information from these integrations, may be used to improve our models. What? Exactly, this is what I'm saying.
It's like, you can't really know. They will be pulling all sorts of double parentheticals on you. They will be doing double negatives. You just don't know. So that's my thing, too. It's like, yeah, maybe it's not going to directly use your emails.
But if you're copying and pasting things from your email into it, it seems like it'll
use that, based on what you just read. And what if it's returning stuff from your email
and saying, hey, here's a summary of all the emails you got today? Okay, it seems like that's also going to be used to train. So it's like, okay, and again, like I said, they didn't mention enterprise there. So it's like, enterprise is pretty safe, but for the consumer products, you don't really know.
It's just tough, because there's not a lot of hard and fast rules here. And the fact that they can change these things at any time means the thing you just read, maybe that'll look a little bit different in a week or two. I have a tracker set up for when these companies change their mission statements. And, like, you have no idea how often I see an alert that's like, oh, this changed slightly.
You know, I mean, usually it's something dumb, like they took out a couple parentheses, but sometimes it's not.
“So yeah, I think we should treat these documents as, like, living documents that”
are drafts and constantly changing, and if you're not okay with, you know, that policy
looking different in a couple of months, you know, erring on the side of caution is how I operate. Yeah, that makes sense. I think one really interesting outcome of this whole experiment for me has been that it all sort of leads to Gemini in a really funny way, because Gemini offers a lot
of the same things, right? Gemini can go find your YouTube information, it can go find stuff in Google Drive, it can go find stuff in Gmail, it can find stuff in your calendar. And I found the same thing in Google's terms of service. It says when enabled, Gemini accesses your data to answer your specific questions and to do things for you.
And because this data already lives at Google securely, you don't have to send sensitive data elsewhere to start personalizing your experience. This is a key differentiator. Like, I think that's true. I wrote this thing a few weeks ago about how Gemini is winning.
And this to me is one of the key pieces of it. It's like, okay, I feel uncomfortable giving my email to someone who doesn't already have my email, versus someone who already has access to all of my Google stuff. And so this idea that privacy actually ends up being a win for Gemini is so against what I would have expected coming into this, especially for a company like Apple, which bills itself as the privacy company but is going to ask for all kinds of access
to other data from other platforms. Most of the stuff that I care about already lives inside of Google. For better or worse, I have made this privacy agreement with Google already.
“And I think there are in an increasing number of good reasons to get as much of your stuff”
out of Google as possible. But to the extent that you're comfortable with the amount of information that Google already has on you, which for most of us is all of it. Totally. Gemini ends up becoming a much simpler security tradeoff, right? It's like, I'm giving my Google data over here to Google over there, not crossing some
new corporate barrier. Totally. And just like with the security of anything else, the more complex you make it and the more organizations that have a hand in something, the less secure it is. It's the same reason why, if you're really, really trying to meet a whistleblower on the
DL with no trace, you meet them in person somewhere. The more people that have a hand in something, the less secure it is. Actually, I was watching Tell Me Lies this weekend, and every time more people found out about a secret, it leaked the next day.
So, yeah, I mean, I was talking to a guy at Checkmarx, Darren Meyer, and he said, like, we have a history in tech of giving our data to an organization to get something back, like a tradeoff. And then when we find out later that they used our data in a way we weren't okay with, we get upset. And AI companies haven't done anything to show us they're any different. And in fact, they're probably even more so like that, because they haven't been time tested.
So it's like, you know, yeah, I think it is interesting, because I've always thought, when you keep most of your stuff in one ecosystem, you know, the AI agent that can help you parse that ecosystem is probably going to operate better, because it's been trained in that ecosystem. And it's going to be a little bit more useful and have less of a learning curve, less friction. So, I mean, it makes sense that, you know, Gemini would work well and be a little bit more secure, potentially.
I mean, it's such a funny thing, right? Because you can give good security advice on both ends of that spectrum, right? There is a good and reasonable case to be made that the best thing you can do for your personal data is put it in a lot of places, so that the risk of each individual vector of attack is smaller and the possibility of something going wrong in a huge catastrophic way goes down. I mean, you hear these stories about people whose Gmail accounts get locked for whatever reason and their life falls apart, right? If you have all of your stuff in one place, all of your eggs in one basket, and something happens, it's a disaster. So put your stuff in lots of places, it's better. The flip side of that is, if you believe in an AI future, what I'm actually doing is then recentralizing all of that stuff
into a new place, which is a new vector for problems, right? I think that it kind of depends on how many services you're connecting. If you keep things in separate places, yeah, that's clearly more secure, and each company has less of a complete profile on you. But if you're connecting Google to Claude and you're connecting Google to, um, ChatGPT, then it's like, even more companies have a fuller profile on you. So
I think that's three companies with everything. Yeah, three companies with a little, or one company with a lot. Right. I think of it kind of like diversifying your assets or whatever, financially. Like, sure, if you have everything in different banks, it's great. But if you had everything in one bank, and this can't happen, so this is a bad metaphor, but if you had everything in one place, okay, that's kind of worse. So yeah, I think it's the same thing here. It's like, you know, for example, okay, last week, you and I were laughing really hard about, like,
the ChatGPT trend of asking it to create a character of you based on everything it knows about you and your job. Now, I often use ChatGPT with the memory turned off, like, locked down, because I just don't really need it creating, like, an intense profile on me. Do I have accounts where I let it? Yeah, because I need to test it, and I'm an AI reporter. But, you know, if I'm just doing something random, there's a lot of times I'm like, you know what? I don't really need this to go into, like, its understanding of me and what I want. And so the account that I used to do that, I had only used it for, like, five conversations that were recorded. And so it didn't have that much info on me, and it did generate me in a hilarious way. Like, just, like, travel, like, Paris, it was, like, very basic. But, yeah, it only had five conversations to go on, and some of them were, like, wedding planning tips. So, who knows? Yours was, of course, like, more tied to, like, your work. But it was just funny to see, like, you know, that's a kind of good example of the type of profile that these companies are building on you based on your conversations. And, like, what ChatGPT itself or Claude itself knows about you based on everything you put into it. It's like, you gotta think critically about everything you're putting into these systems. Yeah. So, it sounds like, again, everybody can make this decision for themselves. Draw your privacy line where you want to. You know me, you know what,
what I do and think about in the chaos that is my computing life. It sounds like you would tell me that I should probably not put all of my Google data into Claude. I think not, but I also understand the temptation, because email sucks, as you know from other conversations we've had on this podcast. I have, like, 13,000 unread emails in one account. So... Because you're a monster. Yeah. So, I get it. I think that, like, you know, if it's not that much personal stuff, or not that much sensitive information, you could connect it. Like, if it's work-related stuff that's, like, more sensitive, that's when I wouldn't, um, for you. But, like, you know, if you just have, like, your personal Gmail in there and it's, like, a lot of appointment reminders and stuff, like, again, for other people, maybe I'd say no. But knowing that you value, like, the ease of organization and stuff so much, I would say, like, go for it, if there's not much sensitive information there. But also, it's like, we all have so much stuff. If you delete your emails, for example... like, I keep every email I've ever had and I just, like, pay for the two terabytes of data or whatever, even in my personal account. So, it's like, the amount of stuff goes back so far, I don't even know what's there. I wouldn't connect it. But if you, like, delete emails when you're not using them and stuff and, like, you keep it pretty, like, manicured, why not? Yeah. Yeah, it's interesting, because I do think my use case is exactly what you describe, right? Like, I just want to be able to ask Claude what my Delta frequent flyer number is and it can tell me by finding it in my email. Like,
that's fine. But what you're also making me realize is I remember when Alexa and Google Assistant
were first coming out, my running theory was that actually what we need is not one all encompassing
AI assistant that everything funnels into that is like, I use Alexa to talk to my TV and to my speakers and to my car and to everything, but that actually eventually what we're going to have is this, like, incredibly distributed thing where every tool is going to have something that is more specific to it that, like, instead of addressing Alexa to talk to my TV, I just address my TV. And that is, we got part of the way there with some of the voice assistants, but not all the way there.
And I wonder if maybe I should be rooting for that outcome with AI too. Rather than connecting everything to Claude, maybe I should use Gemini for my Gmail, and I should use something else for my television, and maybe the future is many AIs and not just one. I think that is a lot more secure, for sure.
And sometimes more efficient. I mean, you can argue either way. In a way it's less efficient, because obviously not everything's in one place. But also, like, chatbots on certain systems are going to work better if they've been trained in that environment. We've learned that through tons of, like, RL research and stuff. The environment you train in is the environment you operate the best in. So, you know, Gemini probably would operate better within the Google ecosystem. Same with, like, a chatbot for your TV that was, like, trained specifically on only those use cases. So, I mean, it kind of leads to less friction, even though it's kind of annoying to have everything in a ton of different places. It's also more secure. Like, yeah,
I think you can't really go wrong with that. It also feels, if not more secure, then at least more understandable, right? It's like, I can make a clearer decision on what my TV should know about me than I can about this sort of all-encompassing thing that needs to know everything about me. And this goes back to OpenClaw, right? It's like, okay, if I'm just giving this thing complete unfettered access to my computer, that's actually a really hard decision to make thoughtfully. You can either just say, you know, YOLO, it's worth it. And to your point about the tradeoff we've made, these companies are statistically speaking correct to assume that we will trade privacy for features and convenience. We always have. Is that the right decision? Has it been the right decision every time? Often, no. But we have made that trade-off every time. Everybody who has ever bet that users will make that trade-off has been right. So I wonder if, A, that will turn because AI is asking so much more, or B, if these companies can just keep barreling through knowing that we will continue to make that trade-off. But my hope is that at least it becomes a little more readable, right? The idea that, like, I at least know the trade-off that I'm making in order to use this product feels very hard with a lot of these AI tools. And I think distributing it a little bit more would at least make it more possible that way.
I totally agree. And yeah, I think that's the most important thing: knowing what you're giving up so that you can actually make an informed decision on, like, am I getting enough benefit from this service to make it worth it for me? Everyone knows everything's a deal, you know, it's a trade-off, but do you understand the trade-off or not? And these companies need to make it crystal clear what the trade-off is and in what cases, and not try any funny business with, like, the double negatives and the double parentheticals. Just be honest about what you're giving up and let people make that decision for themselves. And, you know, a ton of people will be like, yeah, I'm fine with that. Everyone knows everything about me anyway, don't care. And a ton of people will be like, whoa, I don't want to do that at all, especially because this company is, like, only a few years old, and I don't know what is going to happen three years from now, four years from now. Yeah. I mean, I remember when, like, fridges became smart for the first time, and people were really worried that, like, you know, if you bought a ton of beer and, like, not enough fruit, that health insurance companies would somehow get that data. It's like, I don't know, you just need to know the trade-off you're making and then see if it's worth it for you. For me, like, a smart fridge, it's not something I really need. I don't need to be seeing ads on the front screen of my fridge either, you know. But for a chatbot that's, like, parsing all of your emails, maybe it's worth it, who knows. But, yeah, you just need to be able to accurately understand
what you're giving up. Yeah. Okay. All right. Well, I should confess here at the end that I already connected my Gmail to Claude and I am now regretting that decision. So I'm going to go undo that for now. But Hayden, thank you so much for being here, this is great. I appreciate it. Thanks so much. All right. We got to take a break. We'll be right back. Support for the show comes from L'Oréal Groupe, using the latest advancements in science and tech to
create personalized beauty solutions for all. The global beauty leader recently introduced two breakthrough technologies that bring the power of light to hair care and skin care: a light-based multi-styler and the new LED face mask, both of which were recognized as CES 2026 Innovation Award honorees. Learn more about both technologies at loreal.com. L'Oréal Groupe, create the beauty that moves the world. Support for the Vergecast comes from Wix. Most AI website builders and vibe coding platforms feel like a game of prompt and pray. The results can be all over the place, and changing one
thing can break the whole design. Wix Harmony is doing things differently. You don't need to pick between a freeform editor or a vibe coding tool; you get both in one place. It's the hybrid editor that's bringing in the next generation of website building. Wix Harmony blends the power of AI and precise drag-and-drop tools. You can launch a professional-grade website by entering a single prompt, and flow between prompting your personal AI agent and using manual design
tools that let you shape every detail of your site. And you can rest easy knowing that your Wix site is backed by 99.99% uptime and enterprise grade security with no add-ons required.
Ready to create your website? You can see why 280 million businesses around the world rely on Wix for their websites. Go to wix.com/harmony. That's wix.com/harmony. Support for the show comes from Samsara. If your business relies on drivers, that means you're used to relying on the rules of the road. But road accidents happen, and it's tough to prove what happened without footage.
That's where Samsara comes in. It brings AI dash cams, vehicle tracking, and asset visibility together in one simple platform. Samsara can help you protect your drivers, cut costs, and operate smarter. Their AI dash cams capture real-time video that proves when your drivers aren't at fault, protecting them from false claims. It's trusted by over 20,000 customers worldwide, including major companies across transportation, construction, and logistics. Don't wait for the next accident to take action. Go to samsara.com/verge to request a free demo and see how Samsara brings visibility and safety to your operations. That's samsara.com/verge. Samsara. Operate smarter.
All right, we're back. Let's do a question from the Vergecast Hotline. As always, the number is 866-VERGE11, the email is [email protected]. We're not that hard to find. Like, it's not. There are no good excuses for not hitting up the Vergecast Hotline. Anyway, here with me is the Verge's senior phone reviewer, Allison Johnson. Hello. Hello. I caught you just before you were about to disappear into the beginnings of phone season. Yes, my family will not see me for a week and a half. I'm going to be just among the phones. Yeah, so it's Samsung Unpacked and then Mobile World Congress, which is in Spain still. It is in Spain.
Okay. And then we think potentially maybe an iPhone, like, right after that, right? Yeah, just for fun. Everybody was just like, what if an iPhone?
You know, why not? Yeah. What if we iPhone? Let's do it. So our question is actually sort of tangentially about this, and I think it's the kind of question a lot of people either are asking or are about to start asking very quickly about their phone purchases. Let me just play this question for you. Hi, Vergecast. My name is Lucas, and I was wondering if I should do a kind of mid-cycle upgrade for my phone right now. So I've got a 15 Pro Max. I've had it, you know, a couple of years, and it works fine. It's a perfectly fine iPhone. It'll probably be fine for another year, too. But I'm worried that, with the price of RAM constantly going up, the next phone I get is going to be significantly more expensive. And I'm wondering if I should do just, like, a mid-cycle upgrade now to kind of future-proof, so I don't have to worry about something really expensive later.
So, Allison, agree or disagree that this question either is or should be on a lot of people's minds right now? A totally legitimate concern. I'm going to be honest, I was in the kind of, like, oh, sure, RAM is a problem, but I don't build PCs, so whatever, kind of camp. Our friend and colleague Sean Hollister wrote a great article about why the RAM crisis is coming for all of us. So definitely check that out if you haven't. Yeah, and I think it is. I'm already, like,
feeling this question on our internal Slack, you know, and people in a similar situation as Lucas are sort of like, I was thinking I would probably upgrade maybe a year or two from now, but should I pull that timeline forward? And I think the question is real. I think that price increases in one way or another are coming for smartphones, and everything else, apparently. Okay, so let's take this very tactically in two directions. The one is: Lucas specifically has a 15 Pro Max, presumably wants another iPhone, and the 15 Pro Max is now a two-generations-old phone. I think you would probably assume that Lucas would be looking to upgrade, especially if you're a Pro Max person, not this cycle, but potentially next cycle.
Like, I think Lucas is probably going to buy, like, a 19 Pro Max, would be my guess. Yeah. Right. He skipped the 17, that's fine. The 18 will be what it's going to be, but in the normal course of events he's probably two years away from an upgrade. But the question is: should I buy now to reset that cycle, to give myself four more years for this to get better? What do you think? Here's where I've kind of landed. I think it is a factor to consider, but I don't think it should be the only one. The 15 Pro Max is an interesting case of, like,
yeah, you are probably interested in the latest and nicest hardware, probably more so than someone else, so to me that seems like a factor toward upgrading
sooner. But there's a lot of things I'm unsure about in how this is actually going to shake out for prices. Apple especially hates raising prices on the iPhone. They'll do that sneaky thing that they all do where they just take away the lower-priced option, take the lower storage option off the table, so they didn't really raise the price, but you can't buy that cheaper version anymore. So maybe that's not gonna affect Lucas directly, but there are all kinds of pressures, I think, you know, the weirdness with tariffs that's been happening, the RAM thing, and it's gonna manifest in ways that I think lead to a more expensive
iPhone. But I think you should buy a new phone when it's the time to buy a new phone, you know, and the RAM situation could be one factor in that purchase. Yeah, I think I agree. And for Lucas in particular, and the reason I want to focus on his use case super specifically, is I did not expect my advice to be wait, but I think the answer is wait. Like, the 15 Pro Max is still a very good phone that will be a very good phone for at least two or three more years, right? The idea of it being sort of so vastly outdated that the camera is not up to snuff and that it can't do the things that you want to do, I think, is pretty unlikely in, let's say, the next two years. And, you know, knock on wood, it also seems pretty unlikely that all of the things that have led to this particular RAM shortage being this bad right now are
going to all still be accelerating at this pace in the next couple of years. Either the bubble is gonna pop and a bunch of weird stuff is gonna happen, or people will start to ramp up the capacity to build more of this stuff. One way or another, I think we are headed to... not a permanent RAM shortage. Fast-forward to 2029 and somebody plays this clip back to me and reminds me of what a moron I am. Like, maybe. But it does seem to me that, like, if you're in the position of saying, okay, my phone is gonna be very good for two more years, is that a risk worth taking? I would kind of say the answer is yes. Where I feel differently is people who are like, I was gonna buy a phone sometime in the next year or so, right? People who are like,
I'm not tied to the upgrade cycle, I buy a new phone when I need a phone. Most of those people right now, I would tell to just go buy a phone. Yeah. Do you agree with that? Yeah, like, the case in our internal Slack was someone who had a Pixel 7a. A perfect example. Yeah, and I'm like, you know what, you've gotten your money's worth on that phone. I think you're safely in the zone of, like, upgrade now, upgrade next year, and just adding that factor of the RAM situation maybe tips you toward, like, yeah, upgrade now. But yeah, I do think it kind of has to be time already, not sort of, oh, maybe next year will be time, or next year I'm gonna start thinking about it.
That's where I'm landing. And, like, you know, look, you can always change out the battery. There are always refurbished options, you know, in a year or two, if the flagship phone prices are crazy.
I think so too. I don't know, do you remember during the pandemic when the car prices went nuts, and not only did new car prices go nuts, used car prices did too? Yes. Like, we had a harder time buying a used car than a new car in, like, 2021. It was insane. And so part of me is worried that everybody's gonna have the, oh, I'll just buy a refurbished one, idea, and actually that's gonna become a strange market too. Yeah, but in general I think you're right that if you're less of the mode of, like, I need the best phone right now the minute it comes out, you do have a much larger set of probably more stable options. Yeah. Um, so is there anyone you would tell to wait a minute? Like, we're about to go into phone season. People are gonna hear and watch this on Tuesday, and very soon after we're gonna get new Samsung phones. You're going to MWC to see stuff,
we're hearing some inklings, you know, there's presumably more Pixels to come, there are more iPhones to come. Is there anything where you're like, wait, there's something coming, don't buy it until you at least see what the new thing is? I think the good answer is, like, no. Phones are kind of boring right now, which, like, works in our favor, you know. They're getting more expensive, um, and it's just not as important to upgrade every year, every two years,
even every three years. I think you're fine, you know. Um, and I think that many manufacturers... we're already seeing this: I think the Pixel 10A was a very iterative, like, hardware upgrade, maybe in an effort to keep that price point down, considering everything. Um, so it's kind of a good thing that phones are boring right now, and it might be the case for a little bit. Well, that's a really interesting point, actually, that maybe the outcome is not that your iPhone is about to get, you know, several hundred dollars more expensive,
but that actually the upgrade from this one to the next one is going to be even smaller, so that they can keep the price in range. And I think, yeah, we might not see, like, incremental increases in RAM every year, um, the way we have been. And yeah, Apple hates putting a higher price tag on something, so I think they will
pull all kinds of strings before they have to do that. Yeah, so, okay, I think this feels right: if you have a device that you like and you feel confident about for a couple more years, you can feel okay waiting. But if you're like, oh, I should probably go get a new one, go get it. Yeah, like, don't wait, just go get the thing. All phones are good now, it's going to be fine, just go buy the thing. Yeah, this is how I feel. I bought new, uh, Sony headphones not that long ago for exactly this reason. It's like, tariffs, all the shortages, everything is complicated. Like, I need a new pair of headphones, I'm just going to go do it. I don't need them this minute, but I'm going to need them soon, and I'm just going to go do it. Yeah, yeah, we did that with the PS5. We were so late to the PS5, but we were like, it's time to get one, and then the pricing increases kind of came up. It's like, okay, now's the time. And I have zero regrets about that. Yeah, that's a perfect example. Yeah, all right, Lucas, I hope this helps. Let us know what you
end up deciding. I feel like, given that Lucas has a 15 Pro Max, the odds of him hearing this and going, screw you, I'm buying a 17 Pro Max, are, like, pretty high. Yeah. But Lucas, let us know what you do. Allison, thank you as always. Yeah, no problem. All right, that's it for the Vergecast. Thank you to Allison and Hayden and Boris for being here, and thank you, as always, for watching and listening.
If you have thoughts, feedback, if you want to keep sending me stuff that you're vibe-coding: this has been my favorite thing in my email inbox over the last couple of weeks. I asked on this show for people to send me examples of things that you've been vibe-coding, and I have heard incredible stuff. I think at some point on the show I'm just going to sit here and, like, read people's emails out loud for 10 minutes, because you should hear some of the stuff that other folks are building. It's so cool and so interesting, and if you're building something that you think is cool and exciting, I want to hear about it. 866-VERGE11 is the hotline, [email protected] is the email. Keep it all coming. This show is a production of the Verge and the Vox Media Podcast Network, and this episode was produced by Eric Gomez, Brandon Kiefer, and Travis Larchuk. I'll be back with Nilay on Friday to talk about all of the news. There's policy stuff still happening, there's Epstein files stuff still happening, gadget season is back, we got Samsung phones. We have a lot to talk about. It's going to be
awesome. See ya, rock and roll. Support for this show comes from Indeed. If you're looking to hire top-tier talent with expertise in your field, Indeed says they can help. Indeed Sponsored Jobs give your job the best chance at standing out and grant you access to quality candidates who can drive the results you need. Spend more time interviewing candidates who check all your boxes: less stress, less time, more results, now with Indeed Sponsored Jobs. And listeners of this show will get a $75 sponsored job credit to help get your job the premium status it deserves at indeed.com/foxbusiness. Just go to indeed.com/foxbusiness right now and support our show by saying you heard about Indeed on this podcast. indeed.com/foxbusiness. Terms and conditions apply. Hiring? Do it the right way, with Indeed. This is advertiser content brought to you by Stonyfield Organic. Our cows, they love going out to pasture. They're so excited to go out every day, they wait
right at the gate. In fact, we milk them and we just open up the laneway and let them just go right out to pasture. I'm Rhonda Miller Goodrich, and I'm a dairy farmer in Cabot, Vermont. Our farm is Molly Brook Farm. We're an organic dairy farm, and we are a supplier to Stonyfield. Molly Brook Farm has been in my husband's family since 1835. We started our organic transition in 2015. We had 53 acres of corn ground, and of course we had to use herbicides and pesticides, and the soil was dead, really, for all intents and purposes. We stopped growing corn and stopped using
herbicides and pesticides, and we seeded that down to perennial grasses, and after that, got biodiversity in that soil again. To be organic certified, our cows need to be in pasture at least 120 days. I think the organic practices really benefit our animals. You know, having good feed,
good water, a nice light area, that's what's important to us, and that's what's important to Stonyfield. Visit stonyfield.com to find Stonyfield Organic yogurt near you.
Support for the show comes from CoreWeave. Everywhere you look, AI is expanding what we thought was possible, and at the center of it all is CoreWeave: medical research and diagnosis, education, complex visual effects for movies, science and technology breakthroughs. CoreWeave powers AI pioneers around the world with purpose-built tech, building what's never been built before. CoreWeave is the essential cloud for AI, ready for anything, ready for AI. To learn more about how CoreWeave powers the world's best AI, go to coreweave.com.

