Support for the show comes from David Protein. Who doesn't enjoy a protein bar after a good workout?
Here's a tip: David Protein bars.
All David Protein bars are designed to maximize protein while minimizing calories.
And they say that their bars deliver the highest protein-per-calorie ratio of any leading bar on the market. Their David Gold Bar, for instance, delivers 75% of calories from protein, and the David Bronze Bar delivers 53% of calories from protein. Head to DavidProtein.com/ProfG, where they're offering a special deal for our listeners:
buy four cartons and get your fifth free. You can also use their store locator to find David in stores at a retailer near you. Support for the show comes from VCX, the public ticker for private tech. The US stock market started history's greatest wave of wealth creation. From factory workers in Detroit to farmers in Omaha, anyone could own a piece of the
great American companies.
But today, our most innovative companies are staying private longer, which means everyday Americans are missing out. Until now.
Introducing VCX, a public ticker for private tech.
Visit GetVCX.com for more info. That's GetVCX.com. Carefully consider the investment materials before investing, including objectives, risks, charges, and expenses. This and other information can be found in the fund's prospectus at GetVCX.com. This is a paid sponsorship. This week on Networth and Chill, I'm taking you inside my sold-out New York City book
tour stop for my brand new book, Well and Doubt. I sat down with the hilarious Heather McMahan for a night of laughs, real money talk, and honest financial truths. We're getting into everything the book covers, from how to actually build wealth, to how to protect it, to how to stop leaving money on the table.
Whether you've already grabbed your copy of Well and Doubt or you're still on the fence, this episode will show you exactly why everyone's talking about it. Listen wherever you get your podcasts or watch on youtube.com/yourrichbf. Episode 386.
386 is the area code serving north central parts of Florida. In 1986, Top Gun hit theaters. True story: Tom Cruise is starring in a romantic comedy about body positivity. He and his co-star both gained 300 pounds for the roles. The name of the film? Missionary Impossible. Let's give it a second.
Welcome to the 386th episode of The Prof G Pod. Here's what's happening: in today's episode, we speak with Meredith Whittaker, the president of the Signal Foundation and a leading voice on AI policy. I first came across Meredith at South by Southwest. She was on a panel.
I was bored and I walked in.
I almost never listen to panels, and I thought, who is this?
Who is this strange dark-haired woman speaking all sorts of truth and logic about AI? Anyways, she runs the app, and I had lunch with her, and she struck me as really intelligent. And I have been much more concerned, for the first time (I don't know if it's that I'm getting older), about my own privacy, worried that at some point all of my AI queries will be made public.
Is that my prostate? Question mark, expecting AI to answer. I'm pretty sure every ailment I have is because I have an enlarged prostate. I'm convinced everything starts with the prostate. Anyways, don't know how I got here. She's an incredibly insightful, intelligent person, and I would argue probably the most
well-liked person or CEO in tech right now, which isn't saying a lot. Very impressive, very intelligent. And Signal is trying to, or is, I think, carving itself out as sort of the clean, well-lit part of the internet, and I'm fascinated with the trade-off between privacy and utility, and we'll speak more about that. Anyways, here's our conversation with Meredith Whittaker.
Meredith, where does the podcast find you? I'm in New York City. In New York? I thought you lived in Europe. I'm in Europe a lot. I go between Paris and New York. We're small, or we're spread across a lot of jurisdictions.
There you go. So let's bust right into it. I want to start with the basics. Signal has been in the news a lot this year, and we'll get to that in a moment. We know it's widely used by journalists, public officials, and people who are especially concerned about privacy. But on a practical level, how does Signal actually work, and what makes it different from other messaging apps?
On a practical level, Signal is the most widely used actually private communications platform. We go out of our way to collect as close to no data as possible, and that's really what sets us apart, because we exist in an ecosystem where, for better or for worse, in one way or another, most of the time you make money in tech by collecting and monetizing data.
You collect data about the users of your platform, and then you sell access to
types of users based on that data to advertisers, or you collect data and you train your
AI model with it, et cetera, et cetera. That's kind of the economic engine of tech since the 90s, and maybe before. Signal is obsessed with maintaining the human right to communicate privately, and we have built an alternative communications platform that does just that. We end up rewriting core pieces of the proverbial stack to enable us to do what is normal:
to provide a basic and easily usable messaging platform in a way that does not collect your data, and does not put us in a position of being forced to turn it over if we get a subpoena, of having a breach expose your most intimate information, of violating the compacts that we make with the people who rely on us. So that's it, in a nutshell. We're also open source, and open source matters here because that
means you don't have to trust me, you don't have to like me, you can actually verify that
yeah, the thing that she, or anyone, says it does is what it does, because we can scrutinize the code. We can prove it.
But I think you're probably the most well-liked CEO in tech, which isn't saying a lot, but yeah,
the bar, the bar is pretty low. I mean that. The term "encrypted" is a loaded term. Can you talk about the biggest misconception about encryption and messaging apps? I mean, I think it's a little bit like, you know, the way skin care ingredients are, like, I don't know, gold or something gets invoked, right?
We can say, you know, both of these have encryption in them, but one has 10% encryption, or encryption is only applied to 10% of the data, whereas another is fully encrypted. And so if you look at, say, WhatsApp and Signal: WhatsApp uses Signal's encryption protocol, and this is the gold standard for encrypted messaging. It was released in 2013, has stood the test of time, and really advanced the field of privacy-preserving technology when it was introduced.
That's licensed by WhatsApp, but WhatsApp only applies it to one layer of the WhatsApp layer cake, so to speak. They use it to encrypt the contents of your messages. So if I'm texting you, like, you know, hey, Scott, where are we going to meet at South by Southwest? WhatsApp would not be able to see that.
WhatsApp does not encrypt intimate metadata, and metadata is a fuzzy little term, but it's, you know, actually pretty revealing data: it's who you text, it's who's in your contact list. It's your profile photo. It's when you started texting someone, your therapist, your oncologist, your FBI cutout, whoever it is. That's very revealing data.
And then of course, we're not owned by Meta, which means that, you know, there is no bunch of Facebook and Instagram data you could then join that intimate metadata with to make profiles, et cetera. So Signal is, you know, encrypted up and down the stack. We encrypt the contents of your messages, but we also encrypt your profile photo, your contact
list, who is texting whom, who is messaging with whom, who's in your groups. So you can look at our website, signal.org/bigbrother, where we work to unseal any subpoena that we are forced to comply with. And what you see there is a long list of requests for data. That's normal.
That's what, you know, governments assume an average messenger is able to give up.
And then you see what we're actually able to give up, which is very close to nothing. We can confirm whether a phone number has an account, we can confirm a handful of other things, but we have gone out of our way to be, you know, unalloyed, you know, 100% encrypted, to use that slightly metaphorically, but, you know, you get the gist. We really take that extremely seriously.
We're not just sprinkling encryption dust on top of a, you know, ultimately non-private infrastructure.
So I want to talk about something else. I don't know if you've heard of AI, but it's been in the news recently. AI? Yeah. Right.
What is that? It's a movie by Steven Spielberg. There you go. So, AI agents specifically. You've been pretty vocal about the dangers agentic
AI poses to our privacy and security.
Can you elaborate on the risks here, and what are most people not aware of?
Yeah. The risks are the flip side of the promises, really. We actually started talking about this about a year ago, when we were seeing things like Microsoft Recall creep into, you know, the product updates, in this case for Windows, and really
recognizing that Signal exists at the application layer, right?
Which means that we have to trust the operating system.
We build on top of iOS or Android or Windows.
And we have to trust that the operating system will be a reliable set of tools that we as developers can leverage to ensure that Signal works for the people who rely on us, and that, you know, the users of the device can rely on. And our primary concern is that as agents get integrated into the operating systems by these AI companies, by the people who maintain the operating system, and as they get leveraged
beyond that in ways that give them very pervasive access to your life, it undermines our ability at Signal to guarantee the type of privacy that we guarantee at the application layer. And, you know, that may sound a little bit arcane to people who don't, you know, live in these waters with me.
But just a quick example: you know, if you have an agent running on your operating system that's given deep access to your file system and other data on your device in order
to do something like, you know, plan a work dinner, well, the agent will need access
to your calendar. It will need access to your browser, perhaps, to look for a restaurant, maybe your credit card or your EA's credit card in order to book that work dinner. And in a scenario where you are, as we all should be, using Signal, it will also need access to your Signal and your Signal contacts to text them and coordinate dates and times.
All of that becomes a pretty frightening set of data access points and ultimately a security
vulnerability, because instead of, you know, having to break our gold-standard encryption algorithm, which has been, you know, tested and mathematically proven to be secure, you just have to leverage the type of access that these pervasive agents are being given into your applications, into your intimate data, in ways that, you know, from a security architecture perspective, are very, very insecure.
And I'll note that right now, almost every agent that we're seeing kind of in the mainstream is relying on large language models, models that are too big to run on your device, which means that, you know, ultimately most of this data would need to be sent off your device to a cloud server to be processed for inference, you know, creating another security issue and potentially placing data in the hands of whatever company is running that agent.
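The access-aggregation problem she describes can be made concrete with a small sketch. The task names and capability sets below are hypothetical, not any real agent framework's API; the point is only that a compromised agent inherits the union of everything it was ever granted.

```python
# Hypothetical sketch of agent permission aggregation. Task names and
# capability sets are illustrative assumptions, not a real agent API.

TASK_NEEDS: dict[str, set[str]] = {
    "plan_work_dinner": {"calendar", "browser", "payment_card", "signal_contacts"},
    "summarize_inbox": {"email"},
}

def attack_surface(granted_tasks: list[str]) -> set[str]:
    """An attacker who compromises the agent gets the union of every
    data source the agent was granted across all of its tasks."""
    surface: set[str] = set()
    for task in granted_tasks:
        surface |= TASK_NEEDS[task]
    return surface

# One convenient task quietly spans four intimate data sources, so breaking
# the agent now beats breaking the encryption of any single app.
print(sorted(attack_surface(["plan_work_dinner"])))
# -> ['browser', 'calendar', 'payment_card', 'signal_contacts']
```

This is why she frames the agent, not the cipher, as the new weakest link: the surface grows with each granted capability, regardless of how strong the per-app encryption is.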
So that's it. Our concern is really coming from a privacy-integrity standpoint, and from a concern for the people who rely on Signal, about the introduction of these tools, which can be useful for some things, but, you know, also pose this pretty significant risk that isn't getting the kind of attention I believe it should. We'll be right back after a quick break.
Support for the show comes from BetterHelp. This International Women's Day, BetterHelp wants to remind all the mothers, grandmothers, aunts, and sisters of the world that you deserve to take care of yourself as much as you take care of the people around you.
If you want help getting connected with a therapist, you could try BetterHelp.
BetterHelp does the initial matching work, so you can focus on your therapy goals. All you need to do is fill out a short questionnaire that helps identify your needs and preferences, and BetterHelp matches you with a licensed therapist operating under a strict code of conduct. With 12-plus years of experience, BetterHelp says they have an industry-leading match
fulfillment rate. And if you aren't happy with your match, you can switch to a different therapist at any time from their tailored recommendations. So with 30,000 therapists, BetterHelp is the world's largest online therapy platform,
having served over 6 million people globally.
And out of over 1.7 million client reviews, BetterHelp's average rating is 4.5 out of 5 for a live session. Your emotional well-being matters. Find support and feel lighter in therapy. Sign up and get 10% off at BetterHelp.com/profg.
That's BetterHELP.com/profg. Support for the show comes from LinkedIn. It's a shame when the best B2B marketing gets wasted on the wrong audience. Like, imagine running an ad for cataract surgery on Saturday morning cartoons, or running a promo for this show on a video about Roblox or something.
No offense to our gen alpha listeners, but that would be a waste of anyone's ad budget. So when you want to reach the right professionals, you can use LinkedIn ads.
LinkedIn is home to a network of over 1 billion professionals and 130 million decision makers,
according to their data. That's where it stands apart from other ad buys. You can target buyers by job title, industry, company, role, seniority, skills, company revenue, all so you can stop wasting budget on the wrong audience.
That's why LinkedIn Ads boasts one of the highest B2B returns on ad spend of all
networks. Seriously, all of them.
Spend $250 on your first campaign on LinkedIn Ads and get a free $250 credit
for the next one. Just go to LinkedIn.com/scot. That's LinkedIn.com/scot. Terms and conditions apply. Support for the show comes from Square. Think about your favorite small business: that coffee shop on your block, or the salon you've
been going to for years, or that dog walker you always pass who seems to be having the time
of her life. Square makes it simple to run a small business, no matter what it is. Whether it's one brick-and-mortar, a pop-up, a mobile service, or franchises, Square can help track sales, manage inventory, and access reports in real time. Square even has built-in tools, like loyalty and marketing, to help you connect with customers
and reward them for showing up again. Square supports every major payment method, including tap-to-pay, and offers instant access to your earnings through Square Checking. A lot of the local businesses I go to seem to be using Square, which makes me, actually,
makes me feel good about the brand.
With Square, you get all the tools to run your business, with none of the contracts or complexity. And why wait? Right now, you can get up to $200 off Square hardware at square.com/go/profg. That's s-q-u-a-r-e.com/g-o/profg.
Run your business smarter with Square. Get started today. What do you think the risks are? If you're using Claude or ChatGPT, what do you think, realistically, the risks are over the next 5 or 10 years that your data is compromised by some bad actor, or the LLMs themselves
will have access to your private information and be able to link it to your identity? I mean,
the John Oliver segment on finding people's data on the dark web, including their search history... Should people be really cognizant of what they query these LLMs?
I mean, I think they absolutely should be cognizant. A query to an LLM that isn't sort
of a specialized private inference setup, you know, kind of what Moxie invented, what Signal's doing with Confer or other similar setups, but any, you know, general query to ChatGPT, is sending that data to servers that are controlled by OpenAI, Microsoft servers. They retain that data. They could leak that data. We know that when presented with a valid subpoena, they will turn that data over. In a world in which norms and laws and definitions
of criminality shift from, you know, one year to the next, perhaps it's good to be cognizant of where that data could go and what it could do in terms of, you know, marking you as one or another type of person. Not to mention, I think, you know, with the introduction of advertising and, you know, increased targeting, at least the plans to introduce advertising in ChatGPT, I think there are also issues about what that can reveal about you, you know, in more mundane
contexts, as a consumer or as a job seeker, and, you know, the kind of advantages or disadvantages that might accrue, given the power to define you based on data that is, you know, in the context of ChatGPT, often extremely intimate.
You've actually referred to AI as a marketing term. What did you mean by that?
Yeah, I mean, I'm being flatly literal, although I think that's sometimes taken to mean that I'm saying AI doesn't exist or it's not serious, which, you know, marketing is, in fact, very serious. You know, what I'm talking about there is just sort of denaturalizing AI as a technical term of art. If you look back at the term AI, you know, it was created in, you know, 1956, 1957 by John McCarthy, who hosted the Dartmouth conference. Those of us in, you know,
this world will be familiar with that, kind of an iconic conference where a number of the quote-unquote "fathers" of AI gathered to try to create intelligent life, you know, in the form of a machine, over the course of a summer. And John McCarthy created the term, in his own words in subsequent interviews, because he wanted to exclude Norbert Wiener from the convening. They didn't get along. Norbert Wiener had, you know, created the term cybernetics and the field of cybernetics,
and McCarthy classically did not want to be a disciple. He wanted to be the father of his own thing, a very common academic urge, and he also wanted grant money. And he thought artificial intelligence was a kind of flashy term with, you know, a cool valence that would get some of that, you know, Cold War era ARPA money flowing to his lab, which it did. It funded the conference.
Over the history of the term, it's like 70 years now, we've seen it applied to
very disparate technical modalities. So McCarthy was invested in symbolic systems, which look much more like decision trees, and was actually deeply skeptical of the neural approach, which predated, you know, the term by about 10 years and was, you know, McCulloch and Pitts, and neural networks stem from that. So what we see is a term that was invented primarily to describe an approach that's out of favor today has now been applied, you know, because of the specific resources available
and the recognition that, you know, neural networks can do interesting things with data and compute and the type of business models we have. The term AI is now applied to an approach that was not actually kind of under its umbrella when McCarthy invented it. And why is any of this important
beyond it just being very interesting if you're a nerd? I think it's important because it allows us
to step back and actually recognize that this is not a term of art, and what we are describing are very particular approaches that have their own historical and political-economic formulations, and that we can actually have a bit more agency to define what we mean by intelligence, to choose the technologies that we are leveraging to produce intelligence-
seeming outputs, and to be a bit more critical and actually regain a bit more of our own agency
in relationship to mythologies that kind of naturalize these systems as, let's say, a linear arc of technological and human progress. There's been a lot of, I don't know if it's warnings or catastrophizing, from AI executives who say, I'm scared of what I've built and I need to retreat to, you know, the Cotswolds and write poetry. I'm curious what you think the threat level is of AI, whether it's been overstated or understated, where you see the biggest threats,
and how we as a populace respond to it. I think there are threats, particularly if we integrate
these probabilistic, you know, generative and decision-making systems into high-stakes domains,
you know, nuclear, defense, energy, and put them to tasks for which they are ultimately not secured
or suited. So, you can have reward hacking, you can have emergent behavior; all of those things are real. Those aren't things that are simply going to sort of spring out of nowhere, like, you know, Athena from Zeus's head, and suddenly we have these technologies running around without our control or delegation in some sense, right? Those would need to be choices that are made by people and decision-makers. And I do think, you know, in some sense, some of the fear has a bit of escape
velocity from material reality, and almost sounds a bit like a religious fervor rather than kind of a, you know, technically grounded concern about the rush to integrate technologies that are not fit for purpose and could have collateral consequences, which is where I land on it. My primary fear, however, is the combination of the mythology of artificial intelligence, which is really framing these technologies as, you know, superior to human judgment, superior to human capabilities,
which, on some axes, measured in some ways, you know, surely they are. They do math much quicker. As a calculator, they can, you know, produce things more efficiently, et cetera. Yes,
but ultimately these are very centralized technologies that rely on huge amounts of data,
data that is captured by an industry invested in what I'd call the surveillance business model, which is effectively, you know, collect all the data you can via your platforms and then, you know, train an AI model, sell it to advertisers, et cetera. And, you know, so it requires huge amounts of data, it requires huge amounts of infrastructure, and I don't have to go into the wild capex spending, the kind of, you know, Nvidia picks-and-shovels, the, you know, monopoly
on chips, and the, you know, build-out of data centers. And it requires huge distribution networks, which often get left out of that calculus. But basically, if you're going to make money, if you're going to integrate this, you need, you know, either a large social media or marketplace platform,
or you need a cloud business model, or you need to latch on to one somehow. So, all of that
redounds to an industry that is highly concentrated in the hands of effectively the winners of, you know, the last tech boom: the platforms who were able to establish, you know, data pipelines and massive amounts of data, large platforms, cloud infrastructures, global reach, that were sort
of cemented via network effects and economies of scale, all, you know, classic
network monopolies. And so my concern with all of that is that what we're looking at is a significant concentration of power over infrastructure and decision making that is then rebranded as a kind of
godhead intelligence, in ways that are making us less critical than we need to be about how that
power is being leveraged. Well, let's drill down to specifics. What do you think, and nobody knows, but what is your best guess with respect to AI and employment, in, let's call it the West, Europe and the US, over the short and the medium term? I've seen TikToks of economists and AI executives, or AI thought leaders, saying we're going to see massive disruption in the labor force. But the flip side is, so far it hasn't really manifested.
You could potentially interpret that the job market is softening, but youth unemployment is about
where it has been historically, at average. AI and the labor force: what is your best guess?
Yeah, and I've got to be careful here. This isn't really my lane, and I'm seeing a lot of competing
headlines. It does seem clear to me from some conversations that, at least in part, AI has been a handy pretext for job cuts. Boards and media and shareholders will accept that, hey, we cut X number of people because this is part of our AI strategy. That doesn't look like weakening demand; that looks like innovation. And so I do think there's some AI wrapping of downsizing that is happening, and I've heard that firsthand from some folks. I do think, you know, we are seeing
at least this sort of degradation of work, and, you know, degradation meaning, you know, there are people who maybe used to have a job as a copywriter or translator, and we
see this with translation, who are now just kind of editing AI output, right? And it's a less secure,
maybe less fun, less rewarding job. It's not removing the human; it's sort of removing the agency and power that a human would have in that job under different circumstances. I am really impressed with what I've seen, or, you know, the new round of coding agents are very, very capable. And, you know, there's definitely a lot of excitement across my industry there. You know, you can't deny that these are very useful and produce output that is, you know,
pretty commensurate with, like, a junior programmer. But again, you still need a senior programmer. You still need somebody who understands how it works to review the code and maintain it. And so, even though you're seeing advances in capabilities, one thing that isn't being talked about enough is, you know, there are few things that many engineers I've worked with hate more than having to maintain someone else's shitty code. So you still need somebody who has an understanding
at the systems level, who's bumped their head up against problems and understands them, you know, and can fix them, who understands how one, you know, pull request or kind of launch of code might interact with another. And that's the place where I'm concerned that the kind of rapid
outsourcing of some of the development work to agents, you know, I think some of that could
backfire in, you know, a kind of technical debt that is very difficult to pay down, if what we're looking at is systems that are sort of, you know, built by agents, or, you know, kind of coding AI, and not fully understood by the people, you know, the kind of skeleton crew, who are left to maintain them. So those are, you know, some reflections. I don't think I have a clear answer, because I think this is not just a question of AI. It's also, you know, a question of market will, you know,
how is AI going to be used as a pretext? And then what happens when we do have the first significant issue with the reliance on these AI systems? And I, you know, I say that as I recognize that, you know, Amazon went down apparently because of an error made by an AI agent that
they integrated. So, you know, we have already seen a kind of, you know, first wave of critical issues
that are caused by a kind of dependence without human oversight. We'll be right back. Support for the show comes from VCX, the public ticker for private tech.
For generations, American companies have moved the world forward through their ingenuity.
And for generations, everyday Americans could be part of that journey through perhaps the greatest
innovation of all: the US stock market. It didn't matter whether you were a factory worker in
Detroit or a farmer in Omaha. Anyone could own a piece of the great American companies. But now, that's changed. Today, our most innovative companies are staying private rather than going public. The result is that everyday Americans are excluded from investing and getting left further behind, while a select few reap all the benefits. Until now. Introducing VCX, the public ticker for
private tech. VCX, by Fundrise, gives everyone the opportunity to invest in the next generation of
innovation, including the companies leading the AI revolution, space exploration, defense tech, and more. Visit GetVCX.com for more info. That's GetVCX.com. Carefully consider the investment materials before investing, including objectives, risks, charges, and expenses. This and other information can be found in the fund's prospectus at GetVCX.com. This is a paid sponsorship.
>> We're back with more from Meredith Whittaker. There's a tension between privacy and encryption,
and I think the potential weaponization of encryption and privacy by bad actors. I would
imagine, by virtue of your position, I think I have an understanding of where you would land on this, or at least a bias, or a view on it. In London and New York, they say you can't go more than 12 or 15 feet outside without being on camera somewhere. And to a certain extent, I like that. I think I like it more in Britain, because I'm less worried about it being weaponized by the administration here. But if you look at the decline in crime rates, I think some of it is because
of technology, and then court-ordered, mandated, if you will, violations of privacy: if there's enough evidence that this person is a bad actor, then we need to violate people's privacy
to understand if something bad is about to happen. You must be given this question all the time.
That tension, where do you land on that tension? And is there ever a reason why people's privacy should be violated in the context of larger safety concerns? I want to back this up to the fundamentals of encryption. When we're talking about Signal, what we are talking about when we talk about encryption and the way that it works is a technology that either works for everyone, or it works for no one. If you undermine the math of encryption, if you
put a back door in there, if you have a not-actually-random random number generator that means you
could basically perturb the encryption and decrypt it, that's not just a back door, that's
not just an error, that only the good guys can avail themselves of. That is effectively breaking encryption for everyone. So it really is a scenario where the people you hate the most have to be able to use it, to exercise that right, so to speak, if the people you love the most are going to have access to it as well. All your eggs are in one basket, and that's at the level of the math. I'm not answering the question, is it ever good or appropriate to undermine privacy? You know,
that's not actually what I'm talking about. What I'm talking about is a world in which, over the last 30 years, we are surveilled within an inch of our lives. You said every 12 feet we're recorded. Great. And then you made the comment, you know, I'm more comfortable with that under one regime than I am under another regime. Well, that becomes the issue. You're not really in control of, you know, how the sands of that regime shift. I mean, maybe you vote, you know, whatever it is, but that data is
indelible. Those systems are pervasive. Meta is adding facial recognition to their
Ray-Ban glasses, right?
It is interesting to me that in, you know, a golden age of surveillance, when,
unprecedented in human history, our actions, our preferences, our communities, you know, who we date,
who we talk to, what we do for a living, how we spend our money, are surveilled and logged at a level of detail unimaginable to the Stasi, we are still pinpointing a tiny refuge: the fundamental right to private communication, a right that is recognized as such, that is necessary for a full and joyful and intellectually rigorous life, that has intimacy and the ability to exercise our opinions and dissent and blow the whistle and do journalism and all of that. That that one right is
presented as problematic, and as the barrier between stopping crime and allowing it to run rampant,
in a world where, you know, the issue is more often than not finding the needle in the haystack of noise and the haystack of data not getting access to an encrypted channel. So my stance on that
βis very, very clear, but I also think the framing of the problem needs to be shifted a little bit.β
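Her point about a "not actually random random number generator" can be sketched in a few lines. This is a hypothetical toy for illustration, not Signal's protocol or any real cipher: a toy XOR stream cipher keyed from a seeded PRNG. The names and the seed are invented; the point is that anyone who holds the seed, friend or foe, regenerates the same key.

```python
# Toy sketch of a back-doored key generator: the "randomness" is seeded,
# so whoever knows the seed can rebuild every key. Hypothetical only;
# real systems use vetted primitives, but the failure mode is the same.
import random

SECRET_SEED = 1337  # the back door: anyone who learns this owns every key


def backdoored_keygen(seed: int = SECRET_SEED) -> bytes:
    """Derive a 32-byte key from a seeded PRNG (deterministic, NOT cryptographic)."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(32))


def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher: XOR the data with the repeating key (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))


plaintext = b"meet me at noon"
ciphertext = xor_cipher(plaintext, backdoored_keygen())

# The "good guys" decrypt because they know the seed...
recovered = xor_cipher(ciphertext, backdoored_keygen(SECRET_SEED))
assert recovered == plaintext
# ...but so does anyone else who learns it: the break works for everyone.
```

The seed makes the generator fully deterministic, which is exactly the "works for everyone or works for no one" property she describes: there is no mathematical way to make the predictability available only to one party.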
Yeah, my Pivot cohost said something that really struck me. She said that people have the right to have secrets, and it really struck me. And the smartest people I know that also understand tech all use Signal. And I realized how promiscuous and careless I've been with my own data. I thought, what I do is just not that interesting. And most recently, when I hear the Trump administration talking about assembling lists of people who are vocal, you know, pretty outspoken against the Trump administration, I'm like, wow, I spoke too soon. And you have advised the government, I know you advised Lina Khan. If you were to advise the administration or the FTC, maybe under a different administration, what would be the most thoughtful regulation as it relates to privacy and encryption or AI? You know, kind of magic wand time. What do you think is most needed from our governments right now?
Yeah, this is a bit of a tricky question for me, because I've been out of the policy bubble for a while. I do think, you know, something as simple as meaningful consent, and by which I do not mean just a bunch of click-wrap and cookie banners, around whether or not a given company or institution gets to create data about us at all, not what they do with our data, but whether they have the right to tell my story, to know about me, would go a long way. Of course, that would wreck an entire logic of the tech business model, but I do think the fundamental thing that needs to be done, however the regulatory paintbrush would paint this, is to question and then take back the authority to define who we are from a handful of companies that have naturalized their right to sort us and order us and tell us our place in the world. That's a bit of a philosophical answer, but I do think that's the core issue: the authority we've given tech companies, who create data for advertisers, to sort and order our world and tell our stories for us. I'm just curious what you thought of the Ring Super Bowl ad. Oh my god, oh my god. I mean, I didn't expect it to become so flagrant so quickly, I guess, and seeing that, I was like, who are they selling this to? Is it people who would install this, or is it the government contracts, who recognize exactly what this is selling
and want to sign up for data access? Like, I certainly wasn't the core demographic it was aimed at, but it also felt like there was a tertiary market that was actually being addressed that wasn't, you know, eager doorbell owners. I'm going to ask a market question, and my guess is you're going to tell me it's not your lane, but I just want to remind you that's never stopped us from opining. We'll talk on all manner of topics we have no domain expertise in. In the markets, there's been a meltdown around SaaS companies from a valuation standpoint. You essentially work for or run a software company. I don't know if you think of it that way, but at the end of the day, I would imagine it's code. I work for a software company. There you go. So there's been an enormous destruction of value among SaaS companies, the belief being that AI is going to come in and kick the crap out of these guys or make them obsolete. Do you have any initial thoughts on the viability of some of these software companies, where, you know, some of them lost 40, 60, 70% of their value? I think ultimately, when you're providing
enterprise software, particularly to highly regulated industries, it needs to be interoperable with legacy equipment. Even if you don't like that legacy equipment, you know, there's a superstructure there. There's a foundation. It needs to work with the data that you have, even if that data isn't great; that data needs to be clean and fungible. You need to be able to account for the different determinations that are made depending on what kind of model you hook in there, which might not be possible, particularly in financial services and other industries with high compliance burdens. You need to have, you know, often, human oversight that is personally liable or accountable for different decisions. So, you know, I do anticipate that AI, in some form or fashion, will be integrated, will have impacts here. But fundamentally, this is not a magic wand, right? There's a lot of legacy infrastructure, regulatory burdens, and labor processes and modes of work that need to be accounted for. And I don't see SaaS software going away anytime soon, and I don't see AI doing anything to really erase those other considerations, right? I think predictions of their demise are a bit self-interested and, you know, far premature. And last question, Meredith. It strikes me that with young people, at least when I see their actual behavior, there's some consumer dissonance: people talking a big game about privacy, and then I see people basically telling the world where they are, what they're doing, and who they're doing it with.
And it strikes me that if Uber were ever to get hacked, even a thin layer of AI on top could basically tell you who's having affairs, terminating pregnancies, HIV status. It just wouldn't be that difficult to know everything about someone with just their Uber data. Do you see the same dissonance I see, that consumers have just decided to trade off massive privacy for utility, and do you have a message for them? I do see some of that. I would shift it a little bit. I think ultimately humans want to be loved and they want to be included. You know, even when we talk about Signal and privacy, we're not talking about a vacuum. It's not Meredith by myself with none of my thoughts escaping the anechoic chamber of my, you know, meaning-making. I am using Signal to share what I think with other people, because I am a human, and communication maps to human relationships and the desire to be connected and to be included, et cetera, et cetera. So I think we're in a world where,
you know, ultimately, we will opt in as human beings. I use these services too, because I want to go to the party. I want to see what people are doing. You know, I've got to get somewhere. I want to participate in life while I'm living it, as do, I think, most people, right? So the ways to do that are things we're going to do, and I don't think they represent actual choices about where we feel comfortable or uncomfortable with our data, whatever our data might be, right? We don't really have access to it. We know we don't want someone to share our mean DMs with our friend. We know we don't want, you know, our health data leaking to our insurance in ways that would harm us. But that's also a place where we don't have that much control. And in the meantime, we've got to get to work. We want to see what our friend posted. We want to be part of the popular people. And the ways of doing that have been slowly, you know, we can say colonized, or sort of, you know, instrumented, by these tech services that advertise convenience, advertise connection, advertise ease, and then below the surface have sort of hollowed out our privacy and our ability to, you know, define ourselves and our place in the world. So I would say what we're seeing is a natural human inclination: we use what we can to be together, to connect with each other, to participate in life. Those services have themselves, I think, in some sense, betrayed us structurally.
And that doesn't mean we don't care about privacy. That means a meaningful choice around what it would take to care about privacy has not really been given to us. You know, we do see the number of people using Signal going up and up and up. We do see people's understanding of why privacy is so important grow at a personal level, when they see people's social media posts being used at the border, when they see, you know, these collateral consequences that are coming home. I think the issue then is, okay, what do we do about it? And you can't say, well, the choice is never to communicate with your friends, because that's simply unrealistic and anti-human. But you should use Signal.
I'm not exaggerating, and this is my final plug: the smartest people I know, and the people who understand technology the most, who have domain expertise around technology, that Venn overlap, they all use Signal. It's almost like a badge of, like, I get it. You know, anyways. But my favorite quote from this is, people want to be loved and included. Meredith Whittaker is the president of the Signal Foundation and a leading voice on AI policy. She co-founded the AI Now Institute at NYU, advised FTC Chair Lina Khan, and was named one of TIME's 100 most influential people in AI. She joins us from New York. Meredith, very much appreciate your time and your good work. And I meant what I said: you're the bright, well-lit, clean part of the AI technology bookstore. Let's put Meredith Whittaker in charge. Let's just, let's consolidate all of it. I'll go raise 11 trillion dollars, buy all of these companies, and put you in charge. Deal. Is it a deal, Scott? It's a deal. Yeah, look forward to working with you. And thank you for having me on. [Music] Algebra of happiness: a hack for young dads. It is striking to me.
How selfish kids can be. I mean, I feel like I'm essentially a credit card that occasionally gets to watch a football match with them. And let me just give you a hack. If you're a dad like me who thinks that you're going to have all these Hallmark moments with your child, you'll have some of those. But for the most part, it's going to be mostly a one-way relationship. And I'm not saying it's not amazing. But the hack I have implemented, and it's helped me a lot, is this: my favorite title (I've been a founder, you know, all these cool titles, CEO or whatever), my favorite title in the world is Dad. Every time my kids call me, or say, hi, Dad, or they call out, Dad, or, you know, I love you, Dad, every time I hear the word Dad, I'm like one of those dogs that hears the word walk. And I've trained myself to just love that term. It's the most important term in my life.
And it just, it's more dopamine for me than anything, when these two things that kind of look, smell, and feel like me call me Dad. And what I've decided, what I started believing and training myself to believe five years ago, is this: when my kids are awful, you know, they give me a hard time, or they come home and expectorate their emotions, or they're unreasonable, or they slam their door. And what you'll find is, generally speaking, your kids don't behave that way outside of the house. If you're like 90% of us, you're going to find that outside of the house, your kids are pretty reasonable, pretty good citizens, pretty polite. And at home, they're fucking terrorists, assessing the household for vulnerability so they can strike when you're at your weakest. Now, why do they do that? Because they're processing, they're emoting, and they know what they can do with you, because they know you are there unconditionally. They know you love them unconditionally. Why? Because you're their dad. And so what I have done, and it's been a real unlock for me, is that when my kids say something inconsiderate or even mean to me, or are disrespectful or aren't kind, and I'm not saying I let them roll right over me, I assume they're saying one thing to me. They're saying, "Dad." This episode was produced by Jennifer Sanchez and Laura Jenaire. Cammy Reak is our social producer. Bianca Rosario and Maris are our video editors. And Drew Burrows is our technical director. Thank you for listening to the Prof G Pod from Prof G Media.


