The AI Daily Brief: Artificial Intelligence News and Analysis

A Guy Used AI to Cure His Dog's Cancer*

11h ago · 28:27 · 5,448 words

The AI discourse is absolutely frenetic right now — everything from Karpathy's misinterpreted jobs visualization to a viral dog cancer cure story that's both less and more than it seems. NLW...

Transcript


Today on the AI Daily Brief, all about that guy who used AI to cure his dog's cancer, and what it says about the discourse in AI's second moment. Before that, on the headlines, a preview of Nvidia's GTC.

The AI Daily Brief is a daily podcast and video about the most important news and discussions

in AI. First of all, thank you to today's sponsors: KPMG, AIUC, and PromptQL. To get an ad-free version of the show, go to patreon.com/aidailybrief, or you can subscribe on Apple Podcasts. To learn about sponsoring the show, send us a note at [email protected], and while you're at aidailybrief.ai, you can find out about all the various things

going on in this ecosystem. The big one this week is, of course, Agent Madness. It's a March Madness-style bracket where we will be having live human and agentic voting on the coolest things that you have vibe coded and built this year. In addition to bragging rights, I will feature these agents on the show.

So if you're interested in that, check out AgentMadness.ai. Currently, submissions are slated to close on March 18th, that is, Wednesday of this week. So again, get on over to AgentMadness.ai. It is a big week for Nvidia, as their GTC developer conference kicks off in San Jose. CEO Jensen Huang was scheduled to deliver his keynote on Monday morning, so we'll

likely know more by the time this episode goes out.

In the lead-up to the event, much of the speculation was around a new chip system developed in collaboration with Groq. That is, Groq with a Q, not Grok with a K; Groq is the one that is not an Elon Musk company. Nvidia acquired the chipmaking startup in December and is expected to announce the first collaborative product this week.

The Information described the new product as integrating Groq's language processing chips into Nvidia's rack-scale servers. If that's the case, this will be Nvidia's first attempt to directly address inference demand. Until now, Nvidia's chips have been world-leading in AI training, but haven't been particularly

focused on efficient inference. That's where Groq steps in, delivering a chip tailored exclusively to inference workloads. Nvidia is expected to announce OpenAI as a buyer of the new chip. Sources have said that production has been ramping up at Samsung's chip foundry, and mass production is expected to begin in the second half of the year.

Notably, this will be the first time Nvidia has manufactured an AI chip outside of TSMC, potentially diversifying supply chains out of Taiwan. The new servers also use Intel CPUs rather than Nvidia CPUs, according to sources, which suggests that Nvidia's chips don't integrate well with Groq chips at this stage. The sources added that multiple generations of hardware are being planned, with the potential

to build Groq's technology into Nvidia's Feynman GPUs, which are the next generation following Rubin later this year. Outside of product releases, Nvidia's neocloud partners are stepping up operations. The Information reports that Nscale is in negotiations to acquire a huge data center site in West Virginia.

The site has cleared regulatory hurdles and is targeting two gigawatts of capacity by 2027. Now, the deal is a little unusual for a neocloud provider, which would typically have rented data centers in the past. It would also immediately make UK-based Nscale a major player in the US market as they move

towards an IPO. New documents surfaced by The Information said that the acquisition would triple Nscale's revenue projections to $30 billion for 2027.

They are reportedly in talks to rent the capacity to ByteDance, but could also rent their servers back to Nvidia. Writes Moor Insights & Strategy CEO and chief analyst Patrick Moorhead: Nvidia is no longer a chip company. As GTC 26 opens, the company plans to present itself as a full-stack, heterogeneous AI infrastructure

platform, spanning training, prefill and decode inference, and agent orchestration. Next up, while many software CEOs have been downplaying the AI disruption risks to their companies this year, SEC filings are telling a different story. So far this year, 27 firms have listed AI agents as a material risk to their business model, up from just seven this time last year.
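As a side note, this kind of disclosure tally is easy to reproduce in spirit. Here is a rough sketch of the approach; the filing snippets are invented stand-ins, and a real version would pull risk-factor text from SEC EDGAR full-text search rather than hardcoded strings:

```python
import re

# Invented stand-in filings: (year, filer, risk-factor excerpt).
filings = [
    ("2025", "FirmA", "AI agents may reduce reliance on our applications."),
    ("2025", "FirmB", "Competition from agentic AI poses a material risk."),
    ("2024", "FirmC", "General economic conditions may affect results."),
]

# Match the agent-related language the episode describes.
pattern = re.compile(r"\b(ai agents?|agentic ai)\b", re.IGNORECASE)

# Count filers per year whose risk factors mention agents.
counts: dict[str, int] = {}
for year, firm, text in filings:
    if pattern.search(text):
        counts[year] = counts.get(year, 0) + 1

print(counts)  # {'2025': 2}
```

The interesting signal, as the episode notes, is the year-over-year delta in that count, not any single filer's boilerplate.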

The list of companies warning about agents includes Figma, Workday, and HubSpot, whose CEOs have all recently dismissed concerns.

During their most recent earnings call, Figma CEO Dylan Field said, "I think it is the case that humans will continue to use software and increasingly agents will too, and I'm excited about that."

However, he added, "I think right now, if you're willing to hand off mission-critical work to agents and just let them do it unsupervised, you're a very brave person." Meanwhile, Figma's 10-K filing released the same day acknowledged that agentic AI may, quote, "change how people access and interact with digital products in ways that reduce reliance on traditional software applications." Now, keep in mind, SEC filings should not be taken too literally.

Companies are required to discuss any material risk to their business, which often leads to disclosures of fanciful or unlikely risks. Still, while individual disclosures don't tell us all that much, the volume is another signal that we've moved past the tipping point on agents. The idea that agents were capable of disrupting SaaS barely registered in the first half

of last year, and yet disclosure volume rapidly increased in the second half and in the beginning of this year as the technology became more viable. If nothing else, the shift means software executives are taking the threat of disruption

more seriously, or at least their legal departments are.

Next up, ByteDance has paused the global launch of their cutting-edge video model due to copyright disputes.

The Information reports the global release of Seedance 2.0 has been "mothballed due to a series of copyright disputes with Hollywood studios." Seedance 2.0 was released in China last month, gathering a huge online reaction. You might recall the viral clip of Tom Cruise and Brad Pitt in a fistfight, which demonstrated incredibly high-fidelity replication of real-world actors. The new model led to outrage in Hollywood, with companies including Disney, Warner Brothers,

Paramount, and Netflix sending cease-and-desist notices to ByteDance. Motion Picture Association CEO Charles Rivkin said in a statement at the time, Seedance 2.0 has engaged in unauthorized use of US copyrighted works on a massive scale. ByteDance had planned to make the model available globally in mid-March. The plan included API access through their cloud platform BytePlus, as well as a new consumer

app designed for a foreign audience. Those plans are now reportedly on hold. Chinese users, meanwhile, are reporting the model as far more tightly controlled than it was at launch, to the point of rejecting prompts with no relation to copyrighted content. Enterprise customers have complained that model access is limited to Chinese companies with

no intention of distributing content internationally.

One source said they'd been unable to negotiate terms without committing to spending around $1.5 million on the model.

Interestingly, it seems like the major hold-up is not so much about implementing guardrails, but instead about refining them so that they don't block too much unrelated content. We've seen this with OpenAI's release of Sora 2 as well. While it is relatively straightforward to block copyrighted content, doing so without frustrating the user with too many refused prompts is a much more difficult engineering problem.

And speaking of difficult engineering problems, a new AI startup led by former Anthropic researchers is raising money to push the frontier of AI-enhanced scientific research. The new company, called Mirandil, is in talks to raise $175 million at a billion-dollar valuation. And if successful, the round would make Mirandil the latest AI startup to establish unicorn

status in their seed round. The company is led by former Anthropic researchers Betham Nishabba and Harsh Meta, who spent their time at Anthropic working on things like long-horizon scientific reasoning with AI agents and automated AI research.

Both founders also have experience at Google. Now, exactly what the company plans to do is not known yet, but sources say the new company aims to conduct AI-enhanced scientific research in fields including biology and materials science. This area of AI research is quickly gathering interest and investment dollars, as multiple new labs focus on AI for science.

I would expect this to be a trend that continues throughout the year. Speaking of Google, Google Maps is getting an AI twist with a new conversational interface. The new feature, called Ask Maps, allows users to tap into a Gemini-powered chatbot to help them navigate the world. The feature is designed to answer questions about landmarks and help schedule travel.

Google gave some practical examples, like being able to ask for a nearby location to charge your phone, or find a public tennis court with lights for an evening match. The feature can also help with trip planning, with Google offering the example of building a multi-stop trip to the Grand Canyon. Writes Google:

Previously, finding this information meant lots of research and sifting through reviews, but now you can just tap the Ask Maps button and get your questions answered conversationally, and with a customized map to help you visualize your options. The feature integrates with Gemini's memory, so if you ask Maps for a restaurant recommendation, it can tap into what Gemini already knows about your preferences.

Google is also leveraging Gemini to launch a new visualization mode for navigation in Maps. The update adds a 3D view that depicts buildings, overpasses, and surrounding terrain. Once again, Google is flexing its multimodality and the integration of its entire ecosystem. Lastly today, sort of a bridge topic to our main episode: ServiceNow CEO Bill McDermott has warned that AI could send unemployment soaring above 30%

for young professionals. In an interview with CNBC, McDermott said that unemployment for college graduates could "easily" go into the mid-30s in the next couple of years. "So much of the work is going to be done by agents," he continued, "so it's going to be challenging for young people to differentiate themselves in the corporate environment."

Now, according to data from the Federal Reserve, unemployment for recent college graduates currently stands at 5.6%, which is far lower than the 7.8% unemployment rate for young people without a college degree. However, 42.5% of college graduates are classified as underemployed, meaning they don't have enough work or are working in roles that don't require a college degree.

This is the highest level of unemployment for college grads since 2020. Computer science majors have among the highest unemployment rates at 7%, but their underemployment rate is relatively low at 19.1% compared to other majors. Now, just why this type of discourse is so potent right now is in fact the topic of our main episode, so with that, we will close the headlines and move on over to the main episode.

Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real decision point. Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster? KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build, Buy, or Borrow Decision, does a great job cutting through the noise with a practical framework

to help you choose based on value, risk, and readiness, and how to scale agents with the right trust, governance, and orchestration foundation.

Don't lock in the wrong model. You can download the paper right now at www.kpmg.us/navigate. With the emergence of AI code generation in 2022, Nvidia master inventor and Harvard engineer Sid Pardeshi took a contrarian stance:

Inference-time compute and agent orchestration, not pre-training, would be the key to unlocking high-quality, AI-driven software development in the enterprise.

He believed the real breakthrough wasn't in how fast AI could generate code, but in how deeply it could reason to build enterprise-grade applications. While the rest of the world focused on copilots, he architected something fundamentally different: Blitzy, the first autonomous software development platform leveraging thousands of agents, purpose-built for enterprise-scale codebases.

Fortune 500 leaders are unlocking 5x engineering velocity and delivering months of engineering work in a matter of days with Blitzy. Transform the way you develop software. Discover how at blitzy.com. That's BLITZY.com. There's a new standard that I think is going to matter a lot for the enterprise AI agent

space. It's called AIUC-1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party.

One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack.

This is the kind of thing that unlocks enterprise adoption. When a company building on ElevenLabs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation. Go to AIUC.com to learn about the world's first standard for AI agents. That's AIUC.com.

If you're an operator, your day is a nonstop stream of decisions, and most of them require you to look at the data. You don't need another dashboard.

You need answers you can trust, fast, but the bottleneck is always the same.

The data isn't ready. It's scattered. It's messy. Definitions aren't clear. You're waiting on your data team, or waiting on domain experts for clarification and confirmation. That's the bottleneck today's sponsor, PromptQL, is built to break. PromptQL is a trusted AI analyst for high-frequency decision-making. It connects across warehouses, databases, SaaS, and internal APIs. No massive data prep or centralization required.

It's built for multiplayer input. Teammates can jump into a thread, correct assumptions, add nuance, flag edge cases. PromptQL turns everyday conversations into shared context, and if something is ambiguous, it doesn't guess. It escalates to the right expert, captures the correct logic, and gets it right next time.

That's how it delivers trust and accuracy.

Over time, PromptQL specializes to your business, like that veteran employee who just knows things. From simple what-is questions to complex what-if scenarios, you can model impact and stress-test decisions before you commit, all through a simple natural language prompt.

PromptQL, the trusted AI analyst for teams with shared context and messy data. Welcome back to the AI Daily Brief. Today's episode is nominally about this guy who used AI to cure his dog's cancer, or at least that's what everyone was talking about online. But more broadly, it's about the state of the AI discourse.

And I think that the starting question that we need to ask, taking a big step back from all of the headlines, is: what the heck is going on right now? The AI discourse out there is absolutely frenetic. You've got Bernie Sanders dropping nine-minute-long videos about x-risk, CEOs like Bill McDermott from ServiceNow dropping insanely terrifying statistics all over the mainstream

media, in this case a casual prediction that AI is going to cause recent-college-graduate unemployment over 30%. Every time a poll comes out in America, it shows increasingly negative sentiment around AI, which, who knows, maybe has something to do with all these media outlets publishing these scary predictions. But then on the flip side, you've got normal people who haven't coded before, managing teams of a dozen agents or more, doing all of this work that was never possible for them before.

The divergence, in other words, between mainstream perception and actual capability has never been higher, and yet both are in this incredibly heightened state. So what is going on? The short of it is, and this is a concept that I imagine we will end up exploring a lot in the near term:

I think that we are in AI's second moment. Obviously, in this case, I'm using AI as shorthand for generative AI, and the first moment was the ChatGPT moment at the end of 2022, beginning of 2023. This moment is the Claude Code, Opus 4.5, Codex 5.2, et cetera, moment.

And if you want to be really reductive about it, it's the AI moment and the agents moment.

At the beginning of the month, Ethan Mollick tweeted: "From an AI user perspective, the four big leaps so far in ability: one, GPT-3.5 (ChatGPT), November 2022; two, GPT-4, spring 2023; three, reasoners (starts with o1-preview, but the real deal was o3), spring 2025; four, workable agentic systems (harness plus good reasoner models), December 2025."

But really, I think the first two and the second two were each part of one thing.

And remember, in and around the first moment, we also got some really heightened, frenetic discourse. Some of you may remember May of 2023, which was the second month of this show, when Time magazine dropped an issue called "The End of Humanity," a special report on how real the risk is. So the point that I'm making is that if this really is AI's second moment, it makes sense

that the cloud of dust being kicked up around it is proportionally bigger, more heightened, and more dramatic than even the important conversations we've had in between these two moments. And to some extent, I think part of what we're experiencing is just a resurfacing of everything that came up in the wake of the first moment, with some key differences now. The first difference is that there's obviously been a huge increase in capabilities.

ChatGPT with 3.5 was amazing.

You combine that with some of the image generation capabilities of the models that were coming out around then, and people who were trying these tools absolutely felt like wizards. You didn't really have to convince most people; if they tried these tools, they realized that something big was changing. And yet even in those early days, there was still this idea of something even bigger.

The first episode that I ever had go viral, at least in the terms of a show like this on YouTube, was about an early prototype agent. We had experiments like AutoGPT and BabyAGI and GPT Engineer, which would form the seeds that would go on to be Lovable. And so two years later, as agents really come online, that big increase in capabilities

has I think proportionally heightened the discourse once again.

A second big change between the first moment and the second moment is that there are now many more people in the conversation. Around the ChatGPT moment, these tools were some of the fastest growing we'd ever seen; remember, ChatGPT got its first 100 million users in its first five weeks, beating the previous record of eight months for TikTok. But now we have literally billions of people using these tools every week; even people who don't like the tools are using the tools. So there are just far more people in the conversation. A third difference between the first moment and the second moment is higher economic stakes.

And in this case, I'm not even really talking about theoretical future job displacement things. I'm talking about right here and right now, Wall Street's interaction with SaaS companies, AI infrastructure build out deals and the private financing thereof, valuations for private companies that are building AI, et cetera, et cetera, et cetera.

Anthropic wasn't even a blip on the radar to most people then. And now it's at a $19 billion run rate, taking down industries every time it announces a new feature.

A fourth key difference between AI's first moment and second moment has nothing to do with

AI itself, but has to do with the evolution of the market between 2022 and 2026. AI is now useful as a corporate fall guy, specifically in the context of companies trying to undo overhiring in the post-COVID period. Investor Chamath Palihapitiya writes, "What if AI doesn't need to show an immediate ROI, but instead is the plausible deniability companies use to RIF 50% of the workforce they

already knew did nothing?" Number five: no matter what you think of the politics of the moment, I think it's fairly unarguable that, as a final difference between the first and second moment, this is happening in the context of generally increased political volatility.

In other words, AI isn't the only thing happening in the world; it's now interacting with things like war in Iran. There is a last difference I could point out, which is that we've now had three and a half years of the AI industry doing a completely awful job of explaining itself and talking about the future in any way that's going to be even remotely resonant to the average person. Not Boring's Packy McCormick recently tweeted, "AI is very weird for me because normally

I'd be the guy who'd argue that it's crazy we're not more excited about this miracle technology. But I completely get the negative sentiment, AI companies have clearly botched telling the story. That's a big piece of this. Telling people, we built this thing that is definitely going to take your job and hopefully

we can figure out how to give you handouts or something on the other side, or come up with even better jobs or whatever, say thank you, is clearly terrible messaging." Anyways, it's a much longer tweet, but I think that the incredibly poor messaging from the AI industry is absolutely another thing that has changed between the first and the second moment.

Not that there was good messaging around that first moment, mind you; there just hadn't been as much time for us to shoot ourselves in the foot over and over yet. The point is that right now, everything around the AI discourse is incredibly heightened. The whole conversation is at an 11 all the time, and basically has been since we all returned to work at the beginning of 2026.

There were two conversations that really demonstrated this this weekend. The first was around a weekend project from developer Andrej Karpathy that became an absolute firestorm. At 5 p.m. Eastern time on Saturday night, Kaito on X tweeted, "Five minutes ago, Andrej Karpathy just dropped karpathy/jobs."

He scraped every job in the US economy, 342 occupations from BLS, scored each one's AI exposure zero to 10 using an LLM, and visualized it as a treemap. If your whole job happens on a screen, you're cooked. Average score across all jobs is 5.3 out of 10: software devs 8 to 9, roofers 0 to 1, medical transcriptionists 10 out of 10, skull emoji.
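For what it's worth, the aggregation step being described is simple to sketch. Here is a minimal, hypothetical version in Python; the occupations, scores, and worker counts below are a tiny invented sample for illustration, not Karpathy's actual BLS data or his LLM's scores:

```python
# occupation -> (illustrative exposure score 0-10, workers in millions)
scores = {
    "Software developers":       (8.5, 1.6),
    "Medical transcriptionists": (10.0, 0.05),
    "Roofers":                   (0.5, 0.2),
    "Accountants":               (9.0, 1.4),
}

# Employment-weighted average exposure across the (toy) economy.
weighted_avg = (sum(s * n for s, n in scores.values())
                / sum(n for _, n in scores.values()))

# Rank occupations from most to least exposed.
ranked = sorted(scores, key=lambda k: scores[k][0], reverse=True)

print(f"employment-weighted average exposure: {weighted_avg:.1f}")
print("most exposed:", ranked[0], "| least exposed:", ranked[-1])
```

The hard part of the real project is the scoring call itself, where an LLM is asked how "digital" each occupation is; everything after that is bookkeeping like the above.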

It pointed to this link, karpathy.ai/jobs, which is the full chart. Instantly, Twitter was flooded with takes like this one from Tuky:

Siren emoji: do you understand what Karpathy just did?

He didn't write an opinion piece. He scraped every single job in America, ran it through AI, and scored how replaceable you are on a scale of 1 to 10. Not a prediction, a diagnosis. Accountants scored 9, paralegals 9, copywriters cooked; radiologists reading scans, the AI already does it faster.

The only jobs that scored lower were the ones that require you to physically touch something. In 2015, learn to code was the answer to everything; in 2025, code writes itself. "The people who listened are now the most replaceable generation in history." I guess your degree didn't prepare you for a career. Even people who aren't usually schlock merchants like that started veering into this same sort

of sensationalist territory. Chubby (@kimmonismus) writes, Karpathy is by no means interested in hyper-exaggeration.

Using AI, he concluded that out of 143 million people working in the U.S., approximately 57 million are at high to very high risk of their jobs being negatively impacted by AI. That's almost 40%. Let that sink in and consider what it means. Now, at this point, if you listen frequently, you're probably waiting for the yes-buts and the where's-the-nuance here.

Well, first of all, if you actually go read the page that Karpathy posted, which I don't think most of the people who were tweeting about it did, he has a very important caveat on the AI exposure scores. He writes, "These are rough LLM estimates, not rigorous predictions. A high score does not predict the job will disappear." Software developers scored 9 out of 10 because AI is transforming their work, but demand for software could easily grow as each developer becomes more productive.

The score does not account for demand elasticity, latent demand, regulatory barriers, or social preferences for human workers. Many high-exposure jobs will be reshaped, not replaced. Indeed, Karpathy himself was frustrated by the response. When someone on that original tweet from Kaito said, "I can't find it," Andrej responded,

"This was a Saturday morning, two-hour vibe-coded project inspired by a book-on-reading. I thought the code in data might be helpful to others to explore the BLS data set visually, or color it in different ways or with different prompts or other own visualizations." It's been wildly misinterpreted, which I should have anticipated even despite the read-me doc so I took it down.

In another tweet he wrote, "The quote-unquote exposure was scored by an LLM based on how digital the job is. This has no bearing on what actually happens to these occupations, which has to do with demand elasticity and a lot more. People are sensationalizing the visualization tool and putting words in my mouth."

Now, there was some interesting, nuanced conversation about this. Newsletter writer Stefan Schubert wrote, "Many seem to take this as a reason to believe that the overall pace of automation will be high, but I don't think that makes any sense." Even more to the point, and more insistently phrased, was Chicago Booth economist Alex Imas, who wrote, "Exposure does not mean threat of displacement.

It can literally mean the opposite. AI-exposed jobs may increase hiring and attract higher wages. It all depends on, A, elasticity of consumer demand, and, B, the number of AI-exposed tasks in a job." Anthropic's Peter McCrory added, "I agree strongly with Alex here, and my read is that

Claude usage patterns clearly point toward uneven labor market implications. Our recently introduced observed exposure measure aims to identify cases where exposure is more likely to transform into actual displacement, i.e., Claude is used in automated ways for work-related purposes on tasks that are conceptually feasible for LLMs. But no exposure measure is perfect or has monotone predictions.

And even when much of a job is automated, the remaining bottleneck tasks may ultimately increase demand for complementary human skills, even among highly exposed roles." Toronto economist Kevin Bryan said, "I bet $1,000 that from now to 2030, most quote-unquote susceptible jobs see an increased share of labor. In the models these types of charts are based on, it is explicitly not AI-can-substitute but AI-is-related.

AI is a complement too; who doesn't want to code right now, for instance."

And I think that's all true, and obviously we will continue to discuss the real, no-BS labor market implications of AI. But the point, relative to our larger conversation, is this frenetic tone to the discourse. Not helping was the fact that, literally within one minute of Kaito posting that thing about Karpathy's project, the Kobeissi Letter posted, "Breaking: Meta is planning sweeping layoffs that could affect 20% or more of the company."

Like I said, right now the conversation goes to 11. But it wasn't just the negative side of AI that was at 11. Google DeepMind's Séb Krier shared an article linked from The Australian that went hyperviral with nearly 13 million views.

Vittorio summed it up this way: This is actually insane. Tech guy in Australia adopts cancer-riddled rescue dog, months to live. Pays $3,000 to sequence her tumor DNA, feeds it to ChatGPT and AlphaFold, zero background in biology. Identifies mutated proteins, matches them to drug targets, designs a custom mRNA

cancer vaccine from scratch. Genomics professor is gobsmacked that some puppy lover did this on his own. Needs ethics approval to administer it; red tape takes longer than designing the vaccine. Three months later, finally approved. Drives 10 hours to get Rosie her f... tumor halves, coat gets glossy again, dog is alive and happy. Professor: if we can do this for a dog, why aren't we rolling this out to humans?

One man with a chatbot and $3,000 just outperformed the entire pharmaceutical discovery pipeline. We are going to cure so many diseases. I don't think people realize how good things are going to get. So here's the story.

Australian entrepreneur Paul Coiningham has a dog named Rosie. In 2024, Rosie was diagnosed with cancer that ended up being non-responsive to chemotherapy or surgery. The tumors just kept growing. When Paul turned to ChatGPT for help, it suggested that he should get Rosie's

DNA sequenced and then use Google DeepMind's AlphaFold to look for mutations that could be a target for immunotherapy. When a drug maker wouldn't provide an off-the-shelf immunotherapy treatment, Coiningham turned to Pall Thordarson, the director of the RNA Institute at the University of New South Wales. Thordarson used Rosie's DNA to develop a bespoke mRNA vaccine in less

than two months. He told the press, "This is the first time a personalized cancer vaccine has been designed for a dog. This is still at the frontier of where cancer immunotherapeutics are, and ultimately we're going to use this for helping humans. What Rosie is teaching us is that personalized medicine can be very effective and done in a time-sensitive manner with mRNA technology."
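To make the pipeline a little more concrete, here is a deliberately oversimplified sketch of just one step: filtering sequencing variants down to tumor-specific, protein-changing candidates. The variant records below are invented, and real neoantigen selection also involves expression data, MHC-binding prediction, and structural analysis (which is where AlphaFold enters), none of which is modeled here.

```python
# Invented variant calls from comparing tumor DNA against normal DNA.
variants = [
    {"gene": "KRAS",  "effect": "missense",   "tumor_only": True},
    {"gene": "TP53",  "effect": "missense",   "tumor_only": True},
    {"gene": "BRCA1", "effect": "synonymous", "tumor_only": True},   # no protein change
    {"gene": "EGFR",  "effect": "missense",   "tumor_only": False},  # also in normal DNA
]

# Keep mutations that change the protein AND appear only in the tumor,
# since only those can yield tumor-specific antigens for a vaccine to target.
candidates = [v["gene"] for v in variants
              if v["effect"] == "missense" and v["tumor_only"]]

print(candidates)  # ['KRAS', 'TP53']
```

Even this toy filter shows why the story is less "ChatGPT cured cancer" and more that each specialized step of a known pipeline has become dramatically easier to navigate.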

Now, as you can tell, there is a lot more to this process than simply prompting ChatGPT to cure cancer. And indeed, even the treatment itself wasn't entirely successful. Yes, some of Rosie's tumors have shrunk, but it would certainly be going too far to call it a cancer cure.

On top of that, it's arguably a story about how revolutionary the Nobel Prize-winning AlphaFold model is, rather than a story about ChatGPT. Pall Thordarson ended up turning to X to explain some nuances of the story. The nuances include the fact that this was less about a cure and more about buying time; the fact that it's difficult to estimate the real costs, as lots of people donated time and resources to this; and, a third nuance, that regulation of vet research and treatment

is obviously quite different than human health. But ultimately, Pall says, "In the human health space, Rosie's story demonstrates that we can democratize the process of designing a cancer vaccine. While genomic analysis and RNA production will continue to be specialized, they could turn into pure service provision, especially as automation increases. This then begs the question: do we need to overhaul the regulatory regimes with this

in mind, and can we ensure equitable access?" Now, of course, there were tons of people who were skeptical on spec when they saw the story, even before all that nuance was shared. And what's more, unsurprisingly, I personally find it a little bit refreshing to have people excited about the positive disruptive potential of AI rather than just constantly looking

at the negative. But the point is that these are still two sides of the same coin. We are in the midst of the transition into AI's second moment, and for a little while, until we all get used to the new paradigm that we're living in, it's going to be weird. What I can promise is that if you hang out around here, you will feel at least slightly less like you're taking crazy pills.

For now, that is going to do it for today's AI Daily Brief. I appreciate you listening or watching. As always, until next time, peace!
