Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
Everyone deserves access to clean, affordable
0:02
energy. Everyone. Millions
0:04
of Americans rely on propane for dependable
0:07
energy that is independent from the electric
0:09
grid. Propane is a reliable, versatile energy
0:11
produced in the United States that powers
0:13
school buses, hospitals, and our homes, with
0:16
lower carbon emissions than conventional fuels. The
0:19
path to a low-carbon future includes cleaner
0:21
energy like propane. And the production of
0:23
renewable propane, which is made from used
0:25
cooking oils and plant oils, is growing
0:27
rapidly. Propane is the
0:30
energy for everyone. Learn more at
0:32
propane.com. Hey
0:34
everybody, it's Sabrina. I'm
0:36
here to talk to you one last time this
0:38
week about the New York Times audio subscription. Sorry.
0:42
We just know that change is hard
0:44
and we want to make absolutely certain
0:46
that all of you, our lovely, incredible
0:48
audience, understand what's going on.
0:51
So, two things. First,
0:53
today's show is not as long as
0:55
it looks. This week, we're doing this
0:57
kind of unusual thing where we attach
1:00
episodes of other New York Times podcasts
1:02
right to the end of our show. So
1:04
today, that's the fabulous podcast, Hard
1:06
Fork, where hosts Kevin Roose and
1:09
Casey Newton break down the latest
1:11
tech news with smarts, humor, and
1:13
expertise. We're doing that because,
1:15
and here's the second part of all of this,
1:17
the New York Times has launched an audio subscription
1:19
this week. That subscription gets
1:21
you full access to shows like us
1:24
and Hard Fork, Ezra Klein, The
1:27
Run-Up, The Interview, and Headlines. So,
1:30
this is included for people who subscribe
1:32
to all of the New York Times,
1:35
the big bundle with news, cooking, and
1:37
games. But you can
1:39
also sign up for audio separately.
1:42
Either of these subscriptions will
1:44
allow you to log on to Apple
1:46
Podcasts or Spotify, and you'll have access
1:48
to all past shows. Bonus content,
1:50
early access, stuff like that. Reminder,
1:53
you don't have to subscribe to
1:55
keep listening to The Daily. Recent
1:58
episodes will still be... free. But
2:01
we hope you'll see it as a way to support
2:03
the show and the work we do. Okay.
2:05
Thank you for bearing with us. And with
2:08
all of these announcements, all this week, TGIF,
2:11
we just want everyone to know the deal. And
2:14
as always, thank you for listening. From
2:18
the New York Times, I'm Sabrina
2:20
Tavernise, and this is The Daily. We
2:28
are following a major breaking story out
2:31
of the Middle East. Israel says the
2:33
leader of Hamas is dead. Yahya
2:35
Sinwar, the leader of Hamas and
2:38
the architect of the October 7th
2:40
attack, was killed by Israeli forces
2:42
in Gaza. Today, the images of
2:44
Sinwar's body lying in rubble, surrounded
2:46
by Israeli troops, sent shockwaves through
2:48
the region. Sinwar's assassination,
2:50
dealing a major blow to Hamas
2:53
amid the threat of wider escalation
2:55
in this region. It was a
2:57
major victory for Israel and prompted
2:59
calls from Israeli leaders for Hamas
3:02
to surrender. This war can
3:04
end tomorrow. It can end if
3:07
Hamas lays down its arms and returns
3:09
our hostages. But
3:11
what actually happens next is unclear.
3:14
Today, my colleague Ronen
3:17
Bergman on how Israel
3:19
got its most wanted man and
3:22
what his killing means for the future of
3:24
the war. It's
3:39
Friday, October 18th. So
3:46
Ronan, we're talking to you around 3 p.m. New York
3:48
time, 10 p.m. your time. Just
3:51
a few hours ago, we started getting hints
3:54
that Israel possibly killed the leader of Hamas,
3:56
Yahya Sinwar, and just a little while ago,
3:58
Israel
4:00
announced that he was in fact killed. What
4:04
was your first thought when you heard
4:06
the news? So candidly,
4:08
I thought, Oh, here
4:10
we go again, because on the 25th of
4:12
August, I got a call
4:14
from a source who said, we believe Sinwar
4:17
is dead. And then the source called
4:19
again, said they thought it was
4:22
Sinwar, but it was not. So
4:25
I thought at the beginning, maybe it's the
4:27
same thing going on again. But then we
4:29
got the first picture. And
4:33
when you look at the picture that was
4:35
just taken from the site,
4:39
though, you know, I'm not a
4:41
forensic expert, but it
4:43
looks like
4:46
the body of the
4:49
leader of Hamas. And
4:52
an hour later, my hunch was
4:54
that it was him, though
4:56
not yet confirmed, I
4:59
thought this is a
5:01
watershed moment where the
5:03
war can end. And
5:06
maybe, maybe the hostages could come
5:08
back. This is a
5:10
critical moment where things maybe
5:13
can go for the
5:15
first time in so long for
5:17
the better. Okay, we'll get
5:19
to that. But remind us
5:22
first who Sinwar was. Sinwar
5:26
was one of
5:28
the most important people in the history of
5:31
Hamas. Yahya Sinwar was
5:33
born in a refugee camp in
5:36
Khan Younis, in the south of the
5:38
Gaza Strip. And
5:40
he was one of the young
5:43
first followers of Sheikh Ahmed Yasin,
5:46
the founder and the
5:49
leader and the spiritual compass
5:52
for Hamas, the
5:55
jihadist religious movement
5:58
he founded in the 80s. the
10:00
circles of secrecy, the intelligence community
10:02
and the leadership of the military
10:04
and later the Prime Minister's office
10:07
heard that there is a strong chance that he
10:09
is dead. It's all by
10:11
coincidence. So
10:15
basically, after a year
10:17
of trying to kill him, using
10:20
all of the technology and
10:22
intelligence that Israel has at
10:25
its fingertips, they
10:27
killed him kind of by accident
10:30
is what you're saying. Like almost, you know,
10:33
by mistake. And it was a
10:35
bunch of trainees who did it. And he wasn't even
10:37
in a tunnel like everyone assumed. He was just kind
10:39
of walking around out there. So
10:42
one of the things that we
10:44
already identified, this phenomenon, which
10:47
was discovered by Israeli intelligence
10:49
only in the later stages of the
10:52
war is that it
10:54
is very hard to spend constant
10:57
days and weeks in the tunnels. You
11:01
know, in our embeds with
11:03
the IDF tours in Gaza, I have been in
11:05
many of these tunnels. And
11:07
let me tell you, Sabrina, even
11:09
in the tunnels that are built
11:13
for Hamas leadership, and we have been
11:15
in a few, it's very
11:17
hard to stay human,
11:21
claustrophobic, small,
11:26
narrow, and
11:28
everybody needed to go
11:30
out from time to time. Sinwar
11:32
thought that the
11:35
area is free from
11:37
enemy hostiles. He was
11:39
wrong. He was killed. Now
11:43
Ronen, I understand from your reporting
11:45
recently that this was not the
11:48
first time that the IDF was at
11:50
least close to killing him. Tell
11:52
me that story. Yes,
11:54
okay. So in January of 2024,
11:57
this year, fantastic
14:00
sci-fi, inspiring motivation, and more. It's all
14:02
there in the Audible app. There are
14:05
also thousands of included titles with more
14:07
added every week, so you've always got
14:09
something new to try. There's more to
14:11
imagine when you listen on Audible. Find
14:14
out for yourself. Sign up
14:16
for a free 30-day trial at
14:18
audible.com/thedaily. How can
14:21
a microchip manufacturer keep track of
14:23
250 million control points at once?
14:26
How can technology behind animated movies
14:28
help enterprises reimagine their future? Built
14:30
for Change listeners know those answers,
14:32
and more. I'm Elise Hu. And
14:34
I'm Josh Klein. We're the hosts
14:36
of Built for Change, a podcast
14:38
from Accenture. We talk to leaders
14:40
of the world's biggest companies to
14:42
hear how they've reinvented their business
14:44
to create industry shifting impact. And
14:46
how you can too. New episodes
14:49
are coming soon, so check out
14:51
Built for Change wherever you get
14:53
your podcasts. So what are these
14:55
important documents on this computer that
14:59
they found? And what's the story that
15:02
the documents reveal? So
15:05
what is there is
15:07
the first understanding of
15:09
the decision-making process, the
15:11
preparations, the
15:14
deception, everything that Hamas
15:17
leaders were doing throughout
15:19
the last two years before the war. There
15:23
are the minutes of these meetings, 10
15:25
meetings from July 2021 until the 7th
15:27
of August, 2023. So
15:34
exactly two months before the attack,
15:37
where Hamas leaders,
15:41
Sinwar, and five other
15:44
military leaders were
15:46
talking freely because
15:49
they were sure that Israeli
15:51
intelligence wasn't listening
15:53
in to that room. So
15:56
these documents are minutes of
15:58
high-level meetings. peace
24:00
to the north with Hezbollah
24:05
and at the
24:07
end form something that could turn
24:09
the page into something better, a
24:11
better day for the
24:14
Middle East. Unfortunately, I'm
24:16
not sure that this
24:18
will happen because
24:22
Israel is now gearing
24:24
towards a
24:26
massive attack on Iran. But
24:29
Ronan, I want to understand that because Israel
24:32
has always said since October 7
24:34
that they want to eliminate Hamas
24:36
and a huge part of that
24:38
was killing the head of Hamas,
24:40
Sinwar. Israel
24:57
has gained some successes
24:59
in its war against Hezbollah. The Israeli
25:03
defense establishment is regaining its
25:05
pride for these successes
25:09
and they feel that this can go on. And
25:13
even without a clear exit
25:15
strategy, they believe they can,
25:18
some of them believe they can just win again and again.
25:21
The second is that Netanyahu
25:24
is coerced by
25:26
the extreme parts of his
25:28
coalition to continue, not to sign a
25:30
ceasefire, and
25:33
start some kind of
25:35
implementation of military rule
25:37
in Gaza. So, Israel
25:40
coming back to Gaza and we hear
25:42
parts of the coalition talking about re-establishment
25:44
of settlements in Gaza that were taken
25:47
down when Israel disengaged from the
25:49
Strip. And also,
25:52
Prime Minister Netanyahu knows that when the
25:54
war is over, officially over, so
25:56
there is some kind of an agreement, there will
25:58
be a new war. Teen
30:01
accounts are private by default, helping
30:04
to protect them from unwanted contact. And
30:07
teenagers under 16 require
30:09
parental approval to change these protections. Instagram
30:12
Teen Accounts. New
30:14
built-in protections for parents' peace
30:17
of mind. Learn
30:19
more at instagram.com/teenaccounts. Did
30:25
you know that there are everyday actions you
30:27
can take to help keep energy reliable and
30:29
electricity costs down? Try PGE's new home energy
30:32
analysis tool and get personalized tips to lower
30:34
your bill. Visit
30:36
portlandgeneral.com/takeaction and
30:38
save. Here's
30:43
what else you should know today. On
30:46
Thursday, an independent panel
30:48
reviewing the failures that led
30:50
to the attempted assassination of
30:52
former President Donald Trump in
30:54
Butler, Pennsylvania, said that agents
30:56
involved in the security planning did
30:58
not take responsibility in the lead-up to
31:01
the event, nor did they
31:03
own the failures in the aftermath. The
31:06
panel, which included former Department of
31:09
Homeland Security Secretary Janet Napolitano, called
31:12
on the Secret Service to replace
31:14
its leadership with people from
31:16
the private sector and shed
31:19
much of its historic role
31:21
investigating financial crimes to focus
31:23
almost exclusively on its protective
31:25
mission. And
31:28
federal prosecutors have charged a
31:30
man they identified as an
31:32
Indian intelligence officer with trying
31:35
to orchestrate from abroad an
31:37
assassination on U.S. soil. An
31:40
indictment unsealed in Manhattan on
31:42
Thursday said that the man,
31:44
Vikash Yadav, whom authorities
31:46
believe is currently in India, directed
31:49
the plot that targeted a New
31:51
York-based critic of the Indian government,
31:54
a Sikh lawyer and political activist. The
31:57
charges are part of an escalating response from
32:00
the U.S. and Canada to
32:02
what those governments see as
32:04
brazenly illegal conduct by India,
32:07
a long-time partner. Today's
32:13
episode was produced by Mooj
32:16
Zadie, Rob Szypko, Diana Nguyen,
32:18
and Eric Krupke. It
32:21
was edited by Paige Cowett and
32:23
M.J. Davis Lin, with help
32:25
from Chris Haxel. It
32:27
contains original music by Rowan
32:29
Niemisto and Diane Wong, and
32:32
was engineered by Chris Wood. Our
32:35
theme music is by Jim Brunberg and Ben
32:37
Landsverk of Wonderly. Special
32:39
thanks to Patrick Kingsley. Remember
32:50
to catch a new episode of
32:52
The Interview right here tomorrow. David
32:55
Marchese talks to adult film
32:57
star turned influencer Mia Khalifa.
33:00
I am so ashamed of the things that
33:03
I've said and thought about myself and allowed
33:05
others to say and jokes
33:07
that I went along with and contributed to
33:09
about myself or about other women or anything
33:12
like that. I'm extremely ashamed of that.
33:24
That's it for The Daily. I'm
33:27
Sabrina Tavernise. See you on
33:29
Monday. We
33:38
just got another email from somebody who said they thought I
33:40
was bald. I have
33:43
an apparently crazy bald energy that I bring
33:45
to this podcast. What do you think is
33:48
bald seeming about you? I
33:50
think for me, they think of me as
33:52
a wacky sidekick, which is a bald energy.
33:56
I think so. I don't think of...
33:58
I don't associate wacky and bald. Because
34:00
I'm thinking Jeff Bezos. I'm
34:02
like, I know a lot of like
34:04
very hardcore bald... Interesting. So do you
34:06
think that maybe people think that I
34:08
sound like a sort of Titan of
34:10
industry plutocrat? I would not
34:13
say that the energy you're giving is plutocrat
34:15
energy. But really? Because I
34:17
just fired 6,000 people to show
34:19
that I could. You did
34:21
order me to come to the office today. I
34:24
did. I said there's a return to office in effect immediately.
34:27
No questions. I'm
34:34
Kevin Roose, a tech columnist from the
34:36
New York Times. I'm Casey Newton from
34:38
Platformer. And this is Hard Fork. This
34:40
week: are we reaching the AI endgame?
34:42
A new essay from the CEO of
34:44
Anthropic has Silicon Valley talking. Then Uber
34:46
CEO Dara Khosrowshahi joins us
34:48
to discuss his company's new partnership with
34:50
Waymo and the future of autonomous vehicles.
34:52
And finally, internal TikTok documents tell us
34:54
exactly how many videos you need to
34:56
watch to get hooked. And so I
34:58
did. Very brave. God help me. Well,
35:10
Kevin, the AI race continues to accelerate,
35:12
and this week the news is coming
35:14
from Anthropic. Now, last year you
35:16
actually spent some time inside this company,
35:18
and you called it the white-hot
35:21
center of AI doomerism. Yes. Well, the
35:23
headline of my piece called it the
35:25
white-hot center of AI doomerism. Just
35:27
want to clarify: blame the headline. Well,
35:29
you know, reporters don't often write our
35:32
own headlines. So I just feel the
35:34
need to clarify that. Fair enough. But
35:36
the story does talk about how many
35:38
of the people you met inside this
35:40
company seemed strangely pessimistic
35:42
about what they were building. Yeah, it
35:45
was a very interesting reporting experience because
35:47
I got invited to spend you know
35:49
several weeks just basically embedded at Anthropic
35:51
as they were gearing up to launch
35:54
an update of their chatbot Claude and
35:57
I sort of expected, you know, they would
35:59
go in and try to impress me with how
36:01
great Claude was and talk about all the amazing
36:03
things it would allow people to do. And then
36:05
I got there and it was like, all they
36:07
wanted to do was talk about how scared they
36:09
were of AI and of releasing these systems into
36:11
the wild. I compared it in the piece to
36:13
like being a restaurant critic who shows up at
36:15
like a buzzy new restaurant and all
36:17
anyone wants to talk about is food poisoning. And
36:20
so for this reason, I was very
36:22
interested to see over the past
36:25
week, the CEO of Anthropic, Dario
36:27
Amodei, write a 13,000-word
36:30
essay about his vision of the
36:32
future. And in
36:35
this essay, he says that he is
36:37
not an AI doomer, does not think
36:39
of himself as one, but actually thinks
36:42
that the future is quite bright and
36:44
might be arriving very quickly. And
36:46
then shortly after that, Kevin, the company
36:49
put out a new policy, which they
36:51
call a responsible scaling policy that I
36:53
thought had some interesting things to say
36:55
about ways to safely build AI systems.
36:58
So we want to talk about this today for
37:00
a couple reasons. One is
37:02
that AI CEOs have kept telling
37:04
us recently that major changes are
37:06
right around the corner. Sam Altman
37:08
recently had a blog post where
37:10
he said that an artificial super
37:12
intelligence could be just a few
37:15
thousand days away. And now
37:17
here Amodei is saying that AGI could
37:19
arrive in 2026, which
37:22
check your calendar, Kevin, that is in 14 months. Certainly
37:25
there is a case that this is just hype.
37:28
But even so, there are some very wild
37:30
claims in here that I do think deserve
37:32
broader attention. The second reason
37:34
that we want to talk about this
37:36
today is that anthropic is really the
37:38
flip side to a story that we've
37:40
been talking about for the past year
37:42
here, which is what happened to OpenAI
37:45
during and after Sam Altman's temporary
37:47
firing as CEO. And Anthropic was started
37:49
by a group of people who left
37:51
OpenAI primarily over safety concerns. And
37:53
recently, several more members of OpenAI's
37:55
founding team and their safety research teams
37:57
have gone over to Anthropic. And so
37:59
in a way, Kevin, Anthropic is an
38:02
answer to the question of what would
38:04
have happened if OpenAI's executive team hadn't
38:06
spent the past few years falling apart?
38:09
And while they are still the underdog compared to
38:11
OpenAI, is there a chance
38:13
that Anthropic is the team that builds
38:15
AGI first? So that's what we want
38:17
to talk about today, but I want
38:19
to start by just talking about this
38:21
essay. Kevin, what did Dario Amodei have
38:24
to say in his essay, Machines of
38:26
Loving Grace? Yeah, so the first thing
38:28
that struck me is he's clearly reacting
38:30
to this perception, which I
38:32
may have helped create through my story last
38:34
year that sort of he and anthropic are
38:36
just doomers, right? That they are just a
38:39
company that goes around warning about how badly
38:41
AI could go if we're not careful. And
38:44
what he says in this essay that I thought was
38:46
really interesting and important is, we're
38:49
going to keep talking about the risks of
38:51
AI. This is not him saying, I don't
38:53
think this stuff is risky. I've been taken
38:56
out of context and I'm actually an AI
38:58
optimist. What he says is it's important to
39:00
have both, right? You can't just be going
39:02
around warning about the doom all the time.
39:04
You also have to have a positive vision
39:06
for the future of AI, because
39:10
that's not only what inspires and motivates
39:12
people, but it matters what
39:14
we do. I thought that
39:16
was actually the most important thing that he did
39:18
in this essay was he basically said, look,
39:21
this could go well or it could go
39:23
badly. And whether it
39:25
goes well or badly is up to us.
39:27
This is not some inevitable force. Sometimes people
39:29
in the AI industry, they have a habit
39:31
of talking about AI as if it's just
39:33
kind of this disembodied force that is just
39:35
going to happen to us. Inevitably.
39:37
Yes, and we either have to sort of like
39:39
get on the train or get run over by
39:41
the train. And what Dario says is
39:43
actually different. He says, here's a
39:45
vision for how this could go well, but
39:47
it's going to take some work to get there. It
39:50
also made me realize that for the past
39:52
couple of years, I have heard much more
39:54
about how AI could go wrong than how
39:56
it could go right from the AI CEOs.
39:59
As much as these guys get knocked for
40:01
endlessly hyping up their products, they also have,
40:03
I think to their credit, spent a lot
40:05
of time trying to explain to people that
40:08
this stuff is risky. And so there was
40:10
something almost counterintuitive about Dario coming
40:12
out and saying, wait, let's get really specific
40:14
about how this could go well. Totally. So
40:16
I think the first thing that's worth pulling
40:19
out from this essay is the timelines. Because
40:21
as you said, Dario Amodei is claiming that
40:23
powerful AI, which is sort of his term,
40:25
he doesn't like AGI, he thinks it sounds
40:28
like too sci-fi, but powerful AI,
40:30
which he sort of defines as like an
40:33
AI that would be smarter than a Nobel
40:35
Prize winner in basically any field and
40:37
that it could basically control tools, go
40:40
do a bunch of tasks simultaneously. He
40:42
calls this sort of a country of
40:44
geniuses in a data center. That's sort
40:46
of his definition of powerful AI. And
40:49
he thinks that it could arrive as soon as 2026. I
40:52
think there's a tendency sometimes to be cynical
40:54
about people with short timelines like these, like,
40:57
oh, these guys are just saying this stuff
40:59
is going to arrive so soon because they
41:01
need to raise a bunch of money for
41:03
their AI companies. And, you know, maybe that
41:05
is a factor. But I
41:07
truly believe that at least Dario
41:09
Amodei is sincere and serious about
41:12
this. This is not a
41:14
drill to him. And Anthropic is actually
41:16
making plans, scaling teams, and building products
41:18
as if we are headed into a
41:21
radically different world very soon, like within the
41:23
next presidential term. Yeah. And look, Anthropic is
41:25
raising money right now. And that does give
41:27
Dario motivation to get out there in the
41:30
market and start talking about curing cancer and
41:32
all these amazing things that he thinks that
41:34
that AI can do. At the
41:36
same time, you know, I think
41:39
that we're in a world where the discourse
41:41
has been a little bit poisoned by folks
41:43
like Elon Musk, who are constantly going
41:46
out into public, making bold claims about things that
41:48
they say are going to happen, you know, within
41:50
six months or a year, and then truly just
41:52
never happen. And our understanding of Dario based on
41:54
our own conversations with him and with people who
41:56
work with him is that he is not that
41:59
kind of person. And this is not somebody who
42:01
lets his mouth run away with him. When
42:03
he says that he thinks this stuff could
42:06
start to arrive in 14 months, I actually
42:08
do give it some credibility. Yeah. And, you
42:10
know, you can argue with the time scales
42:12
and plenty of smart people disagree about this.
42:14
But I think it's worth taking this seriously, because
42:17
this is the head of one of the leading
42:19
AI labs, sort of giving you his thoughts on
42:21
not just what AI is going to change about
42:23
the world, but when that's going to happen. And
42:26
what I liked about this essay was that it
42:28
wasn't trying to sell me a vision of a
42:30
glorious AI future. Right. Dario says, you know, all
42:32
or some or none of this might come to
42:35
pass. But it was basically a thought
42:37
experiment. He has this idea in the
42:39
essay about what he calls the compressed
42:41
21st century. He
42:44
basically says, what if all AI
42:46
does is allow
42:48
us to make 100 years worth
42:50
of progress in the next 10
42:53
years in fields like biology? What
42:55
would that change about the world? And I thought that
42:57
was a really interesting way to frame it. Give
43:00
us some examples, Kevin, of what Dario says
43:02
might happen in this compressed 21st century. So
43:05
what he says in this essay
43:07
is that if we do get
43:09
what he calls powerful AI relatively
43:11
soon, that in the sort of
43:13
decade that follows that, we would
43:15
expect things like the prevention and
43:17
treatment of basically all natural infectious
43:20
disease, the elimination of most
43:22
types of cancer, very sort of
43:24
good embryo screening for genetic diseases
43:26
that would make it so that
43:28
more people didn't die of these
43:30
sort of hereditary things. He
43:33
talks about there being improved treatment for
43:35
mental health and other ailments. Yeah, I
43:37
mean, and a lot of this comes
43:39
down to just understanding the human brain,
43:41
which is an area where we still
43:43
have a lot to learn. And the
43:45
idea is if you have what he
43:47
calls this country of geniuses that's just
43:49
operating on a server somewhere, and they
43:52
are able to talk to each other,
43:54
to dream, to suggest ideas, to give
43:56
guidance to human scientists in labs to
43:58
run experiments, then you have this massive
44:00
compression effect, and all of a sudden
44:02
you get all of these benefits really
44:04
soon. And you know, obviously the headline
44:06
grabbing stuff is like, you know, Dario
44:08
thinks we're going to cure all cancer
44:10
and we're going to cure Alzheimer's disease.
44:13
Obviously those are huge, but there's also
44:15
kind of the more mundane stuff like
44:17
do you struggle with anxiety? Do you
44:19
have other mental health issues? Like are
44:21
you like mildly depressed? It's possible that
44:23
we will understand the neural circuitry
44:25
there and be able to develop treatments
44:27
that would just lead to a general
44:29
rise in happiness. And that really struck
44:31
me. Yeah. And it sounds when you
44:34
just describe it that way, it sounds
44:36
sort of utopian and crazy, but what
44:38
he points out and what I actually
44:40
find compelling is like most scientific progress
44:42
does not happen in a straight line,
44:44
right? You have these kind of moments
44:46
where there's a breakthrough that enables a
44:48
bunch of other breakthroughs. And
44:50
we've seen stuff like this already happen
44:52
with AI, like with AlphaFold, which won
44:54
the freaking Nobel Prize this year in
44:57
chemistry, where you don't just have a
44:59
cure for one specific disease, but you
45:01
have a way of potentially discovering cures
45:03
for many kinds of diseases all at
45:05
once. There's a part of an essay
45:07
that I really liked where he points
45:09
out that CRISPR was
45:11
something that we could have invented long
45:13
before we actually did. But essentially, no
45:15
one had noticed the things they needed
45:17
to notice in order to make it
45:19
a reality. And he posits that there
45:21
are probably hundreds of other things like
45:23
this right now that just no one
45:25
has noticed yet. And if you had
45:27
a bunch of AI agents
45:29
working together in a room and they were
45:31
sufficiently intelligent, they would just notice those things
45:34
and we'd be off to the races. Right.
45:36
And what I liked about this section of
45:38
the essay was that it didn't try to
45:40
claim that there was some, you know,
45:43
completely novel thing that would
45:45
be required to result
45:47
in the changed world that he envisions. Right.
45:49
All that would need to happen for
45:52
society to look radically different 10
45:54
or 15 years from now in Dario's
45:57
mind is for that sort of base
45:59
rate of discovery to accelerate rapidly due
46:01
to AI. Yeah. Now let's take a
46:03
moment to acknowledge folks in the audience
46:06
who might be saying, oh my gosh,
46:08
will these guys stop it with the
46:10
AI hype? They're accepting every premise that
46:12
these AI CEOs will just shovel it.
46:15
They can't get enough and it's irresponsible.
46:17
These are just stochastic parrots, Kevin. They
46:19
don't know anything. It's not intelligence and
46:21
it's never going to get any better
46:23
than it is today. And I just
46:26
want to say I hear you and
46:28
I see you and our email address
46:30
is ezrakleinshow@nytimes.com.
46:32
But here's
46:35
the thing. You can look at
46:37
the state of the art right now.
46:39
And if you just extrapolate what is
46:41
happening in 2024 and you assume some
46:44
rate of progress beyond where we currently
46:46
are, it seems likely to
46:48
me that we do get into a world where
46:50
you do have these sort of simulated PhD students
46:52
or maybe simulated super geniuses and they are able
46:55
to realize a lot of these kinds of statistics.
46:57
Now maybe it doesn't happen in five, 10 years.
46:59
Maybe it takes a lot longer than that. But
47:01
I just wanted to underline like we are not
47:04
truly living in the realm of fantasy. We are
47:06
just trying to get a few
47:08
years and a few levels of
47:10
advancement beyond where we are right
47:12
now. Yeah. And Dario does in
47:14
his essay make some caveats about
47:16
things that might constrain the rate
47:18
of progress in AI like regulation
47:21
or clinical trials taking a long
47:23
time. You know, he also
47:25
talks about the fact that some people may just opt
47:28
out of this whole thing
47:30
like they just may not want anything to
47:32
do with AI. There might be some political
47:34
or cultural backlash that sort of slows down
47:36
the rate of progress. And
47:38
he says you know like that could actually
47:41
constrain this and we need to think about
47:43
some ways to address that. Yeah. So
47:45
that is sort of the suite of things
47:47
that Dario thinks will benefit our lives. You
47:49
know there's a bunch more in there. You
47:51
know he thinks it will help with climate
47:54
change and other issues. But
47:56
the essay has five parts and there was
47:58
another part of the essay that really caught
48:00
my attention, Kevin. And it is
48:03
a part that looks a little bit
48:05
more seriously at the risks of this
48:07
stuff, because any super genius that was
48:09
sufficiently intelligent to cure cancer could otherwise
48:11
wreak havoc in the world. So
48:13
what is his idea for ensuring that AI
48:16
always remains in good hands? So he admits
48:18
that he's not like a geopolitics expert. This
48:20
is not his forte. Unlike the two of
48:22
us. Right. And
48:25
there have been, look, a lot of
48:27
people theorizing what the
48:30
politics of advanced AI are going
48:32
to look like. Dario says that
48:34
his best guess currently about how
48:37
to prevent AI from empowering autocrats
48:39
and dictators is through what he
48:41
calls an Entente strategy. Basically, you
48:44
want a bunch of democracies to
48:46
come together to secure their supply
48:48
chain, to block adversaries from getting
48:51
access to things like GPUs and
48:53
semiconductors, and that
48:55
you could basically bring countries
48:57
into this democratic alliance and
49:00
ice out the more authoritarian regimes from
49:02
getting access to this stuff. But I
49:04
think this was not the
49:06
most fleshed out part of the argument. Yeah,
49:09
well, and I appreciate that
49:11
he is at least making an
49:13
effort to come up with ideas
49:16
for how would you prevent AI
49:18
from being misused. But as I
49:20
was reading the discussion around the
49:22
blog post, I found this interesting
49:24
response from a guy named Max
49:26
Tegmark. Max is a professor at
49:28
MIT who studies machine learning. And
49:31
he's also the president of something called the
49:33
Future of Life Institute, which is a sort
49:35
of nonprofit focused on AI safety. And
49:38
he really doesn't like this idea
49:40
of what Dario calls the Entente,
49:42
the group of these democracies working
49:44
together. And he says he
49:47
doesn't like it because it essentially sets
49:49
up and accelerates a race. It
49:51
says to the world that
49:53
essentially whoever invents super powerful
49:55
AI first will win forever.
49:58
Because in this view, AI is essentially the
50:00
final technology that you ever need to invent, because
50:02
after that it'll just invent anything else it
50:04
needs. And he calls that
50:07
a suicide race. And the reason is this,
50:09
and he has a great quote, horny
50:11
couples know that it is easier to make a
50:14
human level intelligence than to raise and align it.
50:16
And it is also easier to make an AGI
50:18
than to figure out how to align or control
50:20
it. Wow. I
50:23
never thought about it like that. Yeah, you probably never thought I
50:25
would say horny couple on the show, but I just did. So
50:29
Kevin, what do you make of
50:31
this sort of feedback? Is there
50:33
a risk there that this effectively
50:35
serves as a starter pistol that
50:38
leads maybe our adversaries to
50:40
start investing more in AI and sort
50:42
of racing against us and triggering
50:44
some sort of doom spiral? Yeah, I mean, look,
50:46
I don't have a problem with China racing us
50:49
to cure cancer using AI, right? If
50:51
they get there first, like more power to them. But
50:54
I think the more serious risk is that
50:56
they start building the kind of AI that
50:58
serves Chinese interests, right? That it becomes a
51:01
tool for surveillance and control
51:03
of people rather than some of these more
51:05
sort of democratic ideals. And this is actually
51:07
something that I asked Dario about back last
51:09
year when I was spending all that time
51:12
at Anthropic, because this is
51:14
the most common criticism of Anthropic, which is like, well,
51:16
if you're so worried about AI and
51:19
all the risks that it could pose, like why are
51:21
you building it? And I asked him about this, and
51:23
his response was, he basically said, look, there's this problem
51:25
in AI
51:27
research of kind of intertwining, right? Of the
51:30
same technology that sort of advances the
51:32
state of the art in AI also
51:34
allows you to advance the state of
51:37
the art in AI safety, right? The
51:39
same tools that make the language models
51:41
more capable also make it possible to
51:43
control the behavior of the language models.
51:46
And so these things kind of go
51:48
hand in hand. And if
51:50
you want to compete on the frontier of
51:52
AI safety, you also have to compete on
51:54
the frontier of AI capabilities. Yeah, and I
51:56
think it's an idea worth considering. To me,
51:59
it just sounds like, wow, you are
52:01
really standing on a knife's edge there. If
52:03
you're saying in order to
52:05
have any influence over the future, we
52:07
have to be right at the frontier
52:10
and maybe even gently advance the
52:12
frontier and yet somehow not
52:14
accidentally trigger a race where all of
52:16
a sudden everything gets out of control.
52:19
But I do accept and respect that that
52:21
is Dario's viewpoint. But isn't that kind of
52:23
what we observed from the last couple of
52:26
years of AI progress, right? Like OpenAI, it
52:28
got out there with ChatGPT before
52:31
any of the other labs had released
52:33
anything similar. And ChatGPT
52:35
kind of set the tone for all of
52:37
the products that followed it. And so I
52:39
think the argument from Anthropic would be like,
52:43
yes, we could sort of be way behind the state
52:45
of the art. That would probably make us safer than
52:47
someone who was actually advancing the state of the art.
52:50
But then we missed the chance to kind of
52:52
set the terms of what future AI products from
52:54
other companies will look like. So it's sort of
52:56
like using soft power in an effort to
52:58
influence others. Yeah, and the way they
53:00
put this to me last year was that they wanted
53:02
instead of there to be just
53:04
a race for raw capabilities of AI
53:06
systems, they wanted there to be a
53:09
safety race, right? Where companies would start
53:11
competing about whose models were the safest
53:13
rather than whose models could do
53:15
your math homework better. So let's talk
53:17
about the safety race and the other
53:19
thing that Anthropic did this week to
53:22
lay out a future vision for AI. And
53:24
that was with something that has, I'll say
53:26
it, kind of a boring name, the responsible
53:29
scaling policy. I understand. This
53:31
maybe wasn't going to come up over drinks
53:33
at the club this weekend. Yeah, but I
53:36
think this is something that people should pay attention
53:38
to because it's an example of what you just
53:40
said, Kevin. It is Anthropic trying to use some
53:42
soft power in the world to say, hey, if
53:44
we went a little bit more like this, we
53:47
might be safer. All right, so talk about what's
53:49
in the responsible scaling policy that Anthropic released this
53:51
week. Well, let's talk about what it is. subjected
54:00
to more scrutiny and they should have more
54:02
safeguards added to them. They put this out
54:04
a year ago and it
54:06
was actually a huge success in
54:09
this sense, Kevin. OpenAI went
54:11
on to release its own version
54:13
of it. And then Google DeepMind
54:15
released a similar scaling
54:18
policy as well this spring. So
54:21
now Anthropic is coming back just over
54:23
a year later and they say, we're going
54:25
to make some refinements. And
54:27
the most important thing that they say is
54:29
essentially we have identified two
54:32
capabilities that we think would be
54:34
particularly dangerous. And so if anything
54:36
that we make displays these capabilities,
54:39
we are going to add a
54:41
bunch of new safeguards. The
54:43
first one of those is if a
54:46
model can do its own AI research
54:48
and development, that is going to
54:50
start ringing a lot of alarm bells
54:52
and they're going to put many
54:54
more safeguards on it. And second, if
54:57
one of these models can
54:59
meaningfully assist someone who has
55:02
a basic technical background in
55:04
creating a chemical, biological, radiological
55:06
or nuclear weapon, then
55:08
they would add these new safeguards. What are
55:10
these safeguards? Well, they have a super long
55:12
blog post about it. You can look it
55:14
up, but it includes basic things like taking
55:17
extra steps to make sure that a foreign
55:19
adversary can't steal the model weights, for example,
55:21
or otherwise hack into the systems and run
55:23
away with it. Right. And this is some
55:25
of it is similar to things that were
55:27
proposed by the Biden White House in its
55:29
executive order on AI last year. This
55:32
is also, these are some of the steps that came
55:34
up in SB 1047, the
55:37
AI regulation that was
55:39
vetoed by Governor Newsom in California recently. So
55:41
these are ideas that have been floating out
55:43
there in the sort of
55:46
AI safety world for a while. But Anthropic
55:48
is basically saying, we are going to proactively
55:50
commit to doing this stuff even
55:52
before a government requires us to. Yeah. There's
55:54
a second thing I like about this, and
55:56
it relates to this SB 1047 that we
55:58
talked about on the show. Something that a
56:01
lot of folks in Silicon Valley didn't like
56:03
about it was the way that it tried
56:05
to identify danger. And it was not because
56:07
of a specific harm that a model could
56:09
cause. It was by saying,
56:11
well, if a model costs a certain
56:13
amount of money to train, right? Or
56:16
if it is trained with a certain amount
56:18
of compute, those were the proxies that
56:20
the government was trying to use to understand why
56:23
this would be dangerous. And
56:25
a lot of folks in Silicon Valley said,
56:27
we hate that because that has nothing to
56:29
do with whether these things could cause harm
56:31
or not. So what Anthropic is doing here
56:33
is saying, well, why don't we try to
56:35
regulate based on the anticipated harm? Obviously, it
56:37
would be bad if you could log
56:40
on to Claude, Anthropic's rival to ChatGPT, and
56:42
say, hey, help me build a radiological weapon, which
56:44
is something that I might type into Claude because
56:46
I don't even know the difference between a radiological
56:48
weapon and a nuclear weapon. Do you? I hope
56:51
you never learn. I
56:53
hope I don't either. Because sometimes I have bad days, Kevin,
56:56
and I get to scheming. So
56:58
for this reason, I think that governments, regulators around
57:00
the world might want to look at this approach
57:02
and say, hey, instead of trying to regulate this
57:04
based on how much money AI labs are spending
57:06
or like how much compute is involved, why
57:08
don't we look at the harms we're trying
57:11
to address and say, hey, if you build
57:13
something that could cause this kind of harm,
57:15
you have to do X, Y, and Z.
57:17
Yeah, that makes sense to me. So I
57:19
think the biggest impact that both the sort
57:21
of essay that Dario wrote and this responsible
57:23
scaling policy had on me was not about
57:25
any of the actual specifics of the idea.
57:27
It was purely about the timescales and the
57:29
urgency. It is one thing to hear a
57:31
bunch of people telling you that
57:33
AI is coming and that it's going to be
57:35
more powerful than you can imagine, sooner than you
57:37
can imagine. But if you
57:40
actually start to internalize that and
57:42
plan for it, it just
57:44
feels very different. If
57:47
we are going to get powerful AI
57:49
sometime in the next, let's call it
57:51
two to 10 years, you
57:53
just start making different choices. Yeah, I
57:56
think it becomes sort of the calculus.
57:58
I can imagine it affecting what
58:00
you might want to study in college if
58:02
you are going to school right now. I
58:05
have friends who are thinking
58:07
about leaving their jobs because they think the place
58:10
where they are working right now will
58:12
not be able to compete in
58:14
a world where AI is very
58:16
widespread. So yes, you're absolutely starting
58:19
to see it creep into the
58:21
calculus. I don't
58:23
know kind of what else it could do. There's
58:26
no real call to action here
58:28
because you can't really do very
58:30
much until this world begins to
58:32
arrive. But I do think psychologically,
58:34
we want people to at least imagine,
58:37
as you say, what it would be
58:39
like to live in this world because
58:41
I have been surprised
58:43
at how little discussion this has been
58:45
getting. Yeah, I totally agree. I mean,
58:47
to me, it feels like we
58:50
are entering, I wouldn't call it like
58:52
an AI end game because I think
58:55
we're closer to the start than the
58:57
end of this transformation. But it does
58:59
feel like something is happening. I'm starting
59:01
to notice AI's effects in my life
59:03
more. I'm starting to feel more dependent
59:06
on it. And I'm also like, I'm
59:08
kind of having an existential crisis. Really?
59:10
Like not a full blown one, but
59:12
like typically I'm a guy who likes
59:14
to plan. I like to strategize. I
59:16
like to have like a five year
59:18
and a 10 year plan. And I've
59:20
just found that my own certainty about
59:22
the future and my ability to plan
59:24
long-term is just way lower
59:27
than it has been for any time
59:29
that I can remember. That's interesting. I mean, for myself,
59:31
I feel like that has always been true. In
59:34
1990, I did not know what things were gonna look like in
59:36
2040. And I would be
59:38
really surprised by a lot of things that have happened along the
59:40
way. But yeah, there's a
59:42
lot of uncertainty out there. It's scary, but I also
59:44
like, do
59:47
you not feel a little bit excited about it?
59:50
Of course. Look,
59:52
I love software. I love tools. I wanna live
59:54
in the future. And it's already happening to me.
59:57
There is a lot of that uncertainty and like that stuff for
59:59
you. It just freaks me out. But
1:00:02
if we could cure cancer, if we
1:00:04
could cure depression, if we could cure
1:00:06
anxiety, you'd be talking about the greatest
1:00:09
advancement to human well-being, certainly
1:00:11
in decades, maybe that we've ever seen. Yeah. I
1:00:14
mean, I have some priors
1:00:17
on this because my dad died of a
1:00:19
very rare form of cancer. It
1:00:24
was a sub-1% type of cancer.
1:00:27
And when he got sick, it was like,
1:00:30
I read all the clinical trials and
1:00:32
it was just like, there hadn't been
1:00:34
enough people thinking about this specific type
1:00:36
of cancer and how to cure
1:00:38
it because it was not breast cancer, it
1:00:40
was not lung cancer, it was not something
1:00:42
that millions of Americans get. And
1:00:45
so there just wasn't the kind of
1:00:47
brain power devoted to trying to solve
1:00:49
this. Now it has subsequently, it hasn't
1:00:51
been solved, but there are now treatments
1:00:53
that are in the pipeline that didn't
1:00:55
exist when he was sick. And I
1:00:58
just constantly am wondering like,
1:01:01
if he had gotten sick now instead
1:01:03
of when he did, like maybe
1:01:05
he would have lived. And I think
1:01:08
that is one of the things that
1:01:10
makes me really optimistic about AI is
1:01:12
just like, maybe
1:01:14
we just do have the brain power or
1:01:16
we will soon have the brain power to
1:01:18
devote world-class research teams
1:01:21
to these things that might not affect
1:01:23
millions of people, but that do affect
1:01:25
some number of people. Absolutely. I just,
1:01:27
I don't know, it really, I
1:01:30
got kind of emotional reading
1:01:32
this essay because it was just like, obviously it's,
1:01:35
I'm not someone who believes all the
1:01:37
hype, but I'm like, I assign some
1:01:40
non-zero probability to the possibility
1:01:42
that he's right, that all this stuff could happen.
1:01:44
And I just find that so much more
1:01:48
interesting and fun to think about than like
1:01:50
a world where everything goes off the rails.
1:01:52
Well, it's just the first time that we've
1:01:55
had a truly positive, transformative
1:01:57
vision for the world coming
1:01:59
out of Silicon Valley in a
1:02:01
really long time. In fact, this
1:02:03
vision, it's more positive and optimistic
1:02:05
than anything that has been like
1:02:08
in the presidential campaign. It's
1:02:10
like when the presidential candidates talk about the future of
1:02:12
this country, it's like, well, we'll
1:02:14
give you this tax break, right? Or we'll
1:02:16
make this other policy change. Nobody's
1:02:19
talking about how they're gonna fricking cure cancer, right?
1:02:21
So I think, of course we're drawn
1:02:23
to this kind of discussion because it
1:02:25
feels like there are some people in
1:02:28
the world who are taking really, really
1:02:30
big swings, and if they connect, then
1:02:32
we're all gonna benefit. Yeah, yeah. Yeah.
1:02:41
When we come back, why Uber has way more
1:02:44
autonomous vehicles on the road than it used to.
1:02:58
OK, so one of the biggest developments
1:03:00
over the past few months in tech
1:03:02
is that self-driving cars now are actually
1:03:04
working. Yeah, but this is no longer
1:03:06
in the realm of sci-fi. Yes, so
1:03:09
we've talked, obviously, about the self-driving cars
1:03:12
that you can get in San Francisco now. It used
1:03:15
to be two companies, Waymo and Cruise. Now it's just Waymo. And
1:03:31
there have also been a bunch of different
1:03:33
autonomous vehicle updates from other companies that are
1:03:35
involved in this space. And the
1:03:37
one that I found most interesting recently was
1:03:39
about Uber. Now, as you
1:03:42
will remember, Uber used to try
1:03:44
to build its own robotaxis. They
1:03:46
gave that up back in 2020. That
1:03:49
was the year they sort of sold off their autonomous
1:03:52
driving division to a startup called
1:03:54
Aurora after losing just an
1:03:56
absolute ton of money on it. But
1:03:59
now they are back in
1:04:01
the game and they just recently
1:04:03
announced a multi-year partnership with Cruise,
1:04:05
the self-driving car company. They also
1:04:08
announced an expanded partnership with Waymo,
1:04:10
which is going to allow Uber
1:04:12
riders to get AVs in Austin,
1:04:14
Texas and Atlanta, Georgia. They've
1:04:16
been operating this service in Phoenix since
1:04:19
last year and that's going to keep
1:04:21
expanding. They also announced that self-driving Ubers
1:04:23
will be available in Abu Dhabi through
1:04:26
a partnership with the Chinese AV company
1:04:28
WeRide. And they've also
1:04:30
made a long-term investment in Wayve,
1:04:32
which is a London-based autonomous driving
1:04:34
company. So they are investing
1:04:37
really heavily in this and they're doing it
1:04:39
in a different way than they did back
1:04:41
when they were trying to build their own
1:04:43
self-driving cars. Now they are essentially saying, we
1:04:45
want to partner with every company that we
1:04:47
can that is making self-driving cars. Yeah, so
1:04:49
this is a company that many people take
1:04:51
several times a week, Uber, and
1:04:54
yet I feel like it sometimes is
1:04:56
a bit taken for granted. While
1:04:59
we might just focus on the cars you
1:05:01
can get today, they are thinking very long-term
1:05:03
about what transportation is going to look like
1:05:05
in five or 10 years. And increasingly for
1:05:07
them, it seems like autonomous vehicles are a
1:05:09
big part of that answer. Yeah, and what
1:05:11
I found really interesting, so Tesla had this
1:05:13
RoboTaxi event last week where Elon Musk talked
1:05:15
about how you'll soon be able to hail
1:05:17
a self-driving Tesla. And what
1:05:19
I found really interesting is that Tesla's
1:05:21
share price plummeted after that event, but
1:05:23
Uber's stock price rose to an all-time
1:05:25
high. So clearly people think that, or
1:05:27
at least some investors think that Uber's
1:05:29
approach is better here than Tesla's. It's
1:05:32
the sort of thing, Kevin, that makes
1:05:34
me want to talk to the CEO
1:05:36
of Uber. And lucky for you, he's
1:05:38
here. Oh, thank goodness. So today we're
1:05:40
going to talk with Uber CEO Dara
1:05:42
Khosrowshahi. He took over at Uber in
1:05:44
2017 after a bunch
1:05:46
of scandals led the founder of Uber,
1:05:48
Travis Kalanick, to step down. He
1:05:51
has made the company profitable for the first time in its
1:05:53
history, and I think a lot of people think he's
1:05:55
been doing a pretty good job over there. And
1:05:58
he is leading this charge. into autonomous
1:06:00
vehicles, and I'm really curious to hear
1:06:02
what he makes, not just of Uber's
1:06:04
partnership with Waymo, but of sort of
1:06:06
the whole self-driving car landscape. Let's bring
1:06:08
him in. Let's do it. Dara
1:06:18
Khosrowshahi, welcome to Hard Fork. Thank you for
1:06:20
having me. So you were previously on the
1:06:22
board of the New York Times Company until
1:06:24
2017, when you stepped down right
1:06:27
after taking over at Uber. I assume
1:06:30
you still have some pull with our bosses, though,
1:06:32
because of your years of service. So can you
1:06:34
get them to build us a nicer studio? I
1:06:37
didn't have pull when I was on the board,
1:06:39
and I certainly have zero pull now. I've got
1:06:41
negative pull, I think. They're taking revenge on me.
1:06:45
Well, since you left the board, they're making
1:06:47
all kinds of crazy decisions, like letting us
1:06:49
start a podcast and stuff. Yeah. Oh, my
1:06:52
god. But all right, so we are going
1:06:54
to talk today about your new partnership with
1:06:56
Waymo and the autonomous driving future.
1:06:58
I would love to hear the story of
1:07:01
how this came together, because I think for
1:07:03
people who've been following this space for a
1:07:05
number of years, this was surprising. Uber and
1:07:07
Waymo have not historically had a great relationship.
1:07:10
The two companies were embroiled in litigation and
1:07:12
lawsuits and
1:07:14
trade secret theft and things like that. It was
1:07:17
a big deal. And
1:07:19
so how did they approach
1:07:21
you? Did you approach them? How did this
1:07:23
partnership come together? I guess it's time healing.
1:07:25
Right? When
1:07:27
I came on board, we thought
1:07:29
that we wanted to establish a better
1:07:31
relationship with Google generally, Waymo generally. And
1:07:34
even though we were working on our
1:07:36
own self-driving technology, it was
1:07:38
always within the context of we were
1:07:41
developing our own, but we want to work with third parties as well.
1:07:44
One of the disadvantages of developing our own
1:07:46
technology was that some
1:07:48
of the other players, the Waymos of the world, et cetera, heard
1:07:51
us, but didn't necessarily believe
1:07:53
us. It's difficult to work
1:07:55
with players that you compete with. So
1:07:58
one of the first decisions that we made
1:08:00
was we can't be
1:08:02
in between here. Either you have to
1:08:04
go vertical or you
1:08:06
have to go with a platform strategy. You can't achieve
1:08:10
both, and we had to make a bet. We either
1:08:12
have to do our own thing or we have to
1:08:14
do it with partners. Yeah, absolutely. And
1:08:16
so that strategic kind of
1:08:18
fork became quite apparent to me.
1:08:21
And then the second was just, what are we good at? Listen,
1:08:24
I'll be blunt, we sucked at
1:08:26
hardware, right? We tried to
1:08:29
apply software principles to hardware. It
1:08:31
doesn't work. Hardware is a different
1:08:33
place, different demand in terms
1:08:35
of perfection, et cetera. And
1:08:38
ultimately that fork, do we go vertical?
1:08:40
And there are very few companies that
1:08:42
can do software and hardware well. Apple,
1:08:44
Tesla are arguably one of the few
1:08:46
in the world. And
1:08:49
we decided to make a bet on the platform. And
1:08:52
so once we made that bet, we
1:08:54
went out and identified who are the leaders.
1:08:56
And so I think that's why we got
1:08:58
Google to be a bigger shareholder. And then
1:09:00
over a period of time, we built relationships.
1:09:02
And I do think there's a synergy between
1:09:05
the two. So it just makes sense, the relationship,
1:09:09
and we're very, very excited to, on
1:09:12
a forward basis, expand it pretty significantly. So
1:09:14
this was, I feel like maybe your
1:09:17
most consequential decision to date as
1:09:19
the CEO of this company. If
1:09:21
you believe that Google is a big part
1:09:23
of this company, if
1:09:26
you believe that AVs are gonna become the norm
1:09:28
for many people hailing a ride in 10 or
1:09:30
15 years, it's
1:09:32
conceivable that they might open up the Waymo
1:09:34
app, and not the Uber app. Waymo has
1:09:37
an app to order cars. I use it
1:09:39
fairly regularly. So what gave
1:09:41
you the confidence that in that world,
1:09:43
it will still be Uber that is
1:09:45
the app that people are turning to
1:09:47
and not Waymo or whatever other apps
1:09:49
might have arisen for other AV companies?
1:09:52
The first is that it's not a
1:09:54
binary outcome. Okay, I think that a
1:09:56
Waymo app and an Uber app can
1:09:58
coexist. We saw it in my
1:10:00
old job in the travel business, right? I ran
1:10:02
Expedia, and there was this drama:
1:10:04
is Expedia going to put the hotel chains
1:10:06
out of business or the hotel chains going
1:10:09
to put Expedia out of business? The fact
1:10:11
is both thrived. And there's a
1:10:13
set of customers who have booked through Expedia. There's
1:10:15
a set of customers who books hotel direct and
1:10:18
both businesses have grown and the industry
1:10:20
in general has grown. Same
1:10:22
thing if you look in food, right?
1:10:24
McDonald's has its own app. It's a
1:10:26
really good app. It has a loyalty
1:10:28
program. Starbucks has its own app, has
1:10:30
a loyalty program, yet both are engaging
1:10:32
with us through the Uber Eats marketplace.
1:10:35
So my conclusion was that there isn't an
1:10:37
either/or. I do believe there will be
1:10:39
other companies. There'll be Cruises and there'll be
1:10:41
WeRides and Wayves, et cetera. There'll be
1:10:44
other companies and self-driving choices. And
1:10:46
the person who wants utility, speed,
1:10:48
ease, familiarity will choose Uber and
1:10:50
both can coexist and both can
1:10:52
thrive and both are really going
1:10:54
to grow because autonomous will be
1:10:56
the future eventually. So tell us
1:10:58
more about the partnership with Waymo
1:11:00
that is going to take place
1:11:03
in Austin and Atlanta. Who is
1:11:05
actually paying for the
1:11:07
maintenance of the cars? Does Uber have to
1:11:09
sort of make sure that there's no trash
1:11:12
left behind in the cars? What is Uber
1:11:14
actually doing in addition to just making these
1:11:16
rides available through the app? Sure. So I
1:11:18
don't want to talk about the economics because
1:11:20
they're confidential in terms of
1:11:22
the deal, but in those
1:11:24
two cities, Waymo will
1:11:26
be available exclusively through the
1:11:28
Uber app and we will
1:11:30
also be running the fleet
1:11:32
operations as well. So depots, recharging,
1:11:34
cleaning, if something gets lost, making
1:11:36
sure that it gets back to
1:11:38
its owner, et cetera. And
1:11:41
Waymo will provide the software driver, will
1:11:44
obviously provide the hardware, repair the hardware,
1:11:46
et cetera. And then we will be
1:11:48
doing the upkeep and operating the networks.
1:11:51
For riders, if you want to get in
1:11:53
a Waymo in one of those cities through
1:11:55
Uber, is there an option to specifically request
1:11:57
a self-driving Waymo or is
1:11:59
it just kind of chance? Like, if
1:12:02
the car that's closest to you happens to be
1:12:04
a Waymo, that's the one you get. Right now,
1:12:06
the experience, for example, in Phoenix, is that it's
1:12:08
by chance. I think you got one by chance,
1:12:10
and you can say, yes, I'll do it or
1:12:12
not. And I think that's what we're going
1:12:14
to start with. But there may be some people who only
1:12:16
want Waymos, and there are some people who may not want
1:12:18
Waymos. And we'll solve for that over a period of time.
1:12:20
It could be personalizing preferences,
1:12:23
or it could be what you're talking about, which is
1:12:25
I only want a Waymo. Do the passengers get rated
1:12:27
by the self-driving car the way that they would in
1:12:30
a human-driven Uber? Not yet,
1:12:32
but that's not a bad idea. What
1:12:34
about tipping? If I get out of a self-driven Uber,
1:12:37
is there an option to tip the car if it did a good job? I'm
1:12:39
sure we could build that. Why not? I
1:12:42
don't know. I do wonder if people are going
1:12:44
to tip machines. I don't think
1:12:47
it's likely, but you never know. It sounds crazy,
1:12:49
but at some point, someone is going to start
1:12:51
asking because they're going to realize it's just free
1:12:53
margin. It's like even if only 100 customers do
1:12:55
it in a whole year. I don't know. It's
1:12:57
just free money. Totally. The good news is that
1:12:59
100% of tips go to drivers now, and we
1:13:01
definitely want to keep that. So we like the
1:13:03
tipping habits, but whether people tip machines is TBD.
1:13:06
Yeah. And how big are these fleets? I think
1:13:08
I read somewhere recently that Waymo has about 700
1:13:10
self-driving cars operating
1:13:12
nationwide. How many AVs are we talking
1:13:14
about in these cities? We're starting in
1:13:16
the hundreds, and then we'll expand
1:13:19
from there. I
1:13:21
know you don't want to discuss the economics, even though
1:13:23
I would love to learn what the split is there.
1:13:25
I'm not going to tell you. But
1:13:27
you did recently talk about the margins on
1:13:30
autonomous rides being lower than the
1:13:32
margins on regular Uber rides for
1:13:34
at least a few more years.
1:13:37
That's not intuitive to me because in an autonomous
1:13:39
ride, you don't have to pay the driver. So
1:13:41
you would think the margin would be way higher
1:13:44
for Uber. But why would you make less money
1:13:46
if you don't have to pay a driver? So
1:13:48
generally, our design spec in terms of how we
1:13:50
build businesses is any newer business, we're
1:13:52
going to operate at a lower margin while we're
1:13:54
growing that business. You don't want
1:13:56
it to be profitable day one. And that's my
1:13:59
attitude with autonomous, which is, again,
1:14:01
get it out there, introduce it to as many
1:14:03
people as possible. At a maturity
1:14:05
level, generally, if you look at our take
1:14:07
rate around the world, it's about 20%,
1:14:09
we get 20%, the driver gets 80%. We
1:14:13
think that's a model that makes sense for any
1:14:15
autonomous partner going forward. And that's what we
1:14:17
expect. I kind of don't care, honestly, what the
1:14:19
margins are for the next five years. The question
1:14:22
is, can I get lots of supply? Can
1:14:24
it be absolutely safe? And,
1:14:27
you know, does that 20/80 split
1:14:29
look reasonable going forward? And I think it does.
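To make the arithmetic of that take rate concrete, here is a minimal sketch in Python. The fare amount and the function name are hypothetical; the transcript only gives the rough 20/80 split.

    # A minimal sketch of the roughly 20/80 take-rate split described above.
    # The $25.00 fare and the function name are hypothetical.
    def split_fare(fare, take_rate=0.20):
        """Return (platform_share, partner_share) for a given fare."""
        platform = fare * take_rate
        return platform, fare - platform

    platform, partner = split_fare(25.00)
    print(f"Platform keeps ${platform:.2f}; driver or AV partner gets ${partner:.2f}")
    # Platform keeps $5.00; driver or AV partner gets $20.00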
1:14:32
Yeah. I want to ask
1:14:34
about Tesla. You mentioned them a little earlier.
1:14:37
They held an event recently
1:14:39
where they unveiled their plans
1:14:41
for a robotaxi service. Do
1:14:44
you consider Tesla a competitor? Well,
1:14:47
they certainly could be, right? If they develop
1:14:49
their own AV vehicle and
1:14:51
they decide to go
1:14:54
direct only through the Tesla
1:14:56
app, they would be a competitor. And
1:14:58
if they decide to work with us,
1:15:01
then we would be a partner as
1:15:03
well. And to some extent, again, both can be
1:15:05
true. So I don't think it's going to be
1:15:07
an either/or. I think
1:15:09
Elon's vision is pretty compelling, especially
1:15:11
like you might have these
1:15:13
cyber shepherds, these owners
1:15:16
of these fleets, et cetera. Those
1:15:18
owners, if they want to
1:15:20
have maximum earnings on those fleets, will
1:15:23
want to pull those fleets on
1:15:25
Uber, but at this point it's unknown what
1:15:27
his intentions are. There's this
1:15:29
big debate that's playing out right
1:15:31
now about who has the better,
1:15:34
AV strategy between Waymo and Tesla
1:15:36
in the sense that the Waymos have
1:15:39
many, many sensors on them. The vehicles
1:15:41
are much more expensive to produce.
1:15:44
Tesla is trying to get
1:15:46
to full autonomy using only
1:15:48
its cameras and software.
1:15:51
And Andrej Karpathy, the AI researcher, recently
1:15:53
said that Tesla was going to be in
1:15:55
a better position in the long run because
1:15:57
it ultimately just had a software problem. Whereas
1:15:59
Waymo has a hardware problem, and those are
1:16:02
typically harder to solve. I'm curious
1:16:04
if you have a view on this,
1:16:06
whether you think one company is likely
1:16:08
to get to a better scale based
1:16:10
on the approach that they're taking with
1:16:12
their hardware and software. I mean,
1:16:14
I think that hardware costs scale
1:16:16
down over a period of time.
1:16:18
So sure, Waymo has a hardware
1:16:21
problem, but they can solve it.
1:16:23
I mean, the history of
1:16:25
compute and hardware is like the
1:16:27
costs come down very, very significantly. The
1:16:30
Waymo solution is working right now, so it's
1:16:32
not theory, right? And I think the
1:16:34
differences are bigger: Waymo has
1:16:37
more sensors, has cameras, has LIDAR, so
1:16:39
there's a certain redundancy there. Waymo
1:16:42
generally has more compute, so
1:16:44
to speak. So the inference
1:16:46
of that compute is going to
1:16:49
be better. Right. And Waymo
1:16:51
also has high-definition maps
1:16:54
that essentially makes the problem
1:16:56
of recognizing what's happening in the real
1:16:58
world a much simpler problem. So
1:17:01
under Elon's model, the weight that
1:17:03
the software has to carry is
1:17:05
very, very heavy versus
1:17:07
the Waymo and most other
1:17:09
player model where you
1:17:11
don't have to kind of lean as much on training
1:17:13
and you make the problem much simpler as
1:17:16
a compute problem to understand. I
1:17:19
think eventually both will get there. But
1:17:21
if you had to guess, who's going to get to
1:17:24
sort of a viable scale first? Listen, I think
1:17:26
Elon eventually will get to a viable
1:17:28
scale. But for the next five years, I bet
1:17:30
on Waymo and we are betting on Waymo. I'll
1:17:33
say this. I don't want to get into an autonomous Tesla
1:17:35
in the next five years. I'm going to let somebody else
1:17:37
test that out. I'm not going to
1:17:39
be an early adopter. FSD is getting pretty good. Have
1:17:41
you used it recently? I have not used it recently.
1:17:43
Yeah. All right. Yeah, it's really good. Now, again, to
1:17:46
the earlier example: the cost of a solid-state LIDAR now
1:17:49
is $500, $600. Right.
1:17:51
So why wouldn't you put that
1:17:54
into your sensor stack? It's not
1:17:56
that expensive. And for a
1:17:58
fully self-driving car, I think that makes a
1:18:00
lot of sense to me. Now, Elon has
1:18:02
accomplished the unimaginable many, many, many times, so
1:18:07
I wouldn't bet against him. Yeah, I
1:18:09
don't know. This is always, you know, my
1:18:11
secret dream for you, you know, obviously you should stay at
1:18:13
Uber as long as you want. When you're done with that,
1:18:15
I actually do think you should run Tesla because I think
1:18:17
you would, just as you've done at Uber, be
1:18:20
willing to make some of the sort of easy compromises like, just
1:18:23
put a $500 freaking LiDAR on the thing and we'd go much faster.
1:18:25
So anyway, what do you think about that? I have a full-time job
1:18:27
and I'm very happy with it. Thank you. Well,
1:18:30
the Tesla board is listening. I don't
1:18:32
know if the Tesla board listens to
1:18:34
you two. Good point. I
1:18:38
made too many Kennedy jokes. We're opening up
1:18:40
the board meeting with an episode of Fart
1:18:42
Fork, everybody. You think you learned a lot
1:18:44
from this show. What's
1:18:46
your best guess for when, say, 50% of Uber
1:18:48
rides in the U.S. will
1:18:50
be autonomous? I'd say close to
1:18:53
eight to ten years is my best guess, but I
1:18:55
am sure that'll be wrong. Probably
1:18:58
closer to ten. Close
1:19:01
to ten? Okay, interesting. Most
1:19:03
people have overestimated, you
1:19:06
know, again, it's a wild guess. The probability
1:19:08
of your being right is just as
1:19:10
high as mine. I'm curious if
1:19:12
we can sort of get into a
1:19:14
future imagining mode here. Like, in
1:19:17
the year, whether it's ten years or fifteen
1:19:19
years or twenty years from now when maybe
1:19:21
a majority of rides in at least big
1:19:23
cities in the U.S. will be autonomous. Do
1:19:27
you think that changes the city at all?
1:19:29
Like, do the roads look different? Are there
1:19:31
more cars on the road? Are there fewer
1:19:33
cars on the road? What does that even
1:19:35
look like? So I think that the cities
1:19:38
will have much, much more space
1:19:41
to use. Parking often
1:19:43
takes up 20, 30%
1:19:45
of the square mileage in a city,
1:19:48
for example, and that parking space will
1:19:50
be open for living, parks, et cetera.
1:19:52
So there's no doubt that it will
1:19:54
be a better world. You
1:19:56
will have greener, cleaner cities, and you'll
1:19:59
never have to park again, which I think is pretty cool.
1:20:02
I'm very curious what you think
1:20:05
about the politics of autonomy in
1:20:07
transportation. In the early days of
1:20:09
Uber, there was a lot of backlash and
1:20:11
resistance from taxi drivers. And,
1:20:13
you know, they saw Uber as a threat
1:20:16
to their livelihoods. There were some, you know,
1:20:18
well-publicized cases of sort of sabotage and big
1:20:20
protests. Do you anticipate there
1:20:22
will be a backlash from
1:20:24
either drivers or the public to the
1:20:27
spread of AVs as they start to
1:20:29
appear in more cities? I think there
1:20:31
could be. And what I'm hoping is
1:20:34
that we avoid the backlash by having
1:20:36
the proper conversations. Now, historically, society as
1:20:39
a whole, we've been able to adjust to
1:20:41
job displacement because it does happen gradually. And
1:20:45
even in a world where there's greater automation now
1:20:47
than ever before, employment rates,
1:20:49
etc., are at historically great
1:20:51
levels. But the fact is
1:20:53
that AI is going to displace jobs.
1:20:55
What does that mean? How quickly should
1:20:57
we go? How do we think about
1:21:00
that? Those are discussions that we're going to have. And
1:21:02
if we don't have the discussions, sure, there will be
1:21:04
backlash. There's always backlash against societal
1:21:06
change that's significant. Now,
1:21:09
we now work with taxis in San Francisco and
1:21:12
taxi drivers who use Uber make more than 20
1:21:14
percent more than the ones who don't. So there
1:21:17
is a kind of solution space where
1:21:20
new technology and established
1:21:22
players can win. But
1:21:25
I don't know exactly what that looks like. But
1:21:27
that calculus does not apply to self-driving. You know,
1:21:29
it's not like the Uber driver who's been driving
1:21:31
an Uber for 10 years and that's their main
1:21:33
source of income can just start driving a self-driving
1:21:35
Waymo. But you don't need a driver. No, you
1:21:37
don't need a driver. It's not just that they
1:21:39
have to switch the app they're using, it's that
1:21:41
it threatens to put them out of a job.
1:21:43
Well, listen, could they be part
1:21:46
of fleet management, cleaning, charging, etc.?
1:21:49
That's a possibility. We are
1:21:51
now working with some of our
1:21:53
drivers. They're doing AI map labeling
1:21:55
and training of AI models, etc.
1:21:57
So we're expanding the solution set
1:21:59
of on-demand work that
1:22:01
we're offering our drivers, because there
1:22:03
is part of that work, which
1:22:06
is driving, maybe going away, or the growth
1:22:08
in that work is going to slow down
1:22:10
at least over the next 10 years. And
1:22:13
then we'll look to adjust. But listen, these are
1:22:15
issues that are real and I
1:22:17
don't have a clean answer for them at this
1:22:19
point. Yeah. You brought
1:22:21
up shared rides earlier and back
1:22:24
in the day, I think when Uber X first rolled
1:22:26
out shared rides, I did that a couple of times
1:22:28
and then I don't know, I got a raise at
1:22:31
my job and I thought from here on out, I think it's just
1:22:33
going to be me in the car. How
1:22:35
popular do you think you can make shared
1:22:37
rides and is there anything that you can
1:22:40
do to make that more appealing?
1:22:42
Well, I think the way that we have to make
1:22:44
it more appealing is to reduce the penalty, so
1:22:47
to speak, of the shared rides. I think the number one
1:22:49
reason why people use Uber is they want to save time,
1:22:51
they want to have their time back. And
1:22:53
with a shared ride, you
1:22:55
would get about a 30% decrease in
1:22:58
price historically, but there could be a 50 to 100%
1:23:01
time penalty. We're working now. Well, you might
1:23:03
end up sitting next to Casey Newton. That
1:23:06
would be cool. That would be amazing.
1:23:08
Although I would feel very short. Otherwise,
1:23:10
I would have no complaints. People so
1:23:13
far as we've heard, don't have a problem
1:23:15
with company. It really is time and
1:23:17
they don't mind riding with other people.
1:23:19
There's a certain sense of satisfaction with
1:23:21
riding with other people, but we're now
1:23:23
working on it both algorithmically and,
1:23:25
I think, also by fixing the product. Previously,
1:23:28
you would choose a shared ride
1:23:30
and you get an upfront discount. So your
1:23:32
incentive as a customer is to get the discount,
1:23:35
but not to get a shared ride. So we
1:23:37
would have customers gaming the system. They get a
1:23:39
shared ride at 2 a.m. when they know they're
1:23:41
not going to be matched up, et cetera. Now
1:23:43
you get a smaller discount and you
1:23:45
get a reward, which is a higher discount
1:23:48
if you're matched. So part of it is
1:23:50
that customers aren't working against us and
1:23:52
we're not working against customers, but we're working
1:23:54
on tech. We are reducing the time penalty,
1:23:56
which is we avoid these weird
1:23:59
routes, et cetera, that are going to cost you 50% of
1:24:01
your time or 100% of your time. Now,
1:24:04
in Autonomous, if
1:24:06
we are the only player that then has the liquidity
1:24:09
to introduce shared Autonomous into
1:24:11
cities, that lowers congestion, lowers the price,
1:24:14
that's another way in which our marketplace can add
1:24:16
value to the ecosystem.
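The incentive mechanism described a moment ago, a smaller guaranteed discount plus a bonus that only pays out when a match actually happens, can be sketched in a few lines of Python. All the rates here are invented for illustration; the transcript only gives the historical figure of roughly 30% off.

    # A minimal sketch of matched-discount pricing for shared rides.
    # The 10% upfront and 20% match-bonus rates are hypothetical.
    def shared_ride_price(base_fare, matched, upfront=0.10, match_bonus=0.20):
        """Small guaranteed discount; a bigger one only if a co-rider is matched."""
        discount = upfront + (match_bonus if matched else 0.0)
        return base_fare * (1.0 - discount)

    print(shared_ride_price(20.0, matched=False))  # 18.0 -- upfront discount only
    print(shared_ride_price(20.0, matched=True))   # 14.0 -- reward for being matched

Under the old scheme the full discount was paid upfront, so riders profited by requesting shared rides they knew would never be matched; paying the bonus on the match itself aligns the rider's incentive with the marketplace's.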
1:24:19
Speaking of shared rides, Uber just
1:24:21
released a new airport shuttle service
1:24:23
in New York City. It
1:24:25
costs $18 a person. You book a seat.
1:24:28
It goes on a designated route
1:24:30
on a set schedule. I
1:24:33
don't have a question. I just wanted to congratulate you on
1:24:35
inventing a bus. It's a
1:24:37
better bus. You know exactly when it's coming to pick you
1:24:40
up. Just knowing exactly where your
1:24:42
bus is, when pickup is, knowing what
1:24:44
your path is, in real time, it just gives
1:24:46
a sense of comfort. We think this can
1:24:48
be a pretty cool product. And again, is
1:24:51
the bus going to be hugely profitable for us long
1:24:53
term? I don't know, but it will introduce us
1:24:55
to a bigger audience to
1:24:57
come into the Uber ecosystem. And
1:25:00
we think it can be good for cities as well. If
1:25:02
you're in Miami, by the way, over the weekend, we
1:25:05
got buses to the Taylor Swift concert as well. So
1:25:07
I'm just saying. Well, I mean, look, it should not
1:25:09
be hard to improve on the experience of a city
1:25:11
bus. Yeah. Like, do you know what I mean? So
1:25:14
actually, I love city buses. When was the last time
1:25:16
you were on a city bus? Well, I took the
1:25:18
train here. So it wasn't a bus, but it was
1:25:20
transit. He doesn't take shared. He doesn't take bus. This
1:25:23
guy is like, I like to ride public
1:25:25
transit. You're an elitist. No, I would love to see a
1:25:27
picture of you on a bus sometimes in the past five
1:25:29
years, because I'm pretty sure that's never happened. Let me ask
1:25:31
you this. I think we can make the experience better. So
1:25:34
far, I've resisted giving you any product feedback, Dara.
1:25:36
But I had this one thing that I have
1:25:39
always wanted to know the explanation for, and it
1:25:41
is this: at some point in the past couple of years, you
1:25:43
all, when I ordered an Uber, started sending me a
1:25:46
push notification saying that the driver
1:25:48
was nearby. And I'm
1:25:50
the sort of person, when I've ordered an Uber, Dara, I'm
1:25:52
going to be there when the driver pulls up. I'm not making
1:25:54
this person wait. OK, I'm going to respect their time. And
1:25:57
what I've learned is when you tell me the driver is nearby, what
1:26:00
that means is they're at least three minutes away and they
1:26:02
might be two miles away. And what
1:26:05
I want to know is why do you send me that notification? We
1:26:08
want you to be prepared to not keep
1:26:10
the driver waiting. Maybe we should personalize it.
1:26:12
I would love that. I think that's a
1:26:14
good question, which is depending on whether or
1:26:16
not you keep the driver waiting. I think
1:26:18
that is one of the cool things with
1:26:20
AI algos that we can do. At
1:26:22
this point, you're right. The experience
1:26:24
is not quite optimized. But
1:26:27
it's for the driver. It's for the driver. No, I get it.
1:26:29
And if I were a driver, I would be happy that
1:26:31
you were sending that. But you also send me this
1:26:33
notification that says the driver's arriving. And that's when I'm like,
1:26:36
OK, it's time to go downstairs. But it sounds like
1:26:38
we're making progress on this. I think the
1:26:40
algorithm just likes you. It just wants to have a conversation with
1:26:42
you. They know that I love my rides. Well,
1:26:45
Casey has previously talked about how he doesn't like his Uber
1:26:47
drivers to talk to him. This is
1:26:49
a man who doesn't share. Listen,
1:26:52
this man likes to coast through life in
1:26:54
a cosseted bubble. Here's
1:26:57
what I'm saying. If you're on your way to the
1:26:59
airport at 6.30 in the morning, do you truly want
1:27:01
a person you've never met before asking you who you're
1:27:03
going to vote for in the election? Is that an
1:27:05
experience that anyone enjoys? By the way, I drive. I
1:27:07
drove. And reading the rider
1:27:10
as to whether they want to have
1:27:12
a conversation or not, I was not
1:27:14
good at the art of conversation
1:27:17
as a driver. Were you two talking to him? No,
1:27:19
no. Hey, how's it going? Are you having a good
1:27:21
day? Going to work. And then I just shut up.
1:27:24
Yeah. And have a nice day. To me, that's ideal.
1:27:27
I don't know if that's how I feel. No, that's perfect. That's
1:27:29
going to give you all the information that you need. I'll be your
1:27:32
driver any day. This is Casey's real attraction
1:27:34
to self-driving cars, is that he never has to talk
1:27:36
to another human. Look, you can make fun of me
1:27:38
all you want. I am not the only person who
1:27:40
feels this way. Let me tell you. When I check
1:27:42
into a hotel, same thing. Did you have a nice
1:27:44
day? Yeah, but where are
1:27:46
you coming in from? Let's not get into it. I
1:27:49
would love to see you checking into a hotel. So did you have
1:27:51
a nice day? And you're like, well, let me tell you about this
1:27:53
board meeting I just went to. Because the pressure I'm under, you don't
1:27:56
want to hear about it. All
1:27:58
right, well, I think we're at time. Thank
1:28:01
you so much for coming. Really appreciate it. It
1:28:03
was fun. When
1:28:05
we come back, well, AI is driving progress
1:28:07
and it's driving cars. Now we're going to
1:28:09
find out if it can drive Casey insane.
1:28:13
He watched 260 TikTok
1:28:15
videos and he'll tell you all about
1:28:19
it. Well,
1:28:25
Casey, aside from all the drama in
1:28:27
AI and self-driving cars this week, we
1:28:29
also had some news about TikTok. One
1:28:32
of the other most powerful AI forces on earth.
1:28:34
No, truly. Yes. I
1:28:36
unironically believe that. Yeah, that was not a joke. So
1:28:39
this week we learned about some
1:28:41
documents that came to light as
1:28:43
part of a lawsuit that is
1:28:45
moving through the courts right now.
1:28:48
As people will remember, the federal government
1:28:50
is still trying to force ByteDance to
1:28:52
sell TikTok. But last week,
1:28:54
13 states and the District of
1:28:56
Columbia sued TikTok, accusing the company
1:28:58
of creating an intentionally addictive app
1:29:00
that harmed children. And
1:29:02
Kevin, and this is my favorite part
1:29:04
of this story, is that Kentucky Public
1:29:06
Radio got ahold of these court documents
1:29:08
and they had many redactions. You know,
1:29:10
often in these cases, the most interesting
1:29:12
sort of facts and figures will just
1:29:14
be redacted for who knows what reason.
1:29:16
But the geniuses over at Kentucky Public
1:29:18
Radio just copy and pasted everything in
1:29:20
the document. And when they pasted it,
1:29:23
everything was totally visible. This keeps
1:29:25
happening. I feel like every year
1:29:27
or two we get a story about
1:29:29
some failed redaction. Like is it that hard
1:29:31
to redact a document? I'll say this. I hope
1:29:33
it always remains this hard to redact a document
1:29:35
because I read stuff like this,
1:29:37
Kevin, and I'm in heaven. Yes.
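For readers wondering how a failed redaction like this works mechanically: covering text with a black rectangle hides it visually but leaves the PDF's text layer intact, so select-all-and-copy, or any text extractor, recovers it. A minimal sketch using the pypdf library, with a hypothetical file name:

    # A black box drawn over a PDF hides text on screen but leaves the
    # text layer untouched; extraction recovers the "redacted" passages.
    from pypdf import PdfReader  # pip install pypdf

    reader = PdfReader("court_filing.pdf")  # hypothetical file name
    for page in reader.pages:
        # extract_text() reads the text layer and ignores overlaid shapes.
        print(page.extract_text())

Proper redaction removes the text objects themselves before the file is published, rather than drawing shapes over them.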
1:29:40
So they got ahold of these documents. They copied
1:29:42
and pasted. They figured out what was behind
1:29:44
sort of the black boxes in the redacted
1:29:46
materials. And it was pretty
1:29:48
juicy. These documents included details like
1:29:51
TikTok's knowledge of a high number
1:29:53
of underage kids who were
1:29:55
stripping for adults on the platform, the
1:29:57
adults paying them in digital currency.
1:30:01
These documents also claimed that TikTok
1:30:03
had adjusted its algorithm to prioritize
1:30:05
people they deemed beautiful. And
1:30:08
then there was this stat that I
1:30:10
know you homed in on, which was
1:30:12
that these documents said, based on internal
1:30:14
conversations, that TikTok had figured out exactly
1:30:16
how many videos it needed to show
1:30:18
someone in order to get them hooked
1:30:20
on the platform. And
1:30:22
that number is 260. 260
1:30:25
is what it takes. You
1:30:27
know, it reminds me, this is sort of ancient, but do you remember the
1:30:29
commercial in the 80s where they would say, like, how
1:30:31
many licks does it take to get to the center of a
1:30:33
Tootsie Pop? Yes. To me, this is
1:30:35
the sort of 2020s equivalent. How
1:30:39
many TikToks do you have to watch
1:30:41
until you can't look away ever again?
1:30:43
Yes. So this is,
1:30:45
according to the company's own research, this is
1:30:47
about the tipping point where people start to
1:30:49
develop a habit or an addiction of going
1:30:51
back to the platform, and
1:30:54
they sort of become sticky in the parlance of
1:30:56
social media apps. In the disgusting
1:30:58
parlance of social media apps, it becomes
1:31:00
sticky. So, Casey,
1:31:02
when we heard about this
1:31:04
magic number of 260 TikTok videos,
1:31:06
you had what I thought was
1:31:08
an insane idea. Tell us
1:31:11
about it. Well, Kevin, I thought if 260
1:31:13
videos is all it takes, maybe I should
1:31:15
watch 260 TikToks, and here's why. I
1:31:19
am an infrequent user of TikTok.
1:31:21
I would say once
1:31:23
a week, once every two weeks, I'll check in,
1:31:25
I'll watch a few videos, and I would say
1:31:28
I generally enjoy my experience, but not to the
1:31:30
point that I come back every day. And
1:31:33
so I've always wondered what I'm missing
1:31:35
because I know so many folks that
1:31:37
can't even have TikTok on their phone
1:31:40
because it holds such a power over
1:31:42
them, and they feel like
1:31:44
the algorithm gets to know them so quickly
1:31:46
and so intimately that it can only be
1:31:49
explained by magic. So
1:31:51
I thought if I've not been able to
1:31:53
have this experience just sort of normally using
1:31:56
TikTok, what if I tried
1:31:59
to consume 260
1:32:01
TikToks as quickly as I possibly could
1:32:03
and just saw what would happen after
1:32:05
that. Not all heroes wear capes. Okay,
1:32:08
so, Casey, you watched 260
1:32:11
TikTok videos last night. Yeah. Tell
1:32:13
me about it. So I did create
1:32:15
a new account, so I started fresh.
1:32:17
I didn't just reset my algorithm, although
1:32:19
that is something that you can do
1:32:21
in TikTok. And I decided
1:32:23
a couple of things. One is I was
1:32:26
not going to follow anyone. Like, no friends,
1:32:28
but also no influencers. No enemies? No enemies.
1:32:30
And I also was not going to do
1:32:32
any searches, right? A lot of the ways
1:32:34
that TikTok will get to know you is
1:32:36
if you do a search. And
1:32:38
I thought, I want to get
1:32:41
the sort of broadest, most mainstreamy
1:32:43
experience of TikTok that I can, so
1:32:46
that I can develop a better sense
1:32:48
of how it sort of walks
1:32:51
me down this funnel toward my eventual interests.
1:32:53
Whereas if I just followed ten friends and
1:32:55
did, like, three searches for my favorite subjects,
1:32:57
I probably could have gone there faster.
1:33:00
And so do you know the very first thing that TikTok
1:33:03
showed me, Kevin? What's that? It showed me a 19-
1:33:05
year-old boy flirting with an 18-year-old girl, trying
1:33:07
to get her phone number. And
1:33:09
when I tell you I could not have been any less interested
1:33:11
in this content. It was aggressively
1:33:14
straight. Yes, and it was
1:33:16
very young, and it had nothing to
1:33:18
do with me. And
1:33:23
so over the next several hours, this total
1:33:25
process, I did about
1:33:27
two and a half hours last
1:33:30
night, and I did another 30
1:33:32
minutes this morning. And I would
1:33:34
like to share, you know, maybe
1:33:36
the first nine
1:33:38
or ten things that TikTok showed me.
1:33:40
Again, you know, the assumption is it
1:33:42
knows basically nothing about me. Yes. And I
1:33:44
do think there is something quite revealing about
1:33:47
an algorithm that knows nothing throwing
1:33:49
spaghetti at you, seeing what will stick, and
1:33:51
then just picking up the spaghetti afterwards and
1:33:53
saying, well, what is it, you know, that
1:33:55
I thought was interesting. So here's what it
1:33:57
showed me. Second video: a disturbing
1:34:00
911 call, like a very upsetting sort
1:34:02
of domestic violence situation, skip. Three,
1:34:05
two people doing trivia on a diving board and
1:34:07
like the person who loses has to jump off
1:34:09
the diving board. Okay, fine. Four,
1:34:11
just a freebooted clip of an audition
1:34:13
for America's Got Talent. Five,
1:34:17
vegetable mukbang. So just a guy
1:34:19
who had, like, rows and rows
1:34:21
of beautiful multicolored vegetables in front
1:34:23
of him, who was just eating
1:34:25
them. Six, a comedy
1:34:28
skit, but it was like running on
1:34:30
top of a Minecraft video. So
1:34:32
one of my key takeaways after
1:34:34
my first six or seven TikTok videos was
1:34:37
that it does actually assume that you're quite
1:34:39
young, right? That's why it started out by
1:34:41
showing me teenagers. And as I would go
1:34:43
through this process, I found that over and
1:34:45
over again, instead of just showing me a
1:34:48
video, it would show me a video that
1:34:50
had been chopped in half and on top
1:34:52
was whatever the sort of core content was.
1:34:54
And below would be someone playing Subway
1:34:57
Surfers, someone playing Minecraft, or someone
1:34:59
doing those sort of oddly satisfying
1:35:01
things. This is a growth hack. I'm
1:35:04
combing through a rug or whatever. And
1:35:06
it's like, it's literally people trying to
1:35:08
hypnotize you, right? It's like, if you
1:35:10
just see the, oh,
1:35:12
someone is trying to smooth something out
1:35:14
or someone is playing with like slime.
1:35:17
Have you seen the soap cutting? Soap
1:35:19
cutting is huge. Again, there
1:35:21
is no content to it. It is
1:35:23
just trying to stimulate you on some
1:35:25
sort of like lizard brain level. It
1:35:27
feels vaguely narcotic. Absolutely. It is like,
1:35:29
yes. It is just purely a drug.
1:35:32
Video number seven, an ad. Video
1:35:34
number eight, a dad
1:35:36
who was speaking in Spanish and dancing, I mean,
1:35:38
it was very cute. Now, can I ask you
1:35:40
a question? Are you doing
1:35:43
anything other than just swiping from one video to
1:35:45
the next? Are you liking anything? Are you saving
1:35:47
anything? Are you sharing anything? Because all of that
1:35:49
gets interpreted by the algorithm as like a signal
1:35:52
to keep showing you more of that kind of
1:35:54
thing. Absolutely. So for the first 25 or so
1:35:56
videos, I did not like anything, but because I
1:35:58
truly didn't like anything. Like, nothing was really doing
1:36:00
it for me. But my intention was always like,
1:36:03
yes, when I see something I like, I'm gonna
1:36:05
try to reward the algorithm, give it a like,
1:36:07
and I will maybe get more like that. So
1:36:10
the process goes on and
1:36:12
on. And I'm
1:36:14
just struck by the absolute
1:36:16
weirdness and
1:36:19
disconnection of everything in the feed. At
1:36:22
first, truly nothing has any relation to
1:36:24
anything else. And it sort of feels
1:36:26
like you've put your brain into like
1:36:28
a Vitamix, you know? Where it's like,
1:36:30
swipe, here's a clip from friends. Swipe,
1:36:32
kids complaining about school. Swipe, Mickey Mouse
1:36:34
has a gun and he's in a
1:36:36
video game. Those are three videos that
1:36:39
I saw in a row. And
1:36:41
the effect of it is just like
1:36:43
disorienting, right? And I've had this experience
1:36:45
when you like go onto YouTube but
1:36:47
you're not logged in, you know,
1:36:49
on like a new account. And it's sort of just, it's
1:36:51
just showing you sort of a random assortment of things that
1:36:54
are popular on YouTube. It does feel very much like they're
1:36:56
just firing in a bunch of
1:36:58
different directions, hoping that something will stick. And
1:37:01
then it can sort of, it can then
1:37:03
sort of zoom in on that thing. Yes, absolutely.
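That fire-broadly-then-zoom-in behavior is the classic explore-exploit loop in recommender systems. A minimal sketch of one textbook version, an epsilon-greedy bandit over content categories; the categories, the like-rates, and the reuse of the 260-video figure are illustrative assumptions, not anything TikTok has disclosed.

    import random

    # Hypothetical content categories with the user's true (hidden) like-rates.
    true_like_rate = {"pets": 0.6, "gaming": 0.1, "pranks": 0.2, "cooking": 0.3}
    estimates = {c: 0.0 for c in true_like_rate}  # learned like-rate per category
    shows = {c: 0 for c in true_like_rate}        # times each category was shown

    def pick(epsilon=0.2):
        # Explore: sometimes show a random category ("throwing spaghetti").
        if random.random() < epsilon:
            return random.choice(list(true_like_rate))
        # Exploit: otherwise show the category with the best estimate so far.
        return max(estimates, key=estimates.get)

    for _ in range(260):  # the 260-video figure from the documents
        cat = pick()
        liked = random.random() < true_like_rate[cat]  # simulated user feedback
        shows[cat] += 1
        # Incremental average: nudge the estimate toward the observed outcome.
        estimates[cat] += (liked - estimates[cat]) / shows[cat]

    print(max(estimates, key=estimates.get))  # usually converges to "pets"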
1:37:05
Now I will add that in the
1:37:07
first 30 or so videos,
1:37:10
I saw two things that I thought were
1:37:12
like actually disturbing and bad. What were they?
1:37:14
Like things that should never have been shown
1:37:16
to me. Was it a clip from the
1:37:18
All In podcast? Yes, no.
1:37:21
Fortunately, it didn't get that bad. But
1:37:23
one, there was a clip of a
1:37:25
grate in, like, a busy city, and
1:37:27
there was air blowing up from the
1:37:29
grate. And the TikTok was just women
1:37:31
walking over the grate and their skirts
1:37:33
blowing up. That seems bad. That's horrible.
1:37:35
That was in the first 20 videos
1:37:37
that I saw. Wow. This
1:37:39
video, okay. I guess if you like that video, it says a lot
1:37:41
about you, right? But it's like that. The
1:37:44
second one, and I truly, I do not even know if
1:37:46
we'll want
1:37:48
to include this on our podcast because I
1:37:50
can't even believe that I'm saying that I
1:37:52
saw this, but it is true. It
1:37:55
was an AI voice of someone
1:37:57
telling an erotic story, which
1:37:59
involved incest, and it was
1:38:02
shown over a video of someone
1:38:04
making soap. Wow. Like,
1:38:07
what? This is dark
1:38:10
stuff. This is dark stuff. Now, at
1:38:12
what point did you start to wonder if
1:38:14
the algorithm has started to pick up on
1:38:16
your clues that you were giving it? Well,
1:38:18
so I was desperate to find out this
1:38:20
question because I am gay and I wondered
1:38:22
when I was going to see the first
1:38:24
gay content, like when it was actually just
1:38:27
going to show me two gay men who
1:38:29
were talking about gay concerns and it
1:38:32
did not happen. Ever?
1:38:34
No. It never quite got
1:38:36
there. On this morning... In 260 videos. It's
1:38:38
over 260 videos. Now, it did show me queer people. Actually,
1:38:42
do you know the first queer person,
1:38:44
identifiably queer person that the TikTok algorithm
1:38:46
showed me? Are you familiar
1:38:48
with the very popular TikTok meme from this
1:38:51
year, very demure, very mindful? a
1:38:58
piece of sponsored content, and she was trying to sell me
1:39:01
a Lenovo laptop. And that
1:39:03
was the queer experience that I
1:39:05
got in my romp through the
1:39:08
TikTok algorithm. Now, it did
1:39:10
eventually show me a couple of queer people. It
1:39:12
showed me one. And
1:39:19
then it showed me a video by
1:39:21
Billie Eilish, a queer pop star. And
1:39:24
I did like that video. And now Billie Eilish was
1:39:26
one of the most famous pop stars in the entire
1:39:28
world. I mean, like, truly, like on the Mount Rushmore
1:39:30
of famous pop stars right now. So it makes a
1:39:33
lot of sense to me that TikTok would show me
1:39:35
also incredibly popular with teenagers. And so
1:39:37
I liked one Billie Eilish video and then
1:39:39
that was when the floodgates opened and it
1:39:41
was like, okay, here's a lot of that.
1:39:44
Just from like sort of scrolling it. No,
1:39:46
we did not get to the gay zone.
1:39:51
Now, I did notice the algorithm adapting to me.
1:39:53
So it noticed something about me: because, again, I was
1:39:55
trying to get through a lot of videos in
1:39:57
a relatively short amount of time. And TikTok now
1:39:59
will often show you three, four, five minute long
1:40:01
videos, I frankly did not have the time for
1:40:03
that. The longer I scrolled, the shorter the videos
1:40:05
were that I got. And I do feel like
1:40:07
the content aged up a little bit. You know,
1:40:09
it started showing me a category
1:40:11
of content that I call people being weird
1:40:13
little freaks, you know?
1:40:17
These are some real examples. A man dressed
1:40:19
as the cat in the hat dancing to
1:40:22
Ciara's song Goodies. OK. There was
1:40:24
a man in a horse costume
1:40:26
playing the Addams Family theme song
1:40:28
on an accordion using a toilet
1:40:30
lid for percussion. This
1:40:34
is the most important media platform in
1:40:36
the world. Yes. Hours a
1:40:38
day, teenagers are staring at this. And
1:40:41
this is one of the... We
1:40:44
are so screwed. Yeah, you know,
1:40:47
it figured out that I was more likely
1:40:49
to like content about animals than other things. So
1:40:51
there started to be a lot of dogs doing
1:40:53
cute things, cats doing cute things or, you know,
1:40:55
other things like that. But,
1:40:57
you know, there was also just a lot of like, here's
1:41:00
a guy going to a store and showing you objects
1:41:02
from the store or like here is a guy telling
1:41:04
you a long story. Can
1:41:06
I ask you a question? Like, in these
1:41:08
260 videos? Were there any that you
1:41:10
thought like that is a great video? I
1:41:15
don't know if I saw anything truly great. I
1:41:17
definitely saw some animal videos that if I showed
1:41:19
them to you, you would laugh. Or you would
1:41:21
say that was cute. There was stuff that
1:41:23
gave me an emotional response. And I would say
1:41:25
particularly as I got to the end of this
1:41:27
process, I was seeing stuff that I enjoyed a
1:41:29
bit more. But then
1:41:32
this morning, I decided to
1:41:34
do something, Kevin, because I got so frustrated
1:41:36
with the algorithm. I thought it is time
1:41:38
to give the algorithm a piece of data about me. So do
1:41:40
you know what I did? What did you do? I searched the
1:41:42
word gay. Very subtle.
1:41:45
Which like in fairness is an insane
1:41:48
search query. Because what is TikTok supposed to show me
1:41:50
in response? You can show me all sorts of things.
1:41:52
But on my like real TikTok account, it just shows
1:41:54
me queer creators all the time. And they're doing all
1:41:56
sorts of things. They're singing, they're dancing, they're telling jokes.
1:41:58
They're telling stories. So I was like, I would like
1:42:01
to see a little bit of stuff like that. Do
1:42:03
you know the first clip that
1:42:06
came up for me when
1:42:08
I searched gay on TikTok to train my algorithm?
1:42:10
What was it? It was a clip from an
1:42:12
adult film. Now, like
1:42:14
explicit, unblurred, it was
1:42:17
from, and I don't know this,
1:42:19
I've only read about this, but apparently at the
1:42:21
start of some adult films, before the explicit stuff,
1:42:23
there'll be some sort of story content that sort
1:42:25
of establishes the premise of the scene. And
1:42:27
this was sort of in that vein. But
1:42:30
I thought, if I just
1:42:32
sort of said offhandedly, oh,
1:42:35
TikTok, yeah, I bet if you just search gay,
1:42:37
they'll just show you porn. People
1:42:39
would say, it sounds like you're being insane. Why
1:42:42
would you say that? That's being insane. Obviously, they're
1:42:44
probably showing you their most famous
1:42:46
queer creator, something like that. No, they
1:42:48
literally just showed me porn. So
1:42:52
it was like, again, so much of this process for
1:42:54
me was hearing the
1:42:56
things that people say about TikTok, assuming
1:42:58
that people were sort of exaggerating or being
1:43:01
too hard on it, and then having the
1:43:03
experience myself and saying like, oh
1:43:05
no, it's actually like that. That was interesting. An
1:43:07
alternative explanation is that the algorithm is actually really,
1:43:09
really good, and the reason it showed you all
1:43:11
the videos of people being weird little freaks is
1:43:13
because you are actually a weird little freak. That's
1:43:16
true, I will accept those allegations. I will not
1:43:18
fight those allegations. So,
1:43:20
okay, you watched 260 videos, you
1:43:22
reached this magic number that is supposed to
1:43:24
get people addicted to TikTok. Are
1:43:27
you addicted to TikTok? Kevin,
1:43:29
I'm surprised and frankly
1:43:32
delighted to tell you, I have never
1:43:34
been less addicted to TikTok than
1:43:36
I have been after going through this experience.
1:43:38
Do you remember back when people would
1:43:41
smoke cigarettes a lot, and if a parent caught a
1:43:43
child smoking, the thing that they would do is they
1:43:45
say, you know what? You're gonna smoke this whole pack,
1:43:47
and I'm gonna sit in front of you, and you're
1:43:49
gonna smoke this whole pack of cigarettes, and the accumulated
1:43:51
effect of all that stuff that you're breathing into your
1:43:53
lungs, by the end of that, the teenager says, Dad,
1:43:56
I'm never gonna smoke again. This
1:43:59
is how I feel. It
1:44:01
cured your addiction. After watching hundreds
1:44:03
of these TikToks. So, okay, you are not
1:44:05
a TikTok addict. In fact, it seems like
1:44:07
you are less likely to become a TikTok
1:44:09
power user than you were before this experiment.
1:44:11
I think that's right. Did this experiment change
1:44:14
your attitudes about whether TikTok should be banned
1:44:16
in the United States? I
1:44:19
feel so bad saying it, but I think the answer
1:44:21
is yes. Like, not ban it,
1:44:23
right? Like, you know, my feelings about
1:44:25
that still have much more to
1:44:27
do with like free speech. Freedom
1:44:30
of expression. And I think that a
1:44:32
ban raises a lot of questions; the United
1:44:32
States' approach to this issue just makes me
1:44:34
super uncomfortable. You can go back through our
1:44:38
archive to hear a much longer discussion about that.
1:44:41
But if I
1:44:44
were a parent of a teen who
1:44:46
had just been given their first smartphone,
1:44:48
hopefully not any younger than like 14,
1:44:52
it would change the way that I talk with them
1:44:54
about what TikTok is. And it would change the way
1:44:56
that I would check in with them about what they
1:44:59
were seeing, right? Like I would say, you are
1:45:01
about to see something that is going to make you
1:45:03
feel like your mind is in a blender and it
1:45:06
is going to try to addict you. And here's how
1:45:08
it is gonna try to addict you. And
1:45:10
I might sit with my child and might
1:45:12
do some early searches to try to pre-seed
1:45:14
that feed with stuff that was good and
1:45:17
would give my child a greater chance of
1:45:19
going down some positive rabbit holes and seeing
1:45:21
less of, you know, some of the more
1:45:23
disturbing stuff that I saw there. So if
1:45:25
nothing else, like I think it was a
1:45:27
good educational exercise for me to go through.
1:45:29
And if there is someone in your life,
1:45:32
particularly a young person who is spending a
1:45:34
lot of time on TikTok, I
1:45:36
would encourage that you go through this process yourself
1:45:38
because these algorithms are changing all the time. And
1:45:40
I think you do wanna have a sense of
1:45:43
what is it like this very week if you
1:45:45
really wanna know what it's gonna be showing your
1:45:47
kid. Yeah, I mean, I will say, you know,
1:45:49
I spent a lot of time on TikTok. I
1:45:53
don't recall ever getting
1:45:56
done with TikTok and
1:45:58
being sort of... happy and
1:46:00
fulfilled with how I spent the time. There's
1:46:03
a vague sense of shame about it. There's
1:46:06
a vague sense that sometimes it helps me turn
1:46:08
my brain off at the end of a stressful
1:46:10
day. It has this sort of narcotic
1:46:12
effect on me. And
1:46:16
sometimes it's calming, and sometimes I find things
1:46:18
that are funny, but rarely do I come
1:46:20
away saying that was the best possible use
1:46:22
of my time. There is something
1:46:24
that happens when you adopt this
1:46:27
sort of algorithm first, vertical
1:46:30
video, mostly short form, infinite
1:46:32
scroll. You put all of
1:46:34
those ingredients into a bag,
1:46:36
and what comes out does
1:46:38
have this narcotic effect, as
1:46:40
you say. Well, Casey,
1:46:42
thank you for exposing your brain
1:46:44
to the TikTok algorithm for the
1:46:46
sake of journalism. I appreciate you.
1:46:49
And I will be donating it to
1:46:51
science when my life ends. People
1:46:54
will be studying your brain after you die. I
1:46:56
feel fairly confident. I don't know why they'll be
1:46:58
studying your brain, but there will be research teams
1:47:00
looking at it. Can't wait to hear what they'll
1:47:03
find out. I'm Casey. I'm
1:47:05
the director of the The
1:47:20
Hard Fork is produced by Whitney Jones and Rachel
1:47:22
Cohn. We're edited by Jen Poyant. Today's
1:47:25
show was engineered by Alyssa Moxley. Original
1:47:27
music by Marion Lozano, Sophia
1:47:30
Lanman, Diane Wong, Rowan Niemisto,
1:47:32
and Dan Powell. Our
1:47:34
audience editor is Nell Gallogly. Video production by
1:47:36
Ryan Manning and Chris Schott. As
1:47:39
always, you can watch this full episode
1:47:41
on YouTube at youtube.com/hardfork. Special
1:47:44
thanks to Paula Szuchman, Pui-Wing Tam, Dalia
1:47:47
Haddad, and Jeffrey Miranda. You can email
1:47:49
us at hardfork at nytimes.com. Thanks
1:47:52
for watching. I'll see you next time. Bye. And
1:48:10
now, a next-level moment from AT&T Business.
1:48:12
Say you've sent out a gigantic shipment
1:48:15
of pillows, and they need to be
1:48:17
there in time for International Sleep Day.
1:48:19
You've got AT&T 5G, so you're fully
1:48:21
confident. But the vendor isn't responding. And
1:48:23
International Sleep Day is tomorrow! Luckily, AT&T
1:48:25
5G lets you deal with any issues
1:48:27
with ease, so the pillows will get
1:48:29
delivered and everyone can sleep soundly. Especially
1:48:32
you. AT&T 5G requires a compatible plan
1:48:34
and device. 5G is not available everywhere.
1:48:36
See att.com/5Gforyou for details.