Sarah’s Substack
Why do people disagree about when powerful AI will arrive?
My best attempt to distill the cases for short and long(ish) AGI timelines.
Jun 2 • Sarah
January 2025
A defence of slowness at the end of the world
Since learning of the coming AI revolution, I’ve lived in two worlds.
Jan 29 • Sarah
Don’t sell yourself short
And other advice for the newly AI-concerned.
Jan 16 • Sarah
Are AI safetyists crying wolf?
Fear of AI is not just another tech-panic.
Jan 8 • Sarah
December 2024
On futile rage against the chatbots
When AI says it better.
Dec 13, 2024 • Sarah
November 2024
I read every major AI lab’s safety plan so you don’t have to
AI labs acknowledge that they are taking some very big risks. What do they plan to do about them?
Nov 29, 2024 • Sarah
#17 Fun Theory with Noah Topper
The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we…
Nov 8, 2024 • Sarah • 1:25:53
October 2024
#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies
John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience.
Oct 30, 2024 • Sarah • 52:49
#15 Should we be engaging in civil disobedience to protest AGI development?
StopAI are a non-profit aiming to achieve a permanent ban on the development of AGI through peaceful protest.
Oct 20, 2024 • Sarah • 1:18:20
#14 Buck Shlegeris on AI control
Buck Shlegeris is the CEO of Redwood Research, a non-profit working to reduce risks from powerful AI.
Oct 16, 2024 • Sarah • 50:52
September 2024
#13 Aaron Bergman and Max Alexander debate the Very Repugnant Conclusion
In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate.
Sep 8, 2024 • Sarah • 1:53:51
August 2024
#12 Deger Turan on all things forecasting
Deger Turan is the CEO of forecasting platform Metaculus and president of the AI Objectives Institute.
Aug 21, 2024 • Sarah • 54:21