Sarah’s Substack
On futile rage against the chatbots
When AI says it better.
Dec 13, 2024 • Sarah
November 2024
I read every major AI lab’s safety plan so you don’t have to
AI labs acknowledge that they are taking some very big risks. What do they plan to do about them?
Nov 29, 2024 • Sarah
#17 Fun Theory with Noah Topper
The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we…
Nov 8, 2024 • Sarah • 1:25:53
October 2024
#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies
John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience.
Oct 30, 2024 • Sarah • 52:49
#15 Should we be engaging in civil disobedience to protest AGI development?
StopAI is a non-profit aiming to achieve a permanent ban on the development of AGI through peaceful protest.
Oct 20, 2024 • Sarah • 1:18:20
#14 Buck Shlegeris on AI control
Buck Shlegeris is the CEO of Redwood Research, a non-profit working to reduce risks from powerful AI.
Oct 16, 2024 • Sarah • 50:52
September 2024
#13 Aaron Bergman and Max Alexander debate the Very Repugnant Conclusion
In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate.
Sep 8, 2024 • Sarah • 1:53:51
August 2024
#12 Deger Turan on all things forecasting
Deger Turan is the CEO of forecasting platform Metaculus and president of the AI Objectives Institute.
Aug 21, 2024 • Sarah • 54:21
June 2024
#11 Katja Grace on the AI Impacts survey, the case for slowing down AI & arguments for and against x-risk
Katja Grace is the co-founder of AI Impacts, a non-profit focused on answering key questions about the future trajectory of AI development, which is…
Jun 20, 2024 • Sarah • 1:16:34
#10 Nathan Labenz on the current AI state-of-the-art, the Red Team in Public project, reasons for hope on AI x-risk & more
Nathan Labenz is the founder of AI content-generation platform Waymark and host of The Cognitive Revolution Podcast, who now works full-time on tracking…
Jun 9, 2024 • Sarah • 1:54:22
May 2024
#9 Sneha Revanur on founding Encode Justice, California's SB-1047, and youth advocacy for safe AI development
Sneha Revanur is the founder of Encode Justice, an international, youth-led network campaigning for the responsible development of AI, which was among…
May 15, 2024 • Sarah • 49:50
April 2024
#8 Nathan Young on forecasting, AI risk & regulation, and how not to lose your mind on Twitter
Nathan Young is a forecaster, software developer and tentative AI optimist.
Apr 21, 2024 • Sarah • 1:28:07