Consistently Candid

AI safety, philosophy and other things.

Last Episode Date: 8 November 2024

Total Episodes: 17

8 November 2024
#17 Fun Theory with Noah Topper

The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, considering questions such as 'how much fun is there in the universe?', 'are we having fun yet?' and 'could we be having more fun?'. It tries to answer some of the philosophical quandaries we might encounter when envisioning a post-AGI utopia. In this episode, I discussed Fun Theory with Noah Topper, whom loyal listeners will remember from episode 7, in which we tackled EY's equally interesting but less fun essay, A List of Lethalities. Follow Noah on Twitter and check out his Substack!

85 min
30 October 2024
#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies

John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience. In this episode, we compared our experiences of first encountering AI safety arguments, discussed the psychological experience of living with awareness of x-risk, and considered what messaging strategies the AI safety community should be using to engage more people. Listen & subscribe to the For Humanity Podcast on YouTube and follow John on Twitter!

52 min
20 October 2024
#15 Should we be engaging in civil disobedience to protest AGI development?

StopAI is a non-profit aiming to achieve a permanent ban on the development of AGI through peaceful protest. In this episode, I chatted with three of the founders of StopAI – Remmelt Ellen, Sam Kirchner and Guido Reichstadter. We talked about the protest tactics StopAI has been using, and why they want a stop (and not just a pause!) in the development of AGI.
Follow Sam, Remmelt and Guido on Twitter
My Twitter

78 min
16 October 2024
#14 Buck Shlegeris on AI control

Buck Shlegeris is the CEO of Redwood Research, a non-profit working to reduce risks from powerful AI. We discussed Redwood's research into AI control, why we shouldn't feel confident that witnessing an AI escape attempt would persuade labs to undeploy dangerous models, lessons from the vetoing of SB 1047, the importance of lab security and more.
Posts discussed:
The case for ensuring that powerful AIs are controlled
Would catching your AIs trying to escape convince AI developers to slow down or undeploy?
You can, in fact, bamboozle an unaligned AI into sparing your life
Follow Buck on Twitter and subscribe to his Substack!

50 min
8 September 2024
#13 Aaron Bergman and Max Alexander debate the Very Repugnant Conclusion

In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the Very Repugnant Conclusion, which, in the words of Claude, "posits that a world with a vast population living lives barely worth living could be considered ethically inferior to a world with an even larger population, where most people have extremely high quality lives, but a significant minority endure extreme suffering." Listen to the end to hear my uninformed opinion on who's right.
Read Aaron's blog post on suffering-focused utilitarianism
Follow Aaron on Twitter
Follow Max on Twitter
My Twitter

113 min
21 August 2024
#12 Deger Turan on all things forecasting

Deger Turan is the CEO of forecasting platform Metaculus and president of the AI Objectives Institute. In this episode, we discuss how forecasting can be used to help humanity coordinate around reducing existential risks, Deger's advice for aspiring forecasters, the future of using AI for forecasting and more!
Enter Metaculus's Q3 AI Forecasting Benchmark Tournament
Get in touch with Deger: deger@metaculus.com

54 min
20 June 2024
#11 Katja Grace on the AI Impacts survey, the case for slowing down AI & arguments for and against x-risk

Katja Grace is the co-founder of AI Impacts, a non-profit focused on answering key questions about the future trajectory of AI development, which is best known for conducting the world's largest survey of machine learning researchers. We talked about the most interesting results from the survey, Katja's views on whether we should slow down AI progress, the best arguments for and against existential risk from AI, parsing the online AI safety debate and more!
Follow Katja on Twitter
Katja's Substack
My Twitter

76 min
9 June 2024
#10 Nathan Labenz on the current AI state-of-the-art, the Red Team in Public project, reasons for hope on AI x-risk & more

Nathan Labenz is the founder of the AI content-generation platform Waymark and host of The Cognitive Revolution Podcast, and now works full-time on tracking and analysing developments in AI. We chatted about where we currently stand with state-of-the-art AI capabilities, whether we should be advocating for a pause on scaling frontier models, Nathan's Red Team in Public project, and some reasons not to be a hardcore doomer!
Follow Nathan on Twitter
Listen to The Cognitive Revolution

114 min
15 May 2024
#9 Sneha Revanur on founding Encode Justice, California's SB-1047, and youth advocacy for safe AI development

Sneha Revanur is the founder of Encode Justice, an international, youth-led network campaigning for the responsible development of AI, which was among the sponsors of California's proposed AI bill SB-1047. We chatted about why Sneha founded Encode Justice, the importance of youth advocacy in AI safety, and what the movement can learn from climate activism. We also dug into the details of SB-1047 and answered some common criticisms of the bill!
Follow Sneha on Twitter: https://twitter.com/SnehaRevanur
Learn more about Encode Justice: https://encodejustice.org/

49 min
21 April 2024
#8 Nathan Young on forecasting, AI risk & regulation, and how not to lose your mind on Twitter

Nathan Young is a forecaster, software developer and tentative AI optimist. In this episode, we discussed how Nathan approaches forecasting, why his p(doom) is 2-9%, whether we should pause AGI research, and more!
Follow Nathan on Twitter: Nathan 🔍 (@NathanpmYoung) / X (twitter.com)
Nathan's substack: Predictive Text | Nathan Young | Substack
My Twitter: sarah ⏸️ (@littIeramblings) / X (twitter.com)

88 min