
The Compendium - Connor Leahy and Gabriel Alfour

30.03.2025
Listen to the episode on your favorite platforms:
  • Apple Podcasts
  • Spotify
  • Castbox
  • Pocket Casts
  • Overcast
  • Castro
  • RadioPublic

Connor Leahy and Gabriel Alfour, AI researchers from Conjecture and authors of "The Compendium," join us for a critical discussion centered on Artificial Superintelligence (ASI) safety and governance. Drawing on their comprehensive analysis in "The Compendium," they articulate a stark warning about the existential risks of uncontrolled AI development, framing it through the lens of "intelligence domination": a dynamic in which a sufficiently advanced AI could subordinate humanity, much as humans dominate less intelligent species.

SPONSOR MESSAGES:

***

Tufa AI Labs is a brand-new research lab in Zurich, started by Benjamin Crouzier, focused on o-series-style reasoning and AGI. They are hiring a Chief Engineer and ML engineers, and they host events in Zurich.

Go to https://tufalabs.ai/

***

TRANSCRIPT + REFS + NOTES:

https://www.dropbox.com/scl/fi/p86l75y4o2ii40df5t7no/Compendium.pdf?rlkey=tukczgf3flw133sr9rgss0pnj&dl=0

https://www.thecompendium.ai/

https://en.wikipedia.org/wiki/Connor_Leahy

https://www.conjecture.dev/about

https://substack.com/@gabecc

TOC:

1. AI Intelligence and Safety Fundamentals

[] 1.1 Understanding Intelligence and AI Capabilities

[] 1.2 Emergence of Intelligence and Regulatory Challenges

[] 1.3 Human vs Animal Intelligence Debate

[] 1.4 AI Regulation and Risk Assessment Approaches

[] 1.5 Competing AI Development Ideologies

2. Economic and Social Impact

[] 2.1 Labor Market Disruption and Post-Scarcity Scenarios

[] 2.2 Institutional Frameworks and Tech Power Dynamics

[] 2.3 Ethical Frameworks and AI Governance Debates

[] 2.4 AI Alignment Evolution and Technical Challenges

3. Technical Governance Framework

[] 3.1 Three Levels of AI Safety: Alignment, Corrigibility, and Boundedness

[] 3.2 Challenges of AI System Corrigibility and Constitutional Models

[] 3.3 Limitations of Current Boundedness Approaches

[] 3.4 Abstract Governance Concepts and Policy Solutions

4. Democratic Implementation and Coordination

[] 4.1 Governance Design and Measurement Challenges

[] 4.2 Democratic Institutions and Experimental Governance

[] 4.3 Political Engagement and AI Safety Advocacy

[] 4.4 Practical AI Safety Measures and International Coordination

CORE REFS:

[] The Compendium (2023), Leahy et al.

https://pdf.thecompendium.ai/the_compendium.pdf

[] Geoffrey Hinton Leaves Google, BBC News

https://www.bbc.com/news/world-us-canada-65452940

[] ARC-AGI, Chollet

https://arcprize.org/arc-agi

[] A Brief History of Intelligence, Bennett

https://www.amazon.com/Brief-History-Intelligence-Humans-Breakthroughs/dp/0063286343

[] Statement on AI Risk, Center for AI Safety

https://www.safe.ai/work/statement-on-ai-risk

[] Machines of Loving Grace, Amodei

https://darioamodei.com/machines-of-loving-grace

[] The Techno-Optimist Manifesto, Andreessen

https://a16z.com/the-techno-optimist-manifesto/

[] Techno-Feudalism, Varoufakis

https://www.amazon.co.uk/Technofeudalism-Killed-Capitalism-Yanis-Varoufakis/dp/1847927270

[] Introducing Superalignment, OpenAI

https://openai.com/index/introducing-superalignment/

[] Three Laws of Robotics, Asimov

https://www.britannica.com/topic/Three-Laws-of-Robotics

[] Symbolic AI (GOFAI), Haugeland

https://en.wikipedia.org/wiki/Symbolic_artificial_intelligence

[] Intent Alignment, Christiano

https://www.alignmentforum.org/posts/HEZgGBZTpT4Bov7nH/mapping-the-conceptual-territory-in-ai-existential-safety

[] Large Language Model Alignment: A Survey, Jiang et al.

http://arxiv.org/pdf/2309.15025

[] Constitutional Checks and Balances, Bok

https://plato.stanford.edu/entries/montesquieu/

<trunc, see PDF>


Episode duration: 01:37:10