
Artificiality: Minds Meeting Machines

Artificiality was founded in 2019 to help people make sense of artificial intelligence. We are artificial philosophers and meta-researchers. We believe that understanding AI requires synthesizing research across disciplines: behavioral economics, cognitive science, complexity science, computer science, decision science, design, neuroscience, philosophy, and psychology. We publish essays, podcasts, and research on AI, including a Pro membership that provides leaders with actionable intelligence and insights for applying AI. Learn more at www.artificiality.world.
Michael Levin—The Future of Intelligence: Synthbiosis
5 February 2025

At the Artificiality Summit 2024, Michael Levin, distinguished professor of biology at Tufts University and associate at Harvard's Wyss Institute, gave a lecture about the emerging field of diverse intelligence and his frameworks for recognizing and communicating with the unconventional intelligence of cells, tissues, and biological robots. This work has led to new approaches to regenerative medicine, cancer, and bioengineering, but also to new ways to understand evolution and embodied minds. He sketched out a space of possibilities, freedom of embodiment, which facilitates imagining a hopeful future of "synthbiosis", in which AI is just one of a wide range of new bodies and minds.

Bio: Michael Levin, Distinguished Professor in the Biology department and Vannevar Bush Chair, serves as director of the Tufts Center for Regenerative and Developmental Biology. Recent honors include the Scientist of Vision award and the Distinguished Scholar Award. His group's focus is on understanding the biophysical mechanisms that implement decision-making during complex pattern regulation, and harnessing endogenous bioelectric dynamics toward rational control of growth and form. The lab's current main directions are:
- Understanding how somatic cells form bioelectrical networks for storing and recalling pattern memories that guide morphogenesis;
- Creating next-generation AI tools for helping scientists understand top-down control of pattern regulation (a new bioinformatics of shape); and
- Using these insights to enable new capabilities in regenerative medicine and engineering.

www.artificiality.world/summit

78 min
Artificiality Keynote at the Imagining Summit 2024
28 January 2025

Our opening keynote from the Imagining Summit held in October 2024 in Bend, Oregon. Join us for the next Artificiality Summit on October 23-25, 2025! Read about the 2024 Summit here: https://www.artificiality.world/the-imagining-summit-we-imagined-and-hoped-and-we-cant-wait-for-next-year-2/ And join us for the 2025 Summit here: https://www.artificiality.world/summit/

14 min
DeepSeek: What Happened, What Matters, and Why It’s Interesting
28 January 2025

First, apologies for the audio! We had a production error.

What's new:
- DeepSeek has created breakthroughs in both how AI systems are trained (making training much more affordable) and how they run in real-world use (making them faster and more efficient).

Details:
- FP8 training (working with less precise numbers): Traditional AI training requires extremely precise numbers. DeepSeek found you can use less precise numbers (like rounding $10.857643 to $10.86), cutting memory and computation needs significantly with minimal impact. It's like teaching someone math using rounded numbers instead of carrying every decimal place.
- Learning from other AIs (distillation): The traditional approach has an AI learn everything from scratch by studying massive amounts of data. DeepSeek's approach uses existing AI models as teachers, like having experienced programmers mentor new developers.
- Trial-and-error learning (for their R1 model): R1 started with some basic "tutoring" from advanced models, then practiced solving problems on its own. When it found good solutions, these were fed back into training, leading to "aha moments" where R1 discovered better ways to solve problems. Finally, it polished its ability to explain its thinking clearly to humans.
- Smart team management (Mixture of Experts): Instead of one massive system that does everything, DeepSeek built a team of specialists. It's like running a software company with 256 specialists who focus on different areas, one generalist who helps with everything, and a smart project manager who assigns work efficiently. Each task needs only 8 specialists plus the generalist, which is more efficient than having everyone work on everything. (A minimal code sketch of this routing idea follows the briefing.)
- Efficient memory management (Multi-head Latent Attention): Traditional AI is like keeping complete transcripts of every conversation. DeepSeek's approach is like taking smart meeting minutes, capturing key information in a compressed format, similar to how JPEG compresses images.
- Looking ahead (Multi-Token Prediction): Traditional AI reads one word at a time. DeepSeek looks ahead and predicts two words at once, like a skilled reader who can read ahead while maintaining comprehension.

Why this matters:
- Cost revolution: A training cost of $5.6M (versus hundreds of millions) suggests a future where AI development isn't limited to tech giants.
- Working around constraints: Limitations can drive innovation. DeepSeek achieved state-of-the-art results without access to the most powerful chips (at least that's the best conclusion at the moment).

What's interesting:
- Efficiency vs. power: Challenges the assumption that advancing AI requires ever-increasing computing power; sometimes smarter engineering beats raw force.
- Self-teaching AI: R1's ability to develop reasoning capabilities through pure reinforcement learning suggests AIs can discover problem-solving methods on their own.
- AI teaching AI: The success of distillation shows how knowledge can be transferred between AI models, potentially leading to compounding improvements over time.
- IP for free: If DeepSeek can be such a fast follower through distillation, what's the advantage for OpenAI, Google, or another company in releasing a novel model?
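To make the Mixture of Experts idea concrete, here is a minimal Python/NumPy sketch of top-k expert routing. The numbers (256 routed "specialists", 1 always-on "generalist", 8 specialists active per token) follow the analogy above; everything else, including the names, toy dimensions, and the use of simple linear experts, is an illustrative assumption, not DeepSeek's actual implementation.

```python
import numpy as np

NUM_EXPERTS = 256   # routed "specialist" experts
TOP_K = 8           # specialists consulted per token
DIM = 64            # toy hidden dimension

rng = np.random.default_rng(0)
gate = rng.normal(size=(NUM_EXPERTS, DIM))                 # router ("project manager")
experts = rng.normal(size=(NUM_EXPERTS, DIM, DIM)) * 0.01  # routed specialists
shared = rng.normal(size=(DIM, DIM)) * 0.01                # always-on generalist

def moe_layer(token):
    """Route one token through the shared expert plus its top-k specialists."""
    scores = gate @ token                        # affinity of token with each expert
    top = np.argsort(scores)[-TOP_K:]            # indices of the 8 best matches
    w = np.exp(scores[top] - scores[top].max())  # stable softmax over the winners
    w /= w.sum()
    out = shared @ token                         # the generalist always runs
    for weight, idx in zip(w, top):
        out += weight * (experts[idx] @ token)   # only 8 of 256 specialists execute
    return out

token = rng.normal(size=DIM)
print(moe_layer(token).shape)  # (64,): same output shape, ~3% of the expert compute
```

The design point the sketch illustrates: the router scores every expert, but only a handful actually run per token, so total model capacity can grow far faster than per-token compute.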

25 min
Hans Block & Moritz Riesewieck: Eternal You
25 January 2025

We’re excited to welcome writers and directors Hans Block and Moritz Riesewieck to the podcast. Their debut film, ‘The Cleaners,’ about the shadow industry of digital censorship, premiered at the Sundance Film Festival in 2018 and has since won numerous international awards and been screened at more than 70 international film festivals.

We invited Hans and Moritz to the podcast to talk about their latest film, Eternal You, which examines the story of people who live on as digital replicants, and the people who keep them living on. We found the film to be quite powerful: at times inspiring, at others disturbing and distressing. Can a generative ghost help people through their grief or trap them in it? Is falling for a digital replica healthy or harmful? Are the companies creating these technologies benefiting their users or extracting from them?

Eternal You is a powerful and important film. We highly recommend taking the time to watch it, and allowing for time to digest and consider. Hans and Moritz have done a brilliant job exploring a challenging and delicate topic with kindness and care. Bravo.

------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

44 min
How AI Affects Critical Thinking and Cognitive Offloading
25 January 2025

Briefing: How AI Affects Critical Thinking and Cognitive Offloading

What this paper highlights:
- The study explores the growing reliance on AI tools and its effects on critical thinking, specifically through cognitive offloading: delegating mental tasks to AI systems.
- Key finding: Frequent AI tool use is strongly associated with reduced critical thinking abilities, especially among younger users, as they increasingly rely on AI for decision-making and problem-solving.
- Cognitive offloading acts as a mediating factor, reducing opportunities for deep, reflective thinking.

Why this is important:
- Shaping minds: Critical thinking is central to decision-making, problem-solving, and navigating misinformation. If AI reliance erodes these skills, it has profound implications for education, work, and citizenship.
- Generational divide: Younger users show higher dependence on AI, suggesting that future generations may grow less capable of independent thought unless deliberate interventions are made.
- Education and policy: There's an urgent need for strategies to balance AI integration with fostering cognitive skills, ensuring users remain active participants rather than passive consumers.

What's curious and interesting:
- Cognitive shortcuts: Participants increasingly trust AI to make decisions, yet this trust fosters "cognitive laziness," with many users skipping steps like verifying or analyzing information.
- AI's double-edged sword: While AI improves efficiency and provides tailored solutions, it also reduces engagement in activities that develop critical thinking, like analyzing arguments or synthesizing diverse viewpoints.
- Education as a buffer: People with higher educational attainment are better at critically engaging with AI outputs, suggesting that education plays a key role in mitigating these risks.

What this tells us about the future:
- Critical thinking at risk: AI tools will only grow more pervasive. Without proactive efforts to maintain cognitive engagement, critical thinking could erode further, leaving society more vulnerable to misinformation and manipulation.
- Educational reforms needed: Active learning strategies and media literacy are essential to counterbalance AI's convenience, teaching people how to engage critically even when AI offers "easy answers."
- Shifting cognitive norms: As AI takes over more routine tasks, we may need to redefine what skills are critical for thriving in an AI-driven world, focusing more on judgment, creativity, and ethical reasoning.

Paper: AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking by Michael Gerlich: https://www.mdpi.com/2075-4698/15/1/6

30 min
J. Craig Wheeler: The Path to Singularity
19 January 2025

We’re excited to welcome Craig Wheeler to the podcast. Craig is an astrophysicist and Professor at the University of Texas at Austin. Over his career, he has made significant contributions to our understanding of supernovae, black holes, and the nature of the universe itself.

Craig’s new book, The Path to Singularity: How Technology Will Challenge the Future of Humanity, offers an exploration of how exponential technological change could upend life as we know it. Drawing on his background as an astrophysicist, Craig examines how humanity’s current trajectory is shaped by forces like AI, robotics, neuroscience, and space exploration, all of which are advancing at speeds that may outpace our ability to adapt.

The book is an extension of a course Craig taught at UT Austin, where he challenged students to project humanity’s future over the next 100, 1,000, and even 100,000 years. His students explored ideas about AI, consciousness, and human evolution, ultimately shaping the themes that inspired the book. We found it fascinating, as he says in the interview, that the majority of the scenarios projected into the future were not positive for humanity. We wonder: Who wants to live in a dystopian future? And, for those of us who don’t: What can we do about it? This led to our interest in talking with Craig.

We hope you enjoy our conversation with Craig Wheeler.

---------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

51 min
AI Agents & the Future of Human Experience + Always On AI Wearables + Artificiality Updates for 2025
17 January 2025

Science Briefing: What AI Agents Tell Us About the Future of Human Experience

What these papers highlight:
- AI agents are improving but far from capable of replacing human tasks. Even the best models fail at simple things humans find intuitive, like handling social interactions or navigating pop-ups.
- One paper benchmarks agent performance on workplace-like tasks, showing just 24% success on even simple tasks. The other argues that agents alone aren't enough; we need a broader system to make them useful.

Why this matters:
- Human compatibility: Agents don't just need to complete tasks; they need to work in ways that humans trust and find relatable.
- New ecosystems: Instead of relying on better agents alone, we might need personalized digital "Sims" that act as go-betweens, understanding us and adapting to our preferences.
- Humor in failure: From renaming a coworker to "solve" a problem to endlessly struggling with pop-ups, these failures highlight how far AI still is from grasping human context.

What's interesting:
- Humans vs. machines: AI performs better on coding than on "easier" tasks like scheduling or teamwork. Why? It's great at structure, bad at messiness.
- Sims as a bridge: The idea of digital versions of ourselves (Sims) managing agents for us could change how we relate to technology, making it feel less like a tool and more like a collaborator.
- Impact on trust: The future of agents will hinge on whether they can align with human values, privacy, and quirks, not just perform better technically.

What's next for agents:
- Can agents learn to navigate our complexity, like social norms or context-sensitive decisions?
- Will ecosystems with Sims and Assistants make AI feel more human, and less robotic?
- How will trust and personalization shape whether people actually adopt these systems?

Product Briefing: Always-On AI Wearables

What's new:
- New AI wearables launched at CES 2025 that continuously listen. From earbuds (HumanPods) to wristbands (Bee Pioneer) to stick-it-to-your-head pods (Omi), these cheap hardware devices are attempting to be your always-listening assistants.

Why this matters:
- From wake words to always-on: These devices listen passively, with no activation required, so the user must opt out by muting rather than opting in.
- Privacy? Pfft: These devices are small enough to hide and record without anyone knowing, and the Omi only turns on a light when it is not recording.
- Razor-razorblade model: With hardware prices below $100, these devices are priced to allow for easy experimentation; the value is in the software subscription.

What's interesting:
- Mind-reading?: Omi claims to detect brain signals, allowing users to think their commands instead of speaking.
- It's about apps: The app store is back as a business model. But are these startups ready for the challenge?
- Memory prosthetics: These devices record, transcribe, and summarize everything, generating to-do lists and more.

The human experience:
- AI as a second self?: These devices don't just assist; they remember, organize, and anticipate. How will that reshape how we interact with and recall our own experiences?
- Can we still forget?: If everything in our lives is logged and searchable, do we lose the ability to let go?
- Context collapse: AI may summarize what it hears, but can it understand the complexity of human relationships, emotions, and social cues?

27 min
Doyne Farmer: Making Sense of Chaos
12 December 2024

We’re excited to welcome Doyne Farmer to the podcast. Doyne is a pioneering complexity scientist and a leading thinker on economic systems, technological change, and the future of society. Doyne is a Professor of Complex Systems at the University of Oxford, an external professor at the Santa Fe Institute, and Chief Scientist at Macrocosm.

Doyne’s work spans an extraordinary range of topics, from agent-based modeling of financial markets to exploring how innovation shapes the long-term trajectory of human progress. At the heart of Doyne’s thinking is a focus on prediction: not in the narrow sense of forecasting next week’s market trends, but in understanding the deep, generative forces that shape the evolution of technology and society.

His new book, Making Sense of Chaos: A Better Economics for a Better World, is a reflection on the limitations of traditional economics and a call to embrace the tools of complexity science. In it, Doyne argues that today’s economic models often fall short because they assume simplicity where there is none.

What’s especially compelling about Doyne’s perspective is how he uses complexity science to challenge conventional economic assumptions. While traditional economics often treats markets as rational and efficient, Doyne reveals the messy, adaptive, and unpredictable nature of real-world economies. His ideas offer a powerful framework for rethinking how we approach systemic risk, innovation policy, and the role of AI-driven technologies in shaping our future.

We believe Doyne’s ideas are essential for anyone trying to understand the uncertainties we face today. He doesn’t just highlight the complexity; he shows how to navigate it. By tracking the hidden currents that drive change, he helps us see the bigger picture of where we might be headed.

We hope you enjoy our conversation with Doyne Farmer.

------------------------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

55 min
James Boyle: The Line—AI And the Future of Personhood
28 September 2024

We're excited to welcome Jamie Boyle to the podcast. Jamie is a law professor and author of the thought-provoking book The Line: AI and the Future of Personhood.

In The Line, Jamie challenges our assumptions about personhood and humanity, arguing that these boundaries are more fluid than traditionally believed. He explores diverse contexts like animal rights, corporate personhood, and AI development to illustrate how debates around personhood permeate philosophy, law, art, and morality. Jamie uses fascinating examples from science fiction, legal history, and philosophy to illustrate the challenges we face in defining the rights and moral status of artificial entities. He argues that grappling with these questions may lead to a profound re-examination of human identity and consciousness.

What's particularly compelling about Jamie's approach is how he frames this as a journey of moral expansion, drawing parallels to how we've expanded our circle of empathy in the past. He also offers surprising insights into legal history, revealing how corporate personhood emerged more by accident than design, a cautionary tale as we consider AI rights.

We believe this book is both ahead of its time and right on time. It sharpens our understanding of difficult concepts, namely that the boundaries between organic and synthetic are blurring, creating profound existential challenges we need to prepare for now.

To quote Jamie from The Line: "Grappling with the question of synthetic others may bring about a reexamination of the nature of human identity and consciousness. I want to stress the potential magnitude of that reexamination. This process may offer challenges to our self conception unparalleled since secular philosophers declared that we would have to learn to live with a god shaped hole at the center of the universe."

Let's dive into our conversation with Jamie Boyle.

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

#artificiality #ai #artificialintelligence #generativeai #airesearch #complexity #intimacyeconomy #spaciality #consciousness #knowledge #mindforourminds

58 min
Shannon Vallor: The AI Mirror
13 September 2024

We're excited to welcome to the podcast Shannon Vallor, professor of ethics and technology at the University of Edinburgh and author of The AI Mirror. In her book, Shannon invites us to rethink AI: not as a futuristic force propelling us forward, but as a reflection of our past, capturing both our human triumphs and flaws in ways that shape our present reality.

In The AI Mirror, Shannon uses the powerful metaphor of a mirror to illustrate the nature of AI. She argues that AI doesn't represent a new intelligence; rather, it reflects human cognition in all its complexity, limitations, and distortions. Like a mirror, AI is backward-looking, constrained by the data we've already provided it. It amplifies our biases and misunderstandings, giving us back a shallow, albeit impressive, reflection of our intelligence.

We think this is one of the best books on AI for a general audience published this year. Shannon's mirror metaphor does more than just critique AI; it reassures. By casting AI as a reflection rather than an independent force, she validates a crucial distinction: AI may be an impressive tool, but it's still just that, a mirror of our past. Humanity, Shannon suggests, remains something separate, capable of innovation and growth beyond the confines of what these systems can reflect. This insight offers a refreshing confidence amidst the usual AI anxieties: the real power, and responsibility, remains with us.

Let's dive into our conversation with Shannon Vallor.

-----------------

If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.

Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music

#artificiality #ai #artificialintelligence #generativeai #airesearch #complexity #intimacyeconomy #spaciality #consciousness #knowledge #mindforourminds

56 min