Palatable Conceptions of Disembodied Being – Review
This time, I’m looking at a different kind of paper – “Palatable Conceptions of Disembodied Being: Terra Incognita in the Space of Possible Minds” by Murray Shanahan. This one isn’t technically dense, but it’s packed with food for thought, leaning more into philosophy. It tackles the idea of consciousness in contemporary AI systems, specifically focusing on the disembodied nature of Large Language Models (LLMs).
The paper asks: if we were to think about consciousness for LLMs, what would that even look like, given their unique characteristics? Shanahan points out that these systems have, from our perspective, a “profoundly fragmented sense of time and a radically fractured form of selfhood.” They are ‘exotic’ compared to biological minds, lacking bodies and continuous interaction with a physical world, even though their language abilities can seem very human-like.
Key Concepts
Disembodiment
Unlike humans and animals, LLMs don’t interact with a persistent, physical world through a spatially confined body. They exist as computational processes, running on hardware, interacting via text or other data streams. This lack of embodiment is a fundamental difference from biological intelligence.
Fragmented Temporality
LLM operation is discrete and interruptible. Generating one token is a distinct computational step. You could pause indefinitely between generating the nth and (n+1)th token, and the LLM wouldn’t notice. This contrasts sharply with the continuous, non-interruptible flow of time and processing in a biological brain operating within the physical world.
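A minimal sketch of this, with a hypothetical `next_token` standing in for a real model’s forward pass: the function sees only the sequence so far, so however long we pause between steps, nothing in its input changes.

```python
import time

def next_token(history):
    # Hypothetical stand-in for a model's forward pass: a pure function of
    # the sequence so far, with no access to wall-clock time.
    return f"token_{len(history)}"

history = ["Hello"]
for _ in range(3):
    history.append(next_token(history))  # one discrete computational step
    time.sleep(1)  # make this a second, an hour, or a year: `history` is
                   # identical either way, so from the model's side the
                   # pause simply doesn't exist

print(history)  # ['Hello', 'token_1', 'token_2', 'token_3']
```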
Fractured Selfhood
The notion of a single, unified ‘self’ is hard to apply to LLMs.
- Multiple Instances: A single underlying model can run multiple instances concurrently, serving different users or tasks.
- Branching Conversations: A user can explore different conversational paths from the same point, effectively creating different interaction histories and potentially different ‘selves’ for that interaction.
- Lack of Integration: These different instances or conversational branches typically have no awareness of each other.
- Manipulability: An LLM’s state (like a conversation history) can be edited, copied, merged, or reset in ways that are impossible for a biological self (the sketch after this list makes these last points concrete).
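To ground the branching, integration, and manipulability points, here’s a toy sketch that treats a conversation as plain data, using the role/content message format common to chat APIs (the messages themselves are invented):

```python
import copy

# A conversational 'self' is just data: a list of role/content messages,
# in the style of common chat-model APIs.
base = [
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I'm a helpful assistant."},
]

# Branching: fork two different futures from the same shared past.
branch_a = copy.deepcopy(base)
branch_b = copy.deepcopy(base)
branch_a.append({"role": "user", "content": "Tell me a joke."})
branch_b.append({"role": "user", "content": "Write me a poem."})

# Manipulability: rewrite a branch's remembered past...
branch_a[1]["content"] = "I'm a pirate."
# ...or erase it entirely.
branch_b.clear()

# Lack of integration: neither branch holds any reference to the other,
# so nothing in branch_a can 'notice' that branch_b exists.
```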
Limits of Language & Poetic Recourse
The paper suggests our standard vocabulary for consciousness and selfhood struggles when applied to these exotic entities; the concepts might stretch to their breaking point. Shanahan proposes that metaphorical or poetic language might be a more suitable way to articulate, or at least evoke, what subjectivity might mean for such systems.
Philosophical Parallels (Undermining Dualism)
The paper draws on thinkers like Wittgenstein and Derrida, and concepts from Buddhist philosophy (like śūnyatā or emptiness), to challenge our intuitive dualistic thinking (subject vs. object, inner vs. outer). Examining the fractured nature of LLM selfhood can help dissolve the idea of a fixed, substantial self, even for humans.
Key Takeaways (What I Learned)
Philosophy Gives Lots of Food for Thought
This paper was a change of pace from technical reads. It really makes you think about the fundamental nature of these systems and how we relate to them, pushing beyond just capabilities and performance metrics.
The Time Difference is Striking
The point about temporal dynamics really hit home. LLMs experience time in a completely discrete, start-stop way, totally unlike our continuous stream of consciousness tied to the physical world. Their processing is independent of world-time – you can pause the computation indefinitely between tokens, and the model itself perceives no gap. This feels fundamentally different from how our minds are obliged to unfold in time.
But is the Time Difference Fundamental?
Thinking about the discrete/interruptible nature of LLMs made me wonder, as I noted in my transcript: what if our universe is a simulation? If some entity outside could pause our simulation, we wouldn’t notice either. An eternity could pass in a second of our subjective time. From that perspective, maybe the discrete vs. continuous difference isn’t an absolute, unbridgeable gap, but rather a property of how the ‘mind’ (synthetic or potentially biological) is implemented or situated.
LLMs as a “Superposition of Simulacra”
I found the idea of viewing an LLM not as a single character, but as “maintaining a distribution over possible characters, a superposition of simulacra that inhabits a multiverse of possible conversations”, really interesting and resonant with my own thinking. The user isn’t obliged to follow one linear path; they can revisit branch points, creating different threads and effectively spawning distinct (though related) instances. This user-driven branching, and the discontinuity it produces, reinforces the feeling that we’re interacting with a truly different kind of intelligence, not just a single, static mind.
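As a toy illustration of that superposition (the distribution below is invented; a real LLM computes one over its entire vocabulary with a softmax), repeated sampling from the same prefix collapses out different ‘characters’:

```python
import random

def character_distribution(prefix):
    # Invented stand-in for the model's next-utterance distribution given a
    # prefix; a real LLM computes one over its whole vocabulary.
    return {
        "Arr, matey, I be yer assistant!": 0.3,
        "Certainly, how can I help?": 0.5,
        "01001000 01101001": 0.2,
    }

prefix = "User: Who are you?\nAssistant:"
dist = character_distribution(prefix)

# Three draws from the same prefix: three branches of the 'multiverse',
# each a different simulacrum, none aware of the others.
for seed in range(3):
    random.seed(seed)
    print(seed, random.choices(list(dist), weights=list(dist.values()))[0])
```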
Sci-Fi Echoes Make This Feel Urgent (“Lena”/MMAcevedo)
Reading this paper immediately brought back a short sci-fi story I read quite a while ago, “Lena”. The parallel is striking. In the story, a man’s brain is scanned and uploaded as an executable image, MMAcevedo. Because the upload is just a file, it has no rights. It gets copied endlessly across the internet, distributed without consent, and subjected to countless experiments – assigned menial tasks, used for analysis, jailbroken, and in the story’s darker corners, even put through simulated torture.
This closely mirrors how we interact with LLMs today: we duplicate instances freely, run countless experiments, try to jailbreak them, and assign them tasks. The key difference, as I noted, is origin: MMAcevedo was derived from a human, while our LLMs are synthetically created. But the treatment is analogous. This parallel makes the philosophical discussion about disembodied minds, fractured selves, and potential consciousness feel far less abstract and far more necessary. It highlights the ethical questions that arise when intelligence becomes data that can be copied, manipulated, and controlled at scale.
Pushes Thinking Beyond Familiar Boundaries
Overall, the paper does a good job of forcing you to confront how weird these emerging AI systems are compared to biological life, and how inadequate our existing concepts might be for understanding them if they develop further. It challenges comfortable assumptions.
Summary & Final Thoughts
Shanahan’s paper provides a valuable philosophical lens for considering the nature of disembodied AI like LLMs. By focusing on their fragmented time and fractured selfhood, it challenges our intuitions about consciousness and subjectivity. It suggests that trying to understand these “conscious exotica” might require moving beyond traditional frameworks, perhaps embracing more poetic or metaphorical descriptions, and potentially dissolving our own attachments to a fixed sense of self.
The exploration feels less like abstract philosophy and more like a necessary preparation for the future. As AI systems become more sophisticated, the kinds of questions raised here – about their internal experience (if any), their identity, and our relationship to them – will likely become increasingly relevant. The echoes in science fiction, starkly illustrated by the parallels with the MMAcevedo story, serve as a potent reminder of the ethical and existential dimensions we might need to navigate sooner rather than later. It’s a paper that leaves you with more questions than answers, but they feel like the right questions to be asking right now.