The other day I wrote about beard-spiders and what I thought of them. Or at least, the things about beard-spiders that I made a mental note to think about at some future time.
Well, that time is now. Ish. Probably not, actually, but at least I’m going to write about basic minds, which I started thinking about because of the beard-spiders.
Let me say from the outset that I may very well misuse some terminology, so feel free to correct me in the comments below. Anyway, this really is just a bit of me trying to get a grip on some concepts in a sort of stream-of-consciousness way without drowning myself in jargon, so try not to get too worked up about it.
Nagel’s paper, as I mentioned the other day, has had a remarkable staying power. In it he takes aim at reductionist theories of mind, which attempt to address the classic mind-body problem of consciousness: what is the relationship between consciousness and subjective experience, and our physical brain and body? Reductionists want to solve this problem by reducing away the complicated bit of that equation — consciousness.
Nagel argues that no reductionist formulation can eliminate subjective experience — that if we consider some organism to have any level of conscious experience, then there must be “something it is like to be that organism”. He proposes that attempts to eliminate subjective experience from the problem are doomed to failure:
“[Subjective experience] is not captured by any of the familiar, recently devised reductive analyses of the mental, for all of them are logically compatible with its absence. It is not analyzable in terms of any explanatory system of functional states, or intentional states, since these could be ascribed to robots or automata that behaved like people though they experienced nothing. It is not analyzable in terms of the causal role of experiences in relation to typical human behavior—for similar reasons. I do not deny that conscious mental states and events cause behavior, nor that they may be given functional characterizations. I deny only that this kind of thing exhausts their analysis.” (p. 436)
The case of the bat appears shortly after this introduction. If we presume that bats have some level of subjective experience, that there is something it is like to be a bat, then we would imagine that their experience is something very different from our own. Bats have different bodies, different senses, and live in entirely different environs than we humans do. Even if we amuse ourselves by examining what our lives would be like if we did the things that bats do, it would still be impossible, in Nagel’s view, for us to “know what it is like for a bat to be a bat” — we would still only know what it would be like for a human to be a bat.
This inability to access the experiences of the bat leaves us with a conundrum, if we are to develop a physicalist conception of mind that includes subjective experience. Nagel argues that experience is something fundamentally irreducible — that unlike other phenomena we might describe with physical theories, subjective experience is impossible to describe in a way which would be comprehensible to another species. There’s no way to replace the subjective with the objective when it comes to experience.
That’s the core of it, in my view. I was going to continue here and summarise some more, and then delve into Daniel Dennett’s reply, but that’s pretty irrelevant to this post. Also I’m lazy.
What still interests me about this paper, despite its age, is its ability to provoke debate and thought from what is, at its core, a quite simple argument built on a very accessible premise. To me it’s powerful philosophical writing — the author presents his views openly and with clarity, and the topic is explained through a relatable example which gradually builds upon itself until you reach a conclusion. Whether that conclusion is agreeable or not is up to the reader, but I feel most would agree that at least the journey to reach it was interesting and enjoyable.
So what I hope to do, inspired by Nagel’s example, is to try to understand my own objections to the current way of thinking in cognitive science and philosophy of mind by using a simple example that I can actually grasp. In cognitive science and philosophy today we see a lot of excitement around enactivism, the view that cognition is not a process undergone by a mind being given a world and imposing concepts and schemas on it, but that it arises from the active engagement of an organism with the world around it. Organisms engage with the world through sensorimotor interactions with their environment, and the world affords them the ability to engage in certain actions; all of this allows us to enact the world and thus experience it.
I’ll be the first to admit that there’s an appealing sense of action (enaction) to this view. We’re not mere computational machines performing operations on sensory data we’re given by the world — we’re organisms interacting, discovering our environments, moving amongst not a static and lifeless external world, but actually a rich tapestry of information and experience that informs every second of our being. Sounds great, doesn’t it?
Yet there’s something fundamental in this picture that leaves me wanting. I’m not sure what it is exactly, but the predominant feeling I get is one of anticlimax. I guess I remain skeptical that enactivism actually moves us any closer to an understanding of mind and experience; I feel it simply kicks the mind-body can down the road rather than actually chucking it in the recycling bin. Or, perhaps more accurately, it rejects the mind-body problem entirely and attempts to replace it with a body-body problem. I’ve heard various arguments about this from trusted friends on various occasions, but my skepticism remains hard to dislodge.
There are a few reasons this might be — I often have trouble unpacking enactivist definitions of behaviour and cognition, for example. I also have certain objections to the ever-intensifying antirepresentationalist stance, which in my view dismisses quite legitimate objections raised even by enactivism-friendly philosophers (Andy Clark being one example). But both of these points have been addressed a great deal by others with much greater philosophical acumen than mine, so I’ve been looking for something to talk about that would capture more of my own experiences and interests, and not just be a second-rate Ned Block (among others).
A few years ago I posted briefly about a book that was coming out called Radicalizing Enactivism: Basic Minds Without Content. When a book is released with a provocative title like that, it’s no surprise that tons of interested parties wanted to review it. The authors come under fire in some of these accounts for their dismissal of the idea that basic minds are capable of sense-making, as this would presume that basic minds are capable of interpretation. Or something, I’m still reading this stuff.
Some would prefer to go all the way down the rabbit hole and decide that basic minds can participate in sense-making, and that “the problem of mind is that of the problem of life” (Alva Noë, Out of Our Heads, 2009, p. 41). I’m not sure how to think about this just yet, and I’m hoping some people will direct me toward some interesting debates on this front.* Others would shy away from that point, leaving the sense-making and content and intentionality to adult human minds.
Either way, I’m filled with questions. How do we characterise basic minds? What do enactivists and representationalists make of them, and having had time to work out some examples, what do I think of them?
As far as I can make out, basic minds are an important concept when speaking about enactivism, at least in the Radical Embodied Cognition sort of way espoused by Daniel Hutto and others. Hutto’s project seems all about characterising cognition as being entirely free of content, or as he puts it, ‘[rejecting] the thesis that Cognition Involves Content, in its unrestricted form’. My initial reaction to this was immediate and forceful — of course cognition can have content, you weirdo! — but as with everything it depends on how you define content, and cognition for that matter. In order to understand this project, I also need to understand basic minds and their place in the argument.
That being the case, there’s a lot of work to be done on my part to understand exactly what’s being talked about here, and hence my recent nostalgic revisiting of Nagel. At this point I feel that if I’m going to develop an understanding of these concepts I need to follow Nagel’s lead, grab a substantive example, and follow it through until something illuminates my thoughts more clearly. More than anything I want to discover the root of my objections and find out whether they hold any water. It’s quite possible that they don’t, but I feel either way that it’s a process worth going through for my own edification.
As part of this little project I’m building up quite a reading list of books and articles — some of these will be re-acquisitions, like Mind in Life and a few related tomes which have unaccountably vanished from my possession. In particular I’m interested in Lawrence Shapiro’s Embodied Cognition, which apparently spends some time focusing on Randall Beer’s work, and if there’s one thing I’m always happy to do it’s to read about Randy’s papers on minimally cognitive agents. Other suggestions on minimal cognition and enactivism, basic minds, etc. are more than welcome.
It’ll probably be a little while before my next post, so I’ll leave you with some interesting critiques of enactivism from people who actually know what they’re talking about:
Jesse Prinz (1): www.theassc.org/files/assc/2627.pdf
Jesse Prinz (2): http://subcortex.com/IsConsciousnessEmbodiedPrinz.pdf
EDIT: Another good one from Xabier Barandiaran: https://xabierbarandiaran.files.wordpress.com/2009/07/barandiaran_-_2014_-_enactivism_without_autonomy_-_aisb50-s25-barandiaran-extabs.pdf
*Shout-out here to Tom Froese, who pointed me to this quote and related points through one of his papers which I can’t find just now. You write a lot of papers, Tom.