Date: 2nd June 2021
Speaker of the day: Mog Stapleton
Articles: Varela, “Whence Perceptual Meaning? A Cartography of Current Ideas” (1992)
Abstract: We discussed how the relations between cognitivism, connectionism and the enactive approach have changed since “Whence Perceptual Meaning? A Cartography of Current Ideas” was first written in the mid-eighties. Which of the recent advances in the cognitive sciences could be compatible with the spirit of enaction?
Keywords: cognitivism, connectionism, predictive processing, deep learning, enaction, neurophenomenology, microphenomenology, free energy principle, ecological psychology, representations, sense-making
On the face of it, the paper reads as a kind of précis of The Embodied Mind (TEM), which was first published in 1991. Whence Perceptual Meaning? A Cartography of Current Ideas was published in 1992, but it comes out of a conference held in 1987. The paper differs from an exact précis of TEM in that it does not expressly connect enactive ideas with Buddhism, and it does not discuss the rejection of adaptationism in evolutionary science.
Given that the paper is now about 30 years old, we might assume that it is useful merely as a historical summary of the positions that were mainstream in cognitive science (CS) at the time. It is, of course, more than a summary paper: its themes characterize the spirit of enactive thought. These topics are especially salient when considered in the light of the introductory paper to the volume of the aforementioned conference. Its co-authors, Varela and Jean-Pierre Dupuy, run through not merely the CS aspect, but acknowledge how these themes resonate with approaches in continental philosophy as well as in evolutionary biology.
The paper’s main topics are the following:
1) Rejection of the idea that cognition is about representing an objective external world.
2) Rejection of the separation between form and meaning.
3) Advocacy of the importance of context and history.
4) Rejection of the attitude that cognitive systems are problem-solvers of pre-given problems. Rather, natural cognitive systems are taken to pose the problems to be solved: these emerge from their contextual background in the process of “bringing forth” a world.
Varela identifies three waves of CS that emerged from the cybernetic movement:
Cognitivism: It is a move from the cyberneticians’ aim to mechanize the mind by producing a model of the mind toward producing models for the mind, i.e., seeing the abstraction itself as constituting what mind or cognition is. Technologically speaking, this approach has been very successful. But notably, these kinds of AI are only weakly artificially intelligent: they have not brought us any closer to an artificial system that exhibits genuine intelligence of the kind found in (even very simple) natural cognitive systems. Nevertheless, this research programme has been greatly influential not just among researchers, but has strongly affected the way that the general public thinks and talks about cognition as well. Even the enactivists have to be vigilant, so as not to end up phrasing things in a way that would make them look committed to approaches they do not hold.
Connectionism: Today, neural networks are at the foundation of machine learning, and currently, the “new kid on the block” is deep learning. While the latter may be able to address some of Varela’s concerns about context and history, it is unclear whether it can address his bigger concern about AI systems acting on a pre-given world. This means that meaning does not originate in the system but is instead foisted upon it from the outside (in this regard, deep learning appears to be no different from connectionism and cognitivism as they were 30 years ago).
Enaction: Varela does not reject representations in principle: his objection is to understanding the mind fundamentally in terms of representations, whether carried by symbols or by states of the system, because representations necessarily bracket out context and bodily and social history.
The enactivists who have worked to understand cognition in terms of these ideas have gained a steadily firmer grip, but even now they do not make up mainstream CS. Information processing terminology continues to pervade our common sense; we still see people write about the brain and cognitive processing using the computer metaphor, and it is not always clear whether they really see it as a metaphor rather than as a deep insight into cognition. However, with the rising popularity of predictive processing (PP), ecological psychology and affordance theory, the rejection of cognition as representing an objective external world is becoming a more acceptable mainstream view in the CS community. Even if the named approaches eventually turn out to be wrong, they are still helping to shape our contemporary cognitive scientific, and perhaps even popular, sense in a way that pushes us further away from chicken-and-egg positions.
With respect to the importance of context and history, the following developments are taking place:
a) Interoception research shows that the idea that internal physiological changes and differences take part in our cognition is being taken seriously.
b) It is becoming more common to see articles on how non-neural cells, such as glial cells, microglia and other immune cells, interact with our nervous system and hence play critical roles in the functioning of our brain and cognition.
c) A greater acceptance of the social, cultural and historical influences on cognition can be seen in the analytic approaches, which form the bulk of the philosophy of CS.
d) A much stronger recognition of the role that the environment has in perception and cognition might be attributable to a greater engagement of ecological psychology, and of philosophy in general, in CS.
1) Sebastjan Vörös: Francisco’s papers are brimming with interesting new ideas that play around with philosophical notions, and they do so in a scientifically informed manner that nevertheless manages to break through sedimented and well-defined conceptual structures. There is a strong creative impetus. To what degree do you think that Francisco wanted the ideas in this paper to become a full-blown scientific paradigm? I wonder whether enaction has shared the same fate as cybernetics: in the article itself, Francisco writes that there is a price every mature science has to pay when passing from an exploratory stage to a research programme – a cloud becomes a crystal. Enactivist circles are nowadays well-distinguished, and some of the originally creative ideas are perhaps becoming clearer and better applicable to specific problem areas, which, however, comes at the expense of their prolific momentum. John Stewart was, for example, averse to talking about enactivism, asserting that Varela only ever spoke of enaction.
Mog: Similarly to cybernetics, these ideas are going off in various directions, being taken up by different groups that do not necessarily take aboard all of the enactive themes. Aside from that, I think that enactivism is in fact far from being dissolved: there is an abundance of active enactivist research and reading groups, and, for instance, Di Paolo, Cuffari and de Jaegher have recently published Linguistic Bodies, where they expand these ideas into areas not yet explored by enactive thought.
Evan Thompson: An earlier version of this paper was prepared as a report for the Shell oil corporation, commissioned by Peter Schwartz around 1985. When I arrived in Paris in 1986 to start working on TEM, Francisco gave me two texts as reference points: this report, and a transcript of lectures he had given on Buddhism and CS. So TEM is an interweaving, amplification and rewriting of those texts.
In the first version of today’s paper, he called his approach hermeneutic, with explicit reference to Ricoeur, Gadamer, and phenomenology. Not content with the phrasing, he later replaced it with the term enactive. He might have taken it from Jerome Bruner, who had used it before, and employed it in a new way (as he usually did with words and concepts that resonated with him). Bruner’s work on educational theories of development was popular in the 1980s, and I think that the ideas of the two are resonant. For all that, Francisco may also have come up with the word independently – I do not know whether he was acquainted with Bruner’s work.
Now, what exactly does enaction signify? It is true, we never used the term enactivism, but in the pressure-cooker context of anglophone philosophy, one is constrained to state one’s thesis, argument or concept, so things have a tendency to become –isms. For me, enaction has always been a framework where many different ideas, not all of which may be in harmony with each other, can be developed and can proliferate in different ways. What is vital about enactive thinking right now is precisely that it has seen further creative development, which then comes into interaction with other areas in development as well, such as ecological psychology, PP and the free energy principle. Surely, a lot of this might be confused and not really in sync with the core of enactive thinking, but at the same time it is different from cognitivism, which defines itself in terms of a particular hypothesis, namely, that the mind is a computational or representational system. The enactive approach is not defined on the basis of that kind of claim, except perhaps in the more general sense that the origins of meaning and cognition have to be rooted in an autonomous system. That is what still distinguishes it from deep learning, the latter being a souped-up connectionism: its back-propagation learning algorithm is a super high-powered computation, but at the same time it is brittle, it cannot construct meaning, it is prone to miscategorization, and it encodes all of our human biases – which is all in perfect keeping with Francisco’s critique of connectionism in this paper.
2) Natalie Depraz, Mareike Smolka and Mathis Trautwein: PP is a hot paradigm in CS. To what extent, if at all, is it compatible with enaction?
Natalie: It is difficult to consider predictive machines to be completely deterministic, even though the processing happens without any subject being involved. We need to check whether the results of the processing are completely predictable (a result of linear processes between the input and the output), or whether something else goes on that brings out something not intrinsically included in that process. In the second case, I would say that it could be compatible with something that we enact, as something not completely predicted from the start.
Gábor Karsai: The classical interpretation of PP is that it is still representational and has some internalist commitments. This is the critique of PP from an enactivist perspective. There are approaches to PP, however, that try to bring in an enactive or embodied, non-representational outlook. I am personally not convinced by these more recent approaches to PP; there seems to be a gap between what PP is about and how enaction – this new model of the mind and of the relationship between the mind and the body – is envisioned.
Evan on predictive processing: PP is a new term for a very old idea. What is new in PP now is the use of Bayesian probability theory, where you have prior and posterior probabilities, and you are constantly updating on the error signal you are getting (i.e., the mismatch between sensory input and your expectation). The Bayesian framework is a particular way of treating the brain as an expectational or predictive system, and there is no issue with using this purely formal approach in PP as a heuristic tool, given certain kinds of questions and constrained experimental situations. The more substantive issues arise when you claim that this is what the brain biologically is, as a cognitive system. With that, you lose sight of the brain as an autonomous system, because you are now treating it as a heteronomous one. If it is an autonomous system, you have to ask where the priors come from. A typical answer given in PP is that their origins lie in development and evolution. But that just kicks the problem back: you need to understand how there could be a system such that it could evolve and develop so as to have priors in the first place. And that is not the kind of question that the PP framework is very good at addressing.
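The Bayesian updating scheme described here can be made concrete with a small toy sketch (my own illustration, not part of the discussion; the function name and numbers are hypothetical): a Gaussian prior over a hidden cause is revised by a precision-weighted prediction error, and the posterior becomes the prior for the next round of sensory input.

```python
# Toy sketch of the Bayesian belief updating that PP builds on:
# a Gaussian prior is revised by a precision-weighted prediction error.
# Illustrative only; not any specific PP model from the literature.

def bayes_update(prior_mean, prior_var, observation, obs_var):
    """One step of Gaussian belief updating.

    The posterior mean moves toward the observation in proportion to
    how reliable (precise) the sensory signal is relative to the prior.
    """
    error = observation - prior_mean           # prediction error
    gain = prior_var / (prior_var + obs_var)   # relative precision
    post_mean = prior_mean + gain * error      # posterior expectation
    post_var = (1 - gain) * prior_var          # uncertainty shrinks
    return post_mean, post_var

# Repeated sensory input around 2.0 pulls a vague prior toward it,
# and each posterior becomes the next prior.
mean, var = 0.0, 1.0
for obs in [2.0, 2.1, 1.9]:
    mean, var = bayes_update(mean, var, obs, obs_var=0.5)
```

The point of the toy is only the formal structure Evan mentions: priors, error signals, and posterior updating; it says nothing about where the priors themselves come from, which is precisely his objection.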
The third element that enters the picture comes from Karl Friston’s work in neuroscience, where he tries to analyze the brain using ideas about free energy from thermodynamics. He goes as far as to generalize it into a theory of life as such. But using this kind of dynamics for such a purpose only leads to a loss of history: the system is not strongly path- or history-dependent, because it returns to certain types of average behaviour over and over again. The enactive approach has always foregrounded the idea that living cognitive systems modulate the parameters and constraints of their coupling with the environment in a way that is greatly path- and history-dependent. Think of trauma and how you adapt or do not adapt after experiencing it. Or what it is to learn a second language and then move abroad and spend the rest of your life speaking that second language. Or to learn a new sensorimotor skill in your 50s or 60s, such as a new type of dance or a martial art. These kinds of behaviours, which living systems specialize in, are not understandable within a framework limited to a non-equilibrium steady state regime – such a treatment does not give us a good grip on sense-making.
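The contrast drawn here between steady-state dynamics and path-dependence can be caricatured in a few lines of code (a toy of my own devising, not Friston’s or Varela’s model; both function names are hypothetical): a system relaxing to a fixed point washes out its initial conditions, while a system whose coupling parameter is itself modulated by its inputs carries its history forward.

```python
# Toy contrast between a steady-state system and a history-dependent one.
# Illustrative only; neither function models any published framework.

def relax(x, steps=200, target=0.0, rate=0.1):
    """Steady-state caricature: whatever the initial condition,
    the state decays toward the same average behaviour."""
    for _ in range(steps):
        x += rate * (target - x)
    return x

def path_dependent(inputs, sensitivity=1.0):
    """History-dependent caricature: each perturbation permanently
    re-tunes the system's own coupling parameter (cf. trauma,
    second-language learning, a late-acquired sensorimotor skill)."""
    gain = sensitivity
    state = 0.0
    for u in inputs:
        state = gain * u
        gain += 0.1 * abs(u)   # the coupling itself is modulated
    return state, gain

# Two very different initial states end up essentially identical…
a = relax(5.0)
b = relax(-3.0)
# …whereas two input histories ending in the same input do not.
```

In the first function the past is erased; in the second, identical present inputs yield different responses depending on what came before, which is the kind of behaviour the enactive approach foregrounds.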
Evan on enaction and neurophenomenology: In Steps to a Science of Interbeing, Francisco presents the evolution of research and ideas. Neurophenomenology is introduced as a further development of – or even a superseding of – the enactive approach. I personally was never convinced by the move that Francisco made here. The microphenomenological method is a method, meaning that it is good for certain questions (say, for sleep and dream reports), bad for others, and has a lot of epistemological problems. Questions need to be raised about its procedure, which consists of asking subjects about the unfolding of an experience in a way that arguably orients them to confabulate aspects of it, even while the method is epistemologically presented as an uncovering of a pre-existing experience. This is arguably problematic with regard to the question of sense-making. The idea that we could put microphenomenological interviews together with a deep learning algorithm, which is not sensitive to sense-making, and in such a way account for sense-making, just seems to me not workable at all.
Sebastjan: It probably also depends on what one means by phenomenology: in the Varelian setting, it is sedimented in the form of microphenomenology, which is a specific type of phenomenology, very different from, say, Merleau-Pontian phenomenology (which is more (meta)philosophical).
Urban Kordeš: If you take the original idea of free energy very seriously, it is an idea that makes you take a step back and see the whole system, and as such it enables you to ask the following array of questions: So the task of the system is to keep free energy at a minimum – but then, what is that system? Is this autonomy? Do we need autonomy? Etc. Instead of drawing a line around enaction, we can try to point out where the questions of PP are, hoping that PP is really a slippery slope towards enaction. It is the closest thing to enaction that we have seen so far, and what differentiates it from the latter might very well be the baggage of the researchers doing PP, not so much a drawback of the theory itself.
3) Timotej Prosen: There have been many attempts to wed enaction with representationalism of some sort. Representationalism is anathema for some, but it seems to me after reading this paper that the main reproach enactive thought makes to classical CS is not representationalism as such, but the lack of an account of how representations are acquired without resorting to a pre-defined problem space and supervised learning. Could one say that the enactive approach therefore does not preclude talk of representations, but only shifts the focus to the process of their genuine, i.e., unsupervised, learning?
Shaun Gallagher: There is a whole long history to the use of the term representation. Some empirical scientists complain that they cannot do their science without this concept. There are cognitivist views that take representations to be mechanisms, the actual things that explain how cognition works. Another idea of representation is that it is rather a product of cognitive mechanisms that might not themselves be representational. And then there is representation in a much broader sense, where we can talk about external representations, that is, things out there in the world that represent other things – in language, of course. I find the most troubling conception to be the first one, representations as doing some of the work in cognition or being part of the explanation: that is the kind of representation I would like to reject. The second idea (representation as a product of a cognitive system) is a kind of in-between concept. And I am perfectly okay with external representations.
Evan: Again, it is useful to think about the origins of the word and the context of its use, especially in the way that Francisco writes about it. It is a term that really has its home in the representational theory of mind, and then in the updating of that theory via the notion of computation. Here, the idea of what it is to be a mind is to instantiate and manipulate representations. To object to the use of representation as a heuristic explanatory notion would be misguided, because the concept is a tool, and if it is useful in that way, it is fine. The problem comes when you then turn around and make it into a mechanism, or reify it as what the mind fundamentally is. You are taking out a loan on meaning: you have to be prepared to say what the structure or format of the representational system is, how it is substantiated biologically, and where the meaning comes from. And the fact is that we do not have any theory as to how content is created in the interaction of the brain, the rest of the body, and the environment. We also do not have a well-formed understanding of what “the code” or the format of representations in the brain would be. From the enactive perspective, for these more fundamental underlying questions about cognition and meaning, you have to go to the properly enactive notions, such as autonomy and sense-making. As Francisco shows in the paper, cognitivism, connectionism and enaction relate to each other by successive imbrication: moving inward, you are bracketing things out, and then you can use the word representation; but if you get fixated on that and actually think that you have a grip on what meaning is, then you are making an illegitimate move.
4) Wolfgang Lukas, Mary Reese, Mareike Smolka and Viktorija Lipič: How can one directly apply the practice of enaction beyond the reading, thinking and talking about it?
Gábor: Talking about enaction is not non-enactive or neutral, nor does it require an extra dimension of enaction to make it more enactive. There is a danger in dissociating acting per se on the one hand from talking on the other, thinking that talking is just talking, which can potentially lead to a new dogma of “acting, not just talking.”
Urban: The bulk of my work is to try to direct enactivist ideas towards ourselves as empirical researchers of experience. My question, perhaps for our future sessions, is whether we can see that here, in this debate? Can we see the way that we are enacting our knowledge, our common ground, our common sense-making?