Though some may scoff and grumble, it’s impossible to deny: the human brain resembles a computer. It serves, after all, to process information.
The nervous system can be studied for many purposes — not only to uncover how cognitive or, more broadly, mental processes unfold. A toxicologist hired by conspirators might, for instance, be interested in the fastest way to poison James Bond and halt the workings of his brain (otherwise Bond might activate a special gadget that…). And while scheming to dominate the world may be more thrilling — and arguably more lucrative — here I’ll focus on the more modest aim of explaining mental processes.
Since the second half of the 20th century, researchers have pursued this goal using computational models, in other words, computer simulations. The typical justification for this research practice is straightforward: computer simulations can accurately predict and explain cognitive processes because, in essence, the brain is a computer. More precisely, one should say that the nervous system serves to process information. This view is known as computationalism.
Computationalism continues to spark heated, almost ideological debate. Many of these arguments resurface in discussions of artificial intelligence. Critics often point out that the brain is a biological system, that it doesn’t resemble a typical computer loaded with software, that artificial intelligence has yet to outwit good old common sense… Philosophical daredevils — and not just any daredevils, but thinkers like the renowned American philosopher John Searle — go so far as to claim that computers don’t really exist at all, because whether something is a computer is merely a matter of rather liberal interpretation of reality.
Another great, recently departed philosopher, Hilary Putnam, once said much the same — even backing his point with mathematical reasoning. A skeptic like Saul Kripke would chime in: we can’t even be sure computers are computers, i.e., that they truly carry out what the programmer intended.
I call these philosophers daredevils because surely they do not take such claims entirely seriously. Searle probably doesn't write his texts while staring at the wall of his office, even though he once said the wall could, in principle, be described as a computer running the WordStar word processor. Kripke likely doesn't genuinely wonder whether a mobile phone differs from a book that simply runs Android badly whenever one tries to place a call with it. Surprisingly, these daredevil arguments are difficult to refute. Here, however, instead of wrestling with them, I will briefly reconstruct the reasons why computationalism remains, and will remain, unmatched in the study of how the mind works.
What Is Cognition?
It is hard to say what precisely characterizes cognitive processes, in contrast, say, to emotional processes or to the regulation of body temperature, for which the brain is also responsible. One might say that to know anything, one must think. And thoughts must have some content. Yet this brings us to a difficult concept, content, which some researchers of cognitive processes prefer not to touch at all. What to do? Let's lower the bar. Even those who won't talk about content are unlikely to deny that cognitive processes must involve the reception and processing of information. Granted, the concept of information is also tricky, but it generates fewer controversies among cognitive scientists and philosophers of mind.
In its simplest, structural sense, we can speak of information whenever something, namely the information carrier, can change, that is, can be in at least two states. These changes must be recorded by another system; only then do they make a difference that matters. If they are recorded (by any receiver whatsoever), the states of the carrier can carry information. This is the minimal concept of information: we cannot yet say what the information is about or whether it is true. But information that is about something, or that is true, is still structural information at bottom. Structural information is therefore the smallest common denominator of all more complex conceptions of information.
Let's combine the two points: if cognitive systems are to know their environment, they must interact with it in a way that lets them receive information from it. If they are to know their own states, they must receive information about those as well, for instance about the position of the head, from the sense of balance. Yet incoming information does not rigidly determine behavior or reasoning: when I look at my uncomfortable chair, I don't always reach the same conclusion ("Time to buy something comfortable!"). Sometimes I think about not tripping over it, other times about where to put a book down for a moment. For this variability to be possible, information must be processed: the states of one carrier must regularly trigger changes in other carriers of information. Only then can we speak of information processing.
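A toy sketch can make this minimal picture concrete (the code and its names are purely illustrative, not a standard formalism): one carrier that can take at least two states, a receiver that records them, and a second carrier whose state changes regularly as a function of the first.

```python
# A schematic toy model of structural information and its processing.
# Everything here is illustrative; no standard formalism is assumed.

class Carrier:
    """An information carrier: something that can be in at least two states."""
    def __init__(self, states, state):
        assert len(set(states)) >= 2, "a carrier needs at least two possible states"
        self.states, self.state = states, state

def receive(carrier, log):
    """A receiver records the carrier's state; only recorded states carry information."""
    log.append(carrier.state)

def process(source, target):
    """Minimal 'processing': states of one carrier regularly trigger
    state changes in another carrier."""
    target.state = "HIGH" if source.state == "ON" else "LOW"

switch = Carrier(["ON", "OFF"], "ON")   # first carrier
lamp = Carrier(["HIGH", "LOW"], "LOW")  # second carrier
log = []
receive(switch, log)    # the switch's state is recorded by a receiver...
process(switch, lamp)   # ...and regularly changes the state of another carrier
print(log, lamp.state)  # ['ON'] HIGH
```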
From Mathematical Machines to Computers
In Polish, the term "computer" became common relatively late; its use spread in the 1980s with the rise of home computers. Before that, people spoke of mathematical machines, a name that should not be confused with mathematical models of how computers (the physical devices) operate.
A computer, then, is a device, a system with a specific organization, that performs computations. It has parts that interact with one another, for example persistent memory carriers. We can say that a computer implements a certain model of computation only when its operation, that is, everything that can be obtained from it by operating it, can be correctly predicted and explained in terms of that model.
Imagine that Faust is handed the computer "Mephistopheles 2.0" for analysis; according to the manual, its function is "always to deny." Faust frames a hypothesis, stated as a formal model of computation: after the word "YES" is entered, Mephistopheles will spit out "NO." And indeed, even after "NO" is entered, Mephistopheles still answers "NO," so the model's predictions hold. What matters is that Mephistopheles 2.0 is a device structured to perform exactly these computations; it has been designed to compute this and not something else. The result of the computation should also be usable to control some other device (where influencing the user cognitively also counts as such use).
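Faust's procedure amounts to checking a formal model against the device's behavior, and it can be pictured in a few lines of Python (a purely illustrative sketch with invented names):

```python
# Illustrative sketch: testing whether a device fits a formal model of computation.

def model_prediction(user_input: str) -> str:
    """Faust's hypothesis: Mephistopheles 2.0 computes the constant function 'NO'."""
    return "NO"

def mephistopheles_2_0(user_input: str) -> str:
    """Stand-in for the actual device, whose function is 'always to deny'."""
    return "NO"

# The hypothesis survives as long as the device's answers match the predictions.
for query in ["YES", "NO", "PERHAPS"]:
    assert mephistopheles_2_0(query) == model_prediction(query)
print("The model's predictions hold for all tested inputs.")
```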
We also have the mathematical theory of computation, founded on the work of the brilliant mathematician Alan Turing (1912–1954). His mathematical model of a machine, later known as the "Turing machine," formalized the concept of an algorithm that mathematicians had used intuitively for centuries. Strictly understood, an "algorithm" is a procedure that a Turing machine can carry out using a very limited repertoire of basic operations. These correspond to what a clerk might do mechanically with an almost unlimited supply of paper, a pencil, and an eraser: writing, erasing, and moving left or right along the paper (or tape). Turing demonstrated that there can also be a universal machine: one that can perform any computation available to any other Turing machine (each of which performs a fixed set of predetermined operations). He claimed that such a machine could compute anything that can be computed effectively, although some argue that there might be physically possible computations of functions no Turing machine can compute; this is known as "hypercomputation."
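The clerk's repertoire is so limited that one can simulate a Turing machine in a handful of lines. Here is a minimal illustrative sketch (not Turing's own formalism) of a machine that inverts a string of bits:

```python
# A minimal Turing machine simulator (illustrative sketch).
# A rule maps (state, symbol) -> (symbol to write, move, new state).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write              # write (or erase) a symbol...
        head += {"R": 1, "L": -1}[move]  # ...and move along the tape
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips every bit and halts at the first blank cell.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip, "10110"))  # -> 01001_ (every bit inverted)
```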
A lot of ink has been spilled trying to spell out all the conditions a physical system must meet to count as a "computer," but we can stay at a fairly general level. What matters is that the system has a computational function and an appropriate structure. It must be a mechanism used to compute according to some mathematical model of computation, and its operation must not be explainable more simply and elegantly. Thus a bucket of water is not a computer, because the water level does not constitute an expected result unless the bucket is used to perform computations. Why this qualification? Because a computer could be hydraulic; in the mid-20th century, for instance, there was a model of the Keynesian economy implemented as a complicated water-based computer, the MONIAC. There are also (somewhat humorous) theoretical works showing that buckets of water could compute the logical functions that underlie pattern recognition in networks of neuron-like elements. Similarly, an ordinary stone is not a computer: as it lies in the meadow, it does not compute even a simple constant function (the fact that it lies to the left of the apple tree does not mean it yields the result "12"). Its lying there can be explained more simply, by gravity.
Biological and Electronic Brains
Now that we have an approximate understanding of what it means for a physical device to be a computer, let’s turn to brains. As early as the beginning of the second half of the 20th century, mathematical machines were referred to as “electronic brains.” It was noted that computers solve problems whose nature cannot be predetermined, just like the universal Turing machine. Moreover, a machine’s capabilities stem from a large number of elementary operations — just as the brain’s capabilities arise from the interactions of a large number of relatively simple neurons. At the same time, it was pointed out that machines can use descriptions of other machines, process symbols, or process information.
In short, this argument appeals to several characteristics of brains: flexibility or universality; complexity arising from the interaction of many neurons; and information processing. But even if brains share these characteristics with computers, that is not enough to settle whether the brain is a computer. The argument overlooks whether the brain actually has an internal structure corresponding to a specific model of computation, and whether this structure has been selected, perhaps by natural selection, to perform tasks of information processing.
Why would the brain serve this purpose? Because cognitive processes, unlike metabolic processes, do not simply supply energy to the organism. Predicting future states of the environment is crucial for the survival of any organism that can somehow prepare for those states. And we cannot explain how organisms prepare for future states, especially entirely novel ones, except by referring to information about the environment.
This suggests that the informational functioning of the brain could indeed be a biological adaptation, one that ensures a better fit with the environment and, for example, allows animals to plan their movements better. Roughly speaking, the more demanding the movement planning, the larger the brain. (Compare the brain of an elephant with the information-processing system of a moving plant such as the sundew.)
Simulations and Speculations
Hypotheses about the computational nature of the brain have inspired many speculations that could not be directly confirmed by experimental methods. One could, however, create computer simulations providing data so precise that they could not be obtained from studies of biological brains. While the first computational models of the brain, proposed by the cybernetics pioneers Warren McCulloch and Walter Pitts in 1943, may look like curiosities today, ever more precise models of both individual neurons and entire neural networks have been developed since.
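The 1943 model is indeed strikingly simple: a McCulloch–Pitts unit fires exactly when no inhibitory input is active and the number of active excitatory inputs reaches a threshold. In code (an illustrative sketch, not the original notation):

```python
# A McCulloch-Pitts neuron (1943): a binary threshold unit.

def mp_neuron(excitatory, inhibitory, threshold):
    """Fires (returns 1) iff no inhibitory input is active and the
    number of active excitatory inputs reaches the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# With two inputs, threshold 2 yields logical AND; threshold 1 yields OR.
print(mp_neuron([1, 1], [], threshold=2))  # AND(1, 1) -> 1
print(mp_neuron([1, 0], [], threshold=2))  # AND(1, 0) -> 0
print(mp_neuron([1, 0], [], threshold=1))  # OR(1, 0)  -> 1
```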
On the one hand, we have increasingly precise electrophysiological studies of animal and human brains. Laboratories also grow cultures of nerve cells and study their behavior as neural networks (sometimes even using them directly as computers). Experimental results keep accumulating, and some, like the discoveries of cognitive maps in rat brains that earned John O'Keefe, May-Britt Moser, and Edvard Moser the 2014 Nobel Prize, are particularly spectacular. Cognitive maps are simulated computationally, and the results of these simulations are compared with experimental results in animals. Anatomical research is accelerating as well, with increasingly detailed attempts to trace the entire neural network of various organisms (the so-called connectome, the map of all neuronal connections). We now know the connectome of the roundworm C. elegans, as well as fragments of the connectome of the fruit fly and of the mouse retina and primary visual cortex.
On the other hand, computational models — both of individual neurons and of the whole brain — are becoming more precise. There are very accurate models describing the electrophysiological properties of individual neurons, which are used to study, for example, the origin of brain waves that can be recorded by an electroencephalograph. There are also cognitive simulations that approximately describe the workings of the entire brain. The largest and most successful such simulation is SPAUN — a brain model developed in the laboratory of Canadian neuroscientist and philosopher Chris Eliasmith. SPAUN can perform a variety of cognitive tasks, including solving intelligence tests, recognizing handwriting, and learning by conditioning.
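For a flavor of single-neuron modeling, here is a sketch of one of the simplest electrophysiological models, the leaky integrate-and-fire neuron: the membrane potential integrates the input current, leaks back toward its resting value, and emits a spike whenever it crosses a threshold. (The parameter values below are invented for illustration; models of the Hodgkin–Huxley type are far more detailed.)

```python
# A leaky integrate-and-fire neuron: one of the simplest
# electrophysiological models. Parameter values are illustrative only.

def simulate_lif(current=1.5, dt=0.1, t_max=100.0,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    v, t, spike_times = v_rest, 0.0, []
    while t < t_max:
        # The potential leaks toward rest while integrating the input current.
        v += (dt / tau) * (-(v - v_rest) + current)
        if v >= v_thresh:  # threshold crossed: emit a spike and reset
            spike_times.append(round(t, 1))
            v = v_reset
        t += dt
    return spike_times

print(simulate_lif()[:5])  # times (in ms) of the first few spikes
```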
Thanks to research that includes both computer models and empirical results, the early cybernetic speculations have transformed into systematic research in computational neuroscience. Computationalism remains an incredibly fertile working hypothesis, and it can only be disproven in one way: by showing that the brain does not process information at all, including information about the environment.
But this is a task for daredevils.
Marcin Miłkowski