The Triple Helix @ UChicago

Fall 2017

"The Brain: The Upgrade?" by Clara Sava-Segal


Metaphors describing brain processes in terms of technology, information processing, and computer terminology are common. Read any article in psychology, neuroscience, or even neurobiology, and these metaphors are so ordinary and routine that they do not even stand out. Under this framework, brains must be processors of information that store, represent, transfer, and retrieve [1]. Even here, I claim, we are “processing.” The metaphors rest on the premise that computers are information processors and therefore behave intelligently [1]. This implies – perhaps erroneously – that anything that behaves intelligently, such as a brain, is also an information processor, and therefore that brains and computers are interchangeable. After all, aren’t both “[shuffling] energy around complicated circuits?” [2].

Using such analogies is not new. Metaphors for the brain have been commonplace throughout human history, each reflecting the most progressive and innovative technology of its time. In the 300s BCE, the brain was explained in hydraulic terms: the “humors” – the fluids thought to move throughout the body – were believed to drive physical and mental functioning. In the 19th century, with the advent of telegraphy and Morse code, neurons were imagined to pass signals the way electric wires do [1]. Now, in the age of computers, hardware is analogized to the biological brain while software stands in for mental processes, consciousness, and thought.

This metaphor may be useful for discussing and explaining the brain, but it can also be extremely limiting when framing research questions. The fundamental issue lies in the dichotomy created between software and hardware, i.e., mind-brain dualism. This framing is incorrect – the biological processes of the brain are entirely interrelated with the perceptions, thoughts, and general behaviors they produce [3]. Perhaps the metaphor is most pervasive in our discussion of memory. The common phrase “retrieving something from memory” implies that there is a “memory storage” space we can readily access. However, there are fundamental differences between a memory and a safety deposit box. Memories are behavioral in essence, yet they are distributed across enormous, systemic networks; they have even been labeled with fluorescent proteins and physically traced as they change over time [4]. Human memories are not as simple as a computer’s representation of information. These differences are becoming increasingly difficult to communicate, given how tenacious these metaphors have become in our discussion of neurological processing.

For a computer, the individual units of information are stored as “bits,” which are grouped into chunks called “bytes,” which in turn form the larger patterns we measure in “megabytes” or “gigabytes” [1]. By analogy, the individual letters that make up a word – “c”, “a”, “t” – each correspond to a byte when placed in sequence, and a one-megabyte photo of a cat is a grand pattern of a million bytes [1]. However, a neuron cannot be analogized to a “bit” or a “byte.” There is no single neuron encoding a given concept. Countless neurophysiological studies indicate that the memory of an event or even a concept – depending on its level of complexity – activates physiological circuits across various brain regions. At a lower level, groups of neurons fire together to encode any new stimulus or to recall a previously learned topic. The long-running debate over the “grandmother cell” hypothesis underscores the point that no single neuron encodes “grandmother” as a concept – after all, what would happen if that neuron died? [5]. Even “concept neurons” that respond selectively to a single concept like “Taj Mahal” or “George Harrison” [6] are not the only “bit” encoding that information. The brain’s sheer complexity is incomparable to the simplicity with which computers take in information, process it, and output it. Yet making such comparisons is an easy trap to fall into; notably, within this entire explanation, it has been impossible to avoid computer metaphors.
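The computer side of that analogy really is this simple, and a few lines of Python make it concrete (this is a sketch of how a machine stores “cat,” not a claim about the brain):

```python
# Illustrative only: how a computer stores the word "cat."
word = "cat"
data = word.encode("ascii")               # one byte per letter: b"cat"
bits = [format(b, "08b") for b in data]   # each byte is a pattern of eight bits

print(list(data))   # the three byte values: [99, 97, 116]
print(bits)         # ['01100011', '01100001', '01110100']

# A one-megabyte photo is roughly a million such bytes arranged in a pattern.
```

Every “memory” the machine holds reduces, without loss, to patterns like these – which is precisely the property the essay argues neurons do not share.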

Brains are flexible. To some extent, input can even modify intracortical circuitry: one study showed that a ferret’s brain could be rewired to process visual input in what would otherwise have developed into auditory cortex [7]. Computers, on the other hand, do not change their function or their wiring based on what input they receive. Mammalian infants are born with the tools needed to absorb sensory information and interact effectively, and these brain mechanisms allow us to be social learners. With socialization, infants come to prefer faces that resemble those they have seen previously [8] and languages they have been exposed to. Infants – at the ultimate information-processing stage of their development – thus combine multisensory and environmental stimuli into large, complicated internal mappings, complete with preferences. While we use the “World Wide Web” as a metaphor for similarly expansive mappings of information, the levels of preference and social interaction in infants are simply not comparable with computers – at least for now.

While artificial intelligence algorithms are continuously improving – computers can now beat humans at chess – computers remain behind on a social level. Unlike computers, humans not only process their environments but do so at a personal level, especially for events with emotional valence. No two people hold the same memory of an event, and memories are modified as they are stored. We do not have direct mental representations the way a computer does; our representations are incomplete. When we recall a memory, we do not bring up the same consistent conceptual mapping each time; if anything, it decays.

For the metaphor to hold directly, computerized mechanisms would need to align with biological realities [3]. It is worth noting, however, that more and more groups building artificial intelligence algorithms are looking to the brain to see how biological mechanisms take in and process information. As computers become increasingly brain-like, the metaphors will become progressively more valuable. Yet the variation that exists in both human “software” and its corresponding “hardware” will continue to be difficult to replicate.


[1] Epstein, Robert. “Your Brain Does Not Process Information and It Is Not a Computer.” Aeon, 8 Nov. 2017.

[2] “Tests Suggest the Methods of Neuroscience Are Left Wanting.” The Economist, The Economist Newspaper, 21 Jan. 2017.

[3] Vlasits, Anna. “Tech Metaphors Are Holding Back Brain Research.” Wired, Condé Nast, 21 June 2017.

[4] Reijmers, L. G., et al. “Localization of a Stable Neural Correlate of Associative Memory.” Science, vol. 317, no. 5842, 2007, pp. 1230–1233., doi:10.1126/science.1143839.

[5] Gross, Charles. “Genealogy of the Grandmother Cell.” History of Neuroscience, 2002.

[6] Rey, H. G., et al. “Single Cell Recordings in the Human Medial Temporal Lobe.” Journal of Anatomy, 2014.

[7] Pallas, Sarah L., et al. “Visual Projections Induced into the Auditory Pathway of Ferrets. I. Novel Inputs to Primary Auditory Cortex (AI) from the LP/Pulvinar Complex and the Topography of the MGN-AI Projection.” The Journal of Comparative Neurology, vol. 298, no. 1, Jan. 1990, pp. 50–68., doi:10.1002/cne.902980105.

[8] Bar-Haim, Y., et al. “Nature and Nurture in Own-Race Face Processing.” Psychological Science, vol. 17, no. 2, Jan. 2006, pp. 159–163., doi:10.1111/j.1467-9280.2006.01679.x.
