The Offshoring of Thought and Memory

In The Secret Miracle by Borges, a man condemned to death is granted an unusual reprieve by God. As he stands before the firing squad, the soldiers, his body, the world stop. Only his thoughts continue. In the year of mental time given, he labours on a play. At the point of completion, he is killed.

Perhaps more striking than the notion of time standing still — certainly more original — is the idea of a lengthy work of literature composed internally, hermetically. Ink does not touch paper, lips cannot test the words.

The hero of Borges’ story defies a march of progress that has seen mental processes externalized and expanded: from the blooming, buzzing confusion of a pre-language state, through oral memory traditions, to writing, printing, photography, video, computation and artificial intelligence.

The practices of reasoning and remembering have shifted from within the shores of our physiological boundaries into the physical realm of air vibrations of spoken words, of indentations and chemical traces of writing, of voltages within silicon wells. Let us trace some of these migrations.

Language is an externalization of thought. Whether or not one agrees with Wittgenstein that a private language is conceptually incoherent, as a matter of empirical fact, languages are shared. The emergence of language allowed the conceptual burden of ideas to be spread over many shoulders. 

This step was profound in enabling the development of thought. Indeed some, including Wittgenstein in his early career, have argued that thought and language are inseparable. I do not subscribe to this view. In moments of grogginess, but also of extreme clarity, I have the impression of thinking extra-linguistically. Nonetheless, language is — for me at least — the principal medium of thinking.

But rather as the invention of tools stills the dexterity of our hands, something is lost with this innovation. As Krishnamurti expressed it:

The day you teach the child the name of the bird, the child will never see that bird again.

To recover the wide eyes of childhood we need to learn to unsee with words. In Impro, Keith Johnstone describes a trick of mis-naming things to break the referential chains. In a similar spirit, the Russian formalists advocated ostranenie — ways of presenting things that strip them of familiarity. 

This trade-off is one we will encounter again: a new technology allows us to better manipulate notions, but we pay less attention to the things in themselves.

[Image: concepts emerging, from Lint by Chris Ware]

Most languages offer a limited set of symbols that can combine in limitless ways. Yet within a given mind, or — more precisely — within a human window of consciousness, the possibilities are constrained. 

The next innovation we consider pushed back these limits. The creation of an art of memory facilitated the composition of longer sequences of words. Of the various mnemonic techniques, the method of loci merits particular attention. Also known as the memory palace, it involves visualizing a familiar space or route, imagining objects placed throughout. It enables recall of long sequences; for example, the objects might represent events within an epic poem.
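
The structure the method exploits can be crudely sketched in code: an ordered route through familiar loci, each holding a vivid image that stands for an item to be recalled, with recall amounting to walking the route again. The loci and items below are invented purely for illustration.

```python
# A toy sketch of the method of loci: an ordered route through a familiar
# place, with a vivid image "placed" at each location. The loci and items
# are invented for illustration.
memory_palace = [
    ("front door", "a trumpet wedged in the letterbox"),   # -> trumpet
    ("hallway",    "a river of milk flooding the floor"),  # -> milk
    ("staircase",  "an owl perched on every banister"),    # -> owl
    ("bedroom",    "a ship's anchor on the pillow"),       # -> anchor
]

def recall():
    """Walk the route in order, reading off the image at each locus."""
    for locus, image in memory_palace:
        print(f"At the {locus}: {image}")

recall()
```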

Rather than an externalization of memory, this might be seen as bringing the outside within. In another sense, however, it turns away from the specious present. It mechanises memory by structuring it within Cartesian space.

Memory techniques can have other distorting effects. It is common to make the images within memory palaces as arresting as possible. In Moonwalking with Einstein, Ed Cooke advises Joshua Foer to remember to buy cottage cheese by exhorting him to visualise Claudia Schiffer naked and dripping with it. An image that has forever changed my relationship with the dairy section in grocery stores.

This is not to disparage the classical art of memory: as with the emergence of language, it allows us to rest our thoughts elsewhere rather than juggling them in the small room of our present attention. In so doing, orators and bards could construct great structures of reason and imagination.

The technologies of writing and drawing moved the boundaries further. With these, memories could be more clearly outsourced: thoughts were placed not mentally — as with the method of loci — but physically, on tablet, hide, paper and (once again) tablet. In two ways the shift was quantitative: larger amounts of text could be stored, and recalled with greater fidelity. In another way it offered a qualitative transformation: agents no longer needed to be collocated to exchange ideas.

I recall the philosopher Julius Tomin remarking “this is how I can converse with Plato every night” and gesturing to a stack of Greek books in front of him. 

Reading and writing alter the texture of communication. It is no longer married to prosody or the rhythm of speech. We are able to engage with another person’s thoughts on our own ground and terms: perhaps rapaciously reading for the plot rather than for the ideas the author hopes we uncover, or skimming texts to find points to attack. Once again, we gain and lose by loosening the ties that moor reason and memory to human minds.

With word processing, our tools further permit us to be lazy in our treatment of the thoughts of others: instead of writing out a quote symbol by symbol, we can copy-paste. Digital search eases the retrieval of ideas: we are no longer required to grope for them in the mind’s recesses nor seek through the virtual and physical spaces of memory palaces and libraries. Instead, we jump to find the source of half-remembered quotes without wading through a quagmire of context. 

Indeed, the need to search is itself being eroded. Much as I may (or may not) spontaneously remember to buy cottage cheese in the supermarket, an item recommendation can reliably serve the same purpose while completing an online shop. Adverts, notifications, photo libraries reminding me of “On this day five years ago” — even the apparently effortless natural surfacing of thoughts and memories is delegated.

Our cognitive universe has expanded: it extends beyond our own limits to books and servers.

This somewhat parallels the way in which notions of personal identity no longer attach so neatly to flesh positioned in space and time. As the philosopher of language Strawson has pointed out, the relations a person has with others, their duties, rights and obligations, may all be considered to contribute to their identity. On that view, as society has become more complex, the importance of the flesh in space has not so much diminished as significance has been extended to other things. This differs from our cognitive expansion, which has led to a rarefaction of our internal mental life, or at least a partial migration of its functions.

An illustrative case is arithmetic. As children we struggle with the simplest of problems, having to reason through 3+3; later such operations become ingrained, automatic. We would still need to reason out more complicated expressions, although most of us now reach for calculators. In The Feeling of Power (1958), Asimov imagined a future in which even counting has been forgotten; those who rediscover it are granted the titular sense of their innate potential.

Much more complex reasoning tasks have begun to be offshored in the last decade: medical diagnosis, translation, criminal sentencing, verifying insurance claims, grading exam papers. In some cases this is not so much a transposition of capabilities from human to machine; rather, changes of scale (with protein folding) or sensitivity (with medical imaging) permit us to confront tasks that were previously hopeless. We have automated both tedious tasks — for example the filtering of datasets — and those we relish, such as the playing of games and the creation of images and music.

The very demonstration of non-human superiority can undermine the value we find in activities. As Bernard Suits argues, games derive some of their meaning from being challenging. They may no longer be fun if winning is impossible. Perhaps this explains why Lee Sedol chose to retire from Go.

The foregoing shifts have produced innovations, yet their production has been strongly mediated by humans. It has become easier to retrieve ideas, and we have automated processes that, to some degree, we were already capable of. But the foundries for the welding of ideas have remained largely in human minds, or within the reach of paper or keyboard. With foundational large language models (LLMs) it is possible to retrieve and apply information in one fell swoop. I can ask Bard not only for the functions I might need to write a particular module, but to write the module for me. Or, instead of thinking for myself about how Merleau-Ponty would have responded to TikTok, I can ask ChatGPT to collide those notions, with underwhelming results.
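
As a minimal sketch of that one fell swoop, the request below assumes the OpenAI Python client with an API key in the environment; the essay mentions Bard and ChatGPT, but any chat-completion interface would do, and the prompt and module are invented for illustration.

```python
# A rough sketch of asking an LLM to write a module outright, rather than
# asking only which functions one might need. Assumes the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment;
# the prompt and module description are invented for illustration.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write a small Python module that parses an RSS feed URL and returns "
    "a list of (title, link) tuples. Include docstrings."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# Retrieval and application arrive together: the answer is ready-made code,
# not a list of functions to look up and assemble oneself.
print(response.choices[0].message.content)
```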

It is conceivable that neither books nor Wikipedia articles but tailored reconstructions by LLMs will become the principal means by which we access one another’s ideas. Every blog will have a button to rephrase posts in a style that most suits the reader. And every blogger will have the option of allowing an LLM to extrapolate their ideas more coherently.

Where in the past humans have remained at the nexus of creativity — albeit often cobbling materials from other sources — in the future humans may become the clients of machine reasoning.

There is enormous promise here: it is as if every human has been granted a collaborator — an unreliable but faithful savant. The risks are equally great.

Much focus has been given to potential existential threats should AI develop in unforeseen, unstoppable, ways that fail to align with human wellbeing. This is the problem of control: what guardrails do we put around AI?

There is a parallel problem of self-control: how to constrain our own behaviour in using AI? What may be at stake is not only The Feeling of Power and the meaning of games but conscious reflection itself. For consciousness is not an irrevocable gift, but akin to the exertion of a certain kind of thinking.

In What is Life? (1944), Erwin Schrödinger turned to the problem of consciousness. Instead of calling on his expertise in quantum mechanics, he examined the prosaic:

To my mind the key is to be found in the following well-known facts. Any succession of events in which we take part with sensations, perceptions and possibly with actions gradually drops out of the domain of consciousness when the same string of events repeats itself in the same way very often. 

Being conscious is not synonymous with having experiences. We experience dreams while we are unconscious. Being conscious involves an effortful reflection. Becoming routine is one way that experiences slip out of consciousness. What if AI removes the need for us to reflect at all? 

Consider the analogous case of the automation of labour: it did not free us from drudgery. As David Graeber argued, we have instead created a plethora of bullshit jobs — flunkies, goons, box-tickers — to replace those that were lost. Will we follow a parallel path with AI — freeing ourselves from effortful reflection so we can think bullshit thoughts? If our thoughts become shallow, do we become less conscious?

I do not believe consciousness is binary. The minds of humans were not illuminated in a single moment of our evolutionary journey. More likely the effect was like the turning of a dimmer switch: as our brains and bodies developed, the light burned brighter.

With language, we began to share the burden of our notions with other humans. With text, we spread it to the physical world. The effort of thought is now being passed to machines. In the coming decade, we may see glimmers of new forms of consciousness. This is not a reason to let the old light burn low. 
