RELATED TERMS: Large Language Models; Paradox of the Anonymous: When We Wake [Snippets 9];

One way of conceiving the co-dependence or collaboration between human agency and artificial intelligence (AI) agency is to consider it through the lens of the metaphors used to understand the processes of Large Language Models (LLMs). Smith, Greaves and Panch (2023), for example, argue that using a precise and accurate metaphorical language to convey the complex functions and malfunctions of LLMs can lead to a better understanding of these powerful digital technologies. Through a meticulous choice of metaphors, taken, for example, from the language used to describe human neuro-cognitive processes, a better shared understanding of the complex concepts in the field of AI and LLMs may be achieved.
Thus, Smith, Greaves and Panch (2023) argue that employing the term ‘hallucination’ to characterise the inaccurate and non-factual outputs generated by LLMs implies that LLMs are engaged in perceiving, in other words, becoming consciously aware of a sensory input. However, since LLMs do not have sensory experiences, they cannot mistakenly perceive them as real. Therefore, the term ‘hallucination’ misrepresents the kind of process occurring within LLMs that it aims to characterise. The LLM is not ‘seeing’ something that is not there. Rather, the LLM is making things up.
Given this misrepresentation, Smith, Greaves and Panch (2023) reason that a more accurate terminology can be found within the same psychiatric vocabulary from which ‘hallucination’ itself was plucked. They suggest the concept of ‘confabulation’ is preferable to that of ‘hallucination’. Confabulation refers to the generation of incorrect narrative details that are not recognised as incorrect by the person producing them. Confabulations, unlike hallucinations, are not perceived experiences. Rather, they are mistaken reconstructions of information, influenced by existing knowledge, experiences, expectations and context.
In psychiatry, confabulation is often associated with a generalised lack of awareness of one’s personal deficits. This is often seen in right-sided cerebrovascular accidents or traumatic brain injury, as well as in bipolar disorder, schizophrenia and the dementias. The right hemisphere of the brain, Smith, Greaves and Panch (2023) explain, is thought to be responsible for such cognitive functions as processing non-verbal cues, understanding the emotional state of others and appreciating the nuances of music, that is to say, the broader or more ‘holistic’ contexts of a situation (a totality which is never complete or whole), and hence the more macro-dimensions of the ongoing interaction.
When the right hemisphere is damaged, the left hemisphere compensates and predominates. However, it does so more literally and simplistically. This can lead to a focus on narrower contextual understandings of the situation, with an emphasis on detail, a preference for sequencing and ordering and an overly optimistic and often unrealistic assessment of the self. That is, it leads to an emphasis on the narrower contexts of a situation, and hence the more micro-dimensions of the ongoing interaction.
Pursuing the analogy between the left hemisphere’s orientation to the world and LLMs as they currently exist is instructive, Smith, Greaves and Panch (2023) contend. As things stand, LLMs vastly outperform the human brain’s ability to absorb and retain large amounts of information (detail). They can therefore produce outputs on a scale that no individual human could. However, by analogy, LLMs are like the unmitigated, confabulating left hemisphere. As a result, they may confidently produce false information: confabulating by adding erroneous details into the narrative sequence.
Much as suggested by Brian Eno (Shariatmadari, 2025), Smith, Greaves and Panch (2023) propose, based on their analogical reasoning, that responsible use of AI, in the medical field as elsewhere, implies the reintroduction of the contextualising and sense-making functions of the right hemisphere, in the form of direct human oversight. Thus, in the collaboration between humans and AI agents, a synergistic relationship can be witnessed that is reminiscent of that between the brain’s right and left hemispheres. Just as these brain regions complement each other through their relative strengths, humans and AI agents each contribute capabilities that may compensate for the other’s limitations.
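The oversight arrangement described here can be sketched, very loosely, as a human-in-the-loop gate: a generative (‘left-hemisphere’) component produces detailed output, and a human (‘right-hemisphere’) reviewer supplies the contextual judgement that accepts or flags it. The function names, the `Draft` structure and the mock generator/reviewer below are illustrative assumptions for the sketch, not anything proposed in the source.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class Draft:
    """An AI-generated output awaiting human review."""
    text: str
    approved: bool = False
    note: str = ""

def human_in_the_loop(generate: Callable[[str], str],
                      review: Callable[[str], Tuple[bool, str]],
                      prompt: str) -> Draft:
    """'Left-hemisphere' generation gated by 'right-hemisphere' oversight.

    The generator produces detailed output; the human reviewer supplies
    the broader contextual judgement, approving or annotating the draft.
    """
    draft = Draft(text=generate(prompt))
    ok, note = review(draft.text)
    draft.approved, draft.note = ok, note
    return draft

# Illustrative stand-ins for a model and a human reviewer.
mock_generate = lambda p: f"Answer to '{p}' (may contain confabulated detail)"
mock_review = lambda t: ("confabulated" not in t, "flagged for fact-checking")

result = human_in_the_loop(mock_generate, mock_review, "dosage guidance")
print(result.approved)  # prints False: the mock reviewer rejects the draft
```

The point of the sketch is structural: neither component alone yields the final output, which is always the joint product of generation and contextual correction.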
Smith, Greaves and Panch’s (2023) proposal to use the term ‘confabulation’ does not merely correct a misnomer. The implied neuroanatomical analogy also permits new ways of understanding, and hence new paths for developing digital and other interaction technologies. Indeed, it is argued here, its value can be extended to all forms of design.
Smith, Greaves and Panch’s (2023) analogy acknowledges the need to recognise the relative value of the broader processes of situational, reflexive macro-contextualisation (metaphorically, ‘right brain’ processes) and the narrower processes of detailed, sequential micro-contextualisation (metaphorically, ‘left brain’ processes) for understanding the unfolding of interactions in any given situation.
Given this analogy, design interventions, design agency, may be grasped as ‘left-brain’ oriented (the narrower, ‘closed’, detailed, sequential micro-context), the limitations of which, for example a tendency to confabulate, are compensated by the broader contextualising and sense-making functions of the ‘right brain’ human (broader, ‘open’, empathic, reflexive). The collaboration between a designed artefact-agent, of whatever degree of technical or technological complexity and degree of immersiveness (or environmentality), and a human agent with whom it is interacting can be seen to enact collaboratively a synthesising and synthetic (in all senses of this word from fabricated to artificial) socio-technical, socio-cultural and socio-economic system.
At the same time, we have to bear in mind that “our conscious experiences of the world and the self are forms of brain-based prediction … that arise with, through and because of our living bodies” (Seth, 2021). Thus, the compensatory ‘right brain’ (human) element of the complex, interactive, socio-technical, socio-cultural and socio-economic system that we are elaborating can be characterised in the following terms:
“Instead of context merely influencing the contents of perception, the idea here … is that perceptual experience is built from the top down, with the incoming (bottom-up) sensory signals mostly fine-tuning the brain’s ‘best guesses’ of what’s out there. In this view, the brain is continually making predictions about the causes of the sensory information it receives, and it uses that information to update its predictions. In other words, we live in a ‘controlled hallucination’ that remains tied to reality by a dance of prediction and correction, but which is never identical to that reality” (Seth, 2022).
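Seth’s ‘dance of prediction and correction’ resembles, in spirit, a simple predictive update rule in which a top-down guess is repeatedly nudged by bottom-up prediction error. The scalar signal and fixed gain below are illustrative assumptions, a toy sketch rather than any model of perception proposed by Seth.

```python
def predictive_update(prior: float, observations: list,
                      gain: float = 0.3) -> list:
    """Toy sketch of top-down prediction corrected by bottom-up signals.

    The running 'best guess' (prior) is nudged toward each incoming
    sample by a fixed gain: the guess remains tied to the signal by
    error correction, yet is never identical to it.
    """
    guesses = []
    guess = prior
    for obs in observations:
        error = obs - guess            # bottom-up prediction error
        guess = guess + gain * error   # top-down guess, fine-tuned
        guesses.append(guess)
    return guesses

# A constant 'world' signal of 10.0, approached from a prior of 0.0.
trajectory = predictive_update(0.0, [10.0] * 5)
print(trajectory[-1])  # converging toward, but never reaching, 10.0
```

The guess approaches the signal asymptotically, which is the structural point of the metaphor: a ‘controlled hallucination’ tracks reality without ever coinciding with it.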
In short, when examining situated design interactions, we need to consider both the potential confabulations of the ‘left-brain’ artefactual environments in which we live and with which we engage and the potential hallucinations (controlled or otherwise) of the ‘right-brain’ human experience we ourselves have of that ongoing interaction.
References
Seth, A. (2021) Being you: a new science of consciousness. London, UK: Faber.
Seth, A. (2022) The big idea: do we all experience the world in the same way?, Guardian, 3 October. Available at: https://www.theguardian.com/books/2022/oct/03/the-big-idea-do-we-all-experience-the-world-in-the-same-way (Accessed: 11 November 2022).
Shariatmadari, D. (2025) ‘I don’t want to be revered’ [Interview with Brian Eno and Bette Adriaanse]. The Guardian, Books, 11 January, pp. 45-47.
Smith, A. L., Greaves, F. and Panch, T. (2023) Hallucination or confabulation: Neuroanatomy as metaphor in Large Language Models, PLOS Digital Health, 2(11). doi: 10.1371/journal.pdig.0000388.