10/17/2024 | Press release | Distributed by Public on 10/17/2024 03:04
Over the past decade, the capabilities of artificial agents have increased rapidly. Today, they can solve complex problems, learn languages, and are capable of self-improvement. In other words, they exhibit broadly intelligent behaviour. However, whether intelligent artificial agents are in principle capable of developing consciousness remains highly controversial. Whichever way the question is answered, the consequences for morality and human self-understanding would be far-reaching.
Inextricably linked to the possibility of artificial consciousness is the question of how consciousness arises in the human brain. Researchers from the Computational Neuroscience Group at the Department of Physiology of the University of Bern and the University of Amsterdam have now proposed new answers to both issues. Their work has been published in "AI and Ethics".
As a starting point, the researchers propose so-called functional correlates of consciousness. Dr. Federico Benitez, postdoctoral researcher at the Department of Physiology and first author of the study, explains: "We aim to trace consciousness back not to specific neural structures, but to more abstract computational functions of the brain, which we call functional correlates." Artificial agents performing all functions that generate consciousness in the brain should therefore themselves experience conscious states. Opponents of artificial consciousness, however, are critical of this. They argue that current AI systems are too different in structure from the human brain to represent functional correlates.
The researchers respond to this objection with a thought experiment. They imagine a neuromorphic chip being implanted into the brain of an infant suffering from a degenerative brain disease, replacing the damaged brain regions. 'Neuromorphic' means that, unlike previous hardware, the chip has an architecture similar to that of neuronal circuits in the brain and is able to continuously adapt its structure. "Such a chip would be able to assume all functions of the replaced area and develop alongside the infant," explains Prof. Dr. Walter Senn, head of the Computational Neuroscience Group. By gathering and combining data from chips in different regions, the brain's functions could be accurately reproduced, including the functional correlates of consciousness. The researchers call the resulting hypothetical artificial agents 'evolving neuromorphic twins', or 'enTwins' for short.
To identify the functional correlates of consciousness, the researchers propose the novel 'Conductor Model of Consciousness' (CMoC). The model postulates a higher-level entity of the brain, the 'conductor', which controls the flow of signals. Specifically, the model proposes a so-called conductor network, which regulates the interaction of three other functional networks: an encoder network, which interprets sensory information from the outside world; a generative network, which produces fictional sensory impressions (e.g. when dreaming); and a decider network, which determines for each sensory signal whether it originates from the outside or was produced by the brain itself.
"The conductor improves the capacities of the generative and encoder networks and trains the decider network to improve its judgements," explains Senn. "This enables the brain to efficiently distinguish between its internal world and the external world." In current neuroscientific research, this ability is attributed a key role in the development of consciousness. Finally, to assess whether the enTwins from the thought experiment are conscious, the researchers propose the following test: according to them, an enTwin is conscious if its actions are indistinguishable from those of a human and if it also has a neuromorphic architecture that implements the Conductor Model of Consciousness.
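The interplay described above, in which a generative network produces internal signals, a decider learns to tell them apart from external input, and a conductor regulates the training of both, can be illustrated with a toy program. This is a minimal sketch under strong simplifying assumptions: the class names, the representation of "sensory signals" as short lists of numbers, and the threshold-based decider rule are all illustrative inventions, not the authors' actual model.

```python
import random

random.seed(0)  # make the toy run reproducible

class Encoder:
    """Interprets 'sensory' input from the outside world (here: rescales it)."""
    def encode(self, signal):
        return [x / 10.0 for x in signal]

class Generator:
    """Produces fictional sensory impressions (e.g. dream-like samples)."""
    def __init__(self):
        self.noise_level = 1.0  # high noise makes early samples easy to spot
    def sample(self, length=4):
        return [random.gauss(0.0, self.noise_level) for _ in range(length)]

class Decider:
    """Judges whether a signal came from outside or was generated internally."""
    def __init__(self):
        self.threshold = 0.0
    def is_external(self, signal):
        # In this toy, external signals have a positive mean; the conductor
        # adjusts the threshold so the decider can separate the two cases.
        return sum(signal) / len(signal) > self.threshold

class Conductor:
    """Higher-level entity that regulates the three other networks:
    it trains the decider on labelled examples and nudges the generator
    toward more realistic internal samples."""
    def __init__(self, encoder, generator, decider):
        self.encoder, self.generator, self.decider = encoder, generator, decider
    def training_step(self, external_raw):
        real = self.encoder.encode(external_raw)
        fake = self.generator.sample(len(real))
        # Train the decider: move its threshold toward the midpoint
        # between the external and internally generated signal means.
        real_mean = sum(real) / len(real)
        fake_mean = sum(fake) / len(fake)
        midpoint = (real_mean + fake_mean) / 2
        self.decider.threshold += 0.1 * (midpoint - self.decider.threshold)
        # Improve the generator: make internal samples less noisy over time.
        self.generator.noise_level *= 0.95
```

After a number of training steps, the decider classifies encoded external input as external and most generated samples as internal, a crude analogue of the brain efficiently distinguishing its internal from the external world. Note that the real model concerns biological network function; this toy only mirrors the wiring diagram of the four components.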
The possibility of enTwins raises ethical questions. "We want to avoid a situation where competition may arise between the legal rights of humans and those of artificial agents," says Benitez. That is why the researchers propose an agreement: Artificial agents could be designed in such a way that they would possess consciousness but would not suffer from the negative emotional components of pain. In return, they would have to accept that human beings are given legal precedence. "In this way, we would be able to protect less privileged human groups and could at the same time prevent a global increase in suffering caused by the creation of conscious artificial agents," concludes Benitez.
The researchers' work provides new ideas for the cutting-edge fields of cognitive and computational neuroscience. It was partly carried out in the context of the 'Human Brain Project', a European research project aiming to create digital models of the human brain, involving the University of Bern and over 100 other institutions. "Research on consciousness is generally treated as somewhat 'unscientific' because it is difficult to measure consciousness," says Senn. "By introducing functional correlates and the Conductor Model of Consciousness, we hope to move the debate in a more concrete direction."
Publication details: Federico Benitez, Cyriel Pennartz & Walter Senn (2024). The conductor model of consciousness, our neuromorphic twins, and the human-AI deal. AI and Ethics. DOI: https://doi.org/10.1007/s43681-024-00580-w
Computational Neuroscience at the Department of Physiology
At present, the Department of Physiology houses three research groups which investigate the functional aspects of the brain from a computational, theoretical and neuromorphic point of view: Computational Neuroscience (Prof. Dr. Walter Senn), Theoretical Neuroscience (Prof. Dr. Jean-Pascal Pfister) and a combination of theory, modelling and applications, particularly in neuromorphic systems (Dr. Mihai Petrovici). These research groups build a bridge between experimental neuroscience (mouse experiments, Prof. Dr. Thomas Nevian and Prof. Dr. Stéphane Ciocchi) and research into cognition and artificial intelligence. More information: https://physiologie.unibe.ch/gruppen.aspx