Sophisticated behavioural functions without consciousness
Consciousness inessentialism proposes that a sophisticated system could in principle retain the same input-output behaviours without instantiating anything like phenomenal consciousness. In other words: we could simulate a behavioural function (some information processing in the brain, e.g. the processing triggered by stepping on a Lego brick, which is ordinarily accompanied by phenomenal properties) without any phenomenal consciousness in the system.
Luke Muehlhauser illustrates this with a simple network of three modules that perform some kind of information processing which is necessary and sufficient for phenomenal consciousness. Module A passes unprocessed information I to module B, which processes it in a way that instantiates phenomenal consciousness. If I understand it correctly, the processing in B has phenomenal properties: there is ‘something it is like’ to process that information. The information is then passed on to module C, which enables verbal report of those phenomenal properties.
Now imagine that A encrypts the information I in such a way that B can still apply the necessary computations before passing the result on to module C (still in encrypted form), where it is decrypted. The same input-output behaviour could then be observed, yet B would never process the information in the way that instantiated phenomenal consciousness before the encryption.
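The encryption variant of the thought experiment can be made concrete with a toy sketch (my own illustration, not from the source): B's "processing" is stood in for by simple doubling, and the "encryption" is multiplication by a secret key, chosen precisely so that B's computation commutes with encryption. Both the computation and the cipher are hypothetical stand-ins.

```python
# Toy A -> B -> C pipeline: same input-output behaviour with and
# without encryption, while B never sees the plain information.

KEY = 7919  # arbitrary secret key (hypothetical)

def module_a_plain(i):
    return i                # A passes information on unprocessed

def module_a_encrypted(i):
    return i * KEY          # A encrypts before passing to B

def module_b(x):
    return 2 * x            # B's computation; works on plain or encrypted input
                            # because doubling commutes with multiplication by KEY

def module_c_plain(x):
    return x                # C reports the result

def module_c_encrypted(x):
    return x // KEY         # C decrypts, then reports the result

def pipeline_plain(i):
    return module_c_plain(module_b(module_a_plain(i)))

def pipeline_encrypted(i):
    return module_c_encrypted(module_b(module_a_encrypted(i)))

# Identical input-output behaviour either way:
assert pipeline_plain(21) == pipeline_encrypted(21) == 42
```

The point of the sketch is only structural: an outside observer examining inputs and outputs cannot tell which pipeline ran, yet in the encrypted variant B never processes the information in its original form.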
Source: Muehlhauser, L. (2017) ‘2017 Report on Consciousness and Moral Patienthood’, Open Philanthropy, 9 June. Available at: https://www.openphilanthropy.org/research/2017-report-on-consciousness-and-moral-patienthood/ (Accessed: 16 February 2023).
Links: Consciousness inessentialism