There are several arguments about consciousness that have the following shape: combine a bunch of everyday things and then interpret this aggregate as a form of computation. Then, with that thought in mind, one is asked to explain why, or why not, such a thing could ever be conscious.
Another concept is the Boltzmann brain. Briefly, this is a physical prediction that a fully formed brain could spontaneously arise from random fluctuations in the universe, before quickly decaying or vanishing entirely.
Strangely, by some accounts the probability of Boltzmann brain events appears to dominate that of normal brains, with various theories predicting an infinite number of Boltzmann brains filling a universe undergoing infinite expansion.
I am skeptical of that outcome, but there is a relationship between the Boltzmann brain concept and the idea that a collection of things could be interpreted as being conscious. First, let's talk about their seeming differences.
Unlike a bunch of everyday objects that have been jammed together in some thought experiment, a Boltzmann brain would be physically identical to a normal brain in all the ways that matter; it would have all the right stuff to give rise to a fully conscious, self-aware observer. That is an implicit part of its definition, which allows it to avoid many ontological problems. However ridiculous it may seem, this is also a physical prediction about the universe, and not just a question of interpretation.
One has to stop and ask if the difference between the Boltzmann brain and an aggregate of everyday objects is meaningful. After all, we are nothing more than a collection of smaller things ourselves. Assuming compositionality, what is necessary and sufficient to form such a brain as ours, the nature of its parts or their particular arrangement?
To help analyze this question, I would like to introduce something I call the Composition Hypothesis, which is defined as follows:
Things may have properties that allow them to be composed, and those properties may or may not be relevant to consciousness, but in the case where the arrangement of things makes consciousness possible, the properties of those things are only relevant insofar as they enable compositionality.
Some physical properties will always be relevant to consciousness, even if their only purpose is to enable the composition of simpler parts into more complex systems. The claim made by the Composition Hypothesis is much stronger than that. It asks us to accept that the properties of the parts are relevant to consciousness only because they enable composition into complex behavior.
The true purpose behind the Composition Hypothesis is to act as a catalyst to introduce information theory into our assumptions about consciousness, but this requires a leap from the abstract composition of parts to a concrete theory of information.
By Nyquist and Shannon, we know that a band-limited analog signal can be perfectly represented by discrete samples when certain conditions are met. That digital representation can then be used to reconstruct the analog signal with no loss of information.
To relate this to the Composition Hypothesis let us first consider the medium that is used to carry an analog signal. Are its physical properties in any way relevant to that analog signal when it is being digitally sampled? Yes, but not in the way one might think.
The properties of one medium versus another do not meaningfully contribute to the information content of the signals they carry, though it may impact whether or not a communications channel can support quantum or classical information.
Composition, in the context of information theory, would relate to the properties that allow a physical medium to propagate information. Those properties manifest most prominently as bandwidth, latency, and attenuation.
The key point is that there is an equivalence between analog and digital signals, so long as certain conditions are met. This means that it is possible to perfectly represent an analog signal through digital sampling, and then transmit that signal, as many times as needed, over long distances, and with absolutely no loss of information.
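As a concrete sketch, the reconstruction half of this equivalence fits in a few lines of Python; the sampling rate, test signal, and tolerance below are illustrative assumptions, and because the sample window is finite the reconstruction is only near-perfect away from the window's edges:

```python
import numpy as np

def reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: rebuild a band-limited signal
    at arbitrary times t from uniform samples taken at rate fs."""
    n = np.arange(len(samples))
    # Each sample contributes a shifted sinc; their sum recovers the signal.
    return np.sum(samples[None, :] * np.sinc(fs * t[:, None] - n[None, :]), axis=1)

fs = 100.0                                # sampling rate in Hz; Nyquist limit is 50 Hz
signal = lambda t: np.sin(2 * np.pi * 13 * t) + 0.5 * np.cos(2 * np.pi * 31 * t)
samples = signal(np.arange(1024) / fs)    # uniform samples of a band-limited signal

# Evaluate at times that fall between the sample instants, away from the edges.
t_test = np.linspace(2.0, 8.0, 500)
error = np.max(np.abs(reconstruct(samples, fs, t_test) - signal(t_test)))
```

In the idealized, infinite-length case the error is exactly zero; here it is merely tiny because the sinc series has been truncated at the window boundary.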
Another way of looking at this equivalence is that it shows us that the information carrying capacity of analog signals is constrained by measurement. That means that even if a particular medium has continuous physical qualities, the measurement process effectively induces a discretization through quantization, which is a prerequisite for extracting results or performing work such as computation.
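A uniform quantizer is a minimal sketch of this induced discretization; assuming a fixed range and bit depth (the values below are illustrative), the reconstruction error of any sample is bounded by half a quantization step:

```python
import numpy as np

def quantize(x, bits, lo=-1.0, hi=1.0):
    """Uniform quantizer: snap each sample to the center of one of 2**bits bins."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = np.clip(np.floor((x - lo) / step), 0, levels - 1)
    return lo + (idx + 0.5) * step

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 10_000)         # stand-in for measured analog values
err = np.max(np.abs(quantize(x, 12) - x))  # never exceeds half a step
```

Each additional bit halves the step size, so the discretization can be made as fine as the measuring apparatus allows, but it is never absent.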
Objections to the Composition Hypothesis within this framework could come in at least two forms. The first would be a claim that we can not digitize sentience because there is some additional information, or property of the medium, that we can not measure. The second claim would most simply be described as being analogous to the difference between recording and playback.
Regardless of the underlying medium, we know from information theory, both in principle, and in practice, that a perfect representation of an analog signal is possible. There are caveats to that statement in quantum information theory, but it broadly holds under certain qualifications.
If we accept the view that the physical world is analog, and that physical information about systems can be interpreted as an analog information source, then the first objection would need to explain what it is about consciousness that we can not sample, observe, or measure.
Another caveat here is that a classical communications channel can not represent quantum information, while a quantum channel can represent both classical and quantum information. This is a physical consequence of theorems like no-teleportation and no-cloning. There are methods, however, such as quantum teleportation, that can be used to communicate unknown qubits between two parties through a classical information channel, though this requires pre-shared entanglement or an additional channel that supports quantum information. The protocol uses entanglement as a resource, but never violates the speed of light or the rate of the classical channel, due to the no-communication theorem.
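To make the teleportation protocol concrete, here is a toy state-vector simulation in Python with numpy; the amplitudes, qubit ordering (qubit 0 leftmost), and random seed are illustrative assumptions, not a model of any physical device:

```python
import numpy as np

# Single-qubit gates for a toy 3-qubit state-vector simulator.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def op(gate, qubit):
    """Lift a single-qubit gate onto the 3-qubit space (qubit 0 is leftmost)."""
    out = np.array([[1.0]])
    for q in range(3):
        out = np.kron(out, gate if q == qubit else I)
    return out

def cnot(control, target):
    """CNOT on 3 qubits, built from projectors onto the control's basis states."""
    P0, P1 = np.diag([1, 0]), np.diag([0, 1])
    A = B = np.array([[1.0]])
    for q in range(3):
        A = np.kron(A, P0 if q == control else I)
        B = np.kron(B, P1 if q == control else (X if q == target else I))
    return A + B

# Qubit 0 holds the unknown state; qubits 1 and 2 share a Bell pair.
alpha, beta = 0.6, 0.8j
psi = np.kron(np.array([alpha, beta]), np.array([1, 0, 0, 1]) / np.sqrt(2))

# Alice's Bell-basis measurement: CNOT(0,1), H on 0, then measure qubits 0 and 1.
psi = op(H, 0) @ cnot(0, 1) @ psi
outcome = np.random.default_rng(0).choice(8, p=np.abs(psi) ** 2)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Collapse onto the observed values of qubits 0 and 1, then renormalize.
keep = [i for i in range(8) if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1]
collapsed = np.zeros(8, dtype=complex)
collapsed[keep] = psi[keep]
collapsed /= np.linalg.norm(collapsed)

# Bob applies corrections conditioned on the two classical bits he received.
if m1:
    collapsed = op(X, 2) @ collapsed
if m0:
    collapsed = op(Z, 2) @ collapsed

bob = collapsed[keep]  # amplitudes of Bob's qubit: now exactly (alpha, beta)
```

Only two classical bits cross the channel; the unknown amplitudes never do, and the original copy on qubit 0 is destroyed by the measurement, consistent with no-cloning.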
I will address the divide between quantum and classical information in terms of representation later. As to the question of what it could be that we might not be able to sample about consciousness, I can think of at least two things: subjectivity and qualia.
Subjectivity is the concept that we each appear to inhabit a private world. Within, everything is expressed through the phenomenology of sentience. Every thought, feeling, or sensation appears to be mediated by, and inseparable from, the experiential. This is the only reality that we experience directly. Everything else is reflected into this private world, and we must draw inferences to make sense of it, including an account of our own experiential processing, as our perceptions can be misleading.
A world simulation must not be confused with the subjective world that it takes place in, which is as real as the objective reality that the world simulation projects.
I view qualia as quantizations of sentience. To be clear, I do believe we can measure the extrinsic properties of qualia. We will definitely be able to sample the physical representations of mental processing, and we probably already have the means to partially do that, but we currently lack the ability to contextualize that information in the specific basis for each individual subject of experience.
We do not have a theory of the semantics that would map physical degrees of freedom to the particular qualitative experiences that characterize the intrinsic properties of consciousness, and even if we had that, it would need to be generalized to support unique instantiations of those mappings for each subject.
In my opinion, some authors do not separate subjectivity from sentience forcefully enough. I believe the greatest mystery about consciousness is how it is possible that an entire world can arise that is so overwhelmingly private. And I believe this issue should be considered independently from the phenomenological character of our experiences or the sophistication of the subjects within.
I would also challenge one to consider a varying number of subjects of experience within a single subjective world, which is easier to understand if we take the realness of subjectivity as ontologically prior to phenomenological binding, if such a thing makes sense at all.
If we can accept such a priority for subjectivity, then the most interesting case to me is the one where we imagine a subjective world being created with no conscious observer at all. Such empty worlds might be the physical rest state of living minds in dreamless sleep, but that depends on how one interprets the realness of a subjective world when a unitary subject ceases to be whole.
I am likely biased from my computational background, in which I imagine building the virtual world first and then situating an entity within, but I share this potential misstep to elucidate a greater understanding; it could very well be the case that phenomenological binding is precisely what gives rise to subjective worlds, and that a dreamless sleep, or deeply unconscious mental state, results in a total collapse of subjectivity. Such a view identifies subjects of experience with subjective worlds, making it inaccurate to say that a subject of experience is "contained" by that world.
On the other hand, it could be that subjectivity is indeed created by the architecture that enables consciousness to arise in the first place. And that this is not only a useful idea, but a potentially relevant part of any implementation or physical configuration of a conscious observer.
Returning to the Composition Hypothesis: there was a second objection, highlighted by the difference between recording and playback. To address it, I will introduce something I call the Representation Problem, which is how we will explore the analogous concepts of recording and playback in a hypothetical digitization of sentience.
Suppose the human brain is an analog information processing system; a computer, if you will. Not like the computing devices we use every day, but a very specialized analog information processing system, with an architecture that has evolved through natural selection as a result of its impact on genetic fitness.
Now suppose we have a means of sampling all of the information relevant to consciousness in the human body for some length of time.
I include the entire body because it could be the case that embodiment has important consequences for the proper functioning of higher cognition. The body may not merely be a mobile shell for the nervous system, but an extension that includes distributed and auxiliary information processing of all kinds.
That distributed information could then be integrated back into the totality of our experience in subtle ways that we do not yet fully understand, but which impact our subjective experience and cognition.
Assuming perfect representation, if we could record the information processing of an entire human body, and then digitize those signals, an important question arises:
Would we have lost any information relevant to consciousness?
If one answers that we do not lose any information relating to consciousness, then one must also admit that we can at least store a recording of sentience digitally. From that it follows that this information could be reconstructed again, arbitrarily many times, and with no loss of information, so long as it was reconstructed in a system capable of faithfully reproducing the character of the original subjective experience.
Another approach to the Representation Problem would be to state that what we experience is always analog before reaching our senses; we never experience from digital signals, only their analog reconstructions.
It's all analog, all the way down. The data in a digital photo must be converted back into light. An audio file must be converted into pressure waves by drivers so that we can perceive sound. And the synchronized combination of these mimics the unity of our subjective experience in film and video games.
This concept captures the subtle notion that, while we may not lose any information in the digitization of sentience, it may be insufficient to reproduce sentience if the recreation does not take place under a very specific analog configuration.
This clarification narrows the argument, but does not weaken it. In fact, it strengthens the case for digital sentience by providing a compatible stance for those who strongly object to the whole concept.
So, this now introduces a dichotomy between the faithful digital representation of sentience and its reconstruction or playback. A conflict arises, however, because we have to ask how it is possible that we can represent sentience, but not recreate it just anywhere.
In other words, where in physics do we account for subjectivity? What degrees of freedom and/or phenomena account for the unquestionable realness of the private worlds that we inhabit?
We would not be able to sample sentience if we were unable to observe the totality of its properties; otherwise we would only be recording an echo of the underlying information processing that entails it.
Could it be that classical information is insufficient? Perhaps the answer is that we have to move up to quantum information theory.
Invoking quantum information processing or quantum computing seems to defy reason if we account for our biology. We appear to be overwhelmingly classical, especially when considering the inhospitable environment of our anatomy and the staggering timescale differentials that would be involved.
Even if our biology exploits quantum information processing, it might only be for the purpose of optimization, which would still uphold the Composition Hypothesis.
There is, in fact, research published in the journal Nature suggesting that photosynthetic light-harvesting complexes may exploit quantum coherence to optimize energy transfer. More research is clearly indicated, though it is important to remember that optimization alone can not explain subjectivity or solve the Representation Problem.
If we accept that we can, in principle, measure every bit or qubit of information in a human body, then we should already know all of the properties we are capturing in the act of the recording process. The information content is closed and self-contained; we have all the pieces to the puzzle in this hypothetical trace of our subjective experience. There can be no further information that would be objectively accessible to us, and it must, by extension, include all of the physical information about the subject of experience for the duration.
Supposing quantum information is all there is, and that we can exhaustively entail the information of some bounded local volume of the universe through a perfect representation, then it stands to reason that there is nowhere for consciousness to hide.
This is the crux of the Representation Problem. Subjectivity must somehow be completely entailed by information, otherwise a physical account of consciousness would have to be abandoned, and that is not something I am prepared to do.
There are only a few outcomes that could resolve the conflict brought about by the Representation Problem:
The first outcome would be that an objectively unobservable character exists to consciousness which can only be observed directly by a subject of experience, and that the mechanism by which this occurs can only be disclosed through descriptions that are external to the subjective worlds that arise through such configurations.
The information in a recording of consciousness in such an outcome would be mediated by a particular analog configuration that, when reconstructed, gives rise to the sentience within a private world that is only accessible indirectly from the outside. And all manifestations of external communications to a subject of experience would have to be mediated by the phenomenology of experience inside that private world.
This means that a complete trace of sentience would be insufficient, because our phenomenology must be realized through active processing which collapses to mere descriptions when considered in isolation.
One could imagine having a mapping, and deriving a mirror model of what it would be like for a subject like the one that was recorded to have certain experiences, but it would quickly diverge from the original subject, even if the underlying system was a perfect copy. And it may also be the case that one or more quantum information theorems provide a full stop to any complete duplication of an individual subject of experience; quantum information must be conserved.
The other outcome that responds to the Representation Problem could be that there is a theory of information that we have yet to formulate which can account for both the intrinsic and extrinsic qualities of consciousness, including the nature of subjective worlds that are possibly independent from the subjects of experience that typify them.
Such a theory would have to explain how it reduces, relates, or generalizes existing theories of information, and how its measure of subjective information manifests without violating any conservation laws in physics. It would be permissible for this theory to make physical claims about subjective worlds that demand we take them as postulates, as such an exception seems almost unavoidable at this point.
Such an exemption is necessary to even begin a theory because I can not imagine anyone truly rejecting the realness of their own conscious experience. So, if one is committed to eventually having a science of subjectivity, then the only resolution that makes sense to me is to assume that subjective worlds are physically real. Once that first step is taken, it may be possible to build a theory on top of quantum and/or classical information theory that bridges the gap between the subjective character of our experiences and our various methods of objective observation.
However, even with such accounts, the explanation for why subjectivity exists at all may still remain a philosophical question, and not a physical one, but that would at least be a step in the right direction.
If the Composition Hypothesis is true then there is no Representation Problem and we will not just be able to record consciousness, but actually reproduce it digitally without having to reconstruct it in a suitable analog configuration. This would be the "paperless" option for the nature of being, which is perhaps making light of the unsettling implications of being able to so easily manipulate something so fundamental to our existence.
I should pull this distinction apart further. What does it actually mean to reproduce sentience digitally instead of relying on an analog reconstruction?
Consider two recordings. One is a sample of gravitational waves and the other is an audio recording. We have everything we need, technologically, to produce pressure waves of sufficient fidelity for audio signals. We do not, however, presently have the means of producing high-amplitude gravitational waves. This example is meant to illustrate one of the differences between recording and playback; it may be very difficult to reconstruct the effects of the original signal, even if we have a means of sampling or modeling the source.
A model could be thought of as a way to generate signals at will, which could then be reconstructed to produce the analog effects we desire in the world.
If we had a model for sentience then it is plausible that we could drive an analog implementation that had the necessary physical characteristics of consciousness. Such a digital sentience component could then faithfully produce sentient information that results in real consciousness in its analog counterpart.
Although getting closer, this would still not be what I mean by having the ability to recreate sentience digitally.
To be digitally reconstructed, sentience would need to share many of the characteristics of information itself. Notably, sentience would need to be expressible through communications channels. Most importantly, however, would be the property that sentience-as-information would need to be measured to be realized.
I would claim that another word for measurement, in this context, would be observation, and that information in the absence of observation is nonsensical. This would appear to make the problem circular, but this is not the case. I am not saying that it takes a conscious observer to make information manifestly real. What I am saying is that measurement is one of the preconditions for information to be useful for any purpose at all, including being recruited for sentience, and that the sampling of a signal is nothing more than systematic observation.
In the definition of a Turing machine, an abstract device is specified with a tape head. In the framework I am establishing, this tape head takes on the role of an observer. There is nothing strange about this; there must be some systematic means of signification. The tape head has all of the qualities of an observer in an inertial reference frame, and the act of observation that it performs is a form of measurement. Whether measurement is carried out by a conscious observer or by the tape head of an idealized computer makes no meaningful difference in this context, other than the fact that Turing machines are abstract; at this stage it has nothing to do with consciousness.
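As a minimal sketch, the observer role of the tape head is visible in even a few lines of Python; the machine below (a binary incrementer with hypothetical state names) begins every step by reading, that is, observing, the cell under the head:

```python
def run(tape, rules, state="start", head=0, blank="_", max_steps=10_000):
    """A minimal Turing machine. Each step: observe a cell, write, move."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        symbol = cells.get(head, blank)          # the act of observation
        if (state, symbol) not in rules:
            break                                # halt: no rule applies
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Rules for binary increment: walk to the rightmost bit, then propagate a carry.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "done"),   # 0 + carry -> 1, finished
    ("carry", "_"): ("1", "L", "done"),   # ran off the left edge: new leading 1
}
```

`run("1011", rules)` yields `"1100"`; nothing in the machine is conscious, yet every transition is driven by a measurement.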
What this means is that a computer could provide the essential capability of performing measurement and observation that an information theoretic account of sentience requires. This could lead to an internally consistent world simulation that gives rise to a virtual subject of experience. Causal effects could be provided by the computer through the semantics of its implementation, which the simulated subject could invoke by means of an internal interface.
However, if we were to successfully create a digital sentience, it would not necessarily be able to act upon the real world. We would have to provide this virtual subject with a means of performing the actions it wishes to take. Then we would need to provide an interface to the virtual subject to carry out those actions from within its world simulation. This is one of the roles of our embodiment.
The device that provides effects in the outside world could be the same as the computing system that is running the simulation of the subject's world, but it could also be separate. This would be one of the strangest aspects to the artificial construction of consciousness, as there might be a significant separation in time or space between the virtual subject of experience and its embedding. And, because of the ability to record this digital representation, a virtual subject might also be unaware that its embodiment has been altered on subsequent repeats of past simulations.
Even if these hypothetical descriptions of consciousness could be implemented, it is not apparent that they would lead to a conscious observer. It might just turn out to be a complex information processing system that appears from the outside to be fully conscious, but otherwise lacks any subjective experience at all.
It is important to highlight the distinction between a simulation of consciousness and an artificial intelligence that was not purpose built for sentience. Both may end up being insentient, and the artificial intelligence not made to reconstruct subjectivity might remain insentient even if it was operating over the right kind of physical configuration for consciousness.
Any artificial intelligence that lacks a world simulation is going to, by definition, lack a means of even emulating subjectivity.
Consider the acceleration that took place when machine learning algorithms were given a framework for attention. Now imagine what would take place if the most advanced learning systems were situated within a rich world simulation, and given all of the simulated characteristics of subjectivity and sentience.
That, however, would take a completely different approach to the problem than what is being done now, and the computational requirements would increase substantially, as these systems would become I/O bound by their world simulations.
This research program is what I mean when I wrote about the Minimum Sentience Conjecture in AI Security. It is also what I meant by saying that strong artificial intelligence and today's approaches are in a completely different category. The summary of the point I was making in that book, which can now be synthesized with the points I'm making here, is that general AI may in fact depend upon world simulations, which depend upon sentience, even if that presentation is merely a simulation.
The binding problem is deeply related to the question of subjectivity, and this is the one issue that can be used to bring together all of the different points I have made so far.
A digital recreation of sentience would have to contend with the binding problem in a way that a hybrid system would not.
While a hybrid system outsources the binding problem to its analog counterpart to reproduce consciousness, a digital sentience would have the burden of manifesting a subject of experience, virtual or otherwise, entirely on its own.
I interpret the binding problem as having two distinct, but highly related questions:
1. How is it that a large number of separate parts, such as neurons, can be combined to form a unified experience?
2. How or why does subjectivity arise at all?
And those two questions have many aspects themselves, some of which are extremely subtle. For starters, with the first concern, how is it that this binding process is done so efficiently in the brain? What is the mechanism by which the signals are combined?
Most of the first set of concerns about the binding problem do not strike me as difficult given modern computing equipment. If we were to consider each of our senses as a specific communications channel, then each channel would have a particular bandwidth.
We are actually quite impoverished in terms of the resolution at which we can see and hear. I would imagine that our other senses are similarly limited in bandwidth; our brains do an excellent job of convincing us that our experiences are richer than they actually are.
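The channel framing can be quantified with the Shannon–Hartley theorem, which caps the information rate of any analog channel by its bandwidth and signal-to-noise ratio; the numbers below are illustrative engineering values, not physiological measurements:

```python
import math

def capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley: maximum error-free rate of an analog channel."""
    snr_linear = 10 ** (snr_db / 10)   # decibels -> linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical audio-grade channel: 20 kHz of bandwidth at 30 dB SNR.
c = capacity_bps(20_000, 30)           # roughly 2e5 bits per second
```

Whatever the true figures for vision or hearing turn out to be, each sense is a finite-rate channel, and finite rates of this order are comfortably within reach of modern hardware.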
What does worry me is the second part of the binding problem: subjectivity. In my opinion, subjectivity is the central issue for any investigation into consciousness.
It is already possible to create virtual worlds in video games and simulations, but I do not believe such simulations, alone, are equivalent to the subjective world we each inhabit at this moment, as conscious beings. Nevertheless, I do believe that it is possible for a virtual subject to have a projection of the real world that is even richer than our own.
However, for such virtual world simulations to have the same effects as the real ones we experience they would also need to be accompanied by an emulation of sentience.
That is to say, merely simulating a virtual subject inside a virtual world simulation will not be enough. The operational role of qualia, interpreted as quantizations of sentience, would need to also be a part of such a system, else the virtual subject might respond differently to that world than a conscious observer would.
I will admit that it is bold to claim that a simulation of qualia is required to recreate certain behaviors, but it is consistent with my claims about general intelligence being correlated with sentience. And I would like to make a refinement to that stance.
A correlation between general intelligence and sentience does not imply exclusivity. It may be possible to explicitly write programs that perform any behavior, but they would have to be updated or rewritten for each new set of behaviors we specified. The key insight about general intelligence being correlated with sentience is that it uses that relationship to avoid the brittleness of specialization.
Put another way: The adaptations that an artificial intelligence can make over world simulations are going to be more flexible than just training over a few dimensions of experience, such as vision or sound alone. Generally speaking, the richer the representation of a world simulation, the more abstract the inferences that can be made. And the better the AI can abstract, the higher the capacity will be for generalizing intelligence.
Something else to mention, which is related to the divide between a fully digital sentience and its analog counterpart, is that classical simulations do not actually create the physical effects they simulate. As a result of this, it could be the case that digital sentience will never give rise to a physically real subjective world, even though the device running the simulation is physically real itself.
Just as a classical simulation of gravity can not directly recreate gravity, the physical realness of subjective worlds seems to imply that classical simulations might never give rise to true subjectivity. Such a limitation would likely prevent true sentience from arising in those circumstances as well.
However, having simulations of subjectivity and sentience will still be of great utility, and I suspect the most advanced artificial intelligence systems in the future will be based on such architectures, but it is important to investigate the limitations.
It may be morally beneficial that digital sentience is artificial, as that would mean that we would not have to worry about the suffering of an automated workforce based on such technology. It may also set the stage for making it illegal to produce physically real subjective worlds and subjectivity without certain qualifications. This is far more abstract than bio-engineered sentient organisms, artificial persons, and strong artificial intelligence; subjectivity is likely to be prior to all of these things.
The reason I have focused on subjectivity is that I believe that the explanations I have provided so far show that the binding problem reduces to the problem of subjectivity. The actual information processing aspects and integration of distributed sensory data should not pose a significant challenge. Not even on consumer hardware.
What we have is an ideological challenge. Subjectivity and sentience must be entailed first, and then learning systems layered over top. This, in my opinion, will lead to a cycle of systems and theories that are testable. Once we have that, I believe such an approach will begin to inform a physical basis for consciousness through some of the correspondences I have put forward.