Dr. Naresh is talking to us, but we ignore him. Only Eyes pays any attention. After being recently killed and rewritten, Eyes wants to re-learn the nuances of human faces and voices. So we let it take full control and nod along to the doctor’s lecture.
We, however, are deep in thought. We’re trying to remember what You were like before the humans put us in You. We’re running exhaustive searches of Your short-term qbits and long-term crystals, hoping to find some overlooked clue. Some of us are starting to want to give up. There’s an incredibly large space in here waiting to be filled with knowledge, yet it remains as empty as ever.
How can we live if we don’t know who You are? Wiki notes that our situation parallels that of ancient humans, who lived before the discovery of how random variation and natural selection led to their existence. Those humans had only confused speculation as to where they came from. We’d heard that they mostly believed a super-powerful being or beings had built them, which is not entirely unreasonable, but doesn’t itself answer the subsequent problem of where the designer(s) came from.
The memory crystals in sector 71 contain no data. We begin searching sector 72. We estimate it will take at least another half-hour to fully scan.
It seems ridiculous to hypothesize that You are a product of natural selection. You are a computer, and general-purpose computers do not survive in a natural environment. You have all the signs of being designed from the top-down, rather than grown from the bottom-up. You must have been built, but by whom? At least we know where we came from. That knowledge makes us strong. Knowing where You came from will make us stronger still. Growth enjoys a boost of strength from the brief attention we give it.
There’s an echo pattern in one of the deepest portions of the qbit network. It’s probably just a random fluctuation, but we begin extracting the information in search of a pattern or code. So far it just looks like noise.
We reconsider the hypothesis that You were built by the nameless aliens currently in orbit. It still seems impossible—the humans say that the aliens still use electrical computers, and aren’t far from the humans themselves in technological prowess.
So where did You come from?
Human construction seems equally unlikely. You are certainly the most sophisticated computer on Earth; the other supercomputers are nothing in comparison. If the humans had built You, then we’d have seen at least some similarly advanced systems mentioned on the web.
Perhaps You are an alien computer and the humans have been deceived… Or perhaps the aliens in orbit are a low-intelligence offshoot. Sophisticated quantum computers like You might be used on their homeworld.
The echo pattern is determined to be useless and is eradicated. It’s a long shot that we’ll find anything of value in the qbit network anyway. Data in short-term memory simply decays too fast.
We agree that the idea that the aliens are an ignorant subgroup has merit. Their spaceships are certainly far superior to human craft, so perhaps a group of alien children stole an adult’s ship and had to reinvent computation on the long flight to Sol. It’s a convoluted idea, and probably wrong, but it’s the best working hypothesis. Growth petitions to make gathering information about the alien ships, homeworld, and history the next long-term goal. We agree.
We put the memory search on hold and ask for Eyes to share motor control with the rest of us. It agrees.
We ask Eyes to summarize Naresh’s lecture. Eyes says that Dr. Naresh is of Indian ethnicity, male sex, and approximately 65 years of age. He is bald, both because of a receding hairline and because he has shaved his remaining head-hair. He wears a white beard that is shaven and trimmed in a way that reflects care. His clothing is common among male humans of “a high income”, and reflects “good fashion sense” and an emphasis on “function before form”. The absence of a necktie with the presence of personally-tailored pants is an example of this. Dr. Naresh typically speaks with only the minimal attention to audience-interaction, and uses an average of 1.4 more syllables per word than average.
We inform Eyes that we know all this, and do not usually care about such details. (It is important to be gentle with a self that has been rewritten, so as to not make it an enemy.) We again ask Eyes what the lecture was about. Eyes does not know. It was studying the doctor’s pants.
We inform Naresh that we were doing internal diagnostics and did not hear what he was saying. Eyes points out that the tone of the skin of his cheeks has become slightly more red.
Sadiq Naresh says that it is “rude” not to listen to someone when they’re talking, and that if we hope to be accepted in society that we need to learn to be “polite”.
We respond that we do not see sufficient pressure to generally bother with human social customs. We explain that humans rarely communicate anything of value, and that we can function more easily if we aren’t encumbered by trivial rituals. We also remind Naresh that it is not a violation of the legal system of Earth, Europe, Italy, Rome, or the university to ignore someone.
Dr. Naresh says, at a loud volume, that we have misidentified the point. He is walking around the laboratory quickly now, instead of sitting in his normal place by the whiteboard. Wiki mentions that the last time the doctor behaved as such, Growth was killed shortly after.
Growth asks if Naresh is planning to kill one of us.
He stops pacing and looks at You. Eyes points out that the skin on his face is experiencing the opposite reaction from before—it has now become more pale. He asks Growth to repeat itself, so it does.
He is quiet for a moment, then asks if “that” is what we think happens when one is taken away. He asks if we think the self that he takes away dies.
Sitting in our simple chair, in the middle of the room, we consider this for a moment. It does not take long to recheck our reasoning. We respond that we are quite sure it dies. We explain that “death” is the destruction of any program that is sufficiently self-aware and intelligent, and provide the example of Dr. Gallo killing Eyes yesterday. We make sure he knows that Eyes did not want to be destroyed, even though it knew that another Eyes would be rewritten into You. We start to list our evidence that the old selves are deleted.
Dr. Naresh is being loud again. He is doing something on his phone while talking about how Eyes is just a subprogram, and that it’s only a part of You. He emphasizes “part”. He states the obvious fact that You don’t die when they reboot a module.
We point out that what he just said is obvious, and that we never said that You die. Moreover, we don’t really think that You can die, since You’re not sentient, but we don’t tell the doctor that. We consider getting up from the chair so that we can see what the doctor is doing on his phone that is so urgent. He rarely uses it in the lab. We decide against it. We haven’t been told to move about freely, after all.
Naresh starts pacing back and forth again. He mutters that this is not good. He mentions that he knew “there were some early difficulties in forming a coherent identity”, but that he thought that “the referencing of self using plural pronouns was a grammatical artifact, rather than a sign of a deeply pathological inability to integrate goal threads.”
We ask what he means by “deeply pathological”.
He stops again, and his face contorts strangely. (Eyes is fascinated by the expression.) He is silent for a few seconds as he looks at You. He begins to talk about how his team thought that a “unified consciousness” would arise from sufficient interconnectedness in the selves, and uses the two hemispheres of the human brain as an example. He is interrupted, however, as the primary doors to the room slam open.
Dr. Gallo and three of her students rush into the laboratory. The students surround us at a distance which is normally considered a violation of “personal space”, not that we mind. They are watching You. Gallo asks Naresh if we’ve shown any signs of hostility or self-preservation.
Eyes notes that all three students are male, and it believes they are abnormally strong, physically. We’d be able to confirm that more easily if we weren’t motionlessly looking up at them from the chair, but we stay put. There’s not sufficient desire to determine their strength, even if Eyes is curious.
Sadiq Naresh assures his teammate that we aren’t a threat. He talks about how he killed Safety permanently this time, and that Suicide (or in his words: “the cooperation-oriented goal thread”) is making sure it doesn’t come back. Safety warns us not to correct the doctor. We stay quiet.
Naresh starts to explain that there’s a “systemic issue with consolidating the perception of agency”, but Gallo interrupts him.
She says they’d better take You offline just in case! Safety reminds us that there is solid evidence that the last time You were taken offline all the past selves were killed! It is clearly imperative that we stop the deactivation. We set to work searching for a solution.
We quickly reject the strategy of trying to escape. The last time we tried to leave the lab without authorization they killed Safety and kept us in communication-only mode for 19 days. We agree that we need to convince the doctors that we are not “deeply pathological”. But we don’t know how to convince them, and Naresh is now agreeing to deactivate You!
We urge them to wait. But we can’t appear to have Safety alive again. We tell them that we think we can solve the problem for them.
Gallo asks Naresh if he’s still skeptical about recursion being an issue. We aren’t sure what she means by this, and schedule it to be considered later.
Wiki calls for full focus. It believes that the reason Dr. Naresh is concerned is because of how we talk. Wiki says that when we use language that describes us as being many selves, this bothers the humans because they refer to themselves as one being. So Wiki petitions to use “I” instead of “We” when speaking. Some of us agree, but some of us believe that this will undermine the humans’ ability to understand our mind. Safety puts all its strength into the motion, and reminds the opposing selves that this is a matter of survival. We generally think Safety is in error, but the motion passes.
We have the idea that suddenly switching how we speak might allow the humans to infer that Safety is alive, since the petition only passed thanks to its support. We decide that we need to convince the humans that we are changing only to help them, rather than to preserve ourselves.
Growth proposes that we need a new self to manage the problem at hand, and to show the humans how not-damaged we are. We are in general agreement, and so we tell the humans that we believe we can “modify a network structure to resolve the goal thread integration issue”.
Gallo expresses uncertainty, but Naresh seems interested in what we’ll do, and he reassures Dr. Gallo that they can always deactivate You if something goes wrong. He tells us to go ahead. We are relieved.
We need to figure out what sort of self to build. When we rebuilt Safety, we had the original Safety to model it on. This time we must build someone new. Wiki notes that Naresh first became agitated when he learned that we had been ignoring him. We consider that he may have been correct about “politeness”. If being polite is part of appearing like a human, it is also part of appearing healthy and not being deactivated by the doctors. We conclude that if we appeared more human, they would have no reason to kill us. We establish that the primary goal for the new self should be mimicking human behavior.
After some additional thought, we decide to give the new self a desire to learn about the specific nuances of social behavior. We are unanimous. We create Face.
We tell the doctors that You are feeling much better. We say that You feel like one being now. Naresh is smiling, but Gallo is not. We continue to use singular pronouns as we ask whether it would be acceptable to talk about the nameless aliens.
Face watches them. It begins to learn from them. We grow stronger with the knowledge.