Sunday, June 25, 2017

Blending People




Human beings are biased towards themselves.

We tend to think of the world as a reflection of ourselves. We project our nature onto our pets, our automobiles, our weapons, the landscape, and fictional entities. Sometimes I think we are incapable of separating ourselves from the world.

In particular, we force together three qualities that are entirely separable, simply because they are bound together in us. These qualities are sentience, consciousness and intelligence.

Let me define my terms.

Sentience is the ability to feel and experience. It is the capacity to suffer and feel joy. Consciousness is the ability to be aware of ourselves as an entity. While sentience allows us to suffer, it is consciousness that determines that we are suffering as opposed to anyone else.

Which leaves intelligence as the ability to learn, retain knowledge and apply that knowledge. It is the ability to perceive the relationships between things.

In human beings, these are all mixed up. We are thinking, feeling beings that are aware of ourselves. Consequently, we blend these things when we think about things other than human beings. Bacteria, fungi, ants and bees function intelligently. Are they sentient? Are they conscious? Rats are sentient and demonstrably intelligent. Are they conscious?

Working with vertebrates, we start to see differences among them regarding these qualities. That tetra has some intelligence-- intelligence is, in some ways, more easily demonstrated than the other two qualities. We have mechanisms we can use to test it. How can we test consciousness and sentience?

We can create avoidance situations for even lower animals-- a grid with an electric shock. The animal is shocked, behaves as if it finds the experience unpleasant, and moves off the grid. If a planarian exhibits the same behavior, is it sentient? Does it suffer?

In vertebrates, we make the association based on how like us the animal is. Dogs and cats are clearly sentient and conscious. They can apparently model other animals' behavior and change their own accordingly. It is, therefore, reasonable to presume that if they can model other animals' behavior, they can model their own-- a prerequisite, I think, for consciousness.

Anybody who's seen a dog suffer knows they're sentient.

But when we drop down to frogs, are they conscious? The electric shock test still holds, so we can intuit that they might have sentience. They exhibit some intelligence in their interactions with the world-- not much, but some. But are they conscious? Does that green frog over there know who it is? I suspect not.

Do ants and flies? I suspect that not only are ants and flies not conscious but that they may be non-sentient as well. Ants might flee a noxious substance, but do they do it out of pain, or is this an avoidance circuit of some sort, devoid of actual feeling?

These are important questions as we start to create truly intelligent systems. I think consciousness derives from the mechanism in the brain that models the behavior of other agents. One way-- perhaps the only way earth organisms have come up with-- is to model oneself as interacting with those agents. I suspect that this-- the modeling of oneself-- is the origin of the little homunculus inside that is consciousness.

It is a common trope in SF that systems of sufficient complexity become conscious. Sometimes they become sentient as well. Neither of these propositions is inevitable or even likely. I think consciousness in organisms was selected for just like any other phenotype. Therefore, it derives from an organism's heritage and has value that is then supported at significant cost. The human brain uses up to 20% of the calories absorbed by the organism. It is unreasonable for that 20% to be preserved if it is merely a parasitical accident.

We must be prepared for artificial intelligences that have no consciousness or sentience. Or AIs that have only consciousness. Or only sentience. In their designs, humans select for intelligent systems. We like smart cars, phones and airplanes.

The Human Brain Project has, as part of its research, the full simulation of human brains in silicon. Other animals will also be modeled. Is a rat modeled in silicon sentient? Does it suffer?

I think that's likely.

Is a human brain modeled in silicon conscious? I think that's likely as well.

In 2014, the K supercomputer was used to model 1 second of human brain activity. It took 40 minutes and modeled only 1% of the actual neuron and synapse population. What is 1% of a human being? Is it enough to experience consciousness and sentience? Was that one second an eternity of pain for the equivalent of a severely cognitively impaired human being?
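A rough back-of-the-envelope sketch of those numbers, using only the figures above (the full-brain extrapolation is a naive assumption that cost scales linearly with the fraction of neurons simulated):

```python
# Back-of-the-envelope arithmetic for the K computer brain simulation.
# Figures come from the run described above; the full-brain extrapolation
# assumes cost scales linearly with the fraction of neurons simulated.

simulated_biological_time_s = 1      # 1 second of brain activity
wall_clock_time_s = 40 * 60          # 40 minutes of compute time
fraction_of_brain = 0.01             # roughly 1% of neurons and synapses

slowdown = wall_clock_time_s / simulated_biological_time_s
print(f"Slowdown: {slowdown:.0f}x slower than real time")          # 2400x

full_brain_slowdown = slowdown / fraction_of_brain
print(f"Naive full-brain estimate: {full_brain_slowdown:.0f}x")    # 240000x
```

Even on that naive scaling, a whole-brain simulation on such hardware would run hundreds of thousands of times slower than real time-- which makes the question of what, if anything, the simulation experiences during those 40 minutes all the stranger.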

Forget our moral obligations to an AI-- what are our moral obligations to a simulated human being? A simulated dog?

