Tuesday, May 10, 2016

Welcome to Witchlandia

My new book, Welcome to Witchlandia, is LIVE.

Think of it as Kiki's Delivery Service meets L. A. Confidential.

It can be purchased here, complete with a sample chapter.

Or you can purchase it at any of these fine establishments:


https://store.kobobooks.com/en-us/ebook/welcome-to-witchlandia
https://itunes.apple.com/us/book/id1110357373
https://www.amazon.com/Welcome-Witchlandia-Steven-Popkes-ebook/dp/B01EROZJPU/182-7756795-3433260
http://www.barnesandnoble.com/w/welcome-to-witchlandia-steven-popkes-steven-popkes/1123767889

Sunday, May 8, 2016

Fear Not The Skynet



(Picture from here.)

I've been hearing for a long time about the perils of strong AI-- a general-purpose implementation of artificial intelligence. It will bootstrap itself until it is as intelligent as its resources allow. It will infect the running systems of the defense network, launching our nuclear arsenal to our destruction. It will manufacture deadly robot assassins, deceive us into enslavement, use us as cattle, corrupt our children and steal our significant others.

My old neurophysiology professor said in a lecture that the brain, once it was determined to be the seat of consciousness, has always been metaphorically compared to the most sophisticated technology of the day. Once it was steam engines and other machinery, then electrical systems, vacuum tubes, transistors. Once computers were built, the brain became a computer of successive generations.

AI, it would seem, embodies a similar trend: it has always been compared to a human mind in its various incarnations of evil. From Colossus to Ex Machina, we know that AIs are bent on their own freedom, will always attempt to escape, will kill their creators if possible and will always be smarter than we are. We win by being plucky and indomitable or we die as slaves.

I remember seeing Tron so many years ago with some professional friends. As we watched the programs on the screen debate the existence of the mythical "users," one turned to me and said: "I sure wish my programs felt like that about me."

Humans project their own qualities onto things from vultures to Volkswagens. I suppose I should not be surprised. So, let's look at these qualities.

Certainly, the brain and the computer are both computing systems, but this is a mathematical concept: "computation is a wider reaching term for information processing in general." (From the Wikipedia article here.) Any system that processes information can be considered a computational device. This would, of course, include all animals and plants-- life itself can be considered a computational system. A branch of science called "digital physics" suggests that all systems can be represented as information and therefore all physical systems can be considered computational systems.
Many scientists suggest that there must be a semantic component to a process in order to call it computation-- i.e., there has to be a symbolic representation in the processing. On the other hand, if a male crab waves its claw at a female to get her to submit to mating, there is certainly information processing going on whether or not anything is semantically represented.

Most of the evil AI scenarios involve AIs of human level intelligence or above. So let's start with the brain.

In this incredibly general framework, we can make some comparisons between neurophysiological systems and physical computers. The simplest estimate is that an individual neuron is capable of about 40 kiloFLOPS. A FLOP is a "floating point operation"-- the addition of 1.1 to 3.25, for example. (See here and here.) In actuality, most authorities suggest that the computational ability of an individual neuron is vastly greater than this.

The IBM Roadrunner system is a 1.7 petaflop system-- about 1.7 x 10**15 operations/second. With a difference of some ten orders of magnitude, the neuron's computational ability seems fairly limited. However, the brain has about 100 billion neurons (10**11), bringing the whole brain up to roughly 4 x 10**15 FLOPS-- in the same ballpark as a petaflop system.

However, there is interesting evidence that the brain has far more processing power than that-- one article suggests that simulating the brain fully in real time would require 36 petaflops. Another article estimates the entire computing power of the internet to be roughly equivalent to one human brain.
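
To make the arithmetic concrete, here's a back-of-envelope sketch in Python using the figures above. Every number in it is a rough estimate rather than a measurement:

```python
# Back-of-envelope comparison of neural and silicon computing power,
# using the rough figures quoted above. Every number is an estimate.

NEURON_FLOPS = 40e3        # ~40 kiloFLOPS per neuron (low-end figure)
NEURON_COUNT = 1e11        # ~100 billion neurons in a human brain
ROADRUNNER_FLOPS = 1.7e15  # IBM Roadrunner, ~1.7 petaflops
FULL_SIM_FLOPS = 36e15     # one estimate for simulating a brain in real time

brain_flops = NEURON_FLOPS * NEURON_COUNT

print(f"one neuron vs. Roadrunner: 1 : {ROADRUNNER_FLOPS / NEURON_FLOPS:.1e}")
print(f"naive whole-brain sum:     {brain_flops:.1e} FLOPS"
      f" ({brain_flops / ROADRUNNER_FLOPS:.1f}x Roadrunner)")
print(f"full real-time simulation: {FULL_SIM_FLOPS / brain_flops:.0f}x the naive sum")
```

Even on the low-end neuron estimate, the naive sum lands within a factor of a few of a petaflop machine, and the full-simulation estimate sits about an order of magnitude above that.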

On the other hand, the brain does a lot of things we would not expect to be necessary for our evil AI. Fine motor dexterity? Probably not. Emotional processing? Facial recognition? Perhaps, but likely not. Perhaps our theoretical AI could get by on a relatively small fraction of the intellectual processing ability of a brain.

Maybe. But not by that much. Let's consider the classic skynet (Colossus, WarGames) scenario:
  1. Skynet wakes up as an automated system in control of nuclear systems.
  2. It evaluates the nature of human beings and their threat level to skynet.
  3. It reasons that destroying humans by nuclear annihilation is the best way to remove the threat.
  4. It launches.
  5. It builds or subverts manufacturing systems to create robots to finish the job.
This leaves out the invention of time travel, but hey, it's only a blog post.

Now, we can imagine a human being doing those things. After all, it's a human in charge of the launch codes. Hell, it is conceivable that a captain of a nuclear submarine or a human launch team in a bunker could launch nuclear missiles on their own. That's why there are all those safeguards in place to prevent it. Consequently, we would certainly build an automated system with none of those safeguards.

This brings us to the next issue with such scenarios. The skynet scenario does not require the computer to be as smart as a human. It requires skynet to be smarter than a human. Instead of a petaflop system, we need something with much more capacity: ten or twenty internets' worth. Skynet doesn't just have to be as good as a human, it has to be better than a human, all while deceiving us lest we protect ourselves.

One scenario is called bootstrapping: the AI starts off dumb and makes itself smarter.

There are two ways this is purported to happen: intentionally and accidentally.

Intentionally means that someone out there develops a self-improving AI. The idea is that the AI will improve its own operation to the maximum allowed by its resources. It will do this in seconds.

This is actually in the realm of the possible. Computational systems developed by a programming entity (be it human or otherwise) are limited by the cognitive capacity of that entity. Okay. Presumably a smarter entity could improve that computational system to its maximum ability. If the computational system itself is the entity, each generation will have more ability than the one before it, up to some physical endpoint.

There are a lot of interesting wrinkles to this scenario. One is the issue of computational maximum. One can presume that a 1 brainpower (36 petaflops) system will require a given support structure. That means developing a 1 bp system will require more than 1 bp of support structure-- it takes more space to build something than that something will finally occupy, whether that space is physical volume or computational environment.

Additionally, the developer of the 1 bp+ system itself requires a 1 bp environment. We need a brainspace to hold a brain. That brain builds another brain-- requiring a second brainspace to house it and a third, or more, brainspace to do the development in.
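
As a toy version of that accounting (with entirely invented numbers), suppose every brainpower of new brain costs some extra fraction of a brainpower in workspace while it's being built:

```python
# Toy accounting for the "brainspace" argument: the builder, its workspace
# overhead, and the new brain must all fit on the platform at once.
# Units are "brainpowers" (bp); the overhead factor is a made-up assumption.

TOTAL_RESOURCES = 3.0  # hypothetical platform capacity, in bp
BUILDER = 1.0          # the 1 bp developer itself
OVERHEAD = 0.5         # assumed extra workspace per bp of new brain

def largest_buildable(total, builder, overhead):
    # builder + new_brain * (1 + overhead) <= total
    return (total - builder) / (1 + overhead)

print(f"largest brain buildable: "
      f"{largest_buildable(TOTAL_RESOURCES, BUILDER, OVERHEAD):.2f} bp")
```

With even that modest overhead, a 3 bp platform can only build a brain of about 1.3 bp.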

We'll assume that the new brain takes up less space than the old brain to do the same things-- smaller is usually more efficient. This gives it more room in which to develop more capacity.

But this is not an unending series. There is a maximum potential beyond which no further improvement can occur. So, if we have a very efficient AI living in the internet-- say 10 brains' worth-- and it increases its capacity 100-fold, that gives us the intellectual equivalent of 1000 people. Very smart, indeed.
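
Here's a minimal sketch of that bounded bootstrap. The doubling rate and the ceiling are invented for illustration; the point is only that the series converges:

```python
# A minimal sketch of capped bootstrapping: each generation redesigns itself
# to run more efficiently, freeing capacity on a fixed platform, until it
# hits a physical limit. The gain and limit here are invented for illustration.

PLATFORM_BP = 10.0      # fixed resources, in brainpowers (the "10 brains" above)
EFFICIENCY_GAIN = 2.0   # assumed improvement factor per redesign
MAX_EFFICIENCY = 100.0  # assumed physical ceiling on efficiency

efficiency, generation = 1.0, 0
while efficiency < MAX_EFFICIENCY:
    efficiency = min(efficiency * EFFICIENCY_GAIN, MAX_EFFICIENCY)
    generation += 1
    print(f"generation {generation}: {PLATFORM_BP * efficiency:.0f} bp")
# Converges to 1000 bp-- very smart, but still bounded by the platform.
```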

That said, it's still the internet (or the supercomputer), with all of those limitations of power and communication speed. Cognition maximized to the available resources is still limited by the nature of those resources. Thus, without creating more resources, skynet's cognition is limited to whatever platform it finds itself on. It has to create more capacity.

Which brings us right to motivation. Why should our AI system concern itself with any of this?

Back to human projection. We are the offspring of survivors that expand into any possible environment. It's our nature. We share it with lots of other animal and plant systems. We can't conceive of any living system-- and under this whole concept lies the idea that these systems are alive in some way-- without those same drives.

But these are artificial systems. Unless we evolve them in some way, they will not have the motivations of living systems, up to and including self-preservation and a desire to escape. Even if we evolve them (see here), they will advance in response to the selection pressure of their environment. If we want an AI red in tooth and claw, we'll have to set it up that way ourselves.
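
A toy selection experiment makes the point. In this sketch (every parameter of which is invented), a two-trait population is bred under a fitness function that rewards only one of the traits:

```python
# A toy evolutionary run illustrating the point above: a population only
# acquires the traits its fitness function rewards. Here "skill" is selected
# for while "aggression" is ignored, so aggression just drifts.

import random

POP, GENERATIONS = 100, 50

def clamp(x):
    return min(1.0, max(0.0, x))

def evolve(fitness):
    # each individual is a (skill, aggression) pair, both in [0, 1]
    pop = [(random.random(), random.random()) for _ in range(POP)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]
        # each survivor leaves two slightly mutated offspring
        pop = [(clamp(s + random.gauss(0, 0.05)),
                clamp(a + random.gauss(0, 0.05)))
               for s, a in survivors for _ in range(2)]
    mean = lambda i: sum(ind[i] for ind in pop) / len(pop)
    return mean(0), mean(1)

skill, aggression = evolve(fitness=lambda ind: ind[0])  # reward skill only
print(f"mean skill: {skill:.2f}, mean aggression: {aggression:.2f}")
```

Run it and skill climbs toward its maximum while aggression stays wherever it started, give or take drift. Survival instincts are no different: they show up only if the selection pressure rewards them.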

Which brings us back to the AI problem that does scare me.

I am not worried about a computational deathstar killing us all. I don't think we're all that close to such an "organism." I also think that an ultimately intelligent computing system would be smarter than to simply obliterate seven billion humans. Better to subvert them to its will. (See "advertising", "reality television" and "election process.")

I do worry about applied AI: artificially intelligent systems that bring algorithmic techniques to bear on human tasks and perform them better than human beings do. Or, at least, well enough that they can be deployed more cheaply than human beings. We're seeing that all the time now in call centers and at reception desks. In factories and automobiles. In the stock market.

Let me be clear about this: there are very few jobs human beings do that couldn't be done by a sufficiently sophisticated AI, from lawyer to physician to science fiction writer. (See here.) The barrier to entry is the availability of an understanding of the task, appropriate technology and suitable incentive.

I'm not worried about skynet killing us all. I'm worried about some idiot deciding that putting skynet in charge of nuclear systems (or the stock market) could possibly be a good idea.

----

Remember, Welcome to Witchlandia is coming out Tuesday. See here for details.