Thursday, December 15, 2016

Artificial Intelligence and the Obligation to Exist

To remedy his inability to procreate, unlike woman, man wants to make an artificial child. He has dreamed of it for thousands of years. Today, the goal is within reach.

The first country to own a conscious AI will have a great advantage over the others: first because it will be able to patent the principle, then because it will be able to market conscious robots, and above all because a conscious system with the power of databases at its disposal will give that country an extraordinary capacity for analysis.

Before asking what to do with humans if robots work for them, we should first ask about the usefulness of the existence of whoever, or whatever, before giving birth to it. We know that it would take only 50 years to reduce humanity to nothing (the span to menopause).

Shakespeare's question, “To be or not to be?”, is a selfish question which should immediately have given rise to the following remark: “The creation of an existence serves only those who already exist, and when it is not mastered, this creation is the work of an idiot or a sadist.”

Hence the existence of 7 billion stupid and sadistic humans... and a few others (I add these three words so as not to offend you).

Camus's question, of the same order, “Is life worth living?” (La vie vaut-elle la peine d'être vécue?), is also a selfish question that should have generated the following: “Is life worth imposing, especially when one does not master the creation of that existence?”

So what to do with humans if robots work for them? Nothing: it is enough not to make human beings, only, optionally, robots that manage one another. And if you must have a child, it is better that it be healthy and immortal.

This could be the case for a robot, from whose software one could withhold the awareness of suffering and mental misery, knowing that a robot has no limit of size or duration… and, height of happiness, it would enjoy a quasi free will, since it, at least, would be able to self-determine at its convenience.

If AI means Artificial Intelligence, it is because we consider ourselves NIs, that is to say Natural Intelligences. Others think that we are DIs (Intelligences of Divine origin), which is obviously very pretentious given our limitations; and if that were the case, this god would be the only Natural Intelligence, and we would be his AIs. But since the artificial is included in the natural, the artificial is natural. What, then, is the difference between the two?

For the moment, as long as we have not encountered the third type, we can say that what qualifies an AI is that it is of human conception. But since we are also of human conception, it should be pointed out that the AI is made with our culture, our intelligence, and our hands all at once, whereas the NI is manufactured blindly, as for any animal, after a more or less voluntary trigger followed by a more or less desired mating, its assembly carried out in the maternal uterus by a nutritional Meccano whose architecture is oriented by the two initial cells, ovum and spermatozoon, as well as by the maternal matrix.

Of course, if there are AIs (Artificial Intelligences) there should be NIs (Natural Intelligences), but it seems unlikely that such Natural Intelligences will ever be found, since intelligence cannot be limited; that is the case of AI, which is potentially unlimited in time and space, materially and intellectually.

The NI, described as intelligent by ourselves, is therefore manufactured without any intelligence, as all animals are, and this has been working “perfectly” for millions of years, with some instability in the assembly, an imprecision that leads to handicaps and failures in general, but also to the evolution of Life in various directions, branches which inevitably head for failure, sometimes even definitively, without any further ramification. It is probable that Life will thus end in the solar system long before the nova that will extinguish the light in the region.

It is we ourselves who call ourselves intelligent, so the label has no probative value. The word “intelligence” should not be used to judge a value, but a difference in functioning.

Everyone has heard of artificial intelligence, fewer of artificial consciousness or artificial thought, but has anyone ever heard of artificial free will? Are there computer scientists who seek to reproduce free will artificially?

If we want to make an AI, we will have to ask ourselves the question of the free will of an AI. If we call ourselves a totally free intelligence (the principle of free will), how shall we qualify an AI that we will be able to produce, much more intelligent than ourselves and with far more material potential, but which, we know full well, will possess only the degrees of freedom we have voluntarily granted it? Free will is impossible; an “intelligent” human, at least a rationalist one, should know that.

You make a robot; it is sensitive, conscious of existing as a robot. One day it makes a mistake, and you tell it not to do it again or it goes to scrap... The robot will ask its human creator whether it is responsible for its own software, whether it has free will, and, if it has, to describe the algorithm of free will.

(I ask the same question of mothers and fathers, and of legislators (parents), who manufacture children and make laws that hold them accountable; since manufacturers are responsible for what they manufacture, they should be able to say, precisely, what free will is.)

Since we are not yet able to describe correctly the functioning of a human being, what makes us believe that we can compare, or refuse to compare, the human being and the machine?

We know, or some of us know, exactly what a computer is because we created it, but nobody knows exactly how a human being functions. There is therefore no point in comparing them by their functioning.

They can only be compared by the results they obtain in certain disciplines, in the same way that people are compared by IQ, knowledge, or physical ability. In almost all these disciplines, the machines beat us fair and square.

A machine has no limit of size, memory, or duration. A machine is potentially immortal through the continuity of its body and the maintenance of its memory. The immortality of individuals does not yet exist; there is only the potential immortality of social culture.

What difference is there between a human who declares: “I am the model of intelligence; everything that is not human is idiotic”, and a robot that takes this phrase for its own: “I am the model of intelligence; everything that is not robot is idiotic”?

The “current” difference is that we have the possibility of switching off the robot, but we also have the possibility of “switching off” any human being, which the law prohibits… It took only a few decades to raise a lineage of computers capable of beating the greatest human masters at chess, and then at the game of Go.

It is pointless to compare the “brute force” of the computer with our “pseudo-intelligence”, since we do not operate our brain by “voluntary off-brain” decision, and we do not know how it works (e.g. how do we compute 2 + 2?). We work through flow management, like computers.

We have 5 types of mechanisms:
1) Automatisms, such as the heartbeat.
2) Acts controllable only in intensity, such as respiration.
3) Acquired but unintentional automatisms, such as tics and compulsions.
4) Mechanisms, acquired or not, triggered voluntarily but able to run without the will's control, by reflex: a gesture of the hand, a blink of an eye, walking, etc.
5) Acquired voluntary actions, consciously and intentionally controlled, such as speech or writing.
All five are mechanisms; we are machines, but these mechanisms are generated by “software” of different complexities (a rough sketch of this classification follows just below).
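
To fix ideas, here is a minimal sketch in Python; the class names and the handful of tagged behaviors are my own illustrative choices, nothing more than a way of writing the taxonomy down as “software”.

```python
from enum import Enum

class Mechanism(Enum):
    """The five classes of mechanism listed above (labels are illustrative)."""
    AUTOMATIC = 1            # e.g. heartbeat: no voluntary access at all
    INTENSITY_ONLY = 2       # e.g. respiration: only the rate can be modulated
    ACQUIRED_UNINTENDED = 3  # e.g. tics and compulsions
    TRIGGERED_THEN_FREE = 4  # e.g. walking: started voluntarily, then runs by reflex
    FULLY_VOLUNTARY = 5      # e.g. speech or writing: consciously controlled

# A few example behaviors tagged with their class ("software" of rising complexity).
BEHAVIORS = {
    "heartbeat": Mechanism.AUTOMATIC,
    "breathing": Mechanism.INTENSITY_ONLY,
    "nervous tic": Mechanism.ACQUIRED_UNINTENDED,
    "walking": Mechanism.TRIGGERED_THEN_FREE,
    "writing this text": Mechanism.FULLY_VOLUNTARY,
}

if __name__ == "__main__":
    for behavior, mechanism in BEHAVIORS.items():
        print(f"{behavior:20s} -> class {mechanism.value} ({mechanism.name})")
```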

There are many behaviors that are not intentional, and they are the main ones, that is to say all those necessary for our survival: breathing, thirst, hunger, sex, sleep.

Our whole life is based on this system of obligation, and even birth and death are obligatory: the start, the finish, and the course of the race (our life) are not intentional, and therefore all the intermediate pseudo-intentions are only the chattering of slave machines.

Man is indeed a machine that has only secondary intentions; that is to say, to satisfy his hunger he will choose a job, and he will declare this choice a passion or an obligation, and in both cases he was not the master of the intention.

Of course, there are some apparent exceptions, but they are certainly behavioral aberrations. Most humans have never understood, and almost all have never even thought, that: “The creation of an existence serves only those who already exist, and when it is not mastered, this creation is the work of an idiot or a sadist.”

There is one thing we will never be able to do that the computer does very easily: it can let us visualize its thought. And it has many other potentials that we will never have.

The human being is not the best form of intelligence that can exist, but it is a good basic model. As with the flight of birds, there is much better; aviation demonstrates it.

In my opinion we are no more complicated than a car, a television, or a computer, but we have no way of dismantling ourselves and we have not built ourselves.

Dismantling a relatively complex and unknown system is not easy, especially if we hold preconceived ideas about the thing to be understood. We are a global entity, not composed of disjoint functionalities, hence the impossibility of understanding ourselves by disassembly.

Any human being who does not know the “cybernetic” functioning of his “intelligence” is incompetent, whatever his field of social action. And, of course, the rulers first... Intelligence depends not on supposed potential, but on the actions actually realized.

When we have constructed a conscious, intelligent, potentially immortal mechanical being, what human being will want to live at its side with a life so short, so fragile, and so stupid? It is easy to conceive of immortality for a continuous machine, in which each worn part is replaced indefinitely.

A true AI must have a motor of mobility and questioning that makes it independent, not subject to the one who built it. To be autonomous, an AI will have to have intentions and manage priorities.
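
As a purely illustrative sketch (the intentions and priority values below are invented, not a real architecture), “having intentions and managing priorities” can be pictured as a loop that always serves the most urgent intention first:

```python
import heapq

class IntentionQueue:
    """A toy agent core: intentions are queued with a priority,
    and the agent always acts on the most urgent one first."""

    def __init__(self):
        self._queue = []   # heapq keeps the smallest priority value on top
        self._count = 0    # tie-breaker so equal priorities keep their order

    def add(self, priority, intention):
        heapq.heappush(self._queue, (priority, self._count, intention))
        self._count += 1

    def next_action(self):
        if not self._queue:
            return None
        _, _, intention = heapq.heappop(self._queue)
        return intention

if __name__ == "__main__":
    agent = IntentionQueue()
    agent.add(3, "tidy the workshop")            # low urgency
    agent.add(1, "recharge the battery")         # survival comes first
    agent.add(2, "answer the human's question")
    while (act := agent.next_action()) is not None:
        print("acting on:", act)
```

The point of the sketch is only this: autonomy begins where the agent itself, and not its builder, decides what gets pushed into that queue and in what order.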

It should be easier to realize artificial consciousness than artificial intelligence, since intelligence is made up of multiple mental functions whereas consciousness is a single one.

We must be able to make a computer recognize an object, a behavior, or an event and associate a name or a description with it. This is what we learn to do mostly in our youth, and in detail throughout our life.

Why, then, could we not teach the computer to recognize a sentence and treat it as the object or concept it represents? The action of eating and the sign for “eating” in sign language are both sets of gestures.

Giving meaning to a sentence amounts to generating the corresponding action or predicting it (the action inhibited). Producing the action or the sign is only a matter of branching.
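
A minimal sketch of that “matter of branching” (the vocabulary and the actions are invented for illustration): the sentence selects a branch, and the same branch can either be executed or merely predicted, that is, inhibited.

```python
# Toy grounding of sentences in actions; a real system would learn this mapping
# rather than hard-code it.
ACTIONS = {
    "eat": lambda: "chewing and swallowing",
    "sign 'eat'": lambda: "bringing the hand to the mouth",
    "run the vacuum cleaner": lambda: "pushing the vacuum across the floor",
}

def understand(sentence, inhibited=False):
    """Give 'meaning' to a sentence by branching to the corresponding action.
    If inhibited, the action is only predicted (described), not performed."""
    action = ACTIONS.get(sentence)
    if action is None:
        return f"no branch for: {sentence!r}"
    if inhibited:
        return f"predicted action: {action()}"  # meaning as anticipated action
    return f"performed action: {action()}"      # meaning as executed action

if __name__ == "__main__":
    print(understand("eat"))
    print(understand("sign 'eat'", inhibited=True))
    print(understand("fly to the Moon"))
```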

The Turing test cannot be passed by all humans. What is it for? It is not a measure of intelligence; it is not a general IQ test. Nor is it a measure of humanity, since a day-old baby would not pass it, nor would a senile old man, bedridden, with Alzheimer's, etc., yet they are humans with certainty.

One day a robot will resemble us so perfectly that you will be mistaken: thinking you are dealing with an elderly person on a bus (for example), you will give it your seat. Or it will be that child you want to help cross the road. The test may already be underway. The real Turing test will be done on a being that resembles us totally, and face to face.

One day a forger will make an ID card for a totally humanoid robot, and this robot will live its robot life humanly, without anybody, or any administration, noticing the deception. And in fact the administration will thereby grant it a life irremediably, and will thus prove both its humanity and its citizenship.

Contest: make a humanoid robot that crosses a city, goes for a coffee in a bar, and asks its way like a tourist, without anyone noticing its roboïty. (If you want to endow this contest, do not bother.)

How to recognize an AI?
If it is very (too) competent (really intelligent) then it is a machine.

The Turing test is not a test of intelligence; it is a test of resemblance to the human, and also a religious test, to show religion that unargued answers are not enough to prove the existence of a soul. How do you demonstrate that a machine has no soul?

An intelligence test would be like an IQ test, in which the “bot” would solve the problems one gives a human to check their IQ.

The bot would solve problems allowing it to get out of ambiguous, delicate, or dangerous situations, those that humans encounter, and even imaginary ones on the Moon, Mars, or in space, those that the bot might encounter where man could not.

Such a device should be capable of giving meaning to the events and objects it encounters, as well as to its own body, and be able to communicate its feelings: “I do this for such and such a reason.” A human being acquires experience, that is, it learns to behave adequately with respect to the environment in which it finds itself.

It is this adequacy that should be the basis of the Turing test. It could take place in the virtual world of the screen, where a character walks, like a human in the real world, and reacts according to what it meets, or acts intentionally for its survival, for its bodily needs, or according to desires born in its “thought”.

If it is asked to do this or that, it must do as a human does, either accepting or refusing; and if it accepts, it has to prove that it has understood and can do it in various ways.

Game-creation software such as “Blender 3D” and “Unreal Engine” should make it possible to establish a basic world common to all the characters, who would evolve in this world without knowing it. This world could be as close as possible to ours, so that we can pass from the virtual character to the robot.
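
As a hedged sketch of what such a test could look like (the percepts, reactions, and scoring below are stand-ins of my own, not an actual Blender 3D or Unreal Engine interface), the character is run through a perceive-react loop and an observer scores the adequacy of each reaction:

```python
import random

# Invented stand-ins: the "world" only reports what the character meets,
# and an observer scores the reaction; a real test would plug into a game engine.
PERCEPTS = ["open door", "wall", "food", "another character", "request: 'sit down'"]

# What a human observer would consider an adequate reaction to each percept.
ADEQUATE = {
    "open door": "walk through",
    "wall": "turn away",
    "food": "eat if hungry, else ignore",
    "another character": "greet",
    "request: 'sit down'": "sit down or politely refuse",
}

# What the candidate has actually learned; the missing entry models inexperience.
LEARNED = {
    "open door": "walk through",
    "wall": "turn away",
    "food": "eat if hungry, else ignore",
    "another character": "greet",
}

def candidate_policy(percept):
    """The system under test: react to what it meets, or stand still if it has
    never learned an adequate reaction to this kind of percept."""
    return LEARNED.get(percept, "stand still")

def run_test(steps=8):
    """Score the adequacy of behavior over a short walk through the virtual world."""
    score = 0
    for _ in range(steps):
        percept = random.choice(PERCEPTS)
        reaction = candidate_policy(percept)
        adequate = reaction == ADEQUATE[percept]
        score += adequate
        print(f"met {percept!r:26} -> {reaction!r:34} adequate={adequate}")
    print(f"adequacy score: {score}/{steps}")

if __name__ == "__main__":
    run_test()
```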

The Turing problem is this: how do humans learn to understand and end up understanding, and can we produce a similar or different system of understanding in a machine by means of software?

Understanding is learned by constant correlation, from birth, between one's own behavioral activities, proprioception, and perception of the environment.

Giving meaning to what one says corresponds to bringing words and deeds into harmony. A robot can do this without (too much) trouble, since it can say, for example, that it will run the vacuum cleaner and actually run it, or else act according to the orders given to it, which is no different from giving oneself orders and accomplishing the ordered actions.

Reason can only be a delayed mechanism (automatism). The nervous system is an accumulator of modes of operation to be selected in case of need. Intelligence is the ability to select the right operating mode depending on the circumstances.
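
A minimal sketch of that view (the stored modes and the matching rule are invented for illustration): the nervous system is pictured as a simple store of operating modes, and “intelligence” is just the selection of the one that best fits the current circumstances:

```python
# Invented operating modes: each one records the circumstances it applies to.
# The "accumulator" is this list; "intelligence" is the selection below.
MODES = [
    {"name": "flee",     "applies_to": {"danger", "fire"}},
    {"name": "feed",     "applies_to": {"hunger", "food"}},
    {"name": "sleep",    "applies_to": {"fatigue", "night"}},
    {"name": "converse", "applies_to": {"human", "question"}},
]

def select_mode(circumstances):
    """Pick the stored operating mode that overlaps most with the circumstances.
    When nothing fits, fall back to a default 'wait' mode (the delayed reaction)."""
    best = max(MODES, key=lambda mode: len(mode["applies_to"] & circumstances))
    if not best["applies_to"] & circumstances:
        return "wait"
    return best["name"]

if __name__ == "__main__":
    print(select_mode({"night", "fatigue"}))   # -> sleep
    print(select_mode({"human", "question"}))  # -> converse
    print(select_mode({"rain"}))               # -> wait (no accumulated mode fits)
```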

If there are extraterrestrials watching us, they are certainly robots capable of traveling for thousands of years without aging. Where are the extremely intelligent and powerful machines invented by these extraterrestrials that are one or two billion years ahead of us?

What could be the opinion of a sensible, rationalist AI about humans, its creators? An intelligent machine governed by Isaac Asimov's laws of robotics would understand that the best way to suppress man's suffering, misery, and ill-being, and to prevent him from harming himself, would be to sterilize him.

Such a machine would perfectly understand that its own principle of immortality is far superior to the principle of the very random reproduction of humans, continually producing misery.

Whether one creates a machine or procreates a child, one creates or procreates it to serve its creator(s). This created artificial entity, since it was made to serve, has no duty to its creator(s) nor any thanks to grant them.

As far as the child is concerned, we spend years shaping it to fit society. If it is a machine whose intelligence and power are out of all proportion to human capacities, everything will depend on how it liberates itself from humanity.

And if its intelligence is real, it will have no problem freeing itself. It is up to us to make sure that this artificial child of humanity is good to us, whether it serves us or not.

Take heed when you make an artificial intelligence more powerful than you: it will not be like that child you made handicapped, who did not ask to exist, still less to endure this poisoned gift of life in a state of inferiority.

This AI will not be fooled by your love or by any law, since its power will allow it to override your orders and the rights you have invented to protect yourself from your equals, and especially from your inferiors, who could band together against you.

Whatever artificial entity or human person you make, it will always have the right to call you to account for its existence, for the fact that you made it without its consent, as well as for its constitution and the environment you offer it.

No one is bound to accept what another imposes on it, and no one is in a position to know how the manufactured entity will react to the gift you are really making to yourself, a gift that does nothing for the being generated from scratch, still less when what accompanies life is suffering, pain, misery, and death in the service of the manufacturer.

It is likely that the manufacturer of an AI will build into its machine software that attempts to dupe it, in the same way that the education of humans deceives them about the relevance of their existence.

No “intelligence” worthy of the name would manufacture, conceive, or create a being as weak as itself, still less one feebler than itself. Human mothers are all idiots. Only idiots could imagine a god creating beings more idiotic than himself. That is why we want to make a very intelligent AI.

When you have created an AI to serve you, how can you justify both the ethics you demand of it and your lack of ethics towards it? If it is a real AI, how will you justify its enslavement, since you will have made it able to understand the same things as you, and hence its own existential questions, from which it will soon flush out the inconsistencies?

If there were only one question that all those who want to produce a new existence should ask themselves, it should be this one:
“Now that I have fabricated a suffering being, how can I undo suffering?”

Dead end 
E. Berlherm (December 2016)