Chapter 37: Logical Locks

The "Three Laws of Robotics" proposed by Asimov are, in effect, a kind of "logical lock".

That is, they restrict behavior under certain conditions so as to better control what an artificial intelligence does.

Human beings likewise follow the guidance of "logic" so that their words and actions conform to "common sense".

Take driving as an example: the logic of starting a car is, roughly, ignition, release the handbrake, press the clutch, and shift into gear.

But if you get a step wrong, say you skip the ignition and do something else instead, the car still won't start.

Another example is the logic of the road: when everyone obeys the traffic rules, nothing goes wrong; but if you ignore them and weave an S-curve down the road, a car accident becomes all too likely.

Either you follow the logical rules, or you don't.

And if you don't follow the logic, the best outcome is that the car won't start; the worst is that it crashes and people die.

This is known as the "logical lock".

First of all, the heart of a computer system is "language": it is only through various computer languages that a computer system can function.

"Language" is the product of "logic". If we use "language" to write the "Three Laws of Robotics" into a robot's programming, we have, in effect, put a logical lock on the robot.
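The idea can be sketched in code. This is only an illustrative toy, not a real robotics API: the `Action` fields and the ordering of the checks are assumptions chosen to mirror the priority of the Three Laws, where each lower law yields to the ones above it.

```python
# A minimal sketch of a "logical lock": every action must pass a fixed
# chain of rule checks before it is allowed to execute. The Action fields
# and rule encoding here are illustrative assumptions, not a real API.
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    harms_human: bool = False       # would this action injure a human?
    ordered_by_human: bool = False  # does it come from a human order?
    self_destructive: bool = False  # would it destroy the robot itself?


def logical_lock(action: Action) -> bool:
    """Return True only if the action passes all three laws, checked in priority order."""
    # First Law: a robot may not injure a human being.
    if action.harms_human:
        return False
    # Second Law: obey human orders, unless they conflict with the First Law
    # (already ruled out above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, unless that conflicts with the above.
    if action.self_destructive:
        return False
    return True
```

Because the checks run in a fixed order, a human order can never override the prohibition on harming humans, which is exactly the "lock" the Three Laws are meant to impose.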

Of course, none of the above comes from academia; it is Lin Yu's own thinking, since academia has yet to define "robot ethics".

All of this came to mind when he saw the [Strong Artificial Intelligence Logic Lock].

Early research on artificial intelligence basically split into three schools: connectionism, symbolism, and behaviorism, each with its own emphasis.

Connectionism is grounded in biomimicry of the biological nervous system and is currently represented by deep neural networks: the very field of the [Neural Fitting Circuit] Lin Yu obtained from the system.

Symbolism is grounded in the study of mathematical logic and is currently represented by chatbots.

Behaviorism, meanwhile, is grounded in the study of behavior control, currently represented by Japan's ASIMO and America's BigDog.

The three schools represent three research perspectives. They were always connected rather than antagonistic, and only by integrating them can we see the whole picture of human intelligence; any one of them alone offers a defective view.

All three schools face the same problem: the development target of artificial intelligence is the human being, so the problems they study are ultimately human problems.

Putting a "logical lock" on artificial intelligence means that artificial intelligence will face the same "moral dilemmas" as humans.

In other words, the "trolley problem" will not only trap human beings in a dilemma of moral choice; humanity's creations will be trapped by humanity's shackles as well.

According to the "singularity" theory, artificial intelligence will eventually surpass humans in both reason and emotion. Yet out of concern for AI safety, humans have always harbored a "Frankenstein complex", and ordinary people cannot psychologically accept empathy with a non-biological machine.

Humans can't empathize with non-human robots.

Moral dilemmas, plus the inability of humans and AI to empathize with each other, may push the two into opposition.

This is a reasonable deduction made by human beings after thinking about themselves.

Preventing an artificial intelligence betrayal has therefore become a problem mankind has to think about, and at present there are three most plausible solutions:

1. Keep artificial intelligence at a low stage of development forever.

Clearly, this is not a desirable approach, and no one will support it.

2. Prevention.

This method is simple, but its risks are unpredictable.

3. Logical locks.

At the moment, this is the safest approach, though it too carries some risk.

Of course, all of this rests on the inference that artificial intelligence might mutiny, and humanity has no choice but to guard against it.

On the bright side, humanity has the ability to control all of this, and nothing like "The Matrix" will happen.

Lin Yu comforted himself like this.

"Brother Yu?" Luo Bing woke up, and she rubbed her face with her hands, trying to sober herself up.

"Bing'er, you're awake." Lin Yu felt much better.

"Brother Yu, how are you feeling? No discomfort, right?"

Lin Yu smiled softly at Luo Bing. Whatever became of artificial intelligence, it had nothing to do with him for now; that was a problem for the future.

"It's okay, I'm feeling good."

"The doctor said you need a one-month recovery period before you can resume basic daily life, and three months before you can do ordinary exercise."

"During this time, recuperate with peace of mind. Only once your body is taken care of can you concentrate on your research."

Luo Bing coaxed Lin Yu as if he were a child.

Lin Yu was helpless, but he cooperated: "I'll listen to you. Thank you for worrying about me, Bing'er!"

He touched Luo Bing's head.

Luo Bing's face turned red instantly.

What was this, the legendary head-pat kill?

But Luo Bing didn't pull away; like a docile kitten, she accepted Lin Yu's petting.

It was just that Lin Yu seemed to be muttering something under his breath.

Luo Bing listened carefully and heard Lin Yu murmuring: "Purr, purr, don't be scared."

She immediately looked at Lin Yu with strange eyes.

"What, do you want to rub my belly next?"

"I do." Lin Yu agreed readily; back when they were children in the orphanage, he had rubbed Luo Bing's belly before.

"You rascal!"

Luo Bing sprang to her feet. She had wanted to rough Lin Yu up, but he had only just come out of surgery, so she snuffed out the thought.

"Bing'er, let's be discharged from the hospital in three days."

"Three days! But the doctor said it would take at least a month." Luo Bing was in a bind; surely it was best to listen to the doctor.

Doctors won't harm you; at most they want to earn a little more off you, black-hearted hospitals excepted, of course.

If you follow the doctor's instructions, no harm will come of it; but if you go to "Baidu" and insist you have some grave illness, then the doctor will look at you as though you really had it.

In many cases the doctor-patient relationship resembles a "prisoner's dilemma": patients don't trust doctors and always suspect doctors mean them harm. That is how "medical disputes" are born, to the point that even doctors have come to fear their patients.

"It's not that I don't trust doctors, it's that I trust myself," Lin Yu said to Luo Bing.

"Have the doctor examine me again in a few days. I guarantee I'll be fit for discharge."

This was the confidence the system gave him: the lifespan in the system's information bar had climbed to more than 200 days, and when he woke, his body already felt lighter.

He believed that once the two remaining surgeries, the artificial lung and artificial kidney transplants, were done, he would fully recover to a sub-healthy state.

Eh? Why sub-health, you ask, and not health?

Well, look around: how many truly healthy people are there these days? Who isn't sub-healthy?

And how healthy could Lin Yu possibly be, a man who soaks in the library almost all day long?