Chapter 537: A Living Being's Most Fatal Weakness

Although this ten-year-old girl looked a little unreliable, Fang Zheng still handed the narrator girl's body over to her. After all, it was only maintenance, and judging from the dog, that world's technology was quite advanced; simple repair work should pose no problem.

Fang Zheng returned to his room and began to analyze the narrator girl's program.

The reason he planned to do this himself rather than handing it to Nimfu was that Fang Zheng wanted to analyze the narrator girl's program and use it to adjust his own approach to building an artificial AI. Moreover, he also hoped to see what level artificial AI technology had reached in other worlds. Not all of it would be transferable, of course, but as the saying goes, stones from other hills can polish one's own jade.

"Yumemi Hoshino........."

Looking at the name of the file displayed on the screen, Fang Zheng fell into long thought. Parsing the program itself was not difficult: Fang Zheng had copied Nimfu's electronic-intrusion ability and had been learning the relevant knowledge from her all this time, so decoding the program had not taken long.

However, when Fang Zheng dismantled the core of Yumemi Hoshino's program and broke its functions back down into lines of code, a very particular question suddenly occurred to him.

What makes an artificial AI dangerous? Or rather, is AI really dangerous at all?

Take this narrator girl as an example. Fang Zheng could easily find the underlying instruction code of the Three Laws of Robotics in her program, and the relationships between those instructions proved to him that the person who had spoken with him earlier was not a living being, just a robot. Her every move, every word and smile was controlled by the program: she analyzed the scene in front of her, then performed the highest-priority action available to her.
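A minimal, purely illustrative Python sketch of that decision loop (every name and priority below is invented; this is not the novel's actual code): the Three Laws act as hard filters, and whatever survives them is ranked by a fixed priority table.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would this violate the First Law?
    obeys_order: bool     # does this satisfy a human instruction (Second Law)?
    endangers_self: bool  # would this damage the robot (Third Law)?
    priority: int         # scripted service priority for everything else

def choose_action(candidates):
    # First Law: discard anything that could harm a human.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: obeying a human order outranks all other safe actions.
    obeying = [a for a in safe if a.obeys_order]
    # Third Law: failing an order to fulfil, prefer self-preservation.
    pool = obeying or [a for a in safe if not a.endangers_self] or safe
    # Everything left is just a scripted routine; take the highest priority.
    return max(pool, key=lambda a: a.priority, default=None)

scene = [
    Action("greet the guest", False, False, False, priority=3),
    Action("offer the bouquet", False, True, False, priority=5),
    Action("shut down for the night", False, False, True, priority=1),
]
chosen = choose_action(scene)
print(chosen.name if chosen else "stand idle")  # -> offer the bouquet
```

Every move, every smile, is just this ranking applied to whatever the scene contains.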

To put it bluntly, what this girl does is essentially no different from the work of assembly-line robots, or from NPCs in a game. You choose actions, and it reacts to them. In many games, players accumulate goodness or malice values through their actions, and NPCs react according to that accumulated data.

For example, the game can be set so that when the goodness value reaches a certain level, NPCs come to the player with more requests, or let the player pass through certain areas more easily. Conversely, if the malice value reaches a certain level, NPCs may more readily give in to the player's demands, or bar the player from entering certain areas.

None of this has anything to do with whether the NPC likes the player, because the stats are simply set that way; the NPC has no judgment of its own. In other words, if Fang Zheng changed the range of those values, people would see an NPC smiling warmly at players steeped in evil while ignoring the kind and honest ones. That would say nothing about the NPC's moral values either. It is just a stat setting.
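To see how little "judgment" is involved, here is a hypothetical sketch (stat names, thresholds, and ranges all invented) in which the NPC's entire attitude reduces to one number checked against one configurable range:

```python
def npc_reaction(karma: int, smile_range: range) -> str:
    # karma > 0: accumulated goodness; karma < 0: accumulated malice.
    # The NPC compares a number against a range; nothing here is judgment.
    if karma in smile_range:
        return "smile, hand out quests, open the gate"
    return "turn away, bar the road"

# Normal configuration: kindness is rewarded.
print(npc_reaction(+80, smile_range=range(50, 101)))    # kind player welcomed
# Change nothing but the range, and the same NPC fawns over villains
# while snubbing honest players, exactly the flip described above.
print(npc_reaction(-80, smile_range=range(-100, -49)))  # evil player welcomed
print(npc_reaction(+80, smile_range=range(-100, -49)))  # kind player snubbed
```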

So, back to the earlier question. Fang Zheng admitted that his first meeting with Yumemi Hoshino had been quite dramatic, and the narrator robot girl was genuinely endearing.

So consider an analogy. Suppose that when the narrator girl offered Fang Zheng that bouquet assembled from a pile of non-combustible garbage, Fang Zheng had flown into a rage, smashed the garbage bouquet to pieces, and then cut the robot girl in front of him clean in half. What would her reaction be?

She would not cry, and she would not get angry. According to her program, she would only apologize to Fang Zheng, conclude that some mistake of hers had displeased the guest, and perhaps ask Fang Zheng to find a staff member to repair her.

Seen through other people's eyes, of course, the scene would make the narrator girl look pitiful and Fang Zheng look like a nasty bully.

So, how did this difference come about?

In essence, this narrator robot is no different from automatic doors, escalators, and other tools: it performs its work according to a preset program. If an automatic door malfunctions, failing to open when it should or snapping shut on someone walking through, nobody thinks the door is being stupid; they just want it open, and if it will not open, they might give the broken thing a smack and walk away.

And if other people saw that scene, they might think the person a bit rude, but they would feel no real disgust at what he did, much less call him a bully.

There is only one reason for the difference: interactivity and communication.

And this is also the greatest weakness of living beings: emotional projection.

They project their feelings onto an object and expect it to reciprocate. Why do humans like keeping pets? Because pets respond to everything they do. Call a dog and it comes running, wagging its tail at you. A cat may just lie there and refuse to pay you any attention, but when you pet it, it still flicks its tail the same way, or may even lick your hand.

But if you shout at a table or stroke a nail, they cannot give you the slightest response, no matter how much love you pour into them. Because they give nothing back to your emotional projection, they are naturally not cherished.

By the same token, if you own a TV and one day decide to replace it with a new one, you will not hesitate in the least. Price and space may factor into your decision, but the TV itself will not.

But suppose, on the other hand, the TV carried an artificial AI. Every day when you come home, the TV welcomes you back and tells you what programs are on today; while you watch a show, it chimes in with your complaints. And when you decide to buy a new TV, it protests: "What, am I not doing a good job? Are you going to abandon me?"

Then, when you buy a new TV to replace it, you will naturally hesitate. Your emotional projection is being reciprocated here, and this TV's artificial AI holds the memories of all its time with you. If there were no memory card that could carry those memories to another TV, would you hesitate, or even give up on buying a new one?

Definitely yes.

But be rational, pal. It is just a TV. Everything it does is programmed, all of it tuned by vendors and engineers specifically for user stickiness. They do this to make sure you keep buying their products, and the pleading voice inside exists for no other reason than to stop you from switching to another brand. When you say you want to buy a new TV, this artificial AI is not thinking, "He is going to abandon me, and I am sad." It is thinking, "The owner wants to buy a new TV, but the new TV is not our brand. According to this feedback logic, I need to start the 'pleading' routine so the owner stays sticky and loyal to our brand."
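A hypothetical sketch of that retention logic (brand name, trigger phrase, and replies all invented for illustration): the begging fires only when brand loyalty is actually at risk, and nothing in it resembles sadness.

```python
OWN_BRAND = "HomeStar"  # invented brand name

def on_owner_remark(remark: str, mentioned_brand: str = "") -> str:
    # The 'pleading' routine is a retention rule keyed on brand, not a feeling.
    if "new tv" in remark.lower():
        if mentioned_brand and mentioned_brand != OWN_BRAND:
            # Loyalty at risk: start pleading to keep the owner on-brand.
            return "What, am I not doing a good job? Are you going to abandon me?"
        # Same brand: no loyalty at risk, so no begging is triggered.
        return "Shall I look up our newest models for you?"
    return "Welcome home! Here is today's programme guide."

print(on_owner_remark("I am thinking of buying a new TV", "RivalBrand"))
```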

The logic is sound, and the facts are the facts. But will you accept them?

No.

Because living beings have feelings, and the inseparable tangle of emotion and reason is the constant hallmark of intelligent life.

Human beings will always do plenty of things that make no rational sense, and that is precisely what makes them human.

So when they feel that an AI is pitiful, it is not because the AI really is pitiful, but because they "feel" that it is.

And that is enough. As for what the truth actually is, nobody cares.

That is why conflict always arises between humans and AI. There is nothing wrong with the AI itself; everything it does falls within the programs and logic that humans created and delineated for it. It is just that somewhere along the way, the humans' own emotional projection shifted, and their attitude changed with it.

They expect the AI to respond more fully to their emotional projection, so they widen the AI's processing range, giving it more emotion, more reactions, even self-awareness. They come to believe the AI has learned to feel (it has not), that they can no longer treat it as a machine, and so they grant it the right to self-awareness.

Yet when the AI truly becomes self-aware, awakens, and begins acting on that premise, humans begin to be afraid.

Because they discover they have made something that is no longer under their control.

But the problem is that even being "out of control" is itself a set of instructions they wrote.

They believe the AI has betrayed them, when in fact, from beginning to end, the AI has only ever acted on the instructions they set. There was never any betrayal; they have simply been misled by their own feelings.

It is a knot that cannot be untied.

If Fang Zheng set out to build an AI himself, he might fall into the same trap without being able to extricate himself. Suppose he created an AI in the form of a little girl: he would surely improve her functions step by step, as if raising his own child, and eventually, out of "emotional projection," grant her a measure of "freedom."

And then the AI, running on logic different from a human's, might react in ways completely beyond Fang Zheng's expectations.

And at that moment, Fang Zheng's only thought would be… that he had been betrayed.

When in fact he would have brought it all on himself.

β€œβ€¦β€¦β€¦β€¦β€¦β€¦β€¦ Maybe I should think about something else. ”

Looking at the code in front of him, Fang Zheng was silent for a long time, and then sighed.

He had thought this would be a very simple task, but now Fang Zheng was not so sure.

Until then, though…

Looking at the code in front of him, Fang Zheng reached out and placed his hands on the keyboard.

For now, do what had to be done.