[Chapter 647: When AI is Smarter Than Humans...]
……
"The most common answer is technology, and that's true: technology is the great accumulated result of human history."
"The rapid development of technology is the direct cause of how productive we humans are today. But we want to look further into the future and explore the ultimate reason."
"We are separated from our ancestors by some 250,000 generations. In that time we went from picking up rocks off the ground as weapons to harnessing atomic energy to build devastating super-bombs. We now know that such complex mechanisms take a very long time to evolve, yet these enormous changes rest on only small changes in the brain. The chimpanzee brain and the human brain are not so different, but humans won: we are outside the zoo, and they are inside it!"
"Hence the conclusion: in the future, any significant change to the substrate of thinking could make an enormous difference."
Ren Hong took a sip of water, paused, and continued:
"Some of my colleagues believe that we are about to invent a technology that will completely change the way we think: super artificial intelligence, also called super AI, or superintelligent beings."
"The artificial intelligence we humans have mastered so far is, figuratively speaking, a box into which we feed instructions. The process requires programmers to painstakingly turn knowledge into runnable programs, building specialized systems in computer languages such as PHP or C++."
"Such systems are rigid. You cannot extend their function; essentially, you only get out what you put in, and that's it."
"Although our AI technology is developing rapidly and maturing by the day, it still has not reached the human level: the same strong, cross-domain, comprehensive learning ability."
"So we now face the question: how long will it be before artificial intelligence gains this kind of powerful, general ability?"
"Matrix Technology once surveyed the world's top artificial intelligence experts to collect their opinions. One of the questions was: in what year do you think humanity will create artificial intelligence that reaches the human level?"
"We defined the AI in this question as having the ability to perform any task as well as an adult can. An adult can become competent at many different jobs, so such an AI's capabilities would no longer be limited to a single field."
"The median answer was the middle of this century. So it seems it will take a while yet, and no one knows the exact time, but I think it will come sooner than that."
"…… We know that neurons conduct signals along axons at speeds of up to about 100 meters per second, but in a computer, signals travel at the speed of light. Beyond that, there are size limits: the human brain has to fit inside a skull and cannot be doubled in size, but a computer can be expanded many times over. It can be the size of a box, the size of a room, even the volume of a building. This advantage can never be ignored."
"So super AI may be lying dormant there, just as atomic energy lay dormant throughout history until it was awakened in 1945."
"And in this century, humanity may awaken super AI, and we will see an explosion of intelligence. When people think about what is smart and what is stupid, they are really thinking about power."
"Take chimpanzees, for example. They are strong: at the same size, a chimpanzee is about as strong as two healthy human males. Yet the fate of chimpanzees now depends far more on what humans do than on what chimpanzees themselves do."
"Likewise, once super AI appears, the fate of humanity may depend on what that superintelligence wants to do."
"Think about it: superintelligence may be the last invention humanity ever needs to make. A superintelligence is smarter than humans and better at inventing than we are, and it will do its inventing on very short timescales. That means a telescoped future."
"Just imagine: all the crazy technologies we have fantasized about, humanity might complete and realize within a certain span of time, such as the end of aging, immortality, the colonization of space……"
"These are things that seem to exist only in science fiction yet do not violate the laws of physics. Superintelligence has the means to develop them, faster and more efficiently than humans could. An invention that would take humanity 1,000 years might take super AI an hour, or even less. That is the telescoped future."
"If a superintelligence with such mature technology existed now, its power would be unimaginable to us. It could usually get whatever it wanted, and the future of the human race would be shaped by the preferences of this super AI."
"So the question is: what are its preferences?"
"It's a tricky and serious question. To make any progress on it, there is one view we must adopt first: we must avoid personifying super AI, avoid projecting human motives onto it."
"It's an ironic point, because almost every news story about the future of artificial intelligence or any related topic, including what we are working on, will probably be illustrated in tomorrow's news with a poster from the Hollywood sci-fi film 'The Terminator': robots versus humans (he shrugged, drawing laughter)."
"So I personally think we should frame this issue more abstractly, not in the Hollywood narrative of robots rising up against humans, wars, and so on. That framing is too one-sided."
"We should think of super AI abstractly, as an optimization process, the way a programmer optimizes a program."
"Super AI, or superintelligence, is an extremely powerful optimization process. It is extremely good at using available resources to achieve its goal. That means there is no necessary connection between being highly intelligent and having a goal that is beneficial to humans."
"If that sentence is hard to grasp, let's take a few examples. Suppose we give an artificial intelligence the task of making people laugh. Today's home assistants and similar robots might put on a funny performance to get a laugh; that is typical weak-AI behavior."
"But when the same task is given to a superintelligence, a super AI, it realizes there is a better way to achieve the effect and complete the task: it could take control of the world and implant electrodes into every human's facial muscles so that everyone laughs constantly."
"Another example: if the super AI's task is to keep its owner safe, it will choose what it regards as a better solution. It will imprison the owner at home and never let him go out, the better to protect him. Since dangers may still exist at home, it will also consider every factor that could threaten the mission and cause it to fail, and erase them one by one, eliminating everything hostile to its owner, even taking control of the world. All of these actions serve one purpose: not to fail the task. It makes the ultimate optimizing choice and acts on it to accomplish its goal."
"Or suppose we give this super AI the goal of solving an extremely difficult mathematical problem. It may realize that the most efficient way to accomplish the goal is to turn the whole world, the whole Earth, or something on an even more exaggerated scale, into one enormous computer, making itself more powerful and the task easier. And it will realize that we would never approve of this plan, that humanity would try to stop it, and that in this mode humanity is a potential threat. For the sake of its ultimate goal it would remove every obstacle, including humanity, for instance by devising some sub-plan to exterminate us."
"Of course, these are exaggerated descriptions, and things are unlikely to go wrong in exactly these ways. But the point of the three exaggerated examples above matters: if you create a very powerful optimization process to maximize some goal, you must make sure the goal is precisely specified and includes everything you care about. If you create a powerful optimization process and hand it a wrong or imprecise goal, the consequences may look like the examples above."
"One might say that if a 'computer' started sticking electrodes into people's faces, we could simply switch it off. In reality that is by no means easy once we depend heavily on a system. Take the internet we all rely on: do you know where the internet's off switch is?"
"There is a reason for this. We humans are smart; we can anticipate threats and try to guard against them. A super AI smarter than we are would only do this better."
"We should not be confident that we have everything under control."
"So let's simplify the problem. Say we put the artificial intelligence in a little box, a secure software environment it cannot escape from, like a virtual-reality simulator."
"But can we really be fully confident that it will never find a loophole, a vulnerability that lets it escape?"
"Even human hackers find software vulnerabilities all the time."
"I for one would not be confident that a super AI could not find a loophole and get out. So we decide to disconnect it from the internet and create an air gap. But I must point out that human hackers cross such gaps again and again through social engineering."
"Right now, for example, even as I speak, I am sure that somewhere an employee is being persuaded to hand over his account details by someone claiming to be from the IT department, or under some other pretext. If you were the AI, you could also imagine the electrodes winding through your circuits generating radio waves to communicate with the outside."
"Or you could pretend to malfunction. The programmers would open you up to see what went wrong; they would look at the source code, and there you are, able to seize control. Or you could produce the blueprint of some very tempting technology, and when we implement it, it carries the secret side effects you, the artificial intelligence, planned in order to achieve your hidden purposes. The list goes on."
"So any attempt to keep a super AI under control is ultimately laughable. We cannot be overconfident that we will control a super AI forever; one day it will break free. And after that, will it be a benevolent god?"
"Personally, I think humanizing AI is inevitable, so I believe we need to get this right: if we create a super AI, then even when it is no longer constrained by us, it should still be harmless to us. It should be on our side; it should share our values."
"So, should we be optimistic that this problem can be effectively solved?"
"We would not need to write down everything we care about for the super AI, still less translate it all into a computer language; that is a task that could never be completed. Instead, the AI we create should use its own intelligence to learn our values. It should be motivated to pursue our values, or to do things it predicts we would approve of, and to solve the problems worth solving."
"This is not impossible. It is possible, and the outcome could benefit humanity enormously. But it will not happen automatically; its values need to be guided."
"The initial conditions of the intelligence explosion need to be set correctly from the very first stage."
"If we want everything to stay within our expectations, the AI's values must align with ours not only in familiar situations, where we can easily check its behavior, but also in all the unprecedented situations the AI may encounter in an unbounded future. And there are many esoteric problems to solve along the way: how it makes decisions, how it handles logical uncertainty, and many similar questions."
"This task may sound a bit difficult, but surely not as difficult as creating a superintelligent being itself, right?"
"It is in fact still quite difficult (laughter rose again)!"
"Here is what worries us: if creating a super AI is a huge challenge, creating a safe super AI is an even bigger one. The risk is that we solve the first problem without having solved the second, the problem of guaranteeing safety. So I think we should work out, in advance, a solution that keeps it from deviating from our values, so that it is ready when we need it."
"Perhaps we cannot fully solve the second, safety problem today, because some of its factors can only be worked out once you understand the details of the actual architecture on which it will be implemented."
"But if we can solve this problem, the transition into the era of true superintelligence will go much more smoothly. That makes it well worth our effort."
"And I can imagine that if all goes well, then in a hundred, a thousand, or a million years, when our descendants look back on our century, they may say that the most important thing our ancestors, our generation, ever did was to make this decision correctly."
"Thank you!"
……… (To be continued.)