123. Targeted reporting
"Just yesterday, Google released its latest AI technology, FaceGAN.
At almost the same time, a website called 'These People Don't Exist' appeared.
Readers may suspect this is an elaborate prank, because the portraits on it look so real and lifelike.
Lovely young children, beautiful ladies, handsome gentlemen, elderly people weathered by the years: none of them seem any different from the people in our lives.
But it turns out that these images were never photographed at all; they were generated by the most cutting-edge artificial intelligence technology.
We should feel deep concern about this.
FaceGAN is a portmanteau of two terms, Face and GAN. GAN, the generative adversarial network, was proposed two months ago by Meng Fanqi, a young artificial intelligence researcher from China.
Two months ago, GAN was just a groundbreaking idea in the field of artificial intelligence.
Two months later, FaceGAN, built for face tasks, has already demonstrated astonishing generation quality, and its momentum shows no sign of slowing.
At this rate of development, it's reasonable to believe that anyone with a computer and the internet can create photorealistic photos and videos that show people saying and doing things they don't actually say or do.
Anything, no matter how ridiculous, can have [evidence] to back it up.
Despite its impressiveness, the current FaceGAN technology still falls short of a real high-definition photograph: a closer look can often reveal that an image is AI-generated.
But the technology is advancing at an alarming rate. Experts predict that before long, people will no longer be able to distinguish AI-generated content from real images."
This is a Forbes report, and it is generally quite pertinent.
Although the report was overly optimistic about the speed of AI's subsequent development, that was an ordinary layman's mistake, entirely understandable.
But on CNN's side, the style of the report is completely different.
One section in particular was very aggressive.
"The first use case where this kind of generative technique will be widely adopted, and new technology is usually like this whether you want it or not, is going to be pornography.
Generated pornographic content is almost always non-consensual, and from the darker corners of the web such generative techniques can spread from pornography into politics and cause even greater chaos.
If you can show people fake content that they believe to be [real], it doesn't take much imagination to understand the harm that can be done.
Imagine generated fake footage of politicians taking bribes or committing sexual assault on the eve of an election; of atrocities committed by U.S. soldiers against civilians overseas; or of President Okun-hae announcing a nuclear strike against North Korea.
In such a world, even if there is some uncertainty about whether these clips are real, the consequences can be catastrophic.
Thanks to the popularity of this technology, anyone can make footage like this: a state-sponsored actor, a political group, an independent individual.
It could distort democratic discourse, rig elections, erode trust in institutions, weaken journalism, exacerbate social divisions, endanger public safety, and inflict irreparable damage on the reputations of prominent figures, including elected officials and candidates for public office.
In the past, if you wanted to threaten the United States, you needed 10 aircraft carriers, nuclear weapons, and long-range missiles.
Today... all you need is the ability to make a highly realistic fake video, one that could ruin our elections, plunge our country into a massive internal crisis, and profoundly weaken us.
These things are in the near future.
If we can't trust the video, audio, images, and information collected from around the world, it will be a serious national security risk.
Whether a given image or video is real will hardly matter. Powerful generative technology will make it increasingly difficult for the public to distinguish what is real from what is fake, and political actors will inevitably take advantage of the situation, with potentially devastating consequences."
Meng Fanqi was almost numb by the time he finished reading. No wonder Trump liked to say that CNN was fake news.
This was outrageous: a technique that merely generated low-resolution facial images had been painted by CNN as something more sinister than aircraft carriers.
What used to require "10 aircraft carriers, nuclear weapons, and long-range missiles" now takes nothing more than the ability to fake a video?
By that logic, if he, Meng Fanqi, kept working on artificial intelligence for another two years, he could set off a machine-intelligence crisis and topple the United States single-handedly, right?
The article said nothing serious, never once touched the technical substance, and did nothing but peddle anxiety.
Small wonder Meng Fanqi's blood pressure was climbing.
The Wall Street Journal's report was a bit more technical:
"The core technology that makes it possible to generate such realistic images is generative adversarial networks, which were unveiled by Meng Fanqi in October 2013.
The godfathers of the AI world, Hinton and Bengio, both praised the idea and called it the most interesting idea of the last decade.
Before the advent of GANs, neural networks were good at classifying existing content, such as language, speech, and images, but they were hopeless at creating new content.
Meng Fanqi not only endowed the neural network with the ability to perceive, but also gave it the ability to create.
Meng's conceptual breakthrough was to use two independent neural networks to build GANs – one called a "generator" and the other a "discriminator" – and pit them against each other.
Starting from a given dataset (say, a collection of face photos), the generator produces new images that are mathematically similar to the existing ones at the pixel level. Meanwhile, the discriminator is fed photos without being told whether they come from the original dataset or from the generator's output; its task is to identify which photos are synthetic.
As the two networks are pitted against each other over and over, the generator trying to fool the discriminator and the discriminator trying to expose the generator's forgeries, each hones the other's abilities. Eventually the discriminator's classification accuracy drops to 50%, no better than random guessing, which means the synthesized photos have become indistinguishable from the originals.
And the same holds in our real world: once someone finds a way to identify generated fake content, the generators can adapt very quickly. It is a cat-and-mouse game, and our future confrontation with generated fakery will, just like the GAN method itself, only keep making generative models stronger."
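The adversarial loop the article describes can be sketched concretely. The toy below is purely illustrative (an assumption of this text, not anything from the report or from any real implementation): both "networks" are shrunk to single affine units, the dataset is a one-dimensional Gaussian centered at 4, and the gradients are written out by hand. The structure, however, is the recipe the article outlines: the discriminator follows the logistic-loss gradient (prediction minus label), and the generator follows the loss back through the discriminator to learn to fool it.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_real(n):
    # The "dataset": samples from a Gaussian centered at 4.
    return [random.gauss(4.0, 1.0) for _ in range(n)]

# Generator G(z) = a*z + b turns noise z ~ N(0, 1) into fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) estimates the probability that x is real.
w, c = 0.0, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    real = sample_real(batch)
    fake = [a * random.gauss(0, 1) + b for _ in range(batch)]

    # Discriminator step: logistic-loss gradient w.r.t. the logit is (D(x) - label).
    gw = gc = 0.0
    for x, label in [(x, 1.0) for x in real] + [(x, 0.0) for x in fake]:
        g = sigmoid(w * x + c) - label
        gw += g * x
        gc += g
    w -= lr * gw / (2 * batch)
    c -= lr * gc / (2 * batch)

    # Generator step: minimize -log D(G(z)), chaining the gradient through D.
    ga = gb = 0.0
    for _ in range(batch):
        z = random.gauss(0, 1)
        g = (sigmoid(w * (a * z + b) + c) - 1.0) * w
        ga += g * z
        gb += g
    a -= lr * ga / batch
    b -= lr * gb / batch

# After training, the generator's output distribution should sit near the
# real one; its sample mean drifts toward the real mean of 4.
fake_mean = sum(a * random.gauss(0, 1) + b for _ in range(5000)) / 5000
print(round(fake_mean, 1))
```

In this miniature version, "discriminator accuracy collapsing to 50%" shows up as the discriminator's weight being pushed back toward zero once the fake samples overlap the real ones, at which point the generator's gradient (which is proportional to that weight) also vanishes: the cat-and-mouse game reaches a stalemate.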
"Whoa, this one is a pure philosopher." Meng Fanqi was genuinely taken aback after reading it; he had not expected that final flourish himself.
After browsing Twitter a little longer, Meng Fanqi figured out why the technology had suddenly attracted so much attention.
It turned out that a performance artist, after visiting the [These People Don't Exist] website, had picked a few pictures from it as avatars and taken them into live online chats.
Many of the people who talked with him commented on the avatars, but not one of them doubted that the photos were real.
Under the watchful eyes of millions of melon-eating onlookers, FaceGAN's capability was hyped to a height it did not yet deserve.
As the first author of both GAN and FaceGAN, Meng Fanqi was now trending on Twitter.
Countless questions and @mentions left him dazzled, and even the mainstream outlets that had published those articles reached out for interviews, through Twitter messages or through their contacts at Google.