AI models that generate fake faces can be rewound to reveal the real faces they were trained on.


The work raises some serious privacy concerns. “The AI community has a misleading sense of security when sharing trained neural network models,” says Jan Kautz, vice president of learning and perception research at Nvidia.

In theory, this kind of attack could be applied to other data tied to an individual, such as biometric or medical data. On the other hand, Webster points out that people could also use the technique to check whether their data has been used to train an AI without their consent.

An artist could check whether their work had been used to train a GAN in a commercial tool, he says: “You could use a method like ours for evidence of copyright infringement.”

The process could also be used to make sure GANs do not expose private data in the first place. Before releasing its creations, a GAN could check whether they resemble real examples from its training data, using the same technique developed by the researchers.
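As a rough illustration of that kind of check (not the researchers’ actual pipeline), one could compare each generated image against the training set in the feature space of a pretrained classifier and flag anything that sits suspiciously close to a real example. The model choice, the function names, and the distance threshold below are all assumptions made for the sketch.

```python
# Minimal sketch, assuming a pretrained feature extractor and an arbitrary
# distance threshold; flags GAN samples that land very near a training image.
import torch
import torchvision

feature_net = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT
)
feature_net.fc = torch.nn.Identity()  # keep penultimate-layer features
feature_net.eval()


@torch.no_grad()
def embed(images: torch.Tensor) -> torch.Tensor:
    """Map a batch of (N, 3, 224, 224) images to feature vectors."""
    return feature_net(images)


@torch.no_grad()
def too_close_to_training(generated, training, threshold=5.0):
    """True for each generated image whose nearest training image
    (in feature space) is closer than `threshold` (illustrative value)."""
    g = embed(generated)        # (G, D)
    t = embed(training)         # (T, D)
    dists = torch.cdist(g, t)   # pairwise L2 distances, shape (G, T)
    return dists.min(dim=1).values < threshold
```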

But all of this assumes that you can get hold of the training data, says Kautz. He and his colleagues at Nvidia have come up with a different way to expose private data, including images of faces and other objects, medical data, and more, that does not require access to the training data at all.

Instead, they developed an algorithm that can re-create the data a trained model has been exposed to by reversing the steps the model goes through when processing that data. Take a trained image-recognition network: to identify what is in an image, the network passes it through a series of layers of artificial neurons, with each layer extracting different levels of information, from abstract edges, to shapes, to more recognizable features.
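To make that layered processing concrete, here is a small sketch that records the intermediate activations an off-the-shelf image-recognition network computes as an image passes through it. The layer names (“layer1”, “layer3”) belong to torchvision’s ResNet and are used only as an example of early versus later stages.

```python
# Sketch: capture what a trained recognition network computes at different
# depths. Early layers hold edge-like features; later layers hold more
# abstract ones. Illustrative only.
import torch
import torchvision

model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT
).eval()

captured = {}


def save_activation(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook


model.layer1.register_forward_hook(save_activation("early"))  # low-level features
model.layer3.register_forward_hook(save_activation("late"))   # higher-level features

image = torch.rand(1, 3, 224, 224)  # placeholder input image
with torch.no_grad():
    logits = model(image)

print({name: act.shape for name, act in captured.items()})
```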

Kautz’s team found that they could interrupt a model in the middle of these steps and reverse its direction, re-creating the input image from the model’s internal data. They tested the technique on a variety of common image-recognition models and GANs. In one test, they showed that they could accurately re-create images from ImageNet, one of the best-known image-recognition datasets.
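The general flavor of such “rewinding” can be sketched as generic feature inversion: start from noise and optimize an input until the model’s intermediate activation matches the activation recorded for the original image. This is not Nvidia’s exact algorithm; the function name, step count, and learning rate are assumptions.

```python
# Sketch of inversion from internal data: recover an input whose intermediate
# activation matches a recorded target. `front_half` is the portion of the
# network up to the interruption point (assumed to be available).
import torch


def invert_from_activation(front_half, target_act, steps=500, lr=0.05):
    guess = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
    opt = torch.optim.Adam([guess], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Match the model's internal data at the interruption point.
        loss = torch.nn.functional.mse_loss(front_half(guess), target_act)
        loss.backward()
        opt.step()
        guess.data.clamp_(0, 1)  # keep pixel values in a valid range
    return guess.detach()
```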

Images from ImageNet (above) alongside images re-created by rewinding a model trained on ImageNet (below)

As with Webster’s work, the re-created images closely resemble the real ones. “We were surprised by the final quality,” says Kautz.

The researchers argue that this kind of attack is not merely hypothetical. Smartphones and other small devices are starting to use more AI. Because of battery and memory constraints, AI models are sometimes only half-processed on the device itself, with the half-processed data sent to the cloud for the final computing crunch, an approach known as split computing. Most researchers assume that split computing won’t reveal any private data from a person’s phone because only the AI model is shared, says Kautz. But his attack shows that this isn’t the case.
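A minimal sketch of split computing, under the assumption of an off-the-shelf ResNet and an arbitrary split point: the device runs the first layers and ships the intermediate activation to the cloud, which finishes the computation. That shipped activation is exactly the kind of internal data the inversion attack above works from.

```python
# Sketch: split a recognition model into a device half and a cloud half.
# The model and split point are illustrative, not a real deployment.
import torch
import torchvision

full_model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT
).eval()

layers = list(full_model.children())  # conv1 ... avgpool, fc
device_half = torch.nn.Sequential(*layers[:6])  # runs on the phone
cloud_half = torch.nn.Sequential(               # runs in the cloud
    *layers[6:-1], torch.nn.Flatten(), layers[-1]
)

image = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    activation = device_half(image)  # this tensor leaves the device
    logits = cloud_half(activation)  # final classification in the cloud
```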

Kautz and his colleagues are now working on ways to prevent models from leaking private data. “We wanted to understand the risks so we can minimize vulnerabilities,” he says.

Even though they use quite different techniques, he thinks his work and Webster’s complement each other well. Webster’s team showed that private data could be found in the output of a model; Kautz’s team showed that private data could be revealed by going in reverse, re-creating the input. “Exploring both directions is important to come up with a better understanding of how to prevent attacks,” says Kautz.


