At the CIPR Conference, MWS AI presented the results of a survey in which 1,600 Russians over the age of 18 were asked to identify which images were generated by a neural network and which were real photographs. Of the 10 images presented, 4 were AI-generated and 6 were real.
Key findings:
- In every deepfake recognition test, a majority of respondents (62% to 80%) said that images created by a specialized face-generation AI were real.
- Out of 6 real images, only 3 were recognized as such by the majority of respondents.
Source: MTS AI
Three of the four AI images were generated by a specialized neural network designed to create human deepfakes, while the fourth was produced by a general-purpose large language model from a text description. The survey showed that the image generated by the general-purpose model was the one most often recognized as fake: 76% of respondents said it was AI-generated.

The recognition rate was significantly lower for the images produced by the specialized face-generation network. Most respondents mistook all three of these AI-generated portraits for photographs of real people, with fewer than 40% correct answers in each case; one of the deepfakes was judged to be a photograph of a real person by 81% of respondents. The gap can be explained by the models' specialization: in recent years, neural networks built specifically to generate human faces have learned to imitate skin texture, facial symmetry and the transitions between light and shadow so well that the resulting image is virtually indistinguishable from a real photo.

For comparison, the VisionLabs deepfake detector correctly identified all of the AI-generated images.