Specialists from MTS AI have developed a neural network assistant that automatically detects a wide range of undesirable content in videos. Previously, all checking was done manually by employees. As users uploaded more and more videos, however, moderation became an extremely labor-intensive task, so NUUM decided to enlist neural networks.
The AI assistant works as a full-fledged moderator, performing the initial screening of short videos on the platform. The neural network moderates three times faster than people: it checks up to 6,000 videos per day, while a human employee manages only 2,000. Department heads then receive a file with time codes, violation categories, and links to the specific frames.
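The article does not specify the report's file format. As a rough, purely hypothetical illustration, one entry in such a file might look like the Python dictionary below; every field name and value here is an assumption, not a documented NUUM schema.

```python
# Hypothetical shape of one report entry; the real format is not public.
report_entry = {
    "video_id": "example-123",         # assumed video identifier
    "timecode": "00:01:37",            # when the suspect frame occurs
    "category": "prohibited_symbols",  # violation category (illustrative)
    "frame_url": "https://example.com/frames/example-123_0097.jpg",  # link to frame
}
```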
The system created by MTS AI recognizes prohibited content with over 90% accuracy. This lets it filter out most videos that contain no forbidden objects and direct employees' attention to frames with suspicious content. The AI only records violations and does not block videos; the final decision remains with the platform's moderators. The high detection accuracy was achieved by training the model on a dataset of 700,000 images of prohibited content.
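As a minimal sketch of this record-but-don't-block logic, the snippet below flags frames whose detections exceed a confidence threshold for human review and passes everything else; the threshold, field names, and labels are assumptions for illustration, not MTS AI's actual values.

```python
# Illustrative triage: flag suspicious frames for moderators, never block.
REVIEW_THRESHOLD = 0.5  # hypothetical confidence cutoff

def triage(detections):
    """Decide whether a video needs human review.

    `detections` is a list of dicts such as
    {"frame": 97, "label": "weapon", "confidence": 0.83},
    a hypothetical output of the recognition model.
    """
    flagged = [d for d in detections if d["confidence"] >= REVIEW_THRESHOLD]
    if flagged:
        # The AI only records the violation; moderators make the final call.
        return {"status": "needs_review", "frames": flagged}
    return {"status": "passed", "frames": []}
```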
The AI moderator relies on parallel video processing. Files are split into frames, which are grouped into batches and distributed among several streams for object recognition. The streams run in parallel to use resources efficiently, and afterwards the results are merged for further use.
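The snippet below is a simplified sketch of such a pipeline, not MTS AI's implementation: it decodes a file into frames with OpenCV, groups the frames into batches, runs a placeholder detector over the batches in a thread pool standing in for the article's "streams", and merges the results into time-coded findings. `detect_objects`, the batch size, and the worker count are all assumptions.

```python
# Sketch of parallel frame moderation, assuming opencv-python is installed.
import concurrent.futures
import cv2

BATCH_SIZE = 32  # frames per batch (illustrative)

def detect_objects(frame):
    """Placeholder for the real recognition model; would return a list of
    (label, confidence) pairs for any prohibited objects in the frame."""
    return []  # stub: production code would run model inference here

def process_batch(batch):
    """Run recognition on one batch, keeping frame indices so results
    can be mapped back to time codes afterwards."""
    return [(idx, detect_objects(frame)) for idx, frame in batch]

def moderate_video(path, workers=4):
    # 1. Split the file into frames, remembering each frame's index.
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append((idx, frame))
        idx += 1
    cap.release()

    # 2. Group frames into batches and distribute them across workers.
    batches = [frames[i:i + BATCH_SIZE]
               for i in range(0, len(frames), BATCH_SIZE)]
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        for batch_result in pool.map(process_batch, batches):
            results.extend(batch_result)

    # 3. Merge everything and convert frame indices to time codes.
    return [{"timecode": idx / fps, "detections": dets}
            for idx, dets in sorted(results) if dets]
```

In production, CPU-bound inference would more likely use processes or GPU batching than Python threads; threads are used here only to keep the sketch self-contained.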
“Modern neural networks can recognize dangerous content not only in movies and TV series but also in stream recordings. In the future, they may also be able to monitor live broadcasts and voice content in real time. We are working to make effective content moderation methods available to our colleagues and are constantly improving algorithms and datasets,” explained Semyon Galushkin, project manager at MTS AI.