Lately, there has been an uptick in spoofing: cases where criminals use a whole range of technologies to pose as other people and swindle their victims out of money or confidential data. Ordinary people, celebrities and representatives of large corporations can all fall victim to this type of fraud. Let's take a look at how neural networks help counteract that threat and what solutions MWS AI can offer in this area.
Problem
The spread and increasing accessibility of generative neural networks have led to their widespread use for fraudulent purposes. In particular, criminals can send messages while pretending to be official representatives of companies or public offices (the Interior Ministry, the FSS, microfinance organizations and so on), often using a synthesized voice to imitate the speech of their victims' colleagues, family members, friends or other people close to them. Furthermore, if criminals gain access to a person's messenger accounts, they can use neural networks to imitate that person's communication style and send messages to their contacts for nefarious purposes.
Aside from direct financial losses, this type of fraud can damage a company's goodwill. Jumio estimates, for instance, that 67% of clients worry whether their bank is doing enough to protect them from deepfakes, and 75% are willing to switch banks if that protection proves insufficient.
Solutions by MWS AI
AI-based solutions like deepfake detectors are already used extensively to counteract fraud. They can be trained to detect synthetic or fake content using datasets that contain artificially generated voices, images or other materials.
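To illustrate the training idea, here is a minimal, purely illustrative sketch: a toy linear classifier learns to separate "real" from "synthetic" samples described by hand-picked feature vectors. The features and the perceptron learner are assumptions for demonstration only; production detectors use deep networks over raw audio or images.

```python
# Toy sketch: train a linear classifier on labeled data where
# real samples carry label 0 and synthetically generated ones label 1.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a simple perceptron on (feature vector, label) pairs."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:  # update weights only on misclassification
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical features of a voice clip, e.g. [spectral flatness, pitch variance].
real = [[0.2, 0.9], [0.3, 0.8], [0.25, 0.85]]   # natural speech
fake = [[0.8, 0.2], [0.9, 0.1], [0.85, 0.15]]   # synthesized speech
w, b = train_perceptron(real + fake, [0, 0, 0, 1, 1, 1])

print(predict(w, b, [0.88, 0.12]))  # flat, monotone clip → 1 (synthetic)
```

Real detectors follow the same supervised scheme, just with far richer features and models: the dataset pairs authentic material with generator output, and the network learns the statistical artifacts that synthesis leaves behind.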
In particular, MWS AI, together with VisionLabs, released a deepfake detector that protects users from spoofing attacks; it is capable of recognizing all major types of fake images and videos, from face replacement or synthesis to fully generated media files. The anti-spoofing module of the Audiogram speech synthesis and recognition platform can, in turn, detect AI-generated audio streams created for fraud or misinformation. The Cotype large language model by MWS AI is trained to detect signs of fraudulent activity or intent.
Solutions like these are already in high demand in the telecom industry, on social networks and media platforms, as well as in education, HR and recruiting. They help brands prevent theft and reputational damage by detecting fake messages sent in the company's name and by verifying the authenticity of documents, interview videos and other materials.
MWS AI technologies: practical use options
MWS AI offers its clients an omni-channel fraud detection system that covers their apps, websites, and messengers. A user can upload a suspicious message, a conversation screenshot or a recorded phone call to the system, and the AI service analyzes the materials and indicates how likely fraud is in that case, listing the reasons for its findings. Those reasons include:
- a plea for financial aid from someone you know;
- a link to a third-party resource;
- a request to act in a certain way (follow a link, vote in a poll, etc.);
- emotional pressure (e.g., a plea to help someone who has been in a traffic accident);
- a request for urgent transfer of money;
- a request for confidential banking data;
- recording defects and other signs that the voice has been cloned.
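The text-based signals above can be sketched as a simple rule scan. The patterns and weights below are illustrative assumptions, not the actual MWS AI logic, which relies on an LLM rather than regular expressions:

```python
import re

# Hypothetical fraud signals, mirroring the reasons listed above.
SIGNALS = [
    (r"https?://\S+",                       "link to a third-party resource"),
    (r"\b(urgent|immediately|right now)\b", "request for urgent action"),
    (r"\b(transfer|send) (me )?money\b",    "request to transfer money"),
    (r"\b(card number|cvv|pin code)\b",     "request for confidential banking data"),
    (r"\baccident\b",                       "emotional pressure"),
]

def scan_message(text):
    """Return matched fraud signals and a naive probability estimate."""
    reasons = [label for pattern, label in SIGNALS
               if re.search(pattern, text, re.IGNORECASE)]
    # Naive scoring assumption: each matched signal adds 0.25, capped at 1.0.
    return min(1.0, 0.25 * len(reasons)), reasons

score, reasons = scan_message(
    "Mom got in an accident, urgent! Transfer money to http://pay.example"
)
print(score, reasons)  # 1.0 and four matched signals
```

An LLM-based detector generalizes far beyond such fixed patterns (paraphrases, context, tone), but the output contract is the same: a probability plus a human-readable list of reasons.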
The solution is powered by multiple MWS AI products: the Cotype large language model detects signs of fraud in texts; the Audiogram service with its anti-spoofing module transcribes voice messages and detects synthetic or cloned speech; and a fine-tuned open-source OCR model recognizes text in images and screenshots. The solution can process up to 300 messages per hour, respond to user queries within 15 seconds, and detect fraud with a success rate of up to 97%.
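The multi-model routing described above can be sketched as a dispatcher that sends each input type to the matching analyzer. The helper functions are stubs standing in for Cotype, the Audiogram anti-spoofing module and the OCR model; their names and return values are assumptions for illustration:

```python
def llm_text_check(text):           # stub for the Cotype LLM
    """Return an assumed fraud probability for a text."""
    return 0.9 if "transfer money" in text.lower() else 0.1

def antispoof_check(audio_bytes):   # stub for Audiogram anti-spoofing
    """Return an assumed probability that the voice is synthetic."""
    return 0.8

def ocr_extract(image_bytes):       # stub for the fine-tuned OCR model
    """Return text supposedly recognized on a screenshot."""
    return "please transfer money"

def analyze(item):
    """Route (kind, payload) to the model matching the content type."""
    kind, payload = item
    if kind == "text":
        return llm_text_check(payload)
    if kind == "audio":
        return antispoof_check(payload)
    if kind == "image":
        # Screenshots are first run through OCR, then scored as text.
        return llm_text_check(ocr_extract(payload))
    raise ValueError(f"unsupported input type: {kind}")

print(analyze(("image", b"...")))  # OCR text contains a money request → 0.9
```

This routing shape is what makes the system omni-channel: one entry point, with the content type deciding which specialized model does the work.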