Deepfakes Fuel Fraud, Posing Risk for Corporations: Swiss Re
By Diego Valverde | Journalist & Industry Analyst
Fri, 09/19/2025 - 07:43
Deepfake technology, based on AI, has become one of the most significant emerging risks for the global corporate environment, according to the latest SONAR report from Swiss Re. This technology is no longer a theoretical threat; it is now an active tool in sophisticated fraud and cyberattacks, forcing organizations to reevaluate their security protocols and face a new risk landscape.
The Swiss Re report highlights that the growing sophistication and accessibility of Generative AI tools are the main catalysts for this threat. "Deepfake technology and disinformation are emerging as significant threats, enabling sophisticated insurance fraud and cyberattacks," reads the analysis. Incidents related to deepfakes in the fintech sector increased by 700% in 2023, showcasing their rapid adoption by malicious actors to exploit vulnerabilities in business processes.
Deepfakes are synthetic multimedia content, such as images, videos, or audio, generated by deep learning algorithms. They are created mainly through Generative Adversarial Networks (GANs), in which two neural networks compete against each other: a "generator" network creates the forgeries, while a "discriminator" network tries to detect them. Through thousands of iterations, the generator learns to produce content that is increasingly realistic and, to the human eye, indistinguishable from the original.
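The adversarial loop described above can be sketched in a few lines. The toy example below, which is purely illustrative and not a deepfake pipeline, trains a one-parameter-per-role "generator" and a logistic-regression "discriminator" on 1-D data with NumPy: the discriminator learns to label real samples 1 and generated samples 0, and the generator updates its parameters to fool it. All names, hyperparameters, and the toy data distribution are assumptions for the sketch.

```python
# Toy sketch of the generator-vs-discriminator loop behind GANs (NumPy, 1-D data).
# Illustrative only: real deepfake models use deep networks on images/audio.
import numpy as np

rng = np.random.default_rng(0)

def real_samples(n):
    # "Authentic" data: points drawn from a normal distribution centered at 4
    return rng.normal(4.0, 1.0, size=n)

gen = {"a": 1.0, "b": 0.0}    # generator: x = a*z + b, with noise z ~ N(0, 1)
disc = {"w": 0.0, "c": 0.0}   # discriminator: p(real) = sigmoid(w*x + c)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

lr, n = 0.01, 64
for _ in range(2000):
    # Discriminator step: push p toward 1 on real data, toward 0 on fakes
    z = rng.normal(size=n)
    fake = gen["a"] * z + gen["b"]
    for x, y in ((real_samples(n), 1.0), (fake, 0.0)):
        p = sigmoid(disc["w"] * x + disc["c"])
        disc["w"] += lr * np.mean((y - p) * x)   # cross-entropy gradient ascent
        disc["c"] += lr * np.mean(y - p)
    # Generator step: adjust a, b so the discriminator rates fakes as real
    z = rng.normal(size=n)
    fake = gen["a"] * z + gen["b"]
    p = sigmoid(disc["w"] * fake + disc["c"])
    grad_fake = (1.0 - p) * disc["w"]            # d log p(real) / d fake
    gen["a"] += lr * np.mean(grad_fake * z)
    gen["b"] += lr * np.mean(grad_fake)

# After training, generated samples drift toward the real data's mean (near 4)
```

Each round the discriminator gets better at spotting fakes, which in turn forces the generator to produce more realistic output, the same pressure that makes mature deepfakes hard to distinguish from genuine footage.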
Initially, deepfakes were perceived as a tool for political disinformation or the manipulation of public figures' images. They have now migrated rapidly to the corporate sphere. The digitalization of operations, the adoption of remote work, and the growing reliance on digital communications have created a favorable environment for their use in social engineering attacks. Criminals leverage the inherent trust in internal communications to impersonate high-level executives, employees, or business partners with an unprecedented level of realism, overcoming the barriers of traditional phishing emails.
The implications of this technology for corporate cybersecurity are extensive and multifaceted. According to the study, the attacks are no longer limited to low-level fraud attempts. They now involve complex operations with multi-million dollar losses.
Attack Vectors and Fraud Methods
The main attack vector is identity theft for financial fraud. In these scenarios, attackers use deepfake audio or video to simulate an executive's voice or image. They then instruct an employee to make urgent and confidential fund transfers. In early 2024, an employee of a multinational company in Hong Kong authorized transfers totaling US$25 million after participating in a video conference with digital replicas of several members of management, all generated by AI.
The insurance sector is another prominent target. The Swiss Re report notes that insurers face a dual risk. On one hand, there is an increase in fraudulent claims supported by forged evidence like accident videos or proof of damage. On the other hand, insurers themselves are targets of direct attacks. This raises operational costs, as it requires implementing advanced detection technologies and specialized staff training to validate digital evidence.
Market manipulation and reputational damage represent another critical threat. "The dissemination of a fake video of a CEO announcing poor financial results, a product crisis, or improper conduct can cause a sharp drop in the company's stock value, loss of investor confidence, and irreparable brand damage," reads the report.
Hard Data on the Impact
The Regula survey "2024: Key Trends in Deepfake Detection and Prevention" reveals that 92% of companies globally experienced direct financial losses from deepfake-related incidents. Of these, 10% reported damages exceeding US$1 million.
Despite this evidence, a significant awareness gap persists. Recent studies indicate that about one in four business leaders have little or no familiarity with this technology. Furthermore, 31% do not believe that deepfakes increase the risk of fraud in their organization.
This lack of preparation is concerning, as more than half of companies admit they have not provided employees with specific training to identify or manage a deepfake attack attempt. Executives' confidence in their teams' ability to detect these threats is low. This is understandable, as the quality of forgeries is improving at an exponential rate.
The deepfake threat is expected to evolve toward more automated and scalable attacks. According to the SONAR report, it will likely combine Generative AI with other technologies to identify and exploit vulnerabilities in real time. The convergence of deepfakes with malware and ransomware attacks could lead to new forms of digital extortion.
To address this emerging risk, Swiss Re urges organizations to adopt a defense-in-depth approach that combines technology, processes, and the human element. First, companies should invest in biometric authentication and liveness detection tools that can differentiate between a real person and a digital simulation. Using blockchain technology to create immutable digital watermarks on corporate multimedia content can also help verify its authenticity.
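The watermarking idea mentioned above can be illustrated with a simple hash chain: each record commits both to a media file's fingerprint and to the previous record, so tampering with any entry breaks every later link. The in-memory ledger below is a toy stand-in for the blockchain registration the report describes; the class and method names are illustrative assumptions.

```python
# Toy "immutable watermark" registry: a hash chain of media fingerprints.
# Illustrative stand-in for blockchain-based content registration.
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    return hashlib.sha256(media_bytes).hexdigest()

class MediaLedger:
    def __init__(self):
        self.entries = []  # list of (media_hash, link_hash)

    def register(self, media_bytes: bytes) -> str:
        prev = self.entries[-1][1] if self.entries else "0" * 64
        media_hash = fingerprint(media_bytes)
        # Each link hash commits to the previous link, chaining the records
        link = hashlib.sha256((prev + media_hash).encode()).hexdigest()
        self.entries.append((media_hash, link))
        return media_hash

    def is_authentic(self, media_bytes: bytes) -> bool:
        # First verify the chain is intact, then look up the fingerprint
        prev = "0" * 64
        for media_hash, link in self.entries:
            if hashlib.sha256((prev + media_hash).encode()).hexdigest() != link:
                return False  # a stored record was tampered with
            prev = link
        return fingerprint(media_bytes) in {h for h, _ in self.entries}

ledger = MediaLedger()
ledger.register(b"official CEO statement, v1")
print(ledger.is_authentic(b"official CEO statement, v1"))  # True
print(ledger.is_authentic(b"deepfaked CEO statement"))     # False
```

Because any altered byte changes the SHA-256 fingerprint, a forged video can never match a registered original, which is the property that makes such registries useful for validating corporate media.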
Second, organizations need to strengthen verification protocols, says the company. High-risk financial transactions and requests for sensitive information must be subject to multi-factor verification processes that do not rely solely on video or voice communication. Establishing a secondary and secure communication channel to confirm such operations is crucial.
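A minimal sketch of that policy, with an illustrative threshold and channel names chosen for the example, might look like this: a high-risk transfer is blocked until it has been confirmed over at least one channel that a deepfake on a video or voice call cannot spoof.

```python
# Sketch of out-of-band verification for high-risk transfers.
# Threshold and channel names are illustrative assumptions, not a standard.
from dataclasses import dataclass, field

HIGH_RISK_THRESHOLD = 10_000  # example policy threshold, in USD

@dataclass
class TransferRequest:
    requester: str
    amount: float
    confirmations: set = field(default_factory=set)

    def confirm(self, channel: str) -> None:
        self.confirmations.add(channel)

def may_execute(req: TransferRequest) -> bool:
    if req.amount < HIGH_RISK_THRESHOLD:
        return True
    # Video or voice alone is never sufficient: require at least one
    # confirmation on an independent channel a deepfake cannot imitate,
    # e.g. a callback to a known number or a hardware token.
    out_of_band = req.confirmations - {"video-call", "voice-call"}
    return len(out_of_band) >= 1

req = TransferRequest("cfo@example.com", 25_000_000)
req.confirm("video-call")                # the (possibly deepfaked) meeting
print(may_execute(req))                  # False: video alone is not enough
req.confirm("callback-to-known-number")  # confirmed over a second channel
print(may_execute(req))                  # True
```

Under such a rule, the Hong Kong incident described earlier would have stalled at the video-conference stage, since the US$25 million transfer could not proceed without a confirmation outside the compromised channel.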
Third, continuous training and awareness are vital. The human element remains the most vulnerable link. Companies must develop ongoing training programs to educate employees about the characteristics of deepfakes. These include inconsistencies in blinking, imperfect lip-syncing, and visual artifacts. Finally, clear protocols on how to respond to a suspicious request must also be established, says Swiss Re.