
Pennsylvania CPA Journal

Spring 2025

Deepfake Detection Is Vital to Protecting Your Business

Run-of-the-mill fraud is bad enough, but when it is enhanced by artificial intelligence the risks to your firm amplify. Here are several keys for helping you detect and counter deepfakes and other AI-generated fraud risks.


by Lauren Pitonyak
Mar 14, 2025


Insightful lessons can be learned by reviewing professional liability issues. With this in mind, Gallagher Affinity provides this column for your review. For more information about liability issues, contact Irene Walton at irene_walton@ajg.com.


Counterfeit art or antiques, plagiarism, bad checks: the idea of creating realistic fakes has been with us seemingly forever. Concerns over the ease and accessibility of technology-based fakery, however, really took hold in the early 2000s. Today, with rapid advances in artificial intelligence (AI) and highly powerful computing, a majority of executives expect an increase in the number and size of deepfake attacks targeting their organizations over the next year.

These concerns are not unfounded. Cybercriminals continue to develop technology that creates high-quality impersonations, enabling them to cause significant harm on an even larger scale.

Deepfakes and AI Threats

Deepfakes are synthetic media in which a person in an existing image or video is replaced with someone else’s likeness using AI. This technology can create highly realistic and convincing audio, video, and images that are difficult to distinguish from authentic content.

The utilization of AI in cybercrimes is evolving and becoming more sophisticated. Therefore, it’s important to understand the different forms these attacks take.

For deepfakes specifically, here are a few of the methods fraudsters use.

Manipulating communications through impersonation scams – When targeting accounting firms, the most common use of deepfakes is impersonating trusted contacts. Criminals lead the target to believe they are having a real interaction with a fellow employee, their boss, or upper management in order to trick them into giving up sensitive information. Deepfake frauds can be more sophisticated than bogus emails; they can involve manipulated audio or video impersonating company executives or clients. This poses a significant risk in areas such as authorizing financial transactions or altering financial statements.

Creating misleading messages and fraudulent profiles – While the use of fake written communications and documents strays from the technical definition of a deepfake, it is still in the realm of fraud and identity theft cybercrimes and applies to this topic. Certain AI tools have an exceptional ability to analyze large information sets to help make email (phishing) and text message (smishing) attacks more targeted and personal. AI technology can also create fake identification documents, such as passports or driver’s licenses.1 Criminals can use these to commit fraud by opening bank accounts, receiving tax refunds, or obtaining loans. These deepfake-like tactics enable criminals to carry out highly personalized, sophisticated attacks with more convincing content.

Blackmailing or defaming victims – Deepfakes can create images and videos used to blackmail individuals or mislead the public. These methods are more common in extortion and media manipulation than in defrauding financial companies, but anyone can be targeted in an effort to exploit your firm. Here are a few areas of defamation:

  • Compromising videos: Threatening to release a fake video of someone doing something embarrassing or illegal unless they pay off the blackmailer.
  • False advertising: Misleading ads or endorsements.
  • Fabricated evidence: False evidence of someone engaging in illegal activity that could cause reputational damage and public backlash.

How These Methods Are Used

Criminals use the aforementioned tactics to directly steal from organizations or bypass a firm’s software or data security measures.

Extracting finances and sensitive information – By simulating a trusted individual’s voice or appearance on phone or video calls, impersonators can trick employees or other business partners into transferring funds or disclosing sensitive information.

Manipulating security procedures and bypassing authentication mechanisms – Deepfakes can be created to seem like someone making a legitimate security request for software or data access. By impersonating executives or company leaders, they can trick employees into providing login information or authentication codes.

Spear phishing, business email compromise, and whaling – As stated before, written communications are not technically deepfakes. However, technology-assisted impersonation of a trusted work colleague or business partner is a common tactic of these attack types.

Spear phishing is a method where criminals create personalized emails tailored to the victim, whether to convince the receiver to share private information, click a link, or download an attachment. This method focuses on gaining unauthorized access or installing malware.2

Business email compromise (BEC) scams are also highly targeted attacks, but they primarily focus on tricking the receiver into giving up information or money through the use of legitimate-looking communications.3

Whaling is a type of spear phishing or BEC attempt that targets high-profile individuals, such as C-suite executives and other senior leaders at your company. It is also known as CEO fraud or executive phishing.4

Increased sophistication, speed, and persistence of schemes – Criminals often automate attacks to deploy them faster and more easily. The simpler it is to carry out scams, the higher the rate at which they will be launched. With more attempts comes a higher probability of success.

How to Protect Your Business

As deepfake technology advances and impersonations become more realistic, your company must have measures in place to detect and prevent them.

Detecting deepfakes – Every employee should receive adequate cybersecurity training, including how to recognize and respond to deepfake and phishing activity. Ensure they understand the importance of working through every verification procedure thoroughly rather than skipping steps under pressure.

An educational document by the U.S. Department of Homeland Security recommends the following practices for recognizing deepfakes in communications.5 In videos and images, you should be on alert if you notice the following:

  • Inconsistent video quality, such as a blurry face with a clear background (or vice versa).
  • Changes in skin tone near the edge of the face.
  • Abnormal appearance of facial features, such as double chins, eyebrows, or edges to the face.
  • The person’s face blurs when obscured by a hand or another object.
  • Lower-quality sections throughout the same video.
  • Box-like shapes and cropped effects around the mouth, eyes, and neck.
  • Unnatural blinking movements, such as excessive or too little blinking.
  • Changes in background or lighting.
  • Contextual oddities, such as if the background is inconsistent with the subject and foreground.

For audio, you should proceed with caution if you notice:

  • Choppy sentences.
  • Varying tone inflection while speaking.
  • Odd phrasing. Did the speaker say something in a way that doesn’t make sense or is seemingly unlike the speaker?
  • Off-topic and nonsensical statements. (Are they discussing relevant topics and answering questions in a way that makes sense?)
  • Contextual oddities. (One example might be if the background noise is inconsistent with the speaker’s supposed location.)

In written communications and documents, be on the lookout for the following:6

  • Abnormal grammar and language.
  • Punctuation mistakes.
  • Asking for private data, information, or login credentials out of the blue.
  • Urgency and rushing you to reply.
  • Mathematical errors.
  • Fuzzy logos.
  • Invoice numbers that do not make sense.
  • Formatting irregularities.
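
The written-communication red flags above lend themselves to a simple first-pass screen. The sketch below is a toy illustration only; the keyword patterns and category names are assumptions chosen to mirror the checklist, not a vetted anti-phishing control, which would rely on dedicated security tooling.

```python
import re

# Illustrative patterns mirroring the red-flag checklist above.
# A real control would use vetted email-security tooling, not keyword matching.
RED_FLAGS = {
    "urgency": re.compile(r"\b(urgent|immediately|asap|right away)\b", re.I),
    "credential request": re.compile(r"\b(password|login|credentials|ssn)\b", re.I),
    "payment pressure": re.compile(r"\b(wire transfer|gift card|invoice)\b", re.I),
}

def screen_message(text: str) -> list[str]:
    """Return the names of red-flag categories detected in a message."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(text)]
```

A message that trips several categories at once (for example, an urgent request to wire funds and confirm credentials) warrants out-of-band verification before anyone acts on it.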

Implement strict procedures for data handling in the company – Access to your firm’s most sensitive data and systems should be limited to a tight, trusted circle of individuals. This lowers the chances of cybercriminals tricking someone into giving up valuable data, especially if this group is well-trained.

Multifactor authentication (MFA) – In addition to correct usernames and passwords, MFA protocols require all employees to verify their identity on a different device or through an alternate communication channel. This is part of the FBI’s “zero trust” approach to security.7

FBI SIFT – The FBI also suggests incorporating what it calls the “SIFT response” into your deepfake detection protocols.8 The acronym stands for Stop, Investigate the source, Find trusted coverage from multiple sources, and Trace the original content.

Conduct Risk Assessments

Deepfake procedures should be included in your broader cyber-risk assessments. Evaluate your employees’ ability to detect schemes and preparedness to handle such situations when (not if) they arise. 

 

1 “Artificial Intelligence, Deepfakes and the Growing Sophistication of Cyber Crime,” Risk Placement Services.

2 Reetta Sainio, “Spear Phishing vs Phishing (No-Nonsense Guide),” Hoxhunt (June 12, 2024).

3 “What Is Business Email Compromise and How to Educate Your Clients,” Nationwide (Oct. 7, 2024).

4 “What Is a Whaling Phishing Attack?” Cisco.

5 “Increasing Threat of Deepfake Identities,” United States Department of Homeland Security (June 7, 2019).

6 “Phishing, Smishing, and Vishing ... Oh My!” Georgetown University Information Security Office; and Ray Sang and Clay Kniepmann, “AI and Fraud: What CPAs Should Know,” Journal of Accountancy (May 1, 2024).

7 Roman H. Kepczyk, “Deepfakes Emerge as Real Cybersecurity Threat,” AICPA & CIMA (Sept. 28, 2022).

8 “Malicious Actors Almost Certainly Will Leverage Synthetic Content for Cyber and Foreign Influence Operations,” Internet Crime Complaint Center, Federal Bureau of Investigation, Cyber Division (March 10, 2021).


Lauren Pitonyak is an account executive with Gallagher Affinity in Mount Laurel, N.J. She can be reached at lauren_pitonyak@ajg.com.

The information herein is provided as an overview of current market risks and available coverages. It is intended for discussion purposes only and does not offer legal or client-specific risk management advice. Actual insurance policies must always be consulted for full coverage details and analysis.
