
Building a Stronger Profession Together

PICPA propels, promotes and protects our profession, its organizations and its people. We are:

 

  • a leading voice on the state and national level
  • problem solvers for cutting-edge issues
  • a meaningful resource for members

Advocating for Pennsylvania CPAs

Legislative Advocacy

A trusted voice protecting the interests of our members in Harrisburg and on a national level.

Pennsylvania CPA Foundation

Fostering the growth of aspiring accountants with educational, motivational and financial support.

Professional and Technical Standards

Integrity, accuracy, and ethics are pillars of the accounting profession.

See Recent Accomplishments

News & Updates


Supporting the CPA-PAC Is Crucial

Sep 10, 2025, 04:12 AM

CPA-PAC receives no funding from PICPA membership dues; every dollar comes from voluntary contributions. So, your participation is essential to ensuring the PICPA remains an influential advocate for the CPA profession in Pennsylvania.


Stay ahead with CPA Now

Subscribe today and never miss an update!

Member-Exclusive Content

CPAs and Generative AI: Careful Steps Needed

Generative AI was going to be the next big thing. But as accounting firms start to dig deeper into some of the cybersecurity, privacy, and accuracy issues, many have started to pump the brakes. This does not mean generative AI is going away or can’t be integrated into some workflows, but CPAs must be aware of the risks.


by Colleen S. Krcelich, CPA, Christina M. Olear, CPA, and John Peatross, CPA
Mar 10, 2025, 09:50 AM


The hype surrounding generative artificial intelligence (generative AI) has subsided a bit since the intense public interest in 2023, but that doesn’t mean the technology has faded away. According to OpenAI, the developer of ChatGPT, 92% of Fortune 500 companies were using ChatGPT as of August 2024.1 This feature discusses generative AI’s evolution, why some companies are putting on the brakes, and how others are embracing it successfully.

First, a review of some basics. Traditional AI has been around for decades. It works well with explicit rules applied to structured data. One example would be pattern recognition within existing data to make predictions through the use of decision trees, linear regression, or clustering algorithms.
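
As a minimal illustration (not drawn from the article) of this kind of rules-from-structured-data prediction, the sketch below fits a decision tree to a few made-up invoice records and predicts an expense category; the features, labels, and figures are assumptions chosen purely for illustration.

```python
# Minimal sketch of "traditional" AI: a decision tree learned from structured data.
# The features (invoice amount, vendor code) and categories are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Structured training data: [invoice_amount, vendor_code]
X = [[120.00, 1], [95.50, 1], [2400.00, 2], [1800.00, 2], [45.99, 3]]
y = ["office_supplies", "office_supplies", "equipment", "equipment", "meals"]

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)

# Predict the category of a new transaction from the learned decision rules.
print(model.predict([[2100.00, 2]]))  # expected: ['equipment']
```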

Generative AI works with existing data too, but it can create content based on both structured and unstructured data, making it much more versatile. Not only can it create content, but it can also summarize, research, change, organize, and streamline it. It is multimodal in its input and output, too, meaning it can produce written content, pictures, video, and audio. It also excels at streamlining processes, code generation, and decision support.

Generative AI’s broader capabilities are due to the complexities of the underlying algorithms. These complicated algorithms, however, come with a lack of transparency, which creates risks related to data accuracy, bias, ethics, and privacy. Businesses have also struggled to apply use cases to their specific needs and to integrate the technology successfully within their systems. Eventually, the cybersecurity, ethical, privacy, legal liability, and data accuracy risks became more apparent.

According to Gartner Inc., while 73% of CIOs had planned to invest more in AI in 2024 than they did in 2023, CFOs are skeptical, with 67% saying that digital investments have underperformed expectations.2

Per Arize AI Inc., more than half of Fortune 500 companies mentioned artificial intelligence in their May 2024 annual reports. While one in five specifically mentioned generative AI, more than two-thirds of those companies did so in reference to generative AI risks.3 These risks include data ethics and privacy, intellectual property issues, inaccuracies, and not keeping up with competitors. We will cover these and more in this feature, along with strategies to mitigate the risks.

Generative AI will continue to evolve as regulations emerge, employees are trained, company data governance policies are clarified, and security and ethical concerns are mitigated with appropriate protections. Internal, company-use-only generative AI tools will mitigate many of the risks associated with publicly available generative AI, but they are expensive and time-consuming to build. It is often the larger companies that are investing in the creation of safe models trained on specific content for employee use only. PwC, for example, created ChatPwC, a model trained on tax and regulatory sources. The tool is part of PwC’s $1 billion investment in generative AI.4

The most popular generative AI tools are OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and Microsoft’s Copilot. These are general generative AI tools, but there are more specific tools geared toward tasks such as productivity, art creation, code generation, and so on. There are also accounting and tax-specific generative AI tools, including TaxGPT, CPA Pilot, and TruePrep.ai. These tools have fees of about $1,000 per year. They claim to streamline tax research, perform tax calculations, and provide customized tax advice after scanning client tax returns. We have not used any of these tools, so we cannot attest to their accuracy, viability, or security. Certainly, these tools would need thorough vetting before being used with client or company data.

This feature focuses on publicly available generative AI tools such as ChatGPT. Below is our examination of current use cases of generative AI technology that are of particular interest to accountants, along with the associated risks and risk-mitigation strategies.

Transforming Business and Accounting Practices

Generative AI can offer businesses innovative solutions that enhance efficiency, creativity, and productivity across various domains. Here are just a few examples:

  • Customer service – AI chatbots that deliver 24/7 support, resolve queries instantly, and elevate customer experiences.
  • Marketing – Generative-AI-created content for blogs, social media, and campaigns that ensure captivating and tailored messaging.
  • Data analysis – Advanced models that make sense of complex data and generate insights to inform decisions and predict trends.
  • Product design – Generate fresh ideas and realistic prototypes while speeding up innovation cycles and processes.
  • Process optimization – Enhance supply chain efficiency and predictive maintenance to minimize downtime and operational costs.
  • Employee productivity – Draft professional emails, provide augmented customer support, and streamline repetitive tasks, shifting employees’ focus to strategic initiatives.

Generative AI – supported by research from firms such as McKinsey & Company, Deloitte, and PwC – is reshaping the accounting profession. Studies show that AI adoption in accounting can increase efficiency by up to 40% while significantly reducing errors. This transformative technology is being applied in several key areas:

  • Automation of Routine Tasks

– Report generation: Generative AI automates creating financial reports and summaries, reducing time and errors. A 2023 McKinsey study highlights that automating repetitive accounting tasks can save up to 40% of the time spent on traditional workflows.5

– Data extraction: AI tools such as optical character recognition extract data from invoices and receipts, enabling faster processing and reducing manual effort.

– Journal entry creation: Automatic categorization and generation of entries improve accuracy.

– Bank reconciliations: Generative AI compares transactions against records to resolve discrepancies (a simple sketch of this matching step appears after this list).

  • Research and Compliance

– Tax law analysis: AI-powered systems quickly interpret complex tax laws and identify potential deductions or credits.

– Accounting standards: AI tools analyze regulatory changes, update accounting practices seamlessly, and keep firms in compliance with new standards.

– Regulation monitoring: Real-time updates on legislative changes allow businesses to adapt strategies proactively.

  • Financial Analysis and Reporting

– Visualizations: Generative AI tools can enhance data presentation by converting raw data into easily interpretable dashboards.

– Pattern detection: Identifies anomalies and provides actionable insights from vast data sets.

  • Auditing

– Outlier detection: AI-powered tools can flag unusual transactions and significantly reduce audit time.

– Compliance checks: Ensures adherence to financial regulations by cross-referencing datasets with legal requirements.

– Fraud detection: Can detect potentially fraudulent activity via algorithms trained to recognize irregularities in transactional patterns.

  • Advisory Services and Client Communication

– Scenario simulations: Modeling financial outcomes under various scenarios.

– Narrative generation: Tools like ChatGPT can create client-ready summaries of financial data, ensuring accessibility and clarity.

– Email drafting: Automation of professional communication, including updates and tailored responses, saving significant administrative time.

  • Process Optimization

– Expense management: Identifies cost-saving opportunities through expense-trend analysis. Automated expense tracking systems significantly improve accuracy.

– Tax planning: Generative AI supports tax optimization by analyzing complex tax codes and regulations to ensure compliance and maximize deductions.
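
As noted in the bank reconciliations item above, the underlying step is a comparison of bank activity against ledger records. The sketch below is a minimal, purely illustrative version of that matching step using pandas; the column names and sample figures are assumptions rather than the output of any particular tool.

```python
# Illustrative bank reconciliation check: flag bank transactions that have no
# matching ledger entry (by date and amount). Column names and data are assumptions.
import pandas as pd

bank = pd.DataFrame({
    "date": ["2025-03-01", "2025-03-02", "2025-03-03"],
    "amount": [-250.00, -75.40, 1200.00],
})
ledger = pd.DataFrame({
    "date": ["2025-03-01", "2025-03-03"],
    "amount": [-250.00, 1200.00],
})

# Left-join bank activity to the ledger; unmatched rows are potential discrepancies.
merged = bank.merge(ledger, on=["date", "amount"], how="left", indicator=True)
discrepancies = merged[merged["_merge"] == "left_only"].drop(columns="_merge")

print(discrepancies)  # the -75.40 bank transaction needs human follow-up
```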

General Generative AI Risks

As you can see above, generative AI can offer significant benefits to accountants, but it also comes with risks that must be managed. They run the gamut from inadvertent employee misuse to malicious targeted cyberattacks. Some risks include ethical and privacy concerns, security risks, legal and regulatory risks, and data reliability issues.

Mitigating the risks requires robust employee training, awareness of AI-specific vulnerabilities, and implementation of strict security and compliance measures. Below are two accounting use cases for your consideration: one focused on research use and one on the potential for introduced errors.

Research Risks – The most common use of generative AI by CPAs is through services like ChatGPT as a means to conduct research, acting as an interactive Google search on technical questions. Unfortunately, the AI doesn’t say “it depends” like a real accountant would. It tends to be definitive in its pronouncements. In a profession where the devil is in the details, the risk of using these tools to conduct research lies in the fact that the output may not be complete, relevant, or timely.

  • Accuracy and Reliability: If you’ve ever written a memo regarding a technical position, you know that there is (somehow) always more to discuss and analyze when contemplating the Internal Revenue Code or generally accepted accounting principles. AI programs have a tendency to concoct responses to appease the reader. These responses look correct and thorough to an uninformed reader, but they are often missing key information or may just be incorrect. For accountants with less experience or knowledge in the area they are researching, these fantasy responses are particularly challenging because they can be a compelling read.
  • Relevance: Many technical accounting and tax questions are nuanced, requiring multiple layers of analysis to come to a reasonable conclusion. In addition to the inherent complexity of some guidance, judgment is often required when taking a certain accounting or tax position. An AI’s response will likely not factor in all the relevant information needed to reach a proper conclusion on a given issue. Accountants who use this technology for research need to understand how to ask the right questions, and when to ask more questions, to get the technology aligned with the sought-after output.
  • Timeliness: Generative AI tools produce information based on training data and the inputs provided by the user. Unless the AI is specifically trained to answer technical accounting questions, it may not be able to properly differentiate between current and outdated information within its training data. This can lead to inaccurate responses. In many cases, it is worth the time to find out what the most current guidance is and prompt the generative AI to respond with that guidance in mind. Then double check the response against the actual guidance from the IRS or the Financial Accounting Standards Board.
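
To illustrate the timeliness point, one way to reduce the risk of outdated answers is to paste the current guidance into the prompt itself rather than relying on the model's training data. The sketch below uses the OpenAI Python SDK; the model name, guidance text, and question are placeholders, and any response should still be verified against the primary source.

```python
# Illustrative sketch: ground the model in guidance you supply rather than its
# training data. Requires the openai package and an OPENAI_API_KEY environment
# variable; the model name, guidance text, and question are placeholders.
from openai import OpenAI

client = OpenAI()

current_guidance = "Paste the relevant, up-to-date IRS or FASB guidance here."

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the guidance provided. "
                "If the guidance does not address the question, say so."
            ),
        },
        {
            "role": "user",
            "content": f"Guidance:\n{current_guidance}\n\nQuestion: <your technical question>",
        },
    ],
)

print(response.choices[0].message.content)  # verify against the primary source
```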

Many senior-associate-level professionals use generative AI for research. It can be a great way to get started on a memo or to figure out where to dive deeper, but it is certainly not the only research that should be done. To mitigate these risks, professionals need to be trained in how to research without generative AI so that they don’t fall victim to the technology’s inaccuracies. As an advanced tool in an accountant’s research toolkit, generative AI can be the key to unlocking incredible efficiencies, but it can be a trap for the inexperienced. Firms that invest in training on the practical use of these tools will be able to leverage them significantly more effectively than those that don’t.

Error Risks – Generative AI is renowned for its ability to fabricate: it can generate content rife with partial or complete inaccuracies. It has been built to produce compelling-sounding content that can be difficult to fact-check unless reviewed by a content expert.

The errors could include calculation errors, rounding issues, missed anomalies, or the mismatching of categories (such as expenses). If undetected, these issues could lead to inaccurate accounting records that may result in illogical and incorrect financial reports. For example, if an accounts payable employee used generative AI to extract invoice data and code it to the appropriate cost center, just one error could cause general ledger inaccuracies if not caught.

Although gathering and processing routine information is a logical and relatively low-risk place to incorporate generative AI, mitigation strategies are still needed to safeguard the process. These strategies include human oversight, employee training, system checks, and data governance policies. Human checkpoints at various steps in the process, staffed by skilled employees, will be foundational. These employees will need continuous generative AI training to ensure proper usage and compliance with company policy. Company data governance policies will help guide the employee training objectives to ensure compliance and privacy issues are addressed. Using other systems for checks (such as Excel) can provide another layer of detection and assurance, as in the sketch that follows. These strategies will allow accountants to harness the benefits of generative AI while minimizing risks related to errors.
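
As one purely illustrative example of such a system check (shown here in Python rather than Excel), AI-extracted invoice totals can be recomputed from the line items and routed to human review when they disagree; the field names, figures, and tolerance below are assumptions.

```python
# Illustrative system check: recompute AI-extracted invoice totals and flag any
# mismatch for human review. Field names, data, and tolerance are assumptions.
extracted_invoices = [
    {"invoice_id": "INV-1001", "line_amounts": [150.00, 49.99], "ai_total": 199.99},
    {"invoice_id": "INV-1002", "line_amounts": [320.00, 18.50], "ai_total": 338.05},  # off by $0.45
]

TOLERANCE = 0.01  # allow at most a one-cent rounding difference

for inv in extracted_invoices:
    recomputed = round(sum(inv["line_amounts"]), 2)
    if abs(recomputed - inv["ai_total"]) > TOLERANCE:
        print(f"{inv['invoice_id']}: AI total {inv['ai_total']} != recomputed {recomputed}; route to human review")
```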

Another type of error risk might entail leaked data.

Let’s say a solo tax preparer loads clients’ tax returns into a generative AI tool and asks it to summarize and take notes about each return, make suggestions for improvements, and then format the results into a client summary and checklist to be shared with the client. The preparer also pastes in some client emails and asks the tool to draft replies. Unfortunately, the preparer forgets to remove personally identifiable information, such as names, Social Security numbers, addresses, and bank account information.

This scenario poses significant risks to both the accounting firm and the client. Since the information uploaded to generative AI is stored in the cloud and now resides on external servers, this could result in identity theft and fraud. This data breach would have to be reported to all clients, with a significant cost to the firm – especially if it does not have cybersecurity insurance. It will also potentially damage the reputation of the firm.

In addition, the unauthorized sharing of personal sensitive information violates various regulatory provisions, including privacy laws and Circular 230. This could expose the company to fines and legal ramifications.

These risks can be mitigated in various ways, the main one being not allowing generative AI to analyze confidential client information.

Risk Mitigation Strategies

If a company is going to use generative AI, it must establish a strict information security policy that outlines the intended (and forbidden) uses of generative AI and require training on generative AI so professionals are aware of the risks. It must also ensure the AI tool complies with data privacy regulations, such as GDPR and CCPA, and offers end-to-end encryption. Firms may also want to consider limiting access to professional AI systems or, for larger firms, allowing access only to an internally designed generative AI tool. The table below offers a few risk mitigation strategies you may want to consider.

Risk Mitigation Strategy – Example

Company Generative AI Policy that Includes ...

  • Ethical Guidelines – Establish ethical AI usage standards, including fairness, transparency, and accountability. Include provisions to prevent bias and ensure inclusivity in AI-generated content.
  • Compliance and Legal Requirements – Detail legal standards and regulations that must be adhered to. Incorporate federal agency AI best practices and outline procedures to stay updated with current laws.
  • Quality Assurance – Implement review processes for AI-generated content. Specify criteria for accuracy, reliability, and relevance, and assign responsibility for quality control.
  • Data Privacy and Security – Define protocols for safeguarding data and securing AI systems. Include guidelines for data handling, storage, and access controls.

Training and Updates

  • Company's Generative AI Policies – Develop and deliver training sessions to educate employees on proper AI usage, data security, and compliance with internal policies.
  • Continuous Updates – Provide regular employee training to stay up on AI tool updates and best practices for integrating AI into workflows.

Quality Assurance

  • Human in the Loop – Incorporate human review at multiple points, such as verifying raw data inputs, reviewing intermediate outputs (trial balances), and auditing reports or forecasts for accuracy and consistency.
  • Other Systems – Use Excel for data validation by applying conditional formatting to flag discrepancies or errors in intermediate outputs or final content.

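For the "Other Systems" row above, here is a sketch of how such an Excel check might be set up programmatically with openpyxl; the workbook layout, column choices, and one-cent threshold are illustrative assumptions, not a prescribed template.

```python
# Illustrative only: build a small workbook where column C computes the difference
# between an AI-extracted total and a recomputed total, then add a conditional-
# formatting rule that highlights any difference greater than one cent.
from openpyxl import Workbook
from openpyxl.styles import PatternFill
from openpyxl.formatting.rule import CellIsRule

wb = Workbook()
ws = wb.active
ws.append(["AI Total", "Recomputed Total", "Difference"])
ws.append([199.99, 199.99, "=ABS(A2-B2)"])
ws.append([338.05, 338.50, "=ABS(A3-B3)"])  # this row should be flagged

red_fill = PatternFill(start_color="FFC7CE", end_color="FFC7CE", fill_type="solid")
ws.conditional_formatting.add(
    "C2:C100",
    CellIsRule(operator="greaterThan", formula=["0.01"], fill=red_fill),
)

wb.save("ai_output_check.xlsx")
```
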
Future Direction

As AI continues to evolve, its integration into accounting practices will likely expand to include advanced predictive analytics, enhanced collaboration tools, and fully automated financial ecosystems. However, successful adoption requires a human-in-the-loop approach to ensure oversight that validates AI outputs and maintains accountability. Organizations should establish clear generative AI company policies to govern its use and address ethical concerns, compliance, and data security. Starting small by implementing AI in targeted, low-risk areas can help firms build confidence, refine processes, and establish methods to mitigate risks.

Generative AI empowers accountants to focus on strategic, high-value tasks, transforming their roles from transactional processors to strategic advisers. By balancing innovation with thoughtful oversight and phased implementation, this technology is not just reshaping processes but redefining the profession itself. 

 

1 Carl Franzen, “OpenAI Says ChatGPT Now Has 200M Users,” VentureBeat (Aug. 29, 2024).

2 “Get AI Ready - What IT Leaders Need to Know and Do,” Gartner Inc.

3 “The Rise of Generative AI in SEC Filings,” Arize.

4 Lindsey Wilkinson, “PwC Brings Private Generative AI Tool to Internal Employees,” CIO Dive (Aug. 15, 2023).

5 Ankur Agrawal, Ben Ellencweig, Rohit Sood, and Michele Tam, “Gen AI: A Guide for CFOs,” McKinsey & Company (Nov. 15, 2023).


Colleen S. Krcelich, CPA, is a professor of accounting and business at Pennsylvania State University – Lehigh Valley in Center Valley. She can be reached at colleen@bizsupportllc.com.

Christina M. Olear, CPA, is an accounting professor with Pennsylvania State University Brandywine in Media and is a member of the Pennsylvania CPA Journal Editorial Board. She can be reached at cmo16@psu.edu.

John Peatross, CPA, is a senior manager at Siegfried Advisory LLC in Wilmington, Del., and an adjunct instructor in the department of economics at the University of Maryland. He can be reached at jpeatross@siegfriedgroup.com.

See how you can benefit.

Advocacy

Ensure that your interests are represented in Harrisburg with state legislators and regulators.

Free CPE

Get more than 30 hours of free CPE per year with monthly town halls, professional issues updates, free member programs, and self-study CPE Academy courses.

Exclusive Savings

Get up to $100 off CPE courses, exclusive unlimited discount packages, and discounts on CPA products and services.

Career Support

Tap into PICPA's Career Center for career guidance and a Pennsylvania-specific accounting job board.

Join PICPA today and enjoy these exceptional benefits!

Stay informed.

Subscribe to My PICPA Weekly for timely information on technical updates in tax, auditing, and other financial topics. This personalized newsletter is your ticket to getting the most important topics in front of you as soon as they become issues.

SIGN UP NOW