Generative AI was going to be the next big thing. But as accounting firms dig deeper into cybersecurity, privacy, and accuracy issues, many have started to pump the brakes. This does not mean generative AI is going away or can’t be integrated into some workflows, but CPAs must be aware of the risks.
by Colleen S. Krcelich, CPA, Christina M. Olear, CPA, and John Peatross, CPA
Mar 10, 2025, 09:50 AM
The hype surrounding generative artificial intelligence (generative AI) has subsided a bit since the intense public interest in 2023, but that doesn’t mean the technology has faded away. According to OpenAI, the developer of ChatGPT, 92% of Fortune 500 companies were using ChatGPT as of August 2024.1 This feature discusses generative AI’s evolution, why some companies are putting on the brakes, and how others are embracing it successfully.
First, a review of some basics. Traditional AI has been around for decades. It works well with explicit rules applied to structured data. One example would be pattern recognition within existing data to make predictions through the use of decision trees, linear regression, or clustering algorithms.
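Those traditional techniques are simple enough to sketch in a few lines. Below is a minimal, illustrative least-squares linear regression in plain Python that predicts one value from another; the spend and revenue figures are made up purely for demonstration.

```python
# A minimal sketch of "traditional" AI: an ordinary least-squares linear
# regression predicting quarterly revenue from ad spend. All figures are
# hypothetical and chosen only to illustrate the technique.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

ad_spend = [10, 20, 30, 40]       # hypothetical quarterly ad spend ($000)
revenue = [120, 140, 160, 180]    # hypothetical quarterly revenue ($000)
slope, intercept = fit_line(ad_spend, revenue)
print(slope, intercept)           # the fitted rule: revenue = 2 * spend + 100
```

The point of the sketch is that the model's rule is explicit and auditable, which is exactly what distinguishes traditional AI from the opaque models discussed next.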
Generative AI works with existing data too, but it can create content based on both structured data and unstructured data, making it much more versatile. Not only can it create content, but it can also summarize, research, change, organize, and streamline content. It is multimodal in its input and output, too, which means one can create written content, pictures, videos, and audio content. It also excels at streamlining processes, code generation, and decision support.
Generative AI’s broader capabilities are due to the complexities of the underlying algorithms. These complicated algorithms, however, come with a lack of transparency, and this creates risks related to data accuracy, bias, ethics, and privacy. Also, businesses have struggled to apply use cases to their specific needs and successfully integrate it within their systems. Eventually, the cybersecurity, ethical, privacy, legal liability, and data accuracy risks also became more apparent.
According to Gartner Inc., while 73% of CIOs had planned to invest more in AI in 2024 than they did in 2023, CFOs are skeptical, with 67% saying that digital investments have underperformed expectations.2
Per Arize AI Inc., more than half of Fortune 500 companies mentioned artificial intelligence in May 2024 annual reports. While one in five specifically mentioned generative AI, more than two-thirds of those companies did so in reference to generative AI risks.3 These risks include data ethics and privacy, intellectual property issues, inaccuracies, and not keeping up with competitors. We will cover these and more in this feature, along with strategies to mitigate risks.
Generative AI will continue to evolve as regulations emerge, employees are trained, company data governance policies are clarified, and security and ethical concerns are mitigated with appropriate protections. Internal company-use-only generative AI tools will mitigate many risks associated with open source generative AI, but they are expensive and time consuming to build. It is often the larger companies that are investing in the creation of safe models trained on specific content for employee use only. PwC, for example, created ChatPwC, a model trained on tax and regulatory sources. The tool is part of PwC’s $1 billion investment in generative AI.4
The most popular generative AI tools are OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, and Microsoft’s Copilot. These are general generative AI tools, but there are more specific tools geared toward tasks such as productivity, art creation, code generation, and so on. There are also accounting and tax-specific generative AI tools, including TaxGPT, CPA Pilot, and TruePrep.ai. These tools have fees of about $1,000 per year. They claim to streamline tax research, perform tax calculations, and provide customized tax advice after scanning client tax returns. We have not used any of these tools, so we cannot attest to their accuracy, viability, or security. Certainly, these tools would need thorough vetting before being used with client or company data.
The scope of this feature is focused on open source generative AI tools such as ChatGPT. Below is our examination of current use cases of generative AI technology that are of particular interest to accountants, along with the associated risks and risk-mitigation strategies.
Generative AI can offer businesses innovative solutions that enhance efficiency, creativity, and productivity across various domains. Here are just a few examples:
Generative AI – supported by research from firms such as McKinsey & Company, Deloitte, and PwC – is reshaping the accounting profession. Studies show that AI adoption in accounting can increase efficiency by up to 40% while significantly reducing errors. This transformative technology is being applied in several key areas:
– Report generation: Generative AI automates creating financial reports and summaries, reducing time and errors. A 2023 McKinsey study highlights that automating repetitive accounting tasks can save up to 40% of the time spent on traditional workflows.5
– Data extraction: AI tools such as optical character recognition extract data from invoices and receipts, enabling faster processing and reducing manual effort.
– Journal entry creation: Automatic categorization and generation of entries improve accuracy.
– Bank reconciliations: Generative AI compares transactions against records to resolve discrepancies.
– Tax law analysis: AI-powered systems quickly interpret complex tax laws and identify potential deductions or credits.
– Accounting standards: AI tools analyze regulatory changes, update accounting practices seamlessly, and keep firms in compliance with new standards.
– Regulation monitoring: Real-time updates on legislative changes allow businesses to adapt strategies proactively.
– Visualizations: Generative AI tools can enhance data presentation by converting raw data into easily interpretable dashboards.
– Pattern detection: Identifies anomalies and provides actionable insights from vast data sets.
– Outlier detection: AI-powered tools can flag unusual transactions and significantly reduce audit time.
– Compliance checks: Ensures adherence to financial regulations by cross-referencing datasets with legal requirements.
– Fraud detection: Can detect potentially fraudulent activity via algorithms trained to recognize irregularities in transactional patterns.
– Scenario simulations: Modeling financial outcomes under various scenarios.
– Narrative generation: Tools like ChatGPT can create client-ready summaries of financial data, ensuring accessibility and clarity.
– Email drafting: Automation of professional communication, including updates and tailored responses, saving significant administrative time.
– Expense management: Identifies cost-saving opportunities through expense-trend analysis. Automated expense tracking systems significantly improve accuracy.
– Tax planning: Generative AI supports tax optimization by analyzing complex tax codes and regulations to ensure compliance and maximize deductions.
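Several of the items above, such as bank reconciliations and outlier detection, boil down to matching one data set against another and flagging what doesn't line up. The sketch below shows that core matching step in plain Python; the transactions are hypothetical and a real tool would add fuzzy matching on dates and amounts.

```python
# A minimal sketch of the matching step behind automated bank
# reconciliation: pair bank transactions with ledger records by
# (date, amount) and flag anything unmatched. Data is hypothetical.
from collections import Counter

def reconcile(bank, ledger):
    """Return (items only in the bank feed, items only in the ledger)."""
    bank_counts = Counter(bank)
    ledger_counts = Counter(ledger)
    only_bank = list((bank_counts - ledger_counts).elements())
    only_ledger = list((ledger_counts - bank_counts).elements())
    return only_bank, only_ledger

bank = [("2025-03-01", 1200.00), ("2025-03-02", -75.50), ("2025-03-03", 310.25)]
ledger = [("2025-03-01", 1200.00), ("2025-03-03", 310.25), ("2025-03-04", -42.00)]
unmatched_bank, unmatched_ledger = reconcile(bank, ledger)
print(unmatched_bank)    # the bank item missing from the ledger
print(unmatched_ledger)  # the ledger item missing from the bank feed
```

Using a Counter rather than a set means duplicate transactions of the same date and amount are matched one-for-one instead of collapsing into a single entry.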
As you can see above, generative AI can offer significant benefits to accountants, but it also comes with risks that must be managed. They run the gamut from inadvertent employee misuse to malicious targeted cyberattacks. Some risks include ethical and privacy concerns, security risks, legal and regulatory risks, and data reliability issues.
Mitigating the risks requires robust employee training, awareness of AI-specific vulnerabilities, and implementation of strict security and compliance measures. Below are two accounting use cases presented for your consideration. They focus on research use and potential error inclusions.
Research Risks – The most common use of generative AI by CPAs is through services like ChatGPT as a means to conduct research, acting as an interactive Google search on technical questions. Unfortunately, the AI doesn’t say “it depends” like a real accountant would. It tends to be definitive in its pronouncements. In a profession where the devil is in the details, the risk of using these tools to conduct research lies in the fact that the output may not be complete, relevant, or timely.
Many senior-associate-level professionals use generative AI for research. It can be a great way to get started on a memo or to figure out where to dive deeper. It is certainly not the only research that should be done. To mitigate these risks, professionals need to be trained and aware of how to research without the use of generative AI so that they don’t fall victim to the inaccuracies of the technology. As an advanced tool in an accountant’s research toolkit, generative AI can be the key to unlocking incredible efficiencies, but it can be a trap for the inexperienced. Firms that invest in training and on the practical use of these tools will be able to leverage them significantly more effectively than the ones that don’t.
Error Risks – Generative AI is renowned for its ability to fabricate: it can generate content rife with partial or complete inaccuracies. It is built to produce compelling-sounding content that can be difficult to fact-check unless reviewed by a content expert.
The errors could include calculation errors, rounding issues, missed anomalies, or the mismatching of categories (such as expenses). If undetected, these issues could lead to inaccurate accounting records that may result in illogical and incorrect financial reports. For example, if an accounts payable employee used generative AI to extract invoice data and code it to the appropriate cost center, just one error could cause general ledger inaccuracies if not caught.
Although gathering and processing routine information is a logical and relatively low-risk place to incorporate generative AI, several mitigation strategies can help safeguard the process. These strategies include human oversight, employee training, system checks, and data governance policies. Human checkpoints by skilled employees at various steps in the process will be foundational. These employees will need continuous generative AI training to ensure proper usage and company policy compliance. Company data governance policies will help guide the employee training objectives to ensure compliance and privacy issues are addressed. Using other systems for checks (such as Excel) can provide another layer of detection and assurance. These strategies will allow accountants to harness the benefits of generative AI while minimizing risks related to errors.
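The "system check" layer described above need not be elaborate. A minimal, illustrative sketch: before an AI-coded journal entry is posted, verify mechanically that debits equal credits and that every line carries a cost center. The entry data and field names below are hypothetical.

```python
# A minimal sketch of an automated system check on an AI-generated journal
# entry: debits must equal credits, and every line must be coded to a cost
# center. The entry below is hypothetical.

def validate_entry(lines):
    """Return a list of problems found in a journal entry (empty if clean)."""
    problems = []
    debits = sum(l["debit"] for l in lines)
    credits = sum(l["credit"] for l in lines)
    if round(debits - credits, 2) != 0:
        problems.append(f"out of balance by {debits - credits:.2f}")
    for l in lines:
        if not l.get("cost_center"):
            problems.append(f"missing cost center on account {l['account']}")
    return problems

entry = [
    {"account": "6100 Office Supplies", "debit": 250.00, "credit": 0.00, "cost_center": "CC-12"},
    {"account": "2000 Accounts Payable", "debit": 0.00, "credit": 250.00, "cost_center": "CC-12"},
]
print(validate_entry(entry))   # an empty list: balanced, fully coded entry
```

Checks like this catch exactly the single-error scenario described above, where one miscoded invoice would otherwise flow undetected into the general ledger.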
A related risk, stemming from user error rather than the technology itself, is leaked data.
Let’s say a solo tax preparer loads clients’ tax returns into generative AI and asks it to summarize and take notes about each return, make suggestions for improvements, and then format the results into a client summary and checklist to be shared with the client. The preparer also enters some clients’ emails and asks the tool to draft replies. Unfortunately, the preparer forgets to remove personally identifiable information, such as names, Social Security numbers, addresses, and bank account information.
This scenario poses significant risks to both the accounting firm and the client. Since the information uploaded to generative AI is stored in the cloud and now resides on external servers, this could result in identity theft and fraud. This data breach would have to be reported to all clients, with a significant cost to the firm – especially if it does not have cybersecurity insurance. It will also potentially damage the reputation of the firm.
In addition, the unauthorized sharing of personal sensitive information violates various regulatory provisions, including privacy laws and Circular 230. This could expose the company to fines and legal ramifications.
These risks can be mitigated in various ways, the main one being not allowing generative AI to analyze confidential client information.
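One practical safeguard is to scrub obvious personally identifiable information from any text before it reaches a public generative AI tool. The sketch below redacts SSN-shaped and account-number-shaped strings with simple regular expressions; real redaction tooling is far more thorough, and the account-number length here is an assumption made for illustration.

```python
# A minimal, illustrative PII scrubber: redact SSN-shaped and
# account-number-shaped strings before text is sent to a public AI tool.
# The patterns are simplistic by design; production redaction needs a
# much broader set of rules. The sample note below is fictitious.
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
ACCOUNT_PATTERN = re.compile(r"\b\d{10,12}\b")  # assumed account-number length

def redact(text):
    """Replace SSN- and account-shaped strings with redaction markers."""
    text = SSN_PATTERN.sub("[SSN REDACTED]", text)
    text = ACCOUNT_PATTERN.sub("[ACCOUNT REDACTED]", text)
    return text

note = "Client SSN 123-45-6789, deposit to account 0012345678."
print(redact(note))
# Client SSN [SSN REDACTED], deposit to account [ACCOUNT REDACTED].
```

A scrubber like this is a backstop, not a substitute for the policy itself: the primary control remains keeping confidential client data out of public tools entirely.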
If a company is going to use generative AI, it must establish a strict information security policy that outlines the intended (and forbidden) uses of generative AI and require training so professionals are aware of the risks. It must also ensure the AI tool complies with data privacy regulations such as GDPR and CCPA and offers end-to-end encryption. Firms may also want to consider limiting access to professional AI systems or, for larger firms, allowing access only to an internally designed generative AI tool. The table below offers a few risk mitigation strategies you may want to consider.
| Risk Mitigation Strategy | Example |
|---|---|
| Company Generative AI Policy that Includes ... | |
| Ethical Guidelines | Establish ethical AI usage standards, including fairness, transparency, and accountability. Include provisions to prevent bias and ensure inclusivity in AI-generated content. |
| Compliance and Legal Requirements | Detail legal standards and regulations that must be adhered to. Incorporate federal agency AI best practices and outline procedures to stay updated with current laws. |
| Quality Assurance | Implement review processes for AI-generated content. Specify criteria for accuracy, reliability, and relevance, and assign responsibility for quality control. |
| Data Privacy and Security | Define protocols for safeguarding data and securing AI systems. Include guidelines for data handling, storage, and access controls. |
| Training and Updates | |
| Company's Generative AI Policies | Develop and deliver training sessions to educate employees on proper AI usage, data security, and compliance with internal policies. |
| Continuous Updates | Provide regular employee training to stay current on AI tool updates and best practices for integrating AI into workflows. |
| Quality Assurance | |
| Human in the Loop | Incorporate human review at multiple points, such as verifying raw data inputs, reviewing intermediate outputs (trial balances), and auditing reports or forecasts for accuracy and consistency. |
| Other Systems | Use Excel for data validation by applying conditional formatting to flag discrepancies or errors in intermediate outputs or final content. |
As AI continues to evolve, its integration into accounting practices will likely expand to include advanced predictive analytics, enhanced collaboration tools, and fully automated financial ecosystems. However, successful adoption requires a human-in-the-loop approach to ensure oversight that validates AI outputs and maintains accountability. Organizations should establish clear generative AI company policies to govern its use and address ethical concerns, compliance, and data security. Starting small by implementing AI in targeted, low-risk areas can help firms build confidence, refine processes, and establish methods to mitigate risks.
Generative AI empowers accountants to focus on strategic, high-value tasks, transforming their roles from transactional processors to strategic advisers. By balancing innovation with thoughtful oversight and phased implementation, this technology is not just reshaping processes but redefining the profession itself.
1 Carl Franzen, “OpenAI Says ChatGPT Now Has 200M Users,” VentureBeat (Aug. 29, 2024).
2 “Get AI Ready - What IT Leaders Need to Know and Do,” Gartner Inc.
3 “The Rise of Generative AI in SEC Filings,” Arize.
4 Lindsey Wilkinson, “PwC Brings Private Generative AI Tool to Internal Employees,” CIO Dive (Aug. 15, 2023).
5 Ankur Agrawal, Ben Ellencweig, Rohit Sood, and Michele Tam, “Gen AI: A Guide for CFOs,” McKinsey & Company (Nov. 15, 2023).
Colleen S. Krcelich, CPA, is a professor of accounting and business at Pennsylvania State University – Lehigh Valley in Center Valley. She can be reached at colleen@bizsupportllc.com.
Christina M. Olear, CPA, is an accounting professor with Pennsylvania State University Brandywine in Media and is a member of the Pennsylvania CPA Journal Editorial Board. She can be reached at cmo16@psu.edu.
John Peatross, CPA, is a senior manager at Siegfried Advisory LLC in Wilmington, Del., and an adjunct instructor in the department of economics at the University of Maryland. He can be reached at jpeatross@siegfriedgroup.com.