The use of artificial intelligence (AI) by CPAs not only automates tedious tasks; its generative capabilities can also discern patterns, find and examine anomalies, and provide insights. But the use of AI at work inevitably comes with added professional liability risk.
by Jonathan S. Ziss, JD
Sep 12, 2025, 11:21 AM
Insightful lessons can be learned by reviewing professional liability issues. With this in mind, Gallagher Affinity provides this column for your review. For more information about liability issues, contact Irene Walton.
The acceptance and uptake of artificial intelligence (AI) in professional accounting has been widespread. Drawing on a breadth of knowledge approaching the entire internet and operating at speeds that make processing time a nonissue, AI can be applied to tax preparation, bookkeeping, forensics, auditing, and analytics. Not only does it automate tedious tasks, but its generative capabilities can also discern patterns, find and examine anomalies, and provide insights. This is the true “intelligence” of AI: the ability to deliver the product of thought, not just to organize and “crunch” data.
AI is becoming ubiquitous in our professional as well as our personal lives. Beyond a doubt, AI is a time saver and a performance enhancer. However, the use of AI at work inevitably comes with added professional liability (i.e., malpractice) risk, which is the sober focus of this column.
All practitioners know that their work is inherently confidential. It is also legislated as such by Pennsylvania’s accountant-client privilege, which hovers over all engagements, and by the federal tax code’s taxpayer privilege, which goes so far as to criminalize the unauthorized disclosure of taxpayer information under certain circumstances. In addition, there are numerous privacy laws by which one must abide.
These constraints are every bit as present when AI enters the picture. Without due care, client confidentiality can vanish, irretrievably.
Imagine opening your browser and entering a query into Google or Bing that includes enough detail to identify, or even just infer, whom the query involves. The query would likely be processed using AI – Google’s Gemini or Microsoft’s Copilot. These are examples of AI that is open to the internet writ large: queries and their results may be retained, used to train the underlying models, and otherwise absorbed into a universe of data beyond your control. Not to be too blunt, your query would be akin to a billboard that you cannot overwrite or take down, and which will in effect stand forever. Posting confidential information this way would be a grave error.
The solution, though, is simple: use only closed AI, accessed through a licensed version of a proprietary tool. This way, the universe remains closed and confidential, accessible only to the practitioner and their colleagues. Alternatively, one can use the free public tools available in a browser, but only with great care to anonymize client information first.
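For illustration only, here is a minimal sketch in Python of what pre-query scrubbing might look like. The client roster, identifier patterns, and placeholders are all hypothetical, and a real redaction routine would need to cover far more (addresses, account numbers, dates of birth, and the like):

```python
import re

# Hypothetical client roster; in practice this would be drawn from
# your practice-management system.
CLIENT_NAMES = ["Acme Widget Co.", "Jane Q. Taxpayer"]

# Simple patterns for U.S. Social Security and employer ID numbers.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EIN_PATTERN = re.compile(r"\b\d{2}-\d{7}\b")

def anonymize(text: str) -> str:
    """Replace client names and tax IDs with neutral placeholders
    before the text is pasted into a public AI tool."""
    for i, name in enumerate(CLIENT_NAMES, start=1):
        text = text.replace(name, f"[CLIENT-{i}]")
    text = SSN_PATTERN.sub("[SSN]", text)
    text = EIN_PATTERN.sub("[EIN]", text)
    return text

query = ("Acme Widget Co. (EIN 12-3456789) received a CP2000 notice. "
         "What is the typical response deadline?")
# Prints the scrubbed text, which is far safer to paste into a
# public tool: "[CLIENT-1] (EIN [EIN]) received a CP2000 notice. ..."
print(anonymize(query))
```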
The expression “garbage in, garbage out” has been part of the vernacular since the early days of computing. It is no less instructive today when it comes to fashioning a query. The better the query, the better the output. To draft an effective query, one needs to think clearly about the subject at hand and the purpose of the search or command. This is a technical skill like any other.
Instruction on prompt construction and query grammar is an important resource for ensuring efficiency as well as accuracy and completeness. True, AI tools accept natural-language prompts, but there are levels of clarity beyond everyday discourse. Writing (or speaking) with precision is the product of thinking with precision.
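To make this concrete, consider a hypothetical pairing, shown here as Python strings; the engagement details are invented, and no particular tool’s prompt syntax is assumed:

```python
# A vague prompt leaves the tool to guess at jurisdiction, period,
# property type, and the form of answer wanted.
vague_prompt = "Tell me about depreciation rules."

# A precise prompt states the role, the facts, and the expected
# output, and asks the tool to flag uncertainty rather than paper
# over it.
precise_prompt = (
    "Acting as a U.S. federal tax researcher, summarize the MACRS "
    "depreciation treatment of nonresidential real property placed "
    "in service in 2024. State the recovery period and applicable "
    "convention, cite the controlling Code sections, and flag any "
    "point on which authorities conflict or you are uncertain."
)
```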
Another aspect of quality control when using an AI tool is understanding the tool’s limitations. AI is designed to meet the expectations of the user, but it cannot be relied upon to appreciate context and nuance or to gather and express its output accordingly (at least, not yet). So, one might be presented with contradictory authority, stale or questionable authority, or even hallucinated (nonexistent) authority, all in the same result. Overreliance on AI without independent critical analysis is a wayward path to be avoided. AI is a supplement, never a substitute.
Subordinating one’s critical thinking to an inanimate research tool is inconsistent with the duty to always act with due care. For the reasons stated above, overreliance on AI might, in fact, reveal the compound errors of misuse (e.g., a poorly constructed query) together with blind embrace of the search results. In a word, negligence.
On the other hand, acting with due care encompasses staying current, which implies keeping up with technology, among other things. So, increasingly, a failure to make use of AI tools can itself be a deviation from the standard of care.
Safe and sensible use of AI tools is ultimately the responsibility of management. Cultivating an environment in which staff feels comfortable explaining to supervisors where and how they used AI tools is a fundamental safeguard in this respect.
Additionally, as a management tool, an acceptable-use policy for work devices connected to the internet is a well-established internal control throughout the business world. There needs to be an acceptable-use policy governing access to AI tools in the office as well. Examples are widely available for consideration and customization before adopting and distributing them for employees to read and sign.
A good policy would touch on confidentiality in general; taking care not to thoughtlessly enter private or sensitive information, even into a closed AI tool; keeping up with training and best practices; limiting use to office-related topics; avoiding the creation of inappropriate content; disclosing reliance on AI; checking AI results for accuracy and reliability; and so forth. An AI use policy should integrate with your privacy policy, which might need to be updated in turn.
Note that different teams and departments might require special-use policies, depending upon the nature of the information that flows through their work. From a professional liability standpoint, a benefit of having an acceptable-use policy is that it demonstrates in writing your firm’s standards and its commitment to best practices around AI tools.
Cybersecurity is its own topic. Practitioners should always be on high alert to ensure that their IT systems are “hardened,” that their data is encrypted and constantly and consistently backed up, and that their colleagues are well informed about good hygiene when it comes to unexpected instructions, attachments, and links arriving by email.
In relation to AI, cybersecurity can take an even darker turn than what most practitioners have already come to fear. Infiltration of your network by threat actors who access your AI tool can make their treachery even more difficult to detect. By using your AI tool to read email and other documents and to listen to voicemail, threat actors can deploy more realistic phishing emails (without telltale grammatical errors and using familiar expressions and “tone”) along with other enhanced social engineering tactics, such as voice mimicry.
An important professional liability risk management strategy, in addition to your existing cybersecurity, is to review your insurance coverage. Do you have cyberinsurance, and if so, are your limits appropriate for your practice? For practices consisting essentially of people, hard drives, and the cloud, malware is as real a peril today as fire or earthquake was in the past. Relying on your cloud storage provider to indemnify you in the event of a cyberevent should probably not be your risk management Plan A.
Informing your clients that AI is among the tools you use makes good sense for a few reasons. First, AI is now seen as a best-in-class tool for efficiency: toot your horn and let your clients in on the good news. Second, take the opportunity to reassure your clients that you consider their privacy paramount and that you take all reasonable measures to safeguard their data. Third, invite questions about your use of AI, lest your clients harbor concerns about a technology that all but invites suspicion in the popular imagination.
Lastly, reassure your clients that your time-based billing reflects the efficiencies gained through AI tools. After all, if a task that would have taken two hours to accomplish manually takes just two minutes using AI, you wouldn’t charge for two hours of your time, right? In terms of professional risk management, inaccurate time charges can invite ethics violations and complaints against your license, as well as allegations of misrepresentation, if not outright fraud.
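To put invented numbers on the point, a quick sketch (the rate and times are hypothetical):

```python
# Hypothetical figures illustrating the gap between billing a task as
# if it were done manually versus billing the time actually worked.
hourly_rate = 300.00        # assumed billing rate, dollars per hour
manual_minutes = 120        # time the task would have taken by hand
actual_minutes = 2          # time actually spent with an AI tool

billed_as_manual = hourly_rate * manual_minutes / 60
billed_as_actual = hourly_rate * actual_minutes / 60

print(f"Billed as if manual: ${billed_as_manual:,.2f}")  # $600.00
print(f"Billed as worked:    ${billed_as_actual:,.2f}")  # $10.00
```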
Whether you add a discussion of your use of AI tools to your engagement letter template or develop a stand-alone disclosure/consent form, be forthcoming rather than reactive. Contact an attorney knowledgeable about professional risk management with any questions.
The safe use of AI tools can enhance innovation, insight, productivity, and profit. Be mindful of both the opportunities and the professional risks of these powerful technologies as they become ever more present in your practice.