Taking an ethical approach to AI use in accounting and finance

By Charis Thomas, Chief Product Officer at Aqilla  

AI has taken the world by storm in the past few years, dominating multiple sectors and changing how we work. And the finance industry is no different. Accountants are embracing AI’s potential to automate some of the most time-consuming and tedious tasks, such as data entry and invoice reconciliation. But there’s so much more it could do.

AI has enormous potential to transform our sector. However, we risk proceeding without the necessary guardrails if we charge in at full speed. In a sector that handles so much sensitive data and has high stakes for any inaccuracies, we must consider the ethics around this new technology before rushing into implementation. That’s why I believe maintaining responsibility and accountability for our work, ensuring transparency, and upholding security and compliance standards are non-negotiable when working with AI-driven finance and accounting tools.

Striking a balance

This isn’t to say that accountants shouldn’t use AI. There are too many benefits to bury our heads in the sand—and by doing so, we would risk getting left behind. But a balance needs to be struck. With that in mind, let’s consider some of the key ethical concerns around using AI in the finance industry and assess how we embrace the opportunity while maintaining our integrity.

Concern: Lack of insight into AI’s processes

Solution: Maintain responsibility for the data output

Transparency into AI use is essential for maintaining responsibility for its outputs. Even if AI is analysing the data and producing the report, we must always have insight into the raw data sources. Only then can we understand and verify each step the AI took to generate the report—and ensure the report’s integrity.

To ensure accuracy, I suggest manually reviewing a 10% sample of AI-generated financial data to verify consistency with our expectations and independent calculations. This diligence enables us to confidently present findings to leadership and clients.
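As a rough illustration, that 10% sample could be drawn at random from the AI-generated entries so the review isn't biased toward any one batch. This is a minimal sketch only; the record fields are hypothetical, and the right sampling approach will depend on your own controls:

```python
import random

def draw_review_sample(entries, fraction=0.10, seed=None):
    """Randomly select a fraction of AI-generated entries for manual review.

    `entries` is any list of records (e.g. dicts of ledger lines).
    Rounding with a floor of one guarantees that at least one entry
    is always reviewed, even for very small batches.
    """
    if not entries:
        return []
    sample_size = max(1, round(len(entries) * fraction))
    rng = random.Random(seed)  # fixed seed makes the sample reproducible for audit
    return rng.sample(entries, sample_size)

# Example: 200 hypothetical AI-generated journal entries -> 20 to review
entries = [{"id": i, "amount": 100 + i} for i in range(200)]
sample = draw_review_sample(entries, fraction=0.10, seed=42)
print(len(sample))  # 20
```

A fixed seed is worth keeping: it means the same sample can be regenerated later if an auditor asks which entries were checked.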

Concern: Decline in accuracy

Solution: Treat AI as a junior or trainee, and review its outputs

When working with junior team members or accountants-in-training, we review their work for errors or areas of misunderstanding—and we teach them so they don’t make the same mistake again. We should adopt the same principles when reviewing AI outputs. We’ve all heard the stories of AI getting it wrong, whether creating an image where a person has three legs or confidently saying that 2 + 2 = 5.

Although AI is developing and improving every day, it can and will make mistakes. It is our job to spot and rectify these. We cannot implement AI and leave it to work unsupervised. It needs to be reviewed and trained, just like junior team members. Reviewing a 10% sample of AI’s output should be sufficient to understand its accuracy and correct any mistakes.

While reviewing AI outputs for accuracy is important, we should also watch out for any biases that might creep in while training AI, which can happen even with the best intentions. This can change how AI interprets financial information relating to individual, group, or corporate performance. Offering training programmes that teach accounting and finance teams about bias in AI can help improve output integrity and ensure higher-quality, trustworthy results.  

Concern: AI accessing, using and leaking sensitive information

Solution: Establish a framework for AI use, and be cautious when sharing information

It is all too easy to open ChatGPT or a similar publicly available AI tool when you need a quick answer to a question or a report within a tight deadline. However, by doing so, you may be handing potentially sensitive data to a third-party service whose use of that data, including for future model training, is outside your organisation's control. This contradicts most of our general and industry-specific confidentiality rules and ethics, breaches data privacy laws, and violates industry regulations.

If companies want to benefit from using AI, they should establish a framework that sets out which information can be shared and how—such as only sharing non-sensitive or anonymous data. It is essential to be familiar with your organisation’s policies before deploying AI. If there’s no strategy, consider the potential risks of inputting data into these systems. Accountants must always adhere to their Code of Ethics regarding confidentiality—I’m sure we’d all agree that taking shortcuts in producing a report isn’t worth the potential consequences. 
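One way such a framework can be made concrete is an allow-list: before any record leaves the organisation, everything not explicitly approved for sharing is stripped out. The sketch below is purely illustrative; the field names and the allow-list are hypothetical assumptions, not a compliance standard:

```python
# Hypothetical allow-list: only these fields may be shared with an external AI tool.
ALLOWED_FIELDS = {"invoice_id", "amount", "currency", "due_date"}

def redact_for_ai(record):
    """Return a copy of the record containing only allow-listed fields.

    Anything not explicitly allowed (names, bank details, addresses)
    is dropped entirely, so any newly added field is private by default.
    """
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

record = {
    "invoice_id": "INV-1042",
    "amount": 1250.00,
    "currency": "GBP",
    "due_date": "2024-09-30",
    "customer_name": "Jane Doe",       # sensitive: dropped
    "iban": "GB33BUKB20201555555555",  # sensitive: dropped
}
print(redact_for_ai(record))
```

The design choice worth noting is default-deny: a field must be positively approved before it can be shared, which is safer than trying to enumerate everything sensitive.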

Embracing the possibilities

Once the groundwork has been laid to ensure ethical AI use, the technology can deliver significant benefits. Automating basic, manual tasks is just the beginning. AI’s ability to gather data at unprecedented speeds and identify and analyse trends can add extra value, especially regarding organisational risk and compliance with new regulations. In the not-too-distant future, AI could also be used for zero-touch invoice payments—thereby streamlining one of our industry’s most time-consuming processes from start to finish. 

Our ethics shouldn’t change just because we adopt AI. With solid procedures, principles, and industry guidance, we already have much of the framework. Still, AI’s transformative power means we must educate ourselves on the technology and its commercial use. I hope this article offers ideas to help maintain ethical AI within your organisation.