Agentic AI

Building the Agentic Accounting System that Banks, Auditors, and Analysts Love


Aaron Schnider
Senior Finance Transformation Manager
Claire Jacobs
Senior Product Marketing Manager
April 30, 2026

Every issuer processor sits on top of a firehose. Billions of dollars move through billions of transactions, each one generating multiple internal events, each event touching multiple accounts. The data is there. The problem is what happens downstream.

At most companies, the pattern looks the same. Someone writes a query to answer a question from the CFO. Someone else writes a different query to answer the same question from a different angle. Slightly different logic, slightly different definitions, slightly different time boundaries. Both are defensible. Neither reconciles to the other. Multiply that by every reporting question the business needs to answer, and you have a finance team producing a lot of numbers but unable to explain why they all seem to disagree.

This is the ceiling finance and accounting teams eventually hit. It’s not a data problem. It’s a framework problem. For an underlying issuer processor like Lithic, that gap does not stay inside finance; it shapes what fintechs, sponsor banks, and auditors can trust. So we built a bottoms-up subledger system that reconciles all activity to external sources at the transaction level, then layered on AI agents to make the validation process scalable, repeatable, and able to operate at a depth and speed that would be difficult to match manually. Here's why, and how.

You cannot reconcile a firehose with buckets

Most companies do financial reporting by placing buckets under the firehose of transactional data. Each report catches some of the flow. Someone measures what is in the bucket, another person analyzes the result, and everyone learns something useful about that part of the business. But inevitably those buckets just do not seem to add up to a coherent picture of the whole system. Change the query, the definition, or the cutoff time and you catch a different amount. Each report may be reasonable on its own, but together they can’t be Frankensteined into something that actually reconciles without someone spending nights and weekends forcing the parts to resemble a whole.

The alternative is not a better bucket. It is routing the firehose into a controlled water system.

In that kind of system, every dollar has a defined path in, a defined path through, and a defined path out. If pressure changes in one part of the system, you can trace the effect everywhere else. Nothing disappears. Nothing appears out of thin air. The system stays balanced by design. And when something does leak, you know immediately and can trace it to the source.

That is what a real subledger is supposed to be.

In fintech, this kind of routed system is the only approach that holds up. To the consumer, something like buying a cup of coffee seems simple. But inside, that swipe triggers a chain of internal movements across various receivables, payables, cash, and other balance sheet accounts. There are billions of these, and someone needs to know precisely how each dollar came in, moved between accounts, and left.

The finance team may need that precision first, but sponsor banks, auditors, and clients all benefit from a clearly traceable system.

Lithic’s Sub-Ledger must reconcile with network and Fed level records

Sponsor banks face the examiner when books don't tie

Sponsor banks are on the hook for the programs they sponsor. When a bank examiner asks, "Can you show me the balance for this program, broken down by source?" the bank turns to the issuer processor. If the processor's internal accounting is buckets-of-water, the bank gets a spreadsheet that took someone a week to build and can't be independently verified. That is a regulatory conversation no bank wants to have.

Sponsor banks face the brunt of the pain when settlement volumes don't match what the issuer processor reports. External auditors need the query, the result, the reconciliation to an external source, and the explanation for any variance, and when the processor can't produce that, the audit finding lands on the program sponsor. Fintech clients building on the issuer processor inherit its data quality and end up constructing their own reconciliation layer on top, duplicating work the processor should have done.

A simple consumer swipe impacts multiple accounts across Lithic, network, and Fed records

What a $5 coffee looks like in the system

To make the accounting tangible, follow a single $5 coffee through the system. 

The exact flow depends on the card program, funding model, settlement timing, and other configuration details. But at a high level, an issuer processor needs to account for several things at once: the cardholder’s transaction activity, obligations to settlement counterparties, expected cash movement, and the balance sheet impact of timing differences.

That is where the subledger matters. A single transaction may touch receivables, payables, cash, client or program balances, and settlement-related accounts. Each movement needs to be recorded in a way that can be tied back to the underlying transaction and compared against external reference data.
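To make the balanced, double-entry shape of such a movement concrete, here is a minimal sketch. The account names, amounts, and `post` helper are invented for illustration and are not Lithic's actual chart of accounts or posting logic.

```python
# Illustrative only: account names are invented for this sketch,
# not Lithic's actual chart of accounts.
from decimal import Decimal

def post(entries):
    """Accept a journal only if total debits equal total credits."""
    debits = sum(e["debit"] for e in entries)
    credits = sum(e["credit"] for e in entries)
    if debits != credits:
        raise ValueError(f"unbalanced journal: {debits} != {credits}")
    return entries

# A hypothetical $5.00 swipe: a receivable from the settlement
# counterparty, offset by an obligation to the card network.
coffee = post([
    {"account": "settlement_receivable", "debit": Decimal("5.00"), "credit": Decimal("0.00")},
    {"account": "network_payable", "debit": Decimal("0.00"), "credit": Decimal("5.00")},
])
```

The balance check is the point: a journal that does not net to zero is rejected at the door, which is what makes "nothing disappears, nothing appears out of thin air" a property of the system rather than a hope.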

The complexity comes from timing. The coffee purchase may be authorized on Day 1, while related cash movement and settlement activity may not fully resolve until Day 2 or Day 3. In practice, the system is reconciling three clocks at once: the card network’s settlement cutoff, the bank’s posting cutoff, and the ledger’s operating day. The same transaction can be validly present in one source, pending in another, and in transit in the ledger. The subledger’s job is to tie those views together and explain whether a variance is expected due to timing, which finance needs to account for, or an exception that operations needs to investigate and correct.
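The three-clock reconciliation described above can be sketched as a simple classifier. The date fields, cutoff rule, and category names here are illustrative assumptions, not the production logic.

```python
from datetime import date

def classify_variance(network_settled_on, bank_posted_on, ledger_day):
    """Decide whether a transaction missing from the bank's view is an
    expected timing difference or an exception to investigate.
    All fields and the cutoff rule are illustrative assumptions."""
    if bank_posted_on is not None:
        return "tied"             # all three views agree
    if network_settled_on is not None and network_settled_on >= ledger_day:
        return "expected_timing"  # settled at the network, still in transit
    return "exception"            # aged break: operations should investigate
```

Under this sketch, a transaction settled on the ledger's operating day but not yet posted by the bank is an expected timing difference; the same gap several days old becomes an exception that someone needs to chase.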

Those in-between states are where a lot of accounting complexity lives. They cannot be handled with rough estimates or rollup-level shortcuts. The system needs enough detail to explain why a balance exists, what transaction activity created it, and what still needs to happen before it clears.

The key accounts in the system are connected in the same closed system. When activity changes in one place, the corresponding effect should be visible elsewhere. That is the point of the controlled system: every movement is traceable, every balance has a source, and the system only works if all the accounts tie together.

Why reconciliation became an agent's job  

A bottoms-up subledger is only as good as its validation. Every account has to be checked against internal activity and external reference points, not as a quarter-end project, but as a continuous operating rhythm. For an issuer processor, that means asking whether card transaction activity ties to settlement activity, whether cash movement lines up with banking records, whether opening balance plus activity equals closing balance, and whether timing differences are explained instead of absorbed into a miscellaneous bucket.

That is not one reconciliation. It is a network of checks across accounts, datasets, cutoff times, and business rules. When something does not tie, the hard part is not spotting the variance. It is tracing the break and determining whether it reflects expected timing, a data issue, or something requiring remediation.
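One of those checks (opening balance plus activity equals closing balance) can be sketched as a roll-forward test. The function name and tolerance parameter are assumptions for illustration, not Lithic's actual validation suite.

```python
from decimal import Decimal

def roll_forward_check(opening, activity, closing, tolerance=Decimal("0.00")):
    """Return (passed, variance): does opening + net activity explain
    the closing balance? Surfacing the variance, rather than a bare
    pass/fail, is what lets a break be traced instead of absorbed."""
    variance = closing - (opening + sum(activity, Decimal("0.00")))
    return abs(variance) <= tolerance, variance

# A balance that ties: 1000.00 + (5.00 - 2.50) == 1002.50
ok, var = roll_forward_check(
    Decimal("1000.00"),
    [Decimal("5.00"), Decimal("-2.50")],
    Decimal("1002.50"),
)
```

Returning the signed variance rather than a boolean is deliberate: the number itself is the starting point for tracing the break across datasets.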

This is where AI changed the shape of the work. The accounting judgment still came from people who understood the business and could define what account signals indicated a timing, data, or other issue. But once that logic was defined, an agent could execute the validation suite, preserve context, and organize exceptions faster and more consistently than a manual process.

Empowering domain expertise with AI

The rules for how each transaction type moves through each account, the reconciliation logic that ties internal records to external sources: all of that comes from decades of doing this work and understanding how money actually flows through a card program.

This part can't be automated and shouldn't be. The shift was making that expertise executable. Instead of keeping the rules scattered across people’s heads, ticket threads, and one-off queries, the team codified the logic as structured context the agent could use: how transaction types should move through accounts, what reference points matter, and how to interpret common timing differences.

That context is now durable. The same way you would onboard a strong analyst before handing them a complex workstream, we gave the agent the business context and approved validation logic it needed to operate inside the accounting system. Once that was in place, AI could do what software is good at: run the checks consistently, carry the context forward, and make expert judgment available every time the system is validated.
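Codifying that expertise as structured context might look something like the sketch below, where the expected account movements for each transaction type become data the agent can validate postings against. Every name here is hypothetical.

```python
# Hypothetical rulebook: which accounts each transaction type should
# touch, and in which direction (+1 debit-side, -1 credit-side).
EXPECTED_MOVEMENTS = {
    "purchase_settlement": {"settlement_receivable": +1, "network_payable": -1},
    "client_funding":      {"cash": +1, "client_balance": -1},
}

def validate_posting(txn_type, posting):
    """Compare one posting (account -> signed amount) against the
    approved rule for its transaction type; return a list of issues."""
    rule = EXPECTED_MOVEMENTS.get(txn_type)
    if rule is None:
        return [f"no approved rule for {txn_type}"]
    issues = []
    for account, sign in rule.items():
        amount = posting.get(account)
        if amount is None:
            issues.append(f"missing movement in {account}")
        elif (amount > 0) != (sign > 0):
            issues.append(f"unexpected direction in {account}")
    return issues
```

The point of the data-over-code shape is durability: the rulebook can be reviewed, versioned, and extended by the accounting experts themselves, while the agent simply executes it.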

The judgment is human. AI makes it repeatable.

What the agent actually checks

Armed with blessed queries (approved SQL checks for proving completeness and accuracy), the agent runs baseline tests, then uses additional business context to investigate variances with targeted follow-ups. It doesn't design the accounting policy or decide what "right" looks like. It gets a precise definition of right and then shows where the data does or does not support that definition.

For anyone evaluating the system from a risk perspective, that distinction matters. The agent turns accounting judgment into a repeatable operating process. A human expert defines the rules. The agent executes them at scale, without skipping the tedious parts, losing the thread, or stopping after the first obvious break.

Agents investigate variances and discover root causes

Flagging a variance is the easy part. The hard part is explaining it. 

When a balance does not tie, the agent doesn't just report the number. It chases the variance across datasets to categorize the root cause. In practice, it has caught things that would have been difficult, or even impossible, to find manually: migration artifacts from historical data loads, timing differences spanning settlement windows, classification mismatches between internal and external systems.
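That chase can be sketched as walking an ordered list of candidate root causes, with anything unexplained escalated to a human. The field names and categories below mirror the examples just given but are otherwise invented for illustration.

```python
def categorize_break(txn_id, internal, external):
    """Return the first candidate root cause that explains a break
    between internal and external views of a transaction.
    Fields and category names are illustrative assumptions."""
    ext = external.get(txn_id)
    int_ = internal.get(txn_id)
    if ext is not None and int_ is None:
        return "migration_artifact"       # e.g. a gap from a historical load
    if int_ is None or ext is None:
        return "unexplained"
    if int_.get("settled_on") != ext.get("settled_on"):
        return "timing_difference"        # spans a settlement window
    if int_.get("category") != ext.get("category"):
        return "classification_mismatch"  # internal vs external coding
    return "unexplained"                  # escalate to a human reviewer
```

Ordering matters: the cheap, common explanations are tried first, and only breaks that survive every candidate cause reach a person.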

The agent's output still comes back to an expert for review. Does a variance pattern point to a systemic issue? Does a reconciliation tolerance need tightening? Does a finding change the accounting treatment for a category of transactions? Has something shifted in the underlying data that requires upstream changes? The agent doesn't close the loop. That's still a human job.

The agent’s chase ends in a structured explanation an auditor can follow, with a full trail of the investigation path.

Every cycle makes the process sharper because the knowledge does not disappear after a one-off investigation. It gets fed back into the context the agent uses next time.

Asking the ledger questions

On top of those checks is an application layer. Instead of treating reconciliation as a static report, users can interrogate it: Why did this balance move? What is driving this exception? Which transactions explain this difference?

The agent translates those questions into context-aware analysis and returns an answer tied back to supporting data.

This makes the system feel less like a monthly close checklist and more like an accounting copilot for the platform. Finance can investigate by account, date, or issue type. Operators can see whether an exception is isolated or systemic. 

Meet Luca, the application layer our team interacts with to investigate variances

What the books say about the platform

Lithic built its own card issuing and processing infrastructure because the existing options weren't good enough for the card products the team wanted to create. The accounting system exists for the same reason. You see a structural problem, you design something from first principles, you build it, you validate it against real data. Same instinct. A company that builds this way internally builds this way for its clients.

That matters because the books are not separate from the platform. They are a signal of the platform’s data quality. If the system can connect transaction activity to financial position, explain timing differences, and produce supportable findings when something breaks, that says something meaningful about the infrastructure underneath.

For clients, banks, and auditors, the value is not another report.

We work with dozens of sponsor banks and hear the same story: on legacy issuing stacks, daily reconciliation is a fire drill. Their teams spend countless hours chasing breaks, manually investigating mismatches, and stitching together a “source of truth” across settlement files, ledgers, and bank cores.

With Lithic’s agentic reconciliation system, we can ensure that balances are supported by transaction-level evidence and that exceptions can be investigated without starting from scratch. AI makes that confidence more scalable. It gives the accounting system a way to inspect itself continuously, at a depth humans could not reasonably sustain by hand.

You can learn more about Lithic Ledgering Infrastructure here or reach out to a payments expert to see how Lithic can empower your program.
