Banking & Financial Services 1,800 Employees 14 Weeks

A Regional Bank's Journey from 48-Hour Reporting Lag to Real-Time Fraud Alerts

Fraud detection running on yesterday's data. Three ledger systems that never agreed. 23 SOX audit findings. This is the story of how CDC and Snowflake changed the equation.

The Challenge

The bank's core banking system - Temenos T24 running on Oracle 19c - generated transaction logs that were batch-extracted once daily via a set of PL/SQL procedures. Those procedures ran at 11pm and typically finished around 3am, depending on transaction volume. The fraud detection engine ran against this batch, which meant fraudulent transactions had a 24 to 48 hour head start before anyone noticed.

That's not a theoretical risk. In the 6 months before we got involved, the bank had flagged $1.8M in fraudulent transactions - but most of those flags came after the funds had already moved. The fraud ops team was essentially doing forensics, not prevention.

On the compliance side, things weren't any better. The bank had 3 separate ledger systems: the core T24 ledger, a loan servicing platform, and an investment accounting system. Every month-end, two analysts spent 8 full working days manually reconciling these three sources into a consolidated view. Despite the effort, the most recent SOX audit produced 23 data discrepancy findings. The auditors weren't happy, and neither was the board.

What We Built

We implemented Talend CDC (Change Data Capture) connectors directly against T24's Oracle backend. Instead of waiting for the nightly batch, CDC captures every INSERT, UPDATE, and DELETE from the Oracle redo logs as they happen. Transactions now flow into Snowflake's raw layer within 90 seconds of occurring. The Talend CDC jobs run on a dedicated integration server and push to Snowflake via the Snowflake JDBC driver with batch inserts of 5,000 rows per commit.
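One common shape for the raw landing table that a pipeline like this writes into is sketched below. The column names and types here are illustrative assumptions, not the bank's actual schema; the CDC metadata columns carry the operation type and capture time alongside the business fields.

```sql
-- Illustrative raw-layer landing table; names and types are assumptions.
CREATE TABLE IF NOT EXISTS raw.transactions (
    txn_id        NUMBER         NOT NULL,
    account_id    NUMBER         NOT NULL,
    amount        NUMBER(18,2),
    currency      VARCHAR(3),
    country_code  VARCHAR(2),
    txn_ts        TIMESTAMP_NTZ,   -- when the transaction occurred in T24
    cdc_operation VARCHAR(1),      -- I / U / D, from the Oracle redo log
    cdc_ts        TIMESTAMP_NTZ    -- when CDC captured the change
);
```

Keeping the operation type and capture timestamp in the raw layer makes it possible to audit end-to-end latency and to replay or deduplicate changes downstream.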

On the Snowflake side, we built a fraud scoring pipeline using Streams and Tasks. A Stream on the raw transactions table captures new rows, and a Task running every 60 seconds processes those rows through a series of fraud rules: velocity checks (more than 5 transactions in 10 minutes from the same account), geo-anomaly detection (transactions in two different countries within an hour), and amount threshold alerts (single transactions exceeding 3x the account's 90-day average). When the scoring pipeline flags something, it writes to an alerts table that triggers a webhook to PagerDuty, which pages the fraud ops team.
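The Streams/Tasks pattern described above can be sketched roughly as follows. Object names are illustrative, and only the velocity rule is shown; the production pipeline applies all three rule families before writing alerts.

```sql
-- Illustrative sketch: a Stream over the raw table plus a 1-minute Task.
CREATE OR REPLACE STREAM raw_txn_stream ON TABLE raw.transactions;

CREATE OR REPLACE TASK score_transactions
  WAREHOUSE = fraud_wh
  SCHEDULE  = '1 MINUTE'   -- the 60-second cadence from the design above
WHEN SYSTEM$STREAM_HAS_DATA('raw_txn_stream')
AS
INSERT INTO fraud.alerts (account_id, rule, txn_count, flagged_at)
SELECT s.account_id,
       'velocity',
       COUNT(*),
       CURRENT_TIMESTAMP()
FROM raw_txn_stream s
JOIN raw.transactions t
  ON t.account_id = s.account_id
 AND t.txn_ts >= DATEADD('minute', -10, CURRENT_TIMESTAMP())
GROUP BY s.account_id
HAVING COUNT(*) > 5;   -- velocity rule: more than 5 txns in 10 minutes

ALTER TASK score_transactions RESUME;
```

Because the Task's INSERT reads from the Stream, each run consumes the Stream's pending rows and advances its offset, so the same transactions aren't re-scored on the next wake-up.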

For the reconciliation problem, we built a dbt project with models that join all three ledger sources hourly. The dbt models run on a Snowflake XSMALL warehouse (it's mostly joins and aggregations, not heavy compute). Discrepancies are surfaced automatically in a Slack channel via a Snowflake notification integration. What used to take 8 days now takes about 4 hours of human review - and most of that is investigating the handful of real discrepancies, not hunting for them.
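A dbt model for this kind of three-way reconciliation might look roughly like the following. The source names and join keys are placeholders, not the bank's actual model; the idea is a full outer join across the ledgers that keeps only rows where the balances disagree.

```sql
-- models/recon/ledger_discrepancies.sql (illustrative; sources are placeholders)
WITH core AS (
    SELECT account_id, posting_date, SUM(amount) AS core_balance
    FROM {{ source('t24', 'ledger_entries') }}
    GROUP BY 1, 2
),
loans AS (
    SELECT account_id, posting_date, SUM(amount) AS loan_balance
    FROM {{ source('loan_servicing', 'ledger_entries') }}
    GROUP BY 1, 2
),
invest AS (
    SELECT account_id, posting_date, SUM(amount) AS invest_balance
    FROM {{ source('investment_acct', 'ledger_entries') }}
    GROUP BY 1, 2
)
SELECT COALESCE(c.account_id, l.account_id, i.account_id)     AS account_id,
       COALESCE(c.posting_date, l.posting_date, i.posting_date) AS posting_date,
       c.core_balance,
       l.loan_balance,
       i.invest_balance
FROM core c
FULL OUTER JOIN loans l
  ON l.account_id = c.account_id AND l.posting_date = c.posting_date
FULL OUTER JOIN invest i
  ON i.account_id   = COALESCE(c.account_id, l.account_id)
 AND i.posting_date = COALESCE(c.posting_date, l.posting_date)
-- Keep only rows where any pair of ledgers disagrees (NULL-safe comparison).
WHERE c.core_balance IS DISTINCT FROM l.loan_balance
   OR c.core_balance IS DISTINCT FROM i.invest_balance
```

Filtering to disagreements is what flips the analysts' workload: the model's output is the review queue, so a clean month produces an empty table rather than 8 days of hunting.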

Results

48hrs → 90sec Fraud detection latency
67% Reduction in fraud losses (6 months)
8 days → 4hrs Monthly reconciliation time
Zero SOX findings in next audit

Tech Stack

Talend CDC · Snowflake Streams/Tasks · dbt · Oracle (T24) · PagerDuty

What We Learned

  • CDC against T24's Oracle backend requires careful redo log configuration. The default Oracle redo log retention was set to 24 hours, which meant if the CDC connector went down for more than a day, we'd lose changes. We increased it to 72 hours and set up monitoring on the archive log space to avoid filling the disk. This is the kind of thing that doesn't show up in a proof-of-concept but will absolutely bite you in production.
  • Snowflake Tasks with a 60-second schedule are good enough for fraud detection. We initially prototyped a true streaming approach with Kafka, but the bank's fraud ops team confirmed that anything under 2 minutes was acceptable. The Streams/Tasks approach was dramatically simpler to maintain and cost a fraction of what a Kafka cluster would have cost.
  • Automated reconciliation doesn't eliminate human review - it makes it possible. Before, the analysts were spending 90% of their time finding discrepancies and 10% investigating them. Now it's flipped. The dbt models surface the 15-20 real discrepancies per month automatically, and the analysts spend their time on actual root cause analysis instead of data wrangling.
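The archive-log space monitoring from the first lesson above can start as a simple query against Oracle's recovery-area view; the alerting threshold and wiring into a monitoring agent are left as assumptions.

```sql
-- Oracle: fast recovery area usage by file type. Feed this to your
-- monitoring agent and alert when ARCHIVED LOG usage approaches capacity.
SELECT file_type,
       percent_space_used,
       percent_space_reclaimable
FROM   v$recovery_area_usage
WHERE  file_type = 'ARCHIVED LOG';
```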

Compliance Deadlines Keeping You Up?

Stale fraud data, manual reconciliation, audit findings piling up - we've been in those rooms. Let's talk about what a realistic fix looks like for your setup.

Start a Conversation