
mustafali1970
Regular Visitor

Microsoft Lakehouse Delta Table Logging

In our Microsoft Fabric Lakehouse environment, we are working with a managed Delta table named "Customer_Orders", which includes a primary key column `cust_num` and a timestamp column `RecordDate`. Change Data Feed (CDF) has been enabled using the following configuration:

ALTER TABLE Customer_Orders SET TBLPROPERTIES (delta.enableChangeDataFeed = true)


Our current ingestion process involves extracting Delta Parquet files from SQL Server based on a rolling window defined by `MAX(RecordDate) - 3 days`. This approach occasionally results in duplicate rows being reintroduced into the Lakehouse table. To handle this, we perform a `MERGE` operation using `cust_num` and `RecordDate` as composite keys, with the following logic:

- WHEN MATCHED THEN SKIP (no updates)
- WHEN NOT MATCHED THEN INSERT (new row)
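A minimal Spark SQL sketch of the MERGE described above, assuming the extracted rolling window has been landed as a staging view named `Customer_Orders_Staging` (the staging name is illustrative):

```sql
-- De-duplicating MERGE: cust_num + RecordDate as the composite key.
-- No WHEN MATCHED clause, so already-loaded rows are skipped.
MERGE INTO Customer_Orders AS tgt
USING Customer_Orders_Staging AS src
  ON  tgt.cust_num   = src.cust_num
  AND tgt.RecordDate = src.RecordDate
WHEN NOT MATCHED THEN
  INSERT *  -- requires the staging schema to match the target schema
```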

However, the customer expects Fabric Lakehouse to automatically detect changes across all columns—without requiring explicit key definitions in the `MERGE` clause—and to perform updates or inserts accordingly. This expectation appears to be misaligned with Fabric’s native capabilities.

We are seeking guidance from Microsoft to help clarify the out-of-the-box behavior of Lakehouse-managed Delta tables, particularly around change detection and merge semantics, so we can realign the customer’s understanding and ensure the solution is both technically sound and aligned with platform best practices.

2 ACCEPTED SOLUTIONS
BalajiL
Helper III

Hi, Fabric Lakehouse does not perform automatic change detection across all columns.

A robust ingestion pattern requires defining one or more keys that uniquely identify a record (cust_num + RecordDate in your case).

If the business expects column-by-column change detection, that must be implemented in your ingestion pipeline (e.g., by hashing all columns or comparing snapshots via CDF).

Best practices:

  1. Continue using MERGE with well-defined keys.
  2. If you want “all column” change detection, add a row hash column (MD5/SHA of all fields) and compare hashes during ingestion.
  3. Use CDF for incremental extraction instead of time-window-based logic (this avoids reintroducing duplicates).
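The row-hash approach in point 2 can be sketched as follows. This is an illustrative sketch, not Fabric-specific API: `order_total` and `status` stand in for the table's actual business columns, and it assumes both the target table and the staging view carry a `row_hash` column computed the same way:

```sql
-- Compute a hash over all business columns at staging time
-- (raw_extract, order_total, and status are hypothetical names).
CREATE OR REPLACE TEMP VIEW Customer_Orders_Staging AS
SELECT *,
       sha2(concat_ws('||', cust_num, RecordDate, order_total, status), 256)
         AS row_hash
FROM raw_extract;

-- Update only when at least one column actually changed.
MERGE INTO Customer_Orders AS tgt
USING Customer_Orders_Staging AS src
  ON  tgt.cust_num   = src.cust_num
  AND tgt.RecordDate = src.RecordDate
WHEN MATCHED AND tgt.row_hash <> src.row_hash THEN
  UPDATE SET *   -- some column changed: overwrite the row
WHEN NOT MATCHED THEN
  INSERT *       -- brand-new key: insert the row
```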


Shahid12523
Community Champion

Fabric Lakehouse Delta tables don’t auto-detect all column changes.

CDF gives raw inserts/updates/deletes, but won’t auto-merge.

MERGE always needs defined keys (like cust_num) and explicit update rules.

“All-column automatic change detection” is not supported out-of-the-box—must be coded manually.
👉 Best practice: use CDF + MERGE with business keys.
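The CDF + MERGE pattern can be sketched like this, since CDF is already enabled on `Customer_Orders`. The starting commit version `5` is purely illustrative; in practice the last processed version would be persisted between pipeline runs:

```sql
-- Read only rows that changed since the last processed commit version.
-- update_postimage is the row's state after an update.
SELECT cust_num, RecordDate, _change_type, _commit_version
FROM table_changes('Customer_Orders', 5)
WHERE _change_type IN ('insert', 'update_postimage');
```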

Shahed Shaikh


