In our Microsoft Fabric Lakehouse environment, we are working with a managed Delta table named `Customer_Orders`, which includes a primary key column `cust_num` and a timestamp column `RecordDate`. Change Data Feed (CDF) has been enabled using the following configuration:

```sql
ALTER TABLE Customer_Orders SET TBLPROPERTIES (delta.enableChangeDataFeed = true);
```
Our current ingestion process extracts delta Parquet files from SQL Server based on a rolling window defined by `MAX(RecordDate) - 3 days`. This approach occasionally reintroduces duplicate rows into the Lakehouse table. To handle this, we perform a `MERGE` operation using `cust_num` and `RecordDate` as a composite key, with the following logic (see the sketch after this list):
- WHEN MATCHED: skip (no updates)
- WHEN NOT MATCHED: insert the new row
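A minimal sketch of that MERGE in Spark SQL is below; `staged_orders` is an assumed name for a staging table or view holding the newly extracted rows, not something from the original post:

```sql
-- Sketch of the dedup-on-ingest MERGE described above (Spark SQL against a Lakehouse Delta table).
-- Assumption: staged_orders is a staging table/view containing the latest extract.
MERGE INTO Customer_Orders AS tgt
USING staged_orders AS src
  ON  tgt.cust_num   = src.cust_num
  AND tgt.RecordDate = src.RecordDate
-- No WHEN MATCHED clause: matched rows are skipped (no updates), as in the logic above.
WHEN NOT MATCHED THEN
  INSERT *
```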
However, the customer expects Fabric Lakehouse to automatically detect changes across all columns—without requiring explicit key definitions in the `MERGE` clause—and to perform updates or inserts accordingly. This expectation appears to be misaligned with Fabric’s native capabilities.
We are seeking guidance from Microsoft to help clarify the out-of-the-box behavior of Lakehouse-managed Delta tables, particularly around change detection and merge semantics, so we can realign the customer’s understanding and ensure the solution is both technically sound and aligned with platform best practices.
Hi, Fabric Lakehouse does not perform automatic change detection across all columns.
A robust ingestion pattern requires defining one or more keys that uniquely identify a record (`cust_num` + `RecordDate` in your case).
If the business expects column-by-column change detection, that must be implemented in your ingestion pipeline (e.g., by hashing all columns or comparing snapshots via CDF); a sketch of the hashing approach follows.
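A minimal sketch of the column-hashing approach, again assuming a staging view named `staged_orders` and using `order_status` and `order_total` as illustrative stand-ins for the real non-key columns:

```sql
-- Sketch: detect changes across all non-key columns by comparing a hash of their values.
-- Assumptions: staged_orders is a staging view; order_status and order_total are placeholders
-- for the actual non-key columns of Customer_Orders.
MERGE INTO Customer_Orders AS tgt
USING staged_orders AS src
  ON  tgt.cust_num   = src.cust_num
  AND tgt.RecordDate = src.RecordDate
WHEN MATCHED AND
     sha2(concat_ws('||', tgt.order_status, CAST(tgt.order_total AS STRING)), 256)
  <> sha2(concat_ws('||', src.order_status, CAST(src.order_total AS STRING)), 256) THEN
  UPDATE SET *
WHEN NOT MATCHED THEN
  INSERT *
```

Hashing keeps the MERGE condition stable as columns are added (only the hash expression changes), at the cost of recomputing the hash on every run.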
Best practices
- Fabric Lakehouse Delta tables don’t auto-detect all column changes.
- CDF gives you raw inserts/updates/deletes, but won’t auto-merge them.
- MERGE always needs defined keys (like `cust_num`) and explicit update rules.
- “All-column automatic change detection” is not supported out of the box; it must be coded manually.

👉 Best practice: use CDF + MERGE with business keys (a sketch of reading the change feed is below).
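As a hedged sketch, reading the change feed since a given table version can look like this; version `5` is a placeholder, and the `table_changes` table-valued function is assumed to be available in your Spark SQL runtime (otherwise the equivalent DataFrame reader with the `readChangeFeed` option can be used):

```sql
-- Sketch: pull row-level changes from the Change Data Feed since table version 5 (placeholder).
-- Assumption: the table_changes table-valued function is exposed by your Spark SQL runtime.
SELECT cust_num, RecordDate, _change_type, _commit_version, _commit_timestamp
FROM table_changes('Customer_Orders', 5)
WHERE _change_type IN ('insert', 'update_postimage');
```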