Change Data Feed (CDF) on Mirror Database: Expose Delta Lake table_changes Function to T-SQL

As part of enabling Change Data Feed (CDF) on Fabric mirror databases, please consider exposing the Delta Lake table_changes table-valued function so that it can be invoked directly through the Lakehouse T-SQL endpoint.

This functionality would allow users to query incremental changes using familiar T-SQL syntax, improving integration with downstream systems and simplifying change tracking workflows.

 

Example Usage:

-- Retrieve changes since version 2
SELECT * FROM table_changes('myschema.mytable', 2);

-- Retrieve changes since a specific timestamp
SELECT * FROM table_changes('myschema.mytable', '2025-11-14T12:00:00.000+0000')
ORDER BY _commit_version;

 

Why this would be helpful:

  • Enables consistent access to CDF data without requiring Spark or Python.
  • Supports real-time analytics and ETL scenarios using standard SQL tools.
  • Aligns with Fabric’s goal of providing unified query experiences across engines.
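To illustrate the ETL scenario above, here is a hedged sketch of a downstream pattern the exposed function would enable (table and schema names are illustrative; the `_change_type` values shown are the standard Delta Lake CDF change types):

-- Pull only inserted rows and post-update images since version 2,
-- e.g. to feed an incremental load into a reporting table
SELECT *
FROM table_changes('myschema.mytable', 2)
WHERE _change_type IN ('insert', 'update_postimage');

Because CDF also surfaces `_commit_version` and `_commit_timestamp`, a consumer could persist the last version it processed and resume from there on the next run.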

Related idea: Fabric-Ideas/Enable-Change-Data-Feed-CDF-on-a-Mirror-Database/idi-p/4500759

Status: New
Comments
rocketporg
Advocate I

This would be good... in fact I'd go further: CDF should be able to be enabled on all Delta tables under the hood, wherever it isn't something you can do natively with code like you can in the Lakehouse with PySpark. So for Warehouses, SQL databases, etc. too.
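For context on the comment above, this is a sketch of how CDF is enabled and queried today via the Spark engine in a Lakehouse (table name is illustrative; `delta.enableChangeDataFeed` is the standard Delta Lake table property):

-- Enable CDF on an existing Delta table (Spark SQL today)
ALTER TABLE myschema.mytable
SET TBLPROPERTIES (delta.enableChangeDataFeed = true);

-- Reading changes then works through the same function this idea
-- proposes exposing via the T-SQL endpoint
SELECT * FROM table_changes('myschema.mytable', 2);

For engines without a Spark surface (Warehouses, SQL databases), there is no equivalent switch today, which is what the comment is asking to be handled under the hood.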