Anonymous
Not applicable

Unable to enable backfill in OneLake availability for KQL DB

Hi community,

I have a KQL DB with many large tables containing historical monitoring data. To use Delta Lake based reports, I enabled OneLake availability (also called mirroring) on all DB tables. Mirroring works fine for recently ingested data, but not for old data. I tried to enable the backfill option by disabling mirroring (`.alter-merge table MyTable policy mirroring kind=delta with (IsEnabled=false)`) and then re-enabling it with backfill (`.alter-merge table MyTable policy mirroring kind=delta with (IsEnabled=true, Backfill=true)`), but mirroring remains disabled as long as the `Backfill=true` parameter is present. When I enable it through the GUI (the Availability switch), backfill is automatically disabled.

 

I'm completely struggling with this. Reingesting all tables could be a solution, but that would be very challenging as well because the tables are extremely large (hundreds of millions to billions of rows, with update policies enabled on most of the tables).

Any help or guidance on this issue would be much appreciated.

Thank you

1 ACCEPTED SOLUTION
nilendraFabric
Community Champion

Hello @Anonymous 


"When you turn availability back on, only new data is made available in OneLake with no backfill of the deleted data”

 

https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-house-onelake-availability

 

 

The mirroring policy of the `.alter-merge table` command does not support a `Backfill=true` option. Attempting to force it via KQL or the GUI will fail because Microsoft has not implemented this functionality.

 

 

Try using `spark.read.format("kusto")` in a Fabric notebook to export the historical data from the KQL database to OneLake as Delta tables, as sketched below.
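
A minimal sketch of that export, assuming a Fabric notebook attached to the destination Lakehouse (where `spark` is predefined) and the Kusto Spark connector available in the runtime; the cluster URI, database and table names, the option names (`kustoCluster`, `kustoDatabase`, `kustoQuery`, `accessToken`), and the token call are assumptions to verify against the connector documentation:

```python
# Sketch: copy one historical KQL table into a Lakehouse Delta table in OneLake.
# All names below are placeholders, not values from the original post.
from notebookutils import mssparkutils  # assumption: Fabric notebook utilities are available

kusto_uri = "https://<your-eventhouse-query-uri>"  # placeholder: Eventhouse query URI
kusto_db = "MyKqlDatabase"                         # placeholder: KQL database name
table = "MyTable"                                  # placeholder: table to backfill

df = (
    spark.read.format("kusto")                     # connector name as suggested above
    .option("kustoCluster", kusto_uri)
    .option("kustoDatabase", kusto_db)
    # Read the whole table; for billions of rows, replace this with a
    # time-bounded KQL query and loop over ingestion-time windows.
    .option("kustoQuery", table)
    .option("accessToken", mssparkutils.credentials.getToken(kusto_uri))  # assumption: token auth
    .load()
)

# Land the history as a Delta table in the Lakehouse, which is stored in OneLake.
df.write.format("delta").mode("append").saveAsTable(table)
```

Exporting per time window keeps each Spark job manageable and lets the backfill resume if a run fails; once the history has landed, the regular OneLake availability mirroring continues to handle newly ingested data.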

 

Alternatively, ingest new data into a Lakehouse (using Eventstreams or Dataflows), then expose it to KQL via shortcuts. This ensures all data resides in OneLake while still enabling real-time queries in KQL.

If this is helpful, please accept the answer and give kudos.

 

 

 

"

View solution in original post

