Hi community,
I have a KQL database with many large tables containing historical monitoring data. To build Delta Lake based reports, I enabled OneLake availability (also called mirroring) on all of the database's tables. Mirroring works fine for recently ingested data, but not for old data. I tried to enable backfill by disabling mirroring (`.alter-merge table MyTable policy mirroring kind=delta with (IsEnabled=false)`) and then re-enabling it with backfill (`.alter-merge table MyTable policy mirroring kind=delta with (IsEnabled=true, Backfill=true)`), but mirroring stays disabled as long as the `Backfill=true` parameter is present. When I enable it through the GUI (the Availability switch), backfill is automatically disabled.
I'm completely stuck on this. Reingesting all tables could be a solution, but that would also be very challenging because the tables are extremely large (from hundreds of millions to billions of rows, with update policies enabled on most of them).
Any help or guidance on this issue would be much appreciated.
Thank you
Hello @Anonymous
"When you turn availability back on, only new data is made available in OneLake with no backfill of the deleted data”
https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-house-onelake-availability
The `.alter-merge table` mirroring policy command does not support a `Backfill=true` option. Attempting to force it via KQL or the GUI will fail because Microsoft has not implemented this functionality.
Instead, try using `spark.read.format("kusto")` in a Fabric notebook to export the historical data from KQL to OneLake as Delta tables.
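A minimal PySpark sketch of that export, assuming a Fabric notebook with an attached Lakehouse; the cluster URI, database, table, and target table names are placeholders, and the fully qualified connector format string is the one commonly shown for the Kusto Spark connector in Fabric notebooks:

```python
# Sketch only: copy historical rows from a KQL table into a Lakehouse Delta table (OneLake).
from notebookutils import mssparkutils

kusto_uri = "https://<your-eventhouse>.kusto.fabric.microsoft.com"  # placeholder query URI
database  = "MyDatabase"   # placeholder KQL database name
table     = "MyTable"      # placeholder source table

# Authenticate with the notebook's identity
access_token = mssparkutils.credentials.getToken(kusto_uri)

# Read the historical data; narrow the query (e.g. by ingestion_time) to batch very large tables
df = (spark.read
      .format("com.microsoft.kusto.spark.synapse.datasource")
      .option("kustoCluster", kusto_uri)
      .option("kustoDatabase", database)
      .option("kustoQuery", f"{table} | where ingestion_time() < ago(30d)")
      .option("accessToken", access_token)
      .load())

# Write the result as a Delta table in the attached Lakehouse, i.e. into OneLake
df.write.format("delta").mode("append").saveAsTable("MyTable_history")
```

For billions of rows, run the read in several passes with non-overlapping time filters rather than one huge query.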
Going forward, ingest new data into a Lakehouse (using Eventstreams or Dataflows), then fork it to KQL via shortcuts. This ensures all data resides in OneLake while still enabling real-time queries in KQL.
If this is helpful, please accept the answer and give kudos.
"