Hi community,
I have a KQL DB with many large tables containing historical monitoring data. To use Delta Lake based reports, I enabled OneLake availability (also called mirroring) on all tables in the database. Mirroring works fine for recently ingested data, but not for old data. I tried to enable backfill by disabling mirroring (`.alter-merge table MyTable policy mirroring kind=delta with (IsEnabled=false)`) and then re-enabling it with backfill (`.alter-merge table MyTable policy mirroring kind=delta with (IsEnabled=true, Backfill=true)`), but mirroring remains disabled as long as the `Backfill=true` parameter is present. When I enable it through the GUI (the Availability switch), backfill is automatically disabled.
I'm completely stuck on this. Re-ingesting all tables could be a solution, but that would be very challenging as well because the tables are extremely large (hundreds of millions to billions of rows, with update policies enabled on most of them).
Any help or guidance on this issue would be much appreciated.
Thank you
Hello @Anonymous
"When you turn availability back on, only new data is made available in OneLake with no backfill of the deleted data”
https://learn.microsoft.com/en-us/fabric/real-time-intelligence/event-house-onelake-availability
The `.alter-merge table` command does not support a `Backfill=true` option; attempting to force it via KQL or the GUI fails because Microsoft has not implemented this functionality.
Try using `spark.read.format("kusto")` in a Fabric notebook to export the historical data from the KQL database to OneLake as Delta tables (see the sketch below).
Going forward, ingest new data into a Lakehouse (using Eventstreams or Dataflows), then expose it to KQL via shortcuts. This ensures all data resides in OneLake while still enabling real-time queries in KQL.
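As a minimal sketch of the export approach (assuming a Fabric notebook attached to a Lakehouse, the Kusto Spark connector bundled with the Fabric runtime, and placeholder cluster/database/table names):

```python
# Sketch: copy historical rows from a KQL table into a Delta table in the
# attached Lakehouse so they are available in OneLake.
# All names/URIs below are placeholders for illustration.
from notebookutils import mssparkutils  # Fabric notebook utilities

kusto_uri = "https://<your-eventhouse>.kusto.fabric.microsoft.com"  # placeholder
database  = "MyKqlDb"                                               # placeholder
table     = "MyTable"                                               # placeholder

# Token for the notebook's identity, scoped to the Kusto endpoint
access_token = mssparkutils.credentials.getToken(kusto_uri)

# Read only the historical slice; recent data is already mirrored, so push
# a time filter into the Kusto query (adjust the cutoff to your case).
df = (spark.read
      .format("com.microsoft.kusto.spark.synapse.datasource")
      .option("kustoCluster", kusto_uri)
      .option("kustoDatabase", database)
      .option("kustoQuery", f"{table} | where ingestion_time() < datetime(2025-01-01)")
      .option("accessToken", access_token)
      .load())

# Write the rows as a Delta table in the attached Lakehouse
(df.write
   .format("delta")
   .mode("append")
   .saveAsTable("MyTable_history"))
```

For tables with billions of rows, run the export in batches (for example, one month per run, filtered on your timestamp column) rather than as a single read, so each Spark job stays manageable.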
If this is helpful, please accept the answer and give kudos.
"