Dear all,
I have a problem with fast-growing storage consumption in a Fabric Data Warehouse.
I have tables which I update with a full overwrite every day. Hence, completely new Parquet files are written every day.
According to the documentation, https://learn.microsoft.com/en-us/fabric/data-warehouse/time-travel, old, expired files should be deleted after the retention period of seven days.
However, I can still find all versions of the tables as Parquet files via the OneLake file explorer.
Is there an explanation for why the old files are not deleted, or is there anything I can do to trigger some kind of VACUUM on the tables (as in a Lakehouse)?
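For anyone who wants to check the same thing from a notebook instead of the OneLake file explorer, here is a minimal sketch of how the files under one table folder can be listed (the abfss path is a placeholder for the real workspace, warehouse and table names, and it assumes mssparkutils is available in the Fabric notebook):

```python
# Minimal sketch: list the Parquet files OneLake still holds for one Warehouse
# table, to see how many historical versions remain on disk.
# The abfss path is a placeholder and must be replaced with real item names.
from notebookutils import mssparkutils  # available in Fabric notebooks

table_path = (
    "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/"
    "<warehouse item>/Tables/dbo/<table>"
)

files = mssparkutils.fs.ls(table_path)
parquet_files = [f for f in files if f.name.endswith(".parquet")]

print(f"{len(parquet_files)} Parquet files, "
      f"{sum(f.size for f in parquet_files) / 1024**3:.2f} GiB in total")
```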
Best regards
Jan
@foodd ,
thanks for your response. However, as far as I know, the VACUUM command is only available for Lakehouses, since Warehouses cannot be modified via Spark code.
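For comparison, this is the kind of VACUUM that works for a Lakehouse table from a Spark notebook (a minimal sketch with a placeholder table name); as far as I can tell, nothing comparable can be run against a Warehouse table:

```python
# Minimal sketch of a Lakehouse-style VACUUM run from a Spark notebook.
# The table name is a placeholder for a Delta table in a Lakehouse.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Only needed if a retention shorter than the default 7 days (168 hours)
# should be allowed.
spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")

# Delete files that are no longer referenced by the Delta log and are older
# than the given retention window.
spark.sql("VACUUM my_lakehouse_table RETAIN 168 HOURS")
```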
@JanL , agreed. You may find it of benefit to post this case over to r/MicrosoftFabric
@JanL ,
Remove old files with the Delta Lake Vacuum Command | Delta Lake
Table utility commands — Delta Lake Documentation
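For reference, the Python-API form of the vacuum described in the second link looks roughly like this (a minimal sketch; the table path is a placeholder, and it assumes the table is a Delta table reachable from Spark, i.e. a Lakehouse table rather than a Warehouse one):

```python
# Minimal sketch of the vacuum call from the Delta Lake table utility docs,
# using the Python API instead of SQL. The table path is a placeholder.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

delta_table = DeltaTable.forPath(spark, "Tables/my_table")  # placeholder path

# Remove files no longer referenced by the Delta log that are older than
# 168 hours (7 days).
delta_table.vacuum(168)
```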
You may find it of benefit to post this case over to r/MicrosoftFabric