We are pushing data to a Delta table on OneLake, managed by an external service using the open-source delta-rs library. After creating a shortcut to this table, we are using a Direct Lake connection to visualize the data in Power BI. This works really well, and we find it to be a major upgrade over our existing pipelines based on DirectQuery mode.
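For context, the write path looks roughly like this. A minimal sketch using the deltalake Python bindings of delta-rs; the workspace, lakehouse, and table names are placeholders, and the bearer-token auth is just one option:

# Minimal sketch of the external writer using the deltalake (delta-rs) bindings.
# All names and the auth option below are placeholders, not our real setup.
import pandas as pd
from deltalake import write_deltalake

table_uri = (
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
    "MyLakehouse.Lakehouse/Tables/sensor_readings"
)

df = pd.DataFrame({"ts": [pd.Timestamp.utcnow()], "value": [42.0]})

# Each append writes new parquet files plus one JSON commit in _delta_log/
write_deltalake(
    table_uri,
    df,
    mode="append",
    storage_options={
        "bearer_token": "<AAD token>",  # one possible auth option
        "use_fabric_endpoint": "true",  # route to the OneLake endpoint
    },
)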
However, we recently discovered that this semantic model puts a surprisingly heavy load on our capacity: about 300,000 CUs over the last 14 days. User interaction with the Power BI reports has been minimal, averaging less than one user per day. This is really unexpected, given that no Fabric resources are used to update or maintain the Delta table.
From my understanding, all that Fabric needs to do to sync the table is load the contents of the _delta_log folder. This corresponds to reading about 100 JSON files and a parquet checkpoint file. We do not do any data caching. I recently turned off syncing of the default semantic model, which reduced the load by 50%.
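To make the expected cost concrete, here is a sketch of what a log sync has to read according to the Delta log protocol, illustrated against a local copy of the log; the path is a placeholder:

# What a metadata sync reads, per the Delta log protocol: the _last_checkpoint
# pointer, the checkpoint parquet it names, and every JSON commit after it.
import json
from pathlib import Path

log_dir = Path("/data/sensor_readings/_delta_log")  # placeholder path

# _last_checkpoint names the most recent checkpoint version
last_cp = json.loads((log_dir / "_last_checkpoint").read_text())
cp_version = last_cp["version"]

# Commit files are named by zero-padded version, e.g. 00000000000000000123.json
pending = sorted(p for p in log_dir.glob("*.json") if int(p.stem) > cp_version)
print(f"checkpoint at version {cp_version}, {len(pending)} commits to replay")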
Any suggestions on how I can reduce the load on the capacity? Can we hope for some improvements on the Fabric side in the future?
We have now successfully reduced the load on the Fabric capacity to a low level, down from roughly 25-50% utilization. The following steps were implemented:

- Disabled the "Keep your Direct Lake data up to date" setting on the semantic model, so framing is no longer triggered on every table change.
- Set up a function app that triggers a semantic model refresh via the REST API every 5 minutes.
- Disabled the default semantic model.

Every refresh of the semantic model now takes about 20 seconds, so the capacity spends only about 20 seconds out of every 300 refreshing the model. Note that this solution introduces some additional delay between the data source and the report: up to 5 minutes.

Disabling the default semantic model had a major impact on the capacity load. Exactly what is going on in the background with this model is unclear.

For our use case, triggering a model refresh only from Power BI or another downstream load would be the ideal solution.
Thanks for the good answer, @v-veshwara-msft.
In the meantime, we have done more research on this case. We are updating the table about 20 times per minute, which means that even if the framing operation lasts only a couple of seconds, it will be running more or less continuously. Hence the high CU usage.
Our proposed solution is to disable the "Keep your Direct Lake data up to date" setting, hoping this stops the framing operation that is triggered on every table change. We will instead set up a scheduled refresh every 5 minutes. Our theory is that this will reduce the number of framing operations by a factor of 100, which should reduce the CU usage accordingly.
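The expected reduction is simple arithmetic; a quick sanity check of our own numbers:

# Sanity check of the factor-100 estimate
updates_per_minute = 20                    # each write currently triggers a framing
framings_per_hour_now = updates_per_minute * 60
framings_per_hour_scheduled = 60 / 5       # one scheduled refresh per 5 minutes
print(framings_per_hour_now / framings_per_hour_scheduled)  # 100.0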
Will update the post once we know the results.
Using a scheduled refresh does not work in our case; see the following error message. Only 30-minute intervals are allowed:
{
  "error": {
    "code": "InvalidRequest",
    "message": "Refresh schedule time must be full or half hour (HH:00 or HH:30)"
  }
}
The next step is to create a function app that updates the semantic model every 5 minutes via a REST call. Will keep updating...
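For anyone following along, the call itself is simple. A sketch of the refresh trigger using the Power BI REST API with a service principal; all IDs are placeholders, and the timer hosting and auth flow would depend on your environment:

# Sketch of the 5-minute refresh trigger: get an AAD token for a service
# principal and POST to the Power BI dataset refresh endpoint.
# All IDs and secrets below are placeholders.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-id>",
    client_secret="<secret>",
)
token = credential.get_token("https://analysis.windows.net/powerbi/api/.default")

workspace_id = "<workspace-id>"
model_id = "<semantic-model-id>"

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}"
    f"/datasets/{model_id}/refreshes",
    headers={"Authorization": f"Bearer {token.token}"},
    json={"type": "full"},  # for a Direct Lake model this re-frames it
    timeout=30,
)
resp.raise_for_status()  # 202 Accepted means the refresh was queued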
Thanks for sharing the detailed resolution and steps you implemented. This is very helpful and will definitely benefit other community members facing similar capacity load issues with Direct Lake mode.
It's great to see that turning off the sync options and managing the semantic model refresh through an external timer-based approach significantly reduced the CU consumption. Your note about the impact of the default semantic model is particularly useful and highlights how background processes can affect capacity even when user activity is low.
Appreciate you taking the time to follow up with these insights.
Please continue to use the Fabric Community for any further queries or discussions.
Hi @mrtn,
Thanks for posting in Microsoft Fabric Community.
Based on your description, the high CU usage seems to be linked to how the semantic model interacts with the Delta table metadata via the shortcut, particularly through repeated access to the _delta_log folder in Direct Lake mode. Even without frequent report usage, the model might still be performing background operations such as schema validation or metadata refresh.
You've already seen a reduction by turning off sync for the default model, which confirms that metadata sync activity contributes significantly to the load. Some additional optimizations that can help reduce CU usage further include reviewing dataset settings in the workspace to ensure there's no scheduled refresh, auto page refresh, or model validation being triggered automatically. Also, simplifying the semantic model by limiting calculated columns, relationships, and expensive measures can help reduce background processing.
From the Delta table side, enabling frequent checkpointing on the delta writer helps limit the number of JSON files in the _delta_log folder that need to be scanned, reducing metadata read overhead. Compacting small files and maintaining a clean log structure can also reduce CU usage.
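If you control the writer, both operations can be done with the same delta-rs bindings. A minimal sketch, assuming the deltalake Python package and placeholder paths and auth:

# Periodic maintenance from the writer side with the deltalake (delta-rs)
# bindings. A checkpoint caps how many JSON commits a reader must replay;
# compaction merges the small parquet files produced by frequent appends.
from deltalake import DeltaTable

dt = DeltaTable(
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
    "MyLakehouse.Lakehouse/Tables/sensor_readings",  # placeholder URI
    storage_options={"bearer_token": "<AAD token>", "use_fabric_endpoint": "true"},
)

dt.create_checkpoint()    # write a checkpoint parquet at the current version
dt.optimize.compact()     # merge small files into larger ones
dt.vacuum(dry_run=False)  # delete unreferenced files past the retention window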
To further understand where the CU is being consumed, you can review usage patterns using the Fabric Capacity Metrics app, which provides detailed insights into resource consumption by individual items like semantic models and refresh activities.
This blog post outlines practical tips to optimize Direct Lake mode usage and might be helpful in your case:
https://community.fabric.microsoft.com/t5/Power-BI-Community-Blog/Optimizing-Direct-Lake-Mode-in-Mic...
Hope this helps. Please reach out for further assistance.
If this post helps, then please consider giving it kudos and accepting it as the solution to help other members find it more quickly.
Thank you.