I have performance issues with some of my Power BI reports; some calculated measures and drilldown/drillup take a very long time.
The reports are sourced from a semantic model.
I am looking for some guidance/references on how best to improve this.
My semantic model is sourced from tables in a warehouse (I am hoping to use Direct Lake mode, but I am not really sure how to verify that).
I have one big semantic model. Does it matter for my performance issues?
In theory, I think the size of the model (in terms of number of tables) should not matter so much.
Because Direct Lake only loads the columns it needs from the model (or rather, from the underlying delta tables) when a user interacts with the visuals in the report. This is called paging. The column data gets loaded into memory, which makes subsequent queries faster. However, when a report accessing Direct Lake data is opened first thing in the morning, the data in the delta table columns needs to be loaded into memory, which takes some time. It will be faster when fewer columns are used in the visuals on a report page. Possibly, you can also schedule semantic-link in a Notebook to query the most used columns of the semantic model in the morning (pre-warming the Direct Lake semantic model's columns) before people open the reports.
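Here is a minimal sketch of that pre-warming idea, assuming a Fabric notebook where semantic-link (sempy) is available; the model name, workspace name, and the columns in the DAX strings are placeholders you would swap for your own most used columns:

# Pre-warm a Direct Lake semantic model by touching its most used columns.
# Assumes this runs in a Fabric notebook with semantic-link (sempy) installed.
import sempy.fabric as fabric

DATASET = "My Direct Lake Model"   # placeholder: your semantic model name
WORKSPACE = "My Workspace"         # placeholder: your workspace name

# Placeholder DAX queries referencing the columns your reports use most,
# so the engine pages those column segments into memory ahead of time.
warmup_queries = [
    'EVALUATE SUMMARIZECOLUMNS(Sales[OrderDate], "Amount", SUM(Sales[Amount]))',
    'EVALUATE TOPN(1, VALUES(Customer[CustomerName]))',
]

for dax in warmup_queries:
    # evaluate_dax returns a pandas DataFrame; the side effect of loading
    # the referenced columns into memory is what we are after here.
    fabric.evaluate_dax(dataset=DATASET, dax_string=dax, workspace=WORKSPACE)

print("Warm-up queries completed.")

You could then schedule this notebook to run early in the morning, before people open the reports.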
Limiting the number of columns used in the visuals on a report page will probably help a lot.
Also, lakehouse (or warehouse) tables with many rows (tens or hundreds of millions of rows) will probably be slower than lakehouse tables with fewer rows.
https://www.sqlbi.com/blog/marco/2024/04/06/direct-lake-vs-import-mode-in-power-bi/
Because you said you are using Warehouse as the source, perhaps it helps to run some SQL SELECT queries against the tables frequently, so the engine gets a chance to optimize (compact) the underlying parquet files (see the sketch after the links below).
https://youtu.be/uKgXKAw00XM?si=TumYZhewI_25GHtr
(this is a great video from the Warehouse product team)
https://blog.fabric.microsoft.com/blog/announcing-automatic-data-compaction-for-fabric-warehouse/
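If you want to script those periodic queries, here is a rough sketch assuming you connect to the Warehouse SQL connection string with pyodbc and Entra ID authentication; the server, database, and table names are placeholders:

# Run lightweight SELECT queries against Warehouse tables on a schedule,
# giving the engine an opportunity to evaluate/compact the underlying parquet files.
import pyodbc

# Placeholders: copy the SQL connection string from the Warehouse settings in Fabric.
SERVER = "<your-endpoint>.datawarehouse.fabric.microsoft.com"
DATABASE = "MyWarehouse"

conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    f"Server={SERVER};Database={DATABASE};"
    "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
)

tables = ["dbo.FactSales", "dbo.DimCustomer"]  # placeholder table names

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    for table in tables:
        # A cheap query that still touches the table.
        cursor.execute(f"SELECT COUNT(*) FROM {table};")
        print(table, cursor.fetchone()[0])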
Perhaps some of these general performance considerations for Warehouse also apply:
https://learn.microsoft.com/en-us/fabric/data-warehouse/guidelines-warehouse-performance
You can use the Direct Lake Only setting to verify that your queries are indeed using Direct Lake.
(You can also open the report in Power BI Desktop and use Performance Analyzer to see if DirectQuery is being used, or run queries in DAX Studio to verify the same.)
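Another way to test this from a notebook, sketched below assuming semantic-link (sempy) and placeholder model/column names: with the fallback behavior set to Direct Lake Only, a query that would need DirectQuery should fail with an error instead of silently falling back.

# Probe a Direct Lake semantic model whose fallback behavior is set to "Direct Lake Only".
import sempy.fabric as fabric

DATASET = "My Direct Lake Model"   # placeholder semantic model name

# Placeholder query against a column that should be served from Direct Lake.
probe = 'EVALUATE ROW("SalesAmount", SUM(Sales[Amount]))'

try:
    result = fabric.evaluate_dax(dataset=DATASET, dax_string=probe)
    print("Query succeeded without falling back:")
    print(result)
except Exception as error:
    # With "Direct Lake Only" set, a query that needs DirectQuery fails instead.
    print("Query failed - a DirectQuery fallback was probably required:", error)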
Be sure to use a custom (new) direct lake semantic model. Don't use the default direct lake semantic model.
Thank you.
I have a couple of follow-up questions:
1)
Looking at one of my tables in the semantic model:
the table is sourced from a view in the warehouse (as I understand it, Direct Lake is not supported for views). But why does the property (Storage Mode) say "Direct Lake"?
2)
If I have a model with a mix of Direct Lake (tables) and DirectQuery (views),
will queries that only consume Direct Lake objects use Direct Lake, or is it all or nothing for the semantic model?
Hi @joakimfenno ,
frithjof_v answered the other questions very well, I only have one small point to add.
Direct Lake mode, based on loading parquet format files directly from a data lake, is a semantic modeling feature used to analyze very large data volumes in Power BI.
While Direct Lake can be used with views, queries to view-based model tables will always fall back to Direct Query mode, which can lead to reduced query performance. This is why the Storage Mode property is displayed as “Direct Lake”.
Develop Direct Lake semantic models - Microsoft Fabric | Microsoft Learn
Best Regards,
Yang
Community Support Team
If any post helps, then please consider accepting it as the solution to help the other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
1) I don't know, perhaps a bug in the user interface?
2) I believe only visuals which use the View will fall back to DirectQuery. Other visuals should be unaffected and keep using Direct Lake.
This can be tested: if you turn on "Direct Lake Only", you will see the DirectQuery visuals failing.
Or you can open the reports in Power BI Desktop and use Performance Analyzer to check for DirectQuery activity.
If some visuals fall back to DirectQuery, then that is probably the biggest performance issue.
Here I am using Performance Analyzer in Power BI Desktop to query a table and also a similar view. The view causes a fallback to DirectQuery.
Thanks. I get results like the ones below, even though all the objects accessed by the query are tables.
Perhaps the query is just too big, so it falls back to DirectQuery.
Did you enable SQL granular permissions on any of the tables? In that case, it may fall back to DirectQuery.
Did you try using the "Direct Lake Only" setting?
I tried Direct Lake Only, but that gives me an error message even though this specific report/visual does not consume anything other than tables.
No granular permissions are used (only full permission at the model level).
It seems your queries fall back to DirectQuery.
Does it fall back even if you make a report page with a very simple visual (a card visual which just shows the sum of a column)?
yes, seems like that
That's strange. What if you create a new, small table in the same Warehouse, then add that table to the Direct Lake semantic model and query it from Power BI? It should use Direct Lake.
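A rough sketch of that test, assuming the same kind of pyodbc connection to the Warehouse as in the earlier sketch; the table name and values are placeholders, and adding the new table to the custom Direct Lake semantic model is still a manual step:

# Create and populate a tiny test table in the Warehouse to isolate the fallback problem.
import pyodbc

conn_str = "<your Warehouse SQL connection string>"  # placeholder

ddl = """
CREATE TABLE dbo.DirectLakeTest (Id INT, Label VARCHAR(50));
INSERT INTO dbo.DirectLakeTest VALUES (1, 'hello'), (2, 'world');
"""

with pyodbc.connect(conn_str, autocommit=True) as conn:
    conn.cursor().execute(ddl)

# Next steps (manual): add dbo.DirectLakeTest to the custom Direct Lake semantic model
# and query it from a simple card visual. With "Direct Lake Only" turned on,
# the visual should still work if Direct Lake is functioning for plain tables.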