KarolinaKa
Frequent Visitor

Live connection/Direct Query mode makes reports very slow

Hello PBI community!

My company decided to build a DWH in Fabric and have me create reports against it, instead of using Import mode from SQL Server as we did before, with the goal of improving report performance.

 

Now that I've created the Fabric DWH and built a small report (sales data for only 15 days, not a year or two) using views from the DWH, plus some additional measures... the report is very slow. The published report takes ~1-2 minutes to display every visual, even though it is really small.

 

I tried a semantic model of all the DWH tables, then a semantic model of only the 6 tables needed for this report, then limited those 6 tables to a couple of weeks of data instead of two years, then switched from a live connection to DirectQuery... Still roughly the same ~2 minute load time for the visuals (slightly faster with DirectQuery, but still far too slow).

 

What could be the main reason?

 

Complex measures? F4 Capacity?

 

How do you handle this kind of situation? Our hope of getting smaller report files by moving away from Import mode has kind of crashed... 😄

 

What could be the solution to make these DWH-based reports faster? Buying more capacity (does that even help)? Not writing measures at all and calculating everything in the DWH views (that isn't the right solution for our complex, tool-like reports)?

 

Looking for help from anyone who uses live or DirectQuery connections to a Fabric Data Warehouse or Lakehouse!


7 REPLIES
djweatherstone
Regular Visitor

Your semantic model is falling back to DirectQuery because the model is based on views from your DWH.

 

"When a Direct Lake semantic model table connects to a view in the SQL analytics endpoint, queries that involve that model table will always fall back to DirectQuery mode. Query performance might be slower."

 

Direct Lake overview - Microsoft Fabric | Microsoft Learn

 

If you wish to move away from Import mode, you will have to change your engineering practices so that tables are created that match the requirements of your reports.
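To make that concrete, here is a minimal sketch (with hypothetical object names) of materializing an existing view as a Warehouse table so Direct Lake can read it without falling back to DirectQuery:

```sql
-- Hypothetical names: dbo.vw_Sales is the existing view holding the
-- join/calculation logic; dbo.FactSales is the table the semantic model
-- will read instead. Direct Lake can serve dbo.FactSales from OneLake,
-- whereas queries against the view always fall back to DirectQuery.
DROP TABLE IF EXISTS dbo.FactSales;

CREATE TABLE dbo.FactSales
AS
SELECT  OrderDate,
        ProductKey,
        CustomerKey,
        SalesAmount
FROM    dbo.vw_Sales;
```

Once the table exists, repoint the semantic model at dbo.FactSales and re-run the CTAS whenever the underlying data is reloaded.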

Thank you.

Would it help report performance if I used a live connection to my DWH, but created the measures for the reports directly in the DWH instead of in the report (maybe that's a bad habit from the days when I used Import mode from the database for my reports)? It is hard to imagine better engineering practices for this DWH and these reports when there are different measures for different reports and one DWH as the data source.

I understand that a live connection can be slower because of the measures. But does it make a difference to measure and report performance whether I create the calculation in the DWH and use it in the report, or create the measure in Power BI Desktop and then use it in the report?

FabianSchut
Super User

Hi, what logic do you have in your views? Would it be possible to convert those views into tables and see if that improves the performance? 

Our main logic for the DWH was: take raw data from the database -> calculate/join/reshape the raw data -> expose it as views, forming a clean, informative DWH for the company.

If you have all those transformations in the views (calculate/join/reshape raw data), then that will absolutely impact performance. Those transformations are executed every time you use the view in Power BI. Run those transformations once and store the result in a table instead. You can achieve that with a stored procedure, for example.
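For example, a sketch of that stored-procedure pattern, assuming hypothetical raw tables (dbo.raw_Sales, dbo.raw_Products) and a target table dbo.FactSales; the procedure rebuilds the clean table once per load (e.g. called from a pipeline after the copy step), so Power BI never pays the transformation cost at query time:

```sql
CREATE OR ALTER PROCEDURE dbo.usp_RefreshFactSales
AS
BEGIN
    -- Rebuild the materialized table from the transformation logic that
    -- previously lived in the view. This runs once per load, not once
    -- per report query.
    DROP TABLE IF EXISTS dbo.FactSales;

    CREATE TABLE dbo.FactSales
    AS
    SELECT  s.OrderDate,
            s.ProductKey,
            s.CustomerKey,
            p.ProductCategory,
            s.Quantity * s.UnitPrice AS SalesAmount  -- example calculation
    FROM    dbo.raw_Sales    AS s
    JOIN    dbo.raw_Products AS p
            ON p.ProductKey = s.ProductKey;
END;
```

You would then schedule `EXEC dbo.usp_RefreshFactSales;` in the pipeline, right after the Copy Data activity finishes.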

FabianSchut,

thank you very much for this point. Yesterday I searched a lot and suspected this 'slow performance because of views' issue too; I just wanted a comment like this from an expert.

I tried connecting to the raw tables instead. Our flow copies data from the database first -> into a Lakehouse -> into the DWH, and the modified views are then built on top of those raw tables. This is how the external company that created one of our DWHs set it up, and they "taught" me this as the best practice, which it clearly is not.

When I connect the raw DWH tables to Power BI Desktop, everything works as it should, without lagging. Even measures created in Power BI on top of these raw tables behave as expected.

 

I will definitely contact the external vendor about their "best solution" for our company, but maybe you can answer a couple of follow-up questions:

 

- Views usage: am I right that views are really only good practice for other analysts, giving them an easier way to query the data (one piece of code, no modification needed, no complex queries to write), and that they should stay inside the DWH environment rather than being connected to any external consumer, since they are so slow?

 

- What could be the solution in my case, apart from not using those views as a data source in Power BI? Should I rework the Copy Data step of the pipeline where I extract data from the database, adding simple transformations there, and then use those lightly modified raw tables as the data source in Power BI, with measures added in the semantic model rather than calculated in the report?

 

- Are stored procedures a good practice here? I could rewrite everything using stored procedures instead of views to handle these raw-table transformations, but maybe my first option (modifying the 'copy data' pipeline) is the better way?

 

As you can see, I'm trying to pick up and understand data engineering in a really fast and painful way... 😄 😄 😄

 

 

Thank you for your help!

FabianSchut
Super User

- Views should only be used for simple transformations such as column selection and light filtering, for example to create different data sets for different countries. An exception is when a view is an intermediate step used by another transformation inside the DWH that eventually leads to a table. If the data is consumed by any process outside the DWH (like Power BI or other data consumers), I would suggest always using a table.

 

- I would not suggest doing many transformations in the Copy Data step. Try to split the data extraction from the data transformation; the transformation language in a Copy Data activity is limited and also obscure. You may want to google the bronze/silver/gold (medallion) structure.

- Stored procedures are a good practice. Another tool I like to use is dbt (data build tool); it is a bit more complex and not natively embedded in Fabric, but it is very good for transformations. You can run dbt against Fabric without additional costs, though it takes a bit more time to set up. (A small sketch of a dbt model follows below.)
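For a sense of what the dbt route looks like: a dbt model is just a SELECT statement in a .sql file, and the config header tells dbt how to materialize it. A minimal sketch with hypothetical names (it also assumes a sources.yml declaring the raw schema):

```sql
-- models/gold/fact_sales.sql  (hypothetical dbt model)
-- dbt materializes this query as a table in the Warehouse on each run,
-- so consumers read a table rather than re-executing view logic.
{{ config(materialized='table') }}

SELECT  OrderDate,
        ProductKey,
        CustomerKey,
        Quantity * UnitPrice AS SalesAmount
FROM    {{ source('raw', 'sales') }}
```

Running `dbt run` rebuilds the gold tables in dependency order, which replaces hand-managed chains of views and stored procedures.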
