I adopted a Power BI dashboard that is used across my department. It houses 10 reports, four of which contain over 400 million rows. Those large reports currently use DirectQuery, but they are so large that the slicers and the table will not even load.

To work around this, we used dynamic query parameters: we created the parameters, inserted them into the WHERE clauses of the SQL query as placeholders, and then bound them to slicers so that each selection is passed into the original query and filters it on demand. This seemed like a great way to tackle the large load, but it isn't perfect. DirectQuery sends a query to the database for every slicer selection, and with multiple slicers and cards on the screen it was generating a heavy load of traffic against the database.

After learning this, I removed all of the slicers, parameters, and cards and switched to the filter pane on the right-hand side. The filter pane appears to do something similar to the dynamic parameters: it resends a new query to the database with the applied constraints, and the table then loads data according to those filters. This worked in theory, but the table struggled to load more than a couple thousand rows before it timed out with a "resources exceeded" error.
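For reference, this is roughly the pattern we used in Power Query; the server, database, table, and column names below are placeholders rather than our real ones. The M parameter (here called SelectedRegion) is the one bound to a slicer field in the Model view.

```
// Rough sketch of one parameterized DirectQuery source.
// SelectedRegion is an M text parameter bound to a slicer field;
// all server/database/table/column names are placeholders.
let
    Source = Sql.Database("my-server.database.windows.net", "SalesDb"),
    Result = Value.NativeQuery(
        Source,
        "SELECT OrderID, OrderDate, Region, Amount
         FROM dbo.FactSales
         WHERE Region = @Region",
        [Region = SelectedRegion],
        [EnableFolding = true]
    )
in
    Result
```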
I've since tried to recreate these reports using Import mode. Not only am I having some difficulty loading the data into the semantic model this way, but I've also heard there is a 1 GB memory allotment for Import mode. With numerous reports of 400+ million rows, I fear we would exceed this. I did some digging and came across hybrid tables, a mix of both query modes if you will. I think this could be a possible solution, but I haven't dug deep into the concept yet. Would you all recommend this option, or should I stick to either DirectQuery or Import mode? If so, how should I go about increasing speed and efficiency?
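From what I've read so far, a hybrid table is essentially an Import table with an incremental refresh policy where only the most recent partition stays in DirectQuery. The source query just filters on the reserved RangeStart and RangeEnd parameters, roughly like this (table and column names are placeholders):

```
// Incremental refresh / hybrid table sketch.
// RangeStart and RangeEnd are the reserved DateTime parameters that
// incremental refresh requires; table and column names are placeholders.
let
    Source   = Sql.Database("my-server.database.windows.net", "SalesDb"),
    Fact     = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    Filtered = Table.SelectRows(Fact, each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd)
in
    Filtered
```

As far as I understand, the hybrid part is then just the "Get the latest data in real time with DirectQuery" option in the incremental refresh dialog, which needs Premium/PPU or Fabric capacity.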
@JackFrench If you can swing it, Import mode will perform better than anything else. That 1 GB limit is per semantic model, so you can have multiple 1 GB semantic models. I believe each person with a Pro license adds 10 GB of capacity to your tenant. Keep it simple if at all possible, and reduce the columns in your fact table to only those that you absolutely need.
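For example, you can trim the fact table down in Power Query before it ever loads into the model; the column names here are just placeholders:

```
// Keep only the columns the reports actually need; everything else is
// dropped before the data lands in the model. Names are placeholders.
let
    Source  = Sql.Database("my-server.database.windows.net", "SalesDb"),
    Fact    = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    Trimmed = Table.SelectColumns(Fact, {"OrderID", "OrderDate", "Region", "Amount"})
in
    Trimmed
```

Dropping high-cardinality columns you don't actually report on (GUIDs, free text, timestamps down to the second) tends to make the biggest difference, since those compress poorly in the model.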
You can raise the semantic model size limit by using Premium Per User (PPU) or a Fabric capacity. I also put together a "Poor Man's Premium" solution that you could potentially use as well: