Hi,
I have a large fact table, which caused my query to underperform. To address this, I partitioned the table into three different tables named A, B, and C based on my needs. I wrote a DAX formula like this:
IF ( value = "A", SUM ( A[value] ),
    IF ( value = "B", COUNT ( B[value] ),
        AVERAGE ( C[value] )
    )
)
However, when I added this measure to my matrix table, I got an error saying, "Visual exceeded the available resources." The same measure worked when the fact table was not partitioned. Why does this happen with partitioning?
Thanks for replying.
This is just an example. I also tried using SWITCH. The issue is that I have a dynamic aggregation where some rows need to be summed, some need the minimum value, and some need a count. So, I split the data into three tables: one for SUM, one for MIN, and another for COUNT. I use IF or SWITCH to determine the aggregation type and then apply the necessary calculation.
Since the original table consists of 9 million rows, I split it into three tables with sizes of approximately 5 million, 4 million, and 1 million rows. I thought that having separate tables for each aggregation would improve performance compared to performing all calculations on a single 9 million-row table.
I'm not sure if the approach I chose is correct or wrong; I'm doing this purely for performance improvement.
I just need an opinion about whether what I’m thinking is right or wrong. If it’s wrong, what are some other things I can do to correct it?
Thank you !
In my opinion, splitting the table like that is not going to improve performance.
One table with 9M rows is not that much. What do you mean by a performance issue? What is the current performance of your measure? Have you measured it? How long does it take?
What data type is the Value column?
Also, I don't understand the column with the value types A, B, C. What data type is it?
Let me know
Hi @Gabry
I have one main fact table; let's call it Example_Fact with 9 million rows.
I’m splitting Example_Fact into three tables:
Sum_Example_Fact: Aggregation used on these rows should be SUM - Column name is Value (type int).
Min_Example_Fact: Aggregation used on these rows should be MIN - Column name is Value (type int).
Count_Example_Fact: Aggregation used on these rows should be COUNT - Column name is Value (type int).
Each table has a column named Value, which is a numeric field on which the aggregation should be applied.
The visual where I use this measure is a matrix with 6 hierarchy levels. The measure performs well with 4 levels, but it slows down when it reaches levels 5 and 6.
The matrix consists of 6 hierarchy levels, and the column group is Category. Each category has its own aggregation type, so I use an IF statement.
The measure looks like this for the single 9-million-row table:
SWITCH ( TRUE (),
    SELECTEDVALUE ( category_table[agg_type] ) = "Sum", SUM ( Example_Fact[Value] ),
    SELECTEDVALUE ( category_table[agg_type] ) = "Min", MIN ( Example_Fact[Value] ),
    COUNT ( Example_Fact[Value] )
)
This is the measure for the split tables:
SWITCH ( TRUE (),
    SELECTEDVALUE ( category_table[agg_type] ) = "Sum", SUM ( Sum_Example_Fact[Value] ),
    SELECTEDVALUE ( category_table[agg_type] ) = "Min", MIN ( Min_Example_Fact[Value] ),
    COUNT ( Count_Example_Fact[Value] )
)
I know hierarchies can be problematic, but I am trying my best to optimize them. The timing was around 20 seconds for the final hierarchy. All my tables are in import mode.
Thank you.
Which table is the hierarchy built on? How is the data model set up? Also, is this the only measure in the matrix?
The data model is a proper star schema. Yes, this is the only measure in my matrix.
With only 1:* relationships?
I see. I think the issue is the hierarchy, but I don't know how to solve it; it's too complicated. You'd need the help of a pro.
I faced a similar problem once and ended up building the report using Python + Angular.
There's probably not much you can do to optimize your formula, but maybe you can do something with your model; I don't know.
Let's see what others think about it, sorry 🙂
Thanks for the replies from Gabry and shafiz_p; please allow me to provide another insight:
Hi, @Abishek
There is some truth to your approach, but there is also some room for improvement. Splitting data into multiple tables can indeed improve performance in some cases, but it can also introduce complexity and maintenance costs. Here are some suggestions:
1. First, you can use aggregation tables: create one or more aggregation tables that contain pre-computed summary data. This reduces the amount of computation at query time.
Here's the documentation:
Aggregation to speed up the performance of a Power BI report even if all IMPORTED - RADACAD
2. Second, use automatic aggregations: Power BI provides automatic aggregation capabilities that automatically create and maintain aggregation tables based on query patterns to optimize performance.
Here's the documentation:
Automatic aggregations overview - Power BI | Microsoft Learn
3. Finally, here are official links related to performance optimization; I hope they will be helpful to you:
Optimization guide for Power BI - Power BI | Microsoft Learn
Troubleshoot report performance in Power BI - Power BI | Microsoft Learn
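As a minimal sketch of suggestion 1, a pre-aggregated calculated table might look like the following (the grouping columns and names here are assumptions for illustration, not from the original post):

```dax
-- Hypothetical aggregation table: one row per category/date combination,
-- with each aggregation pre-computed once at refresh time
Agg_Example_Fact =
SUMMARIZECOLUMNS (
    Example_Fact[CategoryKey],
    Example_Fact[DateKey],
    "SumValue",   SUM ( Example_Fact[Value] ),
    "MinValue",   MIN ( Example_Fact[Value] ),
    "CountValue", COUNT ( Example_Fact[Value] )
)
```

A measure can then read from this much smaller table instead of scanning 9 million rows for every matrix cell.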
Of course, if you have any new ideas, you are welcome to contact us.
Best Regards,
Leroy Lu
If this post helps, then please consider accepting it as the solution to help the other members find it more quickly.
Hello,
I'm not sure why that formula isn't working; you might want to try using SWITCH instead of IF.
However, in my opinion, the main issue is that partitioning the table as you've done might not improve performance. The engine generally works more efficiently with a single table, allowing it to apply all possible compression and optimization techniques. Partitioning the table this way is unlikely to provide any performance benefits.
How many rows are in that table?
Have you already checked the data types? It seems unusual for a simple SUM to cause performance issues.
Hi @Abishek There are two possible causes: the query taking too long to run (the default timeout is 225 seconds), or an inefficient DAX calculation using too much memory. Aggregations (such as SUM, COUNT, etc.) can be resource-intensive, especially when applied across partitions. It is better to tune your model, especially the fact tables. If possible, keep it as a single table. Remove unnecessary columns and rows, and reduce cardinality where possible. Try grouping. Rather than filtering a whole table, filter on a single column. Calculated columns consume memory, so try to avoid them. Try using SWITCH instead of IF:
SWITCH ( TRUE (),
    value = "A", SUM ( A[value] ),
    value = "B", COUNT ( B[value] ),
    AVERAGE ( C[value] )
)
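To illustrate the point above about filtering a single column rather than the whole table, here is a rough sketch (the Status column and measure names are hypothetical, just for demonstration):

```dax
-- Antipattern: FILTER over the whole table forces the engine to
-- materialize every column of Example_Fact
Slow Sum =
CALCULATE (
    SUM ( Example_Fact[Value] ),
    FILTER ( Example_Fact, Example_Fact[Status] = "Open" )
)

-- Preferred: a simple column predicate, which only touches one column
Fast Sum =
CALCULATE (
    SUM ( Example_Fact[Value] ),
    Example_Fact[Status] = "Open"
)
```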
You can also read Chris Webb's articles:
Chris Webb's Articles
Power BI Antipatterns #9: Filtering whole tables, part I
My Power BI report is slow: what should I do?
Hope this helps!!
If this solved your problem, please accept it as a solution!!
Best Regards,
Shahariar Hafiz