Anonymous
Not applicable

Nested measure with SUMMARIZE loads forever due to complex filtering

Hello. I need help with a calculation: I want to optimize it but don't know how. I'll try to explain:

I have two survey questions. The first question assigns 8 brands to each person, so it is stored in the database as 8 rows per person: I have the person ID, the brand order (1-8), and the brand assigned to each order for that person.

In the next question, those 8 brands are shown in a matrix question where the row headers are 10 attributes. Each person selects one or more brands they associate with each attribute, so for each person and each attribute, the database stores only the selected brands.

One remark about the attributes: I then group them into 2 factors, so I have 5 attributes per factor.

Then I have two tables:

 

Person ID  Brand Order  Brand      Country
1          1            Google     UK
1          2            Microsoft  UK
1          3            Amazon     UK
1          4            Meta       UK
1          5            Telegram   UK
1          6            Samsung    UK
1          7            IMB        UK
1          8            Dell       UK
2          2            Oracle     USA

 

Person_id  Country  Brands   Attributes                             Factors
1          UK       Google   Have integrity                         Factor 1
1          UK       Dell     Stands for something I believe in      Factor 1
1          UK       Oracle   Are reliable                           Factor 1
1          UK       Dell     Is a visionary                         Factor 1
2          USA      Oracle   Plays an important role in society     Factor 1
2          USA      Dell     Will have a clear role in the future   Factor 1
3          Germany  IMB      Will be popular in the future          Factor 2
3          Germany  Samsung  Cares about its environmental impact   Factor 2
3          Germany  Google   Are innovative                         Factor 2

 

 

I calculate an association metric per brand per country: the number of times the brand was selected for an attribute, divided by the number of times it was exposed in total (in other words, the number of people who selected it divided by the number of people who were asked about it).

The formula:

Numerator =
CALCULATE ( DISTINCTCOUNT ( Table2[person_id] ) )

Denominator =
CALCULATE (
    DISTINCTCOUNT ( 'Table1'[person_id] ),
    'Table1'[country] = SELECTEDVALUE ( Table2[country] ),
    'Table1'[brands] = SELECTEDVALUE ( Table2[brands] )
)
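
So the association metric itself is just the numerator divided by the denominator; written as a measure it would look something like this (the name is only for illustration):

Association =
DIVIDE ( [Numerator], [Denominator] )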
 
Then I need to calculate the average result of this formula across the attributes within each factor. The formula:

Average_attribute_result_per_factor =
AVERAGEX (
    SUMMARIZE (
        CALCULATETABLE (
            Table2,
            ALLEXCEPT ( Table2, Table2[Factors], Table2[Attributes], Table2[country], Table2[brands] )
        ),
        Table2[Factors],
        Table2[Attributes],
        Table2[country],
        Table2[brands]
    ),
    [Numerator] / [Denominator]
)
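
To make the averaging concrete with made-up numbers: if 100 UK respondents were shown Google and, within Factor 1, 40 selected it for "Have integrity", 25 for "Are reliable", and so on for the other three attributes, the measure should return the average of 40/100, 25/100, ... across the five attributes of that factor.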


I did my best with the explanation; I hope it's clear. The problem is that the filters in the denominator, combined with the SUMMARIZE function, make it load far too long when I visualize it in a big table for all the brands.
I was thinking about joining Table1 with Table2 just to calculate the denominator number and pulling it across with RELATED(), but the data model is the following:
[Screenshot of the data model: Gleb_0-1706166004283.png]


This is because I will also apply filters such as age group, gender, etc.

Thanks in advance. I really need help; I have been struggling with this for a long time already.





1 REPLY
123abc
Community Champion

It seems like your Power BI model is experiencing performance issues due to the complexity of the calculations and the size of the dataset. Here are some suggestions to optimize your model:

  1. Use Relationships in the Data Model:

    • Ensure that you have established relationships between the tables using the appropriate keys. This helps Power BI optimize queries.
  2. Use RELATED() Function:

    • Instead of relying on ALLEXCEPT in your calculations, try using RELATED() to fetch related values from other tables. This might simplify your DAX expressions (see the first sketch after this list).
  3. Filter Context Optimization:

    • Avoid using functions like CALCULATE and FILTER excessively, as they might impact performance. Instead, rely on the natural filter context created by relationships.
  4. Consider Aggregating Data in Query Editor:

    • If your data allows it, consider aggregating some of the calculations in Power Query Editor before loading the data into Power BI. This might reduce the number of rows Power BI has to process (see the second sketch after this list).
  5. Use Measures Wisely:

    • Instead of calculating the same values multiple times within your measures, create intermediate measures or variables to hold those values. This might help simplify your final expressions (see the third sketch after this list).
  6. Data Model Size:

    • Check the size of your data model. If it's too large, it might be beneficial to remove unnecessary columns or tables that are not used in your analysis.
  7. Indexing and Sorting:

    • Power BI's in-memory (VertiPaq) engine doesn't use traditional indexes, so focus instead on reducing column cardinality and on a sort order that improves compression; better compression can significantly speed up certain calculations.
  8. Partitioning:

    • If your dataset is very large, consider partitioning your data. This involves dividing your data into smaller, more manageable chunks.
  9. Optimize Visuals:

    • If you are displaying the data in visuals, limit the number of rows displayed, and avoid using unnecessary detail. Large tables can significantly impact performance.
  10. DAX Studio:

  • Use DAX Studio to profile and analyze your DAX queries. It can help identify bottlenecks and areas for optimization (see the last sketch after this list).
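
For point 2, a minimal sketch of RELATED(). It assumes, hypothetically, that Table2 has a many-to-one relationship to a 'Respondent' dimension carrying demographics (that table and its columns are not in your post, so adjust names to your model):

-- Hypothetical calculated column on Table2, assuming a many-to-one
-- relationship from Table2 to a 'Respondent' dimension table.
Age Group = RELATED ( 'Respondent'[Age Group] )

With the value available as a column on Table2, measures can group or filter by it directly instead of reconstructing it with extra filter logic.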
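
For point 4, the denominator in your post depends only on country and brand, so it can be pre-computed once. Here is the same idea sketched as a DAX calculated table (Power Query would work just as well); column names follow the measures in the question and may need adjusting, and note this only helps if no other respondent-level filters such as age group or gender need to reach the denominator:

ExposureCounts =
SUMMARIZECOLUMNS (
    'Table1'[country],
    'Table1'[brands],
    "Exposed people", DISTINCTCOUNT ( 'Table1'[person_id] )
)

A simple lookup measure over this small table can then replace the filtered DISTINCTCOUNT at query time.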
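
For point 5, a minimal sketch that captures the SELECTEDVALUE results once in variables and returns the ratio from a single measure, reusing the column names from the question (untested against your real model):

Association (optimized) =
VAR _country = SELECTEDVALUE ( Table2[country] )
VAR _brand = SELECTEDVALUE ( Table2[brands] )
RETURN
    DIVIDE (
        DISTINCTCOUNT ( Table2[person_id] ),       -- people who selected the brand
        CALCULATE (
            DISTINCTCOUNT ( 'Table1'[person_id] ), -- people who were shown the brand
            'Table1'[country] = _country,
            'Table1'[brands] = _brand
        )
    )

The AVERAGEX over SUMMARIZE can then iterate this one measure instead of two separate ones.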
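
For point 10, a sketch of the kind of query you can paste into DAX Studio with Server Timings turned on, so the slow measure can be timed outside the report (the grouping columns are assumptions based on the question):

DEFINE
    -- Local copy of the measure so it can be tweaked and re-timed quickly.
    MEASURE Table2[Avg per factor test] = [Average_attribute_result_per_factor]
EVALUATE
SUMMARIZECOLUMNS (
    Table2[country],
    Table2[brands],
    Table2[Factors],
    "Result", [Avg per factor test]
)

Comparing storage engine vs. formula engine time across variations of the measure usually points straight at the expensive part.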

Remember to test each optimization step to ensure it improves performance. Additionally, some trade-offs may be needed between performance and the level of detail displayed in your reports.

 

 
