Hi everyone,
I am currently optimizing a report and have encountered a critical performance issue with a couple of Matrix visuals. According to the Performance Analyzer, a single matrix is taking over 230,000 ms to load, often resulting in the "The query exceeded the resources" error.
I have a few specific questions:
"Invisible" Calculations: If certain DAX measures are not explicitly displayed in the matrix but are part of the model or related to the data shown, are they still calculated during the visual rendering? What could cause such high load times if the visual setup appears "simple"?
Semantic Model Architecture: Is it considered best practice to split semantic models? I currently use a single model with many complex DAX measures shared across multiple reports. Would it be more efficient to have a "lean" model (data/schema only) and a separate derivative model containing heavy measures only when needed?
Documentation: Is there any official or advanced technical documentation that specifically deep-dives into Matrix visual performance and the impact of row/column hierarchies?
I would appreciate any advice on best practices to resolve this bottleneck. Thanks in advance!
Hi @almagp ,
Yes, adding Shipment triggers complex internal DAX queries, even if no measures or calculated columns are visible. A Matrix evaluates data per cell (Rows × Columns × Hierarchy × Totals). Shipment is typically a high-cardinality field, so adding it multiplies the number of evaluation contexts and forces Power BI to resolve relationships, grouping, and totals at shipment grain. The result is very large internal DAX queries and storage engine scans, which explains the 230s runtime.
Avoid placing fact-table IDs (Shipment) on Matrix rows.
Use pre-aggregated dimensions, separate detail/drillthrough pages or summarized tables for detail views.
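As a sketch of the summarized-table approach (the table and column names here are hypothetical; adjust them to your model), a calculated table at a coarser grain keeps the Matrix off the shipment grain while a drillthrough page serves the detail:

```dax
-- Hypothetical names: 'Shipments' fact table, [ShipmentID]; 'Customer' and
-- 'Date' dimensions. The summary table groups at customer/month grain, so
-- the Matrix never has to evaluate one cell per shipment.
ShipmentSummary =
SUMMARIZECOLUMNS (
    'Customer'[CustomerKey],
    'Date'[YearMonth],
    "Shipment Count", DISTINCTCOUNT ( 'Shipments'[ShipmentID] )
)
```

Bind the Matrix to ShipmentSummary and reserve the shipment-level list for a separate drillthrough page.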
Measures not used in the Matrix are NOT calculated. Matrix slowness is almost always due to model design + cardinality, not whether measures are visible. Calculated columns are evaluated if they belong to tables used in rows/columns or relationships. Matrix performance cost grows as Rows × Columns × Hierarchies, even without visible measures.
High load times in a “simple” Matrix are usually caused by:
High-cardinality fields (IDs like Shipment) on rows/columns → cell explosion
Calculated columns on fact tables
Totals / subtotals
Relationship complexity
Conditional formatting or tooltips invoking additional measures
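To make the cell explosion concrete, the query a Matrix generates is roughly shaped like the following (heavily simplified, and the names are hypothetical; the visual's real queries are more elaborate):

```dax
-- Simplified shape of the query a Matrix issues: one group per combination
-- of the row/column fields, plus totals. With ShipmentID on rows, the
-- result has one group per shipment, forcing storage engine scans at
-- fact-table grain even though no user-defined measure is shown.
EVALUATE
SUMMARIZECOLUMNS (
    'Customer'[CustomerName],
    'Shipments'[ShipmentID],    -- high-cardinality: one row per shipment
    "Missing Shipments", DISTINCTCOUNT ( 'Shipments'[ShipmentID] )
)
```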
No, splitting models just to isolate measures is NOT best practice.
The recommended approach is to:
Keep one shared semantic model
Move heavy row-level logic to Power Query / ETL
Avoid calculated columns on fact tables
Use display folders / perspectives to manage complexity
Splitting into “lean + heavy” models increases duplication and maintenance without guaranteed performance gains.
Documentation:
Optimization guide for Power BI - Power BI | Microsoft Learn
Automatic tuning - SQL Server | Microsoft Learn
Hope this helps.
Thank you.
Hi @almagp,
If these optimization steps helped you resolve your performance bottleneck, please mark this as an "Accepted Solution"!
Hi @almagp ,
May I ask if you have resolved this issue? Please let us know if you have any further issues; we are happy to help.
Thank you.
Hi @almagp ,
I just wanted to check if the issue has been resolved on your end, or if you require any further assistance. Please feel free to let us know, we’re happy to help!
Thank you
Chaithra E.
A Matrix evaluates measures per cell. If (Rows × Columns × Measures) reaches the thousands or more and the measures use iterators or complex FILTER calls, evaluation cost explodes. Big offenders:
High cardinality on rows/columns (IDs, dates at day level, many levels expanded)
Measures using SUMX/FILTER over large tables, ALLSELECTED, RANKX in-cell
Many-to-many / bi-directional relationships causing large filter propagation
Using a fact-table column directly on rows instead of a dimension
Conditional formatting calling heavy logic again per cell
Quick sanity check: collapse to one row level + one column level. If time drops massively, it’s “cell explosion”.
What to do:
Reduce the number of cells: fewer row/column levels, avoid high-cardinality fields, default to collapsed state.
Remove conditional formatting/tooltips temporarily to see if they’re the cause.
Replace expensive iterators in measures with simpler patterns (push logic to ETL, pre-aggregate, use dimensions).
Ensure proper star schema: matrix axes from dimensions, single-direction relationships.
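As an example of replacing an expensive iterator, a FILTER + SUMX pattern can often be rewritten as a plain CALCULATE filter (the measure and column names below are hypothetical):

```dax
-- The iterator form materializes and walks the filtered fact table for
-- every Matrix cell:
Late Shipments (slow) =
SUMX (
    FILTER ( 'Shipments', 'Shipments'[DaysLate] > 0 ),
    1
)

-- The CALCULATE form expresses the same count as a column filter, which
-- the storage engine can usually resolve in a single scan:
Late Shipments =
CALCULATE (
    COUNTROWS ( 'Shipments' ),
    'Shipments'[DaysLate] > 0
)
```

Verify the difference in DAX Studio's Server Timings: the CALCULATE version should shift work from the formula engine to the storage engine.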
Hi @almagp,
Subject: [SOLVED] Critical Matrix Performance: Optimizing Queries Exceeding 230s
The Challenge:
Matrix visuals are "expensive" because they generate a separate query context for every single cell (intersection of row and column). When you have complex DAX measures combined with deep hierarchies, the number of sub-queries explodes, leading to the "Query exceeded resources" error or massive load times like the 230,000ms seen in your Performance Analyzer.
The Solution:
To bring that 230s load time down, address the "invisible" per-cell calculations and optimize the way the semantic model interacts with the visual.
If these optimization steps helped you resolve your performance bottleneck, please mark this as an "Accepted Solution"!
Best regards,
Vishwanath
Copy that DAX query and run it through DAX Studio. Study the query plan and the server timings. Then decide your plan of action accordingly.
I do have DAX calculated columns in my model, but none of them are visible in that Matrix; that is the main question...
@almagp The issue is almost certainly caused by a poor performing DAX query meaning that it is likely tied to some measure that you have in your matrix visual. You can see this in your Performance Analyzer logs, the DAX query is taking up all of the time. Can you post your DAX measure(s) that you are using in your matrix visual?
Hi Gerald,
That is actually what surprises me the most: I don't have any columns or sub-rows in my matrix that use DAX. As you can see in the attached image showing the fields, none of them are DAX measures or calculated columns; they are all raw fields.
However, I am using 'Shipment' (Envío) as a sub-row to show the data at a detail level. I wonder if adding this 'Shipment' level triggers complex internal DAX queries even if they aren't explicitly shown.
On the other hand, if I remove the 'Shipment' sub-row, I don't get a correct 'count' of my data. For instance, if I remove it from the rows and add it directly to the 'Values' section as a Distinct Count (as seen in the 'Missing Shipments' field), the count is different from what I see in the hierarchy. Do you know why this might be happening as well?