almagp
New Member

Critical Matrix Performance (230s+): Optimization tips and technical documentation?

Hi everyone,

I am currently optimizing a report and have encountered a critical performance issue with a couple of Matrix visuals. According to the Performance Analyzer, a single matrix is taking over 230,000 ms to load, often resulting in the "The query exceeded the resources" error.

I have a few specific questions:

  1. "Invisible" Calculations: If certain DAX measures are not explicitly displayed in the matrix but are part of the model or related to the data shown, are they still calculated during the visual rendering? What could cause such high load times if the visual setup appears "simple"?

  2. Semantic Model Architecture: Is it considered best practice to split semantic models? I currently use a single model with many complex DAX measures shared across multiple reports. Would it be more efficient to have a "lean" model (data/schema only) and a separate derivative model containing heavy measures only when needed?

  3. Documentation: Is there any official or advanced technical documentation that specifically deep-dives into Matrix visual performance and the impact of row/column hierarchies?

I would appreciate any advice on best practices to resolve this bottleneck. Thanks in advance!

almagp_0-1767339540860.png

 

1 ACCEPTED SOLUTION
v-echaithra
Community Support

Hi @almagp ,

Yes, adding Shipment triggers complex internal DAX queries even if no measures or calculated columns are visible. A Matrix evaluates data per cell (Rows × Columns × Hierarchy levels × Totals). Shipment is typically a high-cardinality field, so adding it multiplies the number of evaluation contexts and forces Power BI to resolve relationships, grouping, and totals at shipment grain. The result is very large internal DAX queries and storage engine scans, which explains the 230s runtime.
Avoid placing fact-table IDs (such as Shipment) on Matrix rows.
Use pre-aggregated dimensions, and separate detail/drillthrough pages or summarized tables for detail views.
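The pre-aggregation idea can be sketched as a calculated table; the table and column names below ('Fact', Route, ShipmentDate, Shipment, Quantity) are illustrative placeholders, not names from your model:

```
-- Hypothetical summary table at route/date grain instead of shipment grain.
-- Replace 'Fact' and the column names with your actual tables/columns.
ShipmentSummary =
SUMMARIZECOLUMNS (
    'Fact'[Route],
    'Fact'[ShipmentDate],
    "Shipment Count", DISTINCTCOUNT ( 'Fact'[Shipment] ),
    "Total Quantity", SUM ( 'Fact'[Quantity] )
)
```

The Matrix then reads this much smaller table, while a drillthrough page queries the full fact table only for the route the user selects.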

Measures not used in the Matrix are NOT calculated. Matrix slowness is almost always due to model design + cardinality, not whether measures are visible. Calculated columns are evaluated if they belong to tables used in rows/columns or relationships. Matrix performance cost grows as Rows × Columns × Hierarchies, even without visible measures.
High load times in a “simple” Matrix are usually caused by:

High-cardinality fields (IDs like Shipment) on rows/columns, causing cell explosion
Calculated columns on fact tables
Totals / subtotals
Relationship complexity
Conditional formatting or tooltips invoking additional measures

 

No, splitting models just to isolate measures is NOT best practice.
The recommended approach is to keep one shared semantic model, move heavy row-level logic to Power Query/ETL, avoid calculated columns on fact tables, and use display folders or perspectives to manage complexity.
Splitting into "lean + heavy" models increases duplication and maintenance effort without guaranteed performance gains.

Documentation:

Optimization guide for Power BI - Power BI | Microsoft Learn
Automatic tuning - SQL Server | Microsoft Learn

Hope this helps.
Thank you.


10 REPLIES
AshokKunwar
Responsive Resident

Hi @almagp,

 

If these optimization steps helped you resolve your performance bottleneck, please mark this as an "Accepted Solution"!

v-echaithra
Community Support

Hi @almagp ,

May I ask if you have resolved this issue? Please let us know if you have any further issues; we are happy to help.

Thank you.

v-echaithra
Community Support

Hi @almagp ,

I just wanted to check if the issue has been resolved on your end, or if you require any further assistance. Please feel free to let us know, we’re happy to help!


Thank you 
Chaithra E.


cengizhanarslan
Memorable Member

Matrix evaluates measures per cell. If you have (Rows × Columns × Measures) in the thousands+ and the measures do iterators / complex FILTERs, it explodes. Big offenders:

  • High cardinality on rows/columns (IDs, dates at day level, many levels expanded)

  • Measures using SUMX/FILTER over large tables, ALLSELECTED, RANKX in-cell

  • Many-to-many / bi-directional relationships causing large filter propagation

  • Using a fact-table column directly on rows instead of a dimension

  • Conditional formatting calling heavy logic again per cell

Quick sanity check: collapse to one row level + one column level. If time drops massively, it’s “cell explosion”.

 

What to do:

  • Reduce the number of cells: fewer row/column levels, avoid high-cardinality fields, default to collapsed state.

  • Remove conditional formatting/tooltips temporarily to see if they’re the cause.

  • Replace expensive iterators in measures with simpler patterns (push logic to ETL, pre-aggregate, use dimensions).

  • Ensure proper star schema: matrix axes from dimensions, single-direction relationships.
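The iterator point can be sketched with a pair of measures (the measure and column names here, 'Fact'[DaysLate], are made up for illustration):

```
-- Slower pattern: FILTER materializes the fact table and the formula engine
-- scans it row by row, once per Matrix cell.
Late Shipments (slow) =
COUNTROWS ( FILTER ( 'Fact', 'Fact'[DaysLate] > 0 ) )

-- Faster pattern: a simple column predicate inside CALCULATE lets the
-- storage engine do the filtering in a compressed scan.
Late Shipments (fast) =
CALCULATE ( COUNTROWS ( 'Fact' ), 'Fact'[DaysLate] > 0 )
```

The same idea applies to SUMX/FILTER combinations: prefer plain column predicates where the logic allows it.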

_________________________________________________________
If this helped, ✓ Mark as Solution | Kudos appreciated
AshokKunwar
Responsive Resident

Hi @almagp,

Subject: [SOLVED] Critical Matrix Performance: Optimizing Queries Exceeding 230s

 

The Challenge:

Matrix visuals are "expensive" because they generate a separate query context for every single cell (intersection of row and column). When you have complex DAX measures combined with deep hierarchies, the number of sub-queries explodes, leading to the "Query exceeded resources" error or massive load times like the 230,000ms seen in your Performance Analyzer.

The Solution:

To resolve this, you must address the "Invisible" calculations and optimize the way the Semantic Model interacts with the visual.

1. The "Invisible" Calculation Myth

  • Are they calculated? No, measures not in the visual are not calculated.
  • What causes the load then? Even a "simple" visual can be slow if your DAX uses Iterators (like SUMX or FILTER) over large tables, or if Auto Date/Time is enabled, creating hidden date tables that bloat the filter context.
  • Check Conditional Formatting: "Invisible" load often comes from conditional formatting rules. If you have "Color by Value" set up, Power BI calculates those measures for every cell even if the text isn't displayed.
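For example, a "Color by Value" measure like the one below (the measure names are hypothetical) is evaluated for every Matrix cell whenever the formatting rule is active, even though its result is never displayed as text:

```
-- Evaluated once per cell while "Color by Value" formatting is on;
-- [Margin %] stands in for whatever base measure your rule references.
Cell Background =
IF ( [Margin %] < 0, "#FFCCCC", "#CCFFCC" )
```

If [Margin %] is itself expensive, the formatting rule silently doubles the per-cell cost.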

2. Semantic Model Architecture: To Split or Not?

  • Best Practice: It is generally not recommended to split models just to hide measures. Instead, use Perspective or Display Folders to keep the model "lean" for users.
  • The "Lean" Model approach: Rather than splitting, move heavy logic from DAX measures into Calculated Columns or Power Query (M) during the ETL phase if the logic doesn't need to be dynamic. This shifts the processing cost from "Report View Time" to "Data Refresh Time."

3. Technical Optimization Steps

To bring that 230s load time down:

  • Reduce Column Headers: Matrices scale poorly with many columns. Try to move attributes from "Columns" to "Rows" to reduce the total number of cell intersections.
  • DAX Fusion: Use DAX Studio to check for "VertiPaq Cache" hits. If your measures are repetitive, rewrite them to use VAR (Variables) to ensure the same logic isn't calculated multiple times within a single measure.
  • Turn off Totals: If you don't need them, turn off Row and Column totals. Totals require a separate, broader calculation pass that often takes longer than the individual cells.
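The VAR point can be illustrated like this (the measure names and the 'Date' table are examples, not taken from the original model):

```
-- Slower: the prior-month expression is written twice and may be
-- evaluated twice per cell.
Growth % (slow) =
DIVIDE (
    [Total Sales] - CALCULATE ( [Total Sales], PREVIOUSMONTH ( 'Date'[Date] ) ),
    CALCULATE ( [Total Sales], PREVIOUSMONTH ( 'Date'[Date] ) )
)

-- Faster: each sub-result is computed once per cell and reused.
Growth % (fast) =
VAR CurrentSales = [Total Sales]
VAR PriorSales =
    CALCULATE ( [Total Sales], PREVIOUSMONTH ( 'Date'[Date] ) )
RETURN
    DIVIDE ( CurrentSales - PriorSales, PriorSales )
```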

Official Documentation & Deep Dives:

  • Microsoft Docs: Search for "Optimization guide for Power BI" which has a dedicated section on Matrix performance.
  • The "SQLBI" Methodology: For advanced deep-dives, look up "The Whitepaper on Power BI Performance" by Marco Russo and Alberto Ferrari.

If these optimization steps helped you resolve your performance bottleneck, please mark this as an "Accepted Solution"!

 

Best regards,

Vishwanath 

lbendlin
Super User

Copy that DAX query and run it through DAX Studio. Study the query plan and the server timings. Then decide your plan of action accordingly.
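For context, the query you copy from Performance Analyzer is a generated SUMMARIZECOLUMNS query roughly along these lines; this is a simplified sketch with made-up table and column names, not your actual query:

```
-- Paste the copied query into DAX Studio, enable Server Timings, and run.
-- The TOPN ( 502, ... ) wrapper is the visual's row limit.
EVALUATE
TOPN (
    502,
    SUMMARIZECOLUMNS (
        'Route'[RouteName],
        'Fact'[Shipment],
        "Missing Shipments", DISTINCTCOUNT ( 'Fact'[Shipment] )
    )
)
```

The Server Timings pane then shows how much time is spent in the storage engine versus the formula engine, which tells you whether the problem is scan volume or measure logic.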

I do have calculated columns with DAX in my model, but none of them are visible in that Matrix; that is the main question...

GeraldGEmerick
Memorable Member

@almagp The issue is almost certainly caused by a poorly performing DAX query, meaning it is likely tied to some measure in your matrix visual. You can see this in your Performance Analyzer logs: the DAX query is taking up all of the time. Can you post the DAX measure(s) you are using in your matrix visual?

almagp_0-1767608533718.png

Hi Gerald,

That is actually what surprises me the most: I don't have any columns or sub-rows in my matrix that use DAX. As you can see in the attached image showing the fields, none of them are DAX measures or calculated columns; they are all raw fields.

However, I am using 'Shipment' (Envío) as a sub-row to show the data at a detail level. I wonder if adding this 'Shipment' level triggers complex internal DAX queries even if they aren't explicitly shown.

On the other hand, if I remove the 'Shipment' sub-row, I don't get a correct 'count' of my data. For instance, if I remove it from the rows and add it directly to the 'Values' section as a Distinct Count (as seen in the 'Missing Shipments' field), the count is different from what I see in the hierarchy. Do you know why this might be happening as well?

 
