akim_no
Helper I

Price / Volume / Mix Effects Calculated at Row Level – Performance Optimization Help Needed

I'm working on a Price / Volume / Mix variance analysis where all effect calculations must be done at the most granular level (row by row) and then aggregated for totals in visuals (i.e. no recalculation at the subtotal level).

At the same time, I'm facing performance issues, especially at high volume (~2 million rows).

Existing measures:

Current Metric

Metric_N =
SUMX (
    'Fact_Table',
    DIVIDE('Fact_Table'[Raw_Metric], [User_Selected_Factor])
)

 

Previous Year Metric

Metric_PY = 
CALCULATE(
    [Metric_N],  
    SAMEPERIODLASTYEAR('Calendar'[Date])
)

 

Average Index

Avg_Index_N = DIVIDE([Metric_N], [Units_N])
Avg_Index_PY = DIVIDE([Metric_PY], [Units_PY])

 

Price Effect

Effect_Index = 
VAR current = [Avg_Index_N]
VAR previous = [Avg_Index_PY]
VAR base_units = [Units_PY]
RETURN 
    IF(
        NOT ISBLANK(current) && NOT ISBLANK(previous),
        (current - previous) * base_units,
        0
    )

 

Volume Effect

Effect_Volume = 
VAR current_units = [Units_N]
VAR previous_units = [Units_PY]
VAR base_index = [Avg_Index_PY]
RETURN 
    IF(
        NOT ISBLANK(base_index),
        (current_units - previous_units) * base_index,
        0
    )

 

 

Mix Effect

Effect_Mix = 
VAR current = [Avg_Index_N]
VAR previous = [Avg_Index_PY]
VAR current_units = [Units_N]
VAR previous_units = [Units_PY]
RETURN 
    IF(
        NOT ISBLANK(current) && NOT ISBLANK(previous),
        (current - previous) * (current_units - previous_units),
        0
    )
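As a sanity check (not part of the original post), the three effects should tie back to the total variance, since algebraically (current − previous) × units_PY + (units_N − units_PY) × previous + (current − previous) × (units_N − units_PY) = current × units_N − previous × units_PY. A hypothetical reconciliation measure along these lines:

```dax
-- Hypothetical sanity-check measure (not from the original model):
-- the three effects should sum to the total year-over-year variance.
Effect_Check =
VAR TotalVariance = [Metric_N] - [Metric_PY]
VAR SumOfEffects = [Effect_Index] + [Effect_Volume] + [Effect_Mix]
RETURN
    TotalVariance - SumOfEffects
```

Note that the `IF ( ..., 0 )` branches in the effect measures force a 0 when one year is blank, so cells where a product exists in only one year will not reconcile to zero; that is a useful place to start when totals look wrong.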

 

What I’ve tried so far:

• I attempted to pre-aggregate using SUMMARIZE at the Month-Year level for Metric_N, but this caused nested measure iterations which negated the performance gains.

• Since effects must be computed per row, I'm unable to simplify the calculations via subtotal-level aggregations.

Looking for:

• How can I optimize row-level calculations without breaking the logic?

• Any best practices or architectural advice for models requiring heavy row-by-row logic?

Thanks in advance for your help! 🙏

1 ACCEPTED SOLUTION

Hi @akim_no ,

Thanks for sharing the update.

Noted that you're currently using SUMMARIZE for the conversion and it seems to be doing the job in your scenario. That’s good to hear.

 

If ADDCOLUMNS is returning unexpected results, it’s worth double-checking two things:

Ensure any calculated column inside ADDCOLUMNS (like Raw_Metric) is wrapped in CALCULATE, for example:

"Raw_Metric", CALCULATE(SUM('Fact Table'[Raw_Metric]))
This helps maintain the correct context for aggregation within each group.

 

Also, check that the table you're passing to ADDCOLUMNS includes all the necessary columns to define the grouping. If any key column is missing, the results might not match due to changes in context.

 

While SUMMARIZE can work in many scenarios, ADDCOLUMNS tends to preserve row context better, especially when dealing with calculations that depend on multiple dimensions or filter propagation.

 

If you're still seeing performance issues or unexpected results, feel free to share a sample PBIX or a few rows of sample data.

 

Hope this helps. Please reach out for further assistance.

Thank you.

 

 


16 REPLIES

v-veshwara-msft
Community Support

Hi @akim_no ,

Thanks for reaching out to Microsoft Fabric Community.

Just checking in to see if you had a chance to try the suggestion of using ADDCOLUMNS over SUMMARIZE as mentioned by @johnt75 .

Using ADDCOLUMNS when defining calculated columns such as Raw_Metric can help improve both performance and accuracy. This approach ensures proper context transition when creating temporary aggregation tables for your measure logic.

 

Thanks to @johnt75 for the helpful guidance.

Let us know if that helped or if you need any further assistance.

Thank you.

 

 

I only used SUMMARIZE to do the conversion instead. It does the job, but ADDCOLUMNS gives me results that are very different from what I expect, so I didn't use it.


Hi @akim_no ,
Just wanted to check if the response provided was helpful. If further assistance is needed, please reach out.

johnt75
Super User

Does [User_Selected_Factor] vary on a row by row basis, or can that be computed outside the SUMX in [Metric_N] and stored in a variable? e.g.

Metric_N =
VAR UserSelectedFactor = [User_Selected_Factor]
RETURN
    SUMX ( 'Fact_Table', DIVIDE ( 'Fact_Table'[Raw_Metric], UserSelectedFactor ) )

Yes, [User_Selected_Factor] changes at the row level, so I intentionally avoided storing it in a variable to ensure it gets recalculated per row during evaluation.

I don't understand what went wrong when you tried with an aggregate table. If you store the values of Metric N in an aggregate table, would the new definition not be

Metric_N =
SUM ( 'Summary Table'[Metric N] )

And then all other measures would stay the same? 

Also, what is the definition of [Units_N] ? Can you pre-aggregate that as well ?

[Metric_N] represents my revenue converted to USD, using a conversion rate dynamically selected by the user (User_Selected_Factor). I then calculate the other indicators based on this converted value.

[Units_N] corresponds to the quantity sold, and it is also pre-aggregated upstream to improve performance.

However, despite these optimizations, performance remains poor.

I’m required to calculate Price, Volume, and Mix effects at the most granular level (row by row).
Even though the indicators themselves remain the same, the calculation logic must change:
instead of computing effects from aggregated totals, I now need to compute them for each row individually and then aggregate the results.

The need to compute all effect measures row by row makes performance even worse.
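One pattern for this requirement (a sketch under assumed column names like 'Fact Table'[Product] and [Month Year], not validated against the actual model) is to build a single summary table at the required grain and compute each effect inside one SUMX, relying on context transition so the current-year and prior-year measures are evaluated once per group:

```dax
-- Sketch only: adjust the SUMMARIZE columns to the real grain of the model.
Effect_Price_RowLevel =
VAR Grain =
    SUMMARIZE ( 'Fact Table', 'Fact Table'[Product], 'Fact Table'[Month Year] )
RETURN
    SUMX (
        Grain,
        -- Measure references inside an iterator trigger context transition,
        -- so each variable is evaluated for the current group only.
        VAR c  = [Avg_Index_N]
        VAR p  = [Avg_Index_PY]
        VAR up = [Units_PY]
        RETURN
            IF ( NOT ISBLANK ( c ) && NOT ISBLANK ( p ), ( c - p ) * up, 0 )
    )
```

The same Grain table can drive the Volume and Mix effects, which keeps the per-row logic intact while bounding the iteration to the number of distinct groups rather than 2 million fact rows.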

You could try to minimise the number of rows you need to iterate over by creating a temporary table with the distinct values of the revenue and the number of occurrences, e.g.

Metric_N =
VAR SummaryTable =
    ADDCOLUMNS (
        VALUES ( 'Fact Table'[Raw Metric] ),
        "@num rows", CALCULATE ( COUNTROWS ( 'Fact Table' ) )
    )
VAR Result =
    SUMX (
        SummaryTable,
        DIVIDE ( 'Fact Table'[Raw Metric] * [@num rows], [User_Selected_Factor] )
    )
RETURN
    Result

It doesn't allow for correct sales conversion.

You could include in the temporary table any columns which are needed to drive the conversion. e.g. if you need a currency code column you could use

Metric_N =
VAR SummaryTable =
    ADDCOLUMNS (
        SUMMARIZE (
            'Fact Table',
            'Fact Table'[Currency Code],
            'Fact Table'[Raw Metric]
        ),
        "@num rows", CALCULATE ( COUNTROWS ( 'Fact Table' ) )
    )
VAR Result =
    SUMX (
        SummaryTable,
        DIVIDE ( 'Fact Table'[Raw Metric] * [@num rows], [User_Selected_Factor] )
    )
RETURN
    Result

 

I actually used a different approach:

 

Metric_N =
VAR AggTable =
    SUMMARIZE (
        'Fact Table',
        'Fact Table'[Month Year],
        'Fact Table'[Local Currency],
        "Raw_Metric", SUM ( 'Fact_Table'[Raw_Metric] )
    )
RETURN
    SUMX (
        AggTable,
        VAR fxRate = [User_Selected_Factor]
        RETURN DIVIDE ( [Raw_Metric], fxRate )
    )

 

This method works, but I'm noticing lower performance when using it for effect calculations (e.g., price/volume/mix).

 

 

Try using ADDCOLUMNS,

Metric_N =
VAR AggTable =
    ADDCOLUMNS (
        SUMMARIZE (
            'Fact Table',
            'Fact Table'[Month Year],
            'Fact Table'[Local Currency]
        ),
        "Raw_Metric", CALCULATE ( SUM ( 'Fact_Table'[Raw_Metric] ) )
    )
RETURN
    SUMX (
        AggTable,
        VAR fxRate = [User_Selected_Factor] RETURN DIVIDE ( [Raw_Metric], fxRate )
    )

SUMMARIZE is great for grouping, but don't use it to add calculated columns. Wrapping ADDCOLUMNS around SUMMARIZE to create the calculated columns is both more efficient and more accurate. The only thing to bear in mind is that when using ADDCOLUMNS you need to trigger context transition, so you need to wrap the code for the column in CALCULATE unless the code is just a measure reference.
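To illustrate the context-transition point with a minimal sketch (not from the thread, using the thread's column names): without CALCULATE, an aggregation inside ADDCOLUMNS evaluates in the outer filter context and ignores the iterated row, so every row gets the same grand total.

```dax
-- Minimal illustration of why CALCULATE is needed inside ADDCOLUMNS.
EVALUATE
ADDCOLUMNS (
    VALUES ( 'Fact Table'[Local Currency] ),
    -- No context transition: SUM sees the outer filter context,
    -- so every currency row shows the same grand total.
    "Wrong", SUM ( 'Fact Table'[Raw_Metric] ),
    -- CALCULATE turns the current row into a filter (context transition),
    -- so each currency gets its own subtotal.
    "Right", CALCULATE ( SUM ( 'Fact Table'[Raw_Metric] ) )
)
```

A measure reference would behave like the "Right" column automatically, because measures carry an implicit CALCULATE.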

I need to calculate the Price, Volume, and Mix effects at the row level, but the effects do not depend on a single dimension — they must be combined with other dimensions as well.

I tried using SUMX with VALUES on a single dimension, and it works only when I slice the data using that exact same dimension. However, as soon as I introduce other dimensions, the calculation no longer returns the expected results.

Hi @akim_no ,

Thanks for the update.

If the Price, Volume, and Mix effects rely on combinations of multiple dimensions, then the temporary table used inside SUMX needs to reflect that full grain.

As you've observed, using VALUES on a single column or grouping by fewer columns will cause inconsistencies when other fields are added to the visual.

 

You can continue using the same ADDCOLUMNS over SUMMARIZE structure as suggested earlier by @johnt75 , but extend it to include all relevant dimensions needed for the row-level context. For example:

Metric_N =
VAR AggTable =
    ADDCOLUMNS (
        SUMMARIZE (
            'Fact Table',
            'Fact Table'[Month Year],
            'Fact Table'[Product],
            'Fact Table'[Customer],
            'Fact Table'[Local Currency]
        ),
        "Raw_Metric", CALCULATE ( SUM ( 'Fact Table'[Raw_Metric] ) ),
        "Units", CALCULATE ( SUM ( 'Fact Table'[Units] ) )
    )
RETURN
    SUMX (
        AggTable,
        VAR fxRate = [User_Selected_Factor]
        VAR ConvertedMetric = DIVIDE ( [Raw_Metric], fxRate )
        RETURN ConvertedMetric
    )

 

Similar threads for reference:

Solved: SUMX Help with Price part of Price Volume Mix anal... - Microsoft Fabric Community

Solved: Dax for price volume mix effect calculation - Microsoft Fabric Community

Solved: Price Volume Mix analysis - Subtotals & Totals don... - Microsoft Fabric Community

 

If you're able to share sample PBIX to reproduce the issue, we can help validate the pattern more precisely.

How to provide sample data in the Power BI Forum - Microsoft Fabric Community

 

Hope this helps. Please reach out for further assistance.

Thank you.

Hi @akim_no ,
Just wanted to check if the response provided was helpful. If further assistance is needed, please reach out. Consider sharing sample data or sample .pbix to guide more accurately.
How to provide sample data in the Power BI Forum - Microsoft Fabric Community 
Thank you.
