
Mohan128256
Helper IV

Optimize complex DAX measure

Hello All,

After trying all the optimization approaches I know of for a measure I have written, I am reaching out to you community members for help.

 

Below is the DAX measure I have written to get the new customer reference name if any changes happened within the selected date range, based on sales order ID, product ID, and line creation sequence number.

 

Customer Reference (H) New = 
// Note: despite its name, _MinDate holds the LATEST SnapDate per
// (Sales Order ID, Product ID, Line Creation Sequence Number)
// combination within the current selection.
VAR _MinDate =
    CALCULATETABLE(
        ADDCOLUMNS(
            SUMMARIZE(
                'Sales Orders Snapshots',
                'Sales Orders Snapshots'[Sales Order ID (H)],
                'Sales Orders Snapshots'[Product ID (L)],
                'Sales Orders Snapshots'[Line Creation Sequence Number (L)]
            ),
            "@SnapDate", MAX( 'Sales Orders Snapshots'[SnapDate] )
        ),
        ALLSELECTED()
    )
// Turn that table into a filter over the four physical columns.
VAR _FilterSnap =
    TREATAS(
        _MinDate,
        'Sales Orders Snapshots'[Sales Order ID (H)],
        'Sales Orders Snapshots'[Product ID (L)],
        'Sales Orders Snapshots'[Line Creation Sequence Number (L)],
        'Sales Orders Snapshots'[SnapDate]
    )
// For each combination, keep only the rows on its latest SnapDate
// and take the Customer Reference found there.
VAR _Result =
    CALCULATE(
        MAX( 'Sales Orders Snapshots'[Customer Reference (H)] ),
        ALLEXCEPT(
            'Sales Orders Snapshots',
            'Sales Orders Snapshots'[Sales Order ID (H)],
            'Sales Orders Snapshots'[Product ID (L)],
            'Sales Orders Snapshots'[Line Creation Sequence Number (L)]
        ),
        KEEPFILTERS( _FilterSnap )
    )
RETURN _Result

This measure gives me the right value, but the storage engine and formula engine timings still bother me: the query takes 18+ seconds.
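For readers less fluent in DAX, the intended logic can be sketched in plain Python (the data and layout below are illustrative, not the actual model): for each (sales order ID, product ID, line sequence) combination, pick the customer reference recorded on that combination's latest snapshot date within the selected range.

```python
# Sketch of the measure's intent (toy data, not the real model):
# for each key, pick the Customer Reference on that key's latest
# snapshot date within the selected date range.

snapshots = [
    # (sales_order_id, product_id, line_seq, snap_date, customer_ref)
    ("100S1077426", "594963-W", 1, "2024-08-20", "PENDING2"),
    ("100S1077426", "594963-W", 1, "2024-09-18", "6631649637"),
    ("100S0587644", "662737-000", 1, "2024-09-18", "REF-A"),
]

def latest_reference(rows, date_from, date_to):
    best = {}  # key -> (snap_date, customer_ref)
    for order_id, product_id, line_seq, snap_date, ref in rows:
        if not (date_from <= snap_date <= date_to):
            continue  # outside the slicer's date range
        key = (order_id, product_id, line_seq)
        if key not in best or snap_date > best[key][0]:
            best[key] = (snap_date, ref)
    return {key: ref for key, (_, ref) in best.items()}

result = latest_reference(snapshots, "2024-08-01", "2024-09-30")
print(result[("100S1077426", "594963-W", 1)])  # -> 6631649637
```

Narrowing the range to end 2024-08-31 would instead return PENDING2 for that key, which is exactly the slicer-dependent behavior the measure implements.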

 

DAX Query from performance analyzer

// DAX Query
DEFINE
	VAR __DS0Core = 
		SUMMARIZECOLUMNS(
			'Sales Orders Snapshots'[Sales Order ID (H)],
			'Sales Orders Snapshots'[Product ID (L)],
			'Sales Orders Snapshots'[MGA Original Customer ID (H)],
			'Sales Orders Snapshots'[MGA Original Customer Name],
			'Sales Orders Snapshots'[Original Legal Entity (L)],
			'Sales Orders Snapshots'[Order Created Date (H)],
			'Sales Orders Snapshots'[Original Region Name (H)],
			'Sales Orders Snapshots'[Original Sales Entity Name (H)],
			'Sales Orders Snapshots'[Original Forecast Responsible Group Name (H)],
			'Sales Orders Snapshots'[Original SPA Name (H)],
			'Sales Orders Snapshots'[Sales Responsible Name (H)],
			'Sales Orders Snapshots'[D365 SO Link],
			'Sales Orders Snapshots'[Product Name],
			'Sales Orders Snapshots'[Line Creation Sequence Number (L)],
			'Sales Orders Snapshots'[Line Created Date (L)],
			'Sales Orders Snapshots'[MGA Original Order ID (L)],
			"Customer_Reference__H__New", 'Sales Orders Snapshots'[Customer Reference (H) New]
		)

	VAR __DS0PrimaryWindowed = 
		TOPN(
			501,
			__DS0Core,
			[Customer_Reference__H__New],
			0,
			'Sales Orders Snapshots'[Sales Order ID (H)],
			1,
			'Sales Orders Snapshots'[Product ID (L)],
			1,
			'Sales Orders Snapshots'[MGA Original Customer ID (H)],
			1,
			'Sales Orders Snapshots'[MGA Original Customer Name],
			1,
			'Sales Orders Snapshots'[Original Legal Entity (L)],
			1,
			'Sales Orders Snapshots'[Order Created Date (H)],
			1,
			'Sales Orders Snapshots'[Original Region Name (H)],
			1,
			'Sales Orders Snapshots'[Original Sales Entity Name (H)],
			1,
			'Sales Orders Snapshots'[Original Forecast Responsible Group Name (H)],
			1,
			'Sales Orders Snapshots'[Original SPA Name (H)],
			1,
			'Sales Orders Snapshots'[Sales Responsible Name (H)],
			1,
			'Sales Orders Snapshots'[D365 SO Link],
			1,
			'Sales Orders Snapshots'[Product Name],
			1,
			'Sales Orders Snapshots'[Line Creation Sequence Number (L)],
			1,
			'Sales Orders Snapshots'[Line Created Date (L)],
			1,
			'Sales Orders Snapshots'[MGA Original Order ID (L)],
			1
		)

EVALUATE
	__DS0PrimaryWindowed

ORDER BY
	[Customer_Reference__H__New] DESC,
	'Sales Orders Snapshots'[Sales Order ID (H)],
	'Sales Orders Snapshots'[Product ID (L)],
	'Sales Orders Snapshots'[MGA Original Customer ID (H)],
	'Sales Orders Snapshots'[MGA Original Customer Name],
	'Sales Orders Snapshots'[Original Legal Entity (L)],
	'Sales Orders Snapshots'[Order Created Date (H)],
	'Sales Orders Snapshots'[Original Region Name (H)],
	'Sales Orders Snapshots'[Original Sales Entity Name (H)],
	'Sales Orders Snapshots'[Original Forecast Responsible Group Name (H)],
	'Sales Orders Snapshots'[Original SPA Name (H)],
	'Sales Orders Snapshots'[Sales Responsible Name (H)],
	'Sales Orders Snapshots'[D365 SO Link],
	'Sales Orders Snapshots'[Product Name],
	'Sales Orders Snapshots'[Line Creation Sequence Number (L)],
	'Sales Orders Snapshots'[Line Created Date (L)],
	'Sales Orders Snapshots'[MGA Original Order ID (L)]

 

Server Timings:

(screenshot of the server timings)

 

The storage engine performed 3 scans:

Scan1 Query:

 

SELECT 'Sales Orders Snapshots'[Sales Order ID ( H ) ], 'Sales Orders Snapshots'[Line Creation Sequence Number ( L ) ], 'Sales Orders Snapshots'[Product ID ( L ) ] FROM 'Sales Orders Snapshots';   

 

Scan2 Query: 

SELECT 'Sales Orders Snapshots'[Sales Order ID ( H ) ], 'Sales Orders Snapshots'[Line Creation Sequence Number ( L ) ], 'Sales Orders Snapshots'[Product ID ( L ) ], MAX ( 'MinMaxColumnPositionCallback' ( PFDATAID ( 'Sales Orders Snapshots'[Customer Reference ( H ) ] ) ) ) FROM 'Sales Orders Snapshots' WHERE  ( 'Sales Orders Snapshots'[Sales Order ID ( H ) ], 'Sales Orders Snapshots'[Product ID ( L ) ], 'Sales Orders Snapshots'[Line Creation Sequence Number ( L ) ], 'Sales Orders Snapshots'[SnapDate] ) IN { ( '100S1040554', '656040-M8', '1', 45553.000000 ) , ( '100S0587644', '662737-000', '1', 45553.000000 ) , ( '411S0083387', '651489-E7', '4', 45553.000000 ) , ( '411S0094134', '593140-RF', '4', 45553.000000 ) , ( '100100S1115042', '541301-C3', '1', 45553.000000 ) , ( '100S0987039', '514527-CBULK', '7', 45553.000000 ) , ( '100S0112135', '618338-M', '1', 45553.000000 ) , ( '100S0331324', '587187-C3', '7', 45553.000000 ) , ( '100100S0377131', '587378-EUC', '39', 45553.000000 ) , ( '100S0599708', '591856-X2EUCALT', '3', 45553.000000 ) ..[20,49,990 total tuples, not all displayed]};   Estimated size: rows = 20,47,054  bytes = 3,27,52,864

Scan3 Query:

SELECT 'Sales Orders Snapshots'[D365 SO Link], 'Sales Orders Snapshots'[Product Name], 'Sales Orders Snapshots'[Sales Order ID ( H ) ], 'Sales Orders Snapshots'[Line Creation Sequence Number ( L ) ], 'Sales Orders Snapshots'[Product ID ( L ) ], 'Sales Orders Snapshots'[MGA Original Customer ID ( H ) ], 'Sales Orders Snapshots'[MGA Original Customer Name], 'Sales Orders Snapshots'[Original Forecast Responsible Group Name ( H ) ], 'Sales Orders Snapshots'[Original Region Name ( H ) ], 'Sales Orders Snapshots'[Original Sales Entity Name ( H ) ], 'Sales Orders Snapshots'[Original SPA Name ( H ) ], 'Sales Orders Snapshots'[Order Created Date ( H ) ], 'Sales Orders Snapshots'[Original Legal Entity ( L ) ], 'Sales Orders Snapshots'[Sales Responsible Name ( H ) ], 'Sales Orders Snapshots'[MGA Original Order ID ( L ) ], 'Sales Orders Snapshots'[Line Created Date ( L ) ] FROM 'Sales Orders Snapshots';   Estimated size: rows = 20,49,991  bytes = 13,11,99,424

 

Any suggestions for, or changes to, the measure I have written would be appreciated.

Please help.

 

Thanks,

Mohan V.

 

15 REPLIES

Mohan128256
Helper IV

@lbendlin @parry2k @Greg_Deckler @rajendraongole1 @amitchandak @Ritaf1983 

I have done whatever I could. I need your help to get this done.

 

I know I am asking for your personal time, but I humbly request you to please check the file
https://drive.google.com/file/d/17KcLzaen5rJUB5T4iiBB0sbPi6xAF2c1/view?usp=drive_link
and help me out with possible changes that i could do for this.

 

Thanks,

Mohan V.

Still way too big. (619 MB)

 

Distill it down further.

@lbendlin I have reduced it to 1.3 MB, with data for only 4 sales order IDs.
Please check and let me know.

I think you can simplify your measure like this.

Customer Reference (H) New = 
VAR _MaxDate =
    CALCULATE (
        MAX ( 'Sales Orders Snapshots'[SnapDate] ),
        ALLSELECTED ()
    )
VAR _Result =
    CALCULATE (
        MAX ( 'Sales Orders Snapshots'[Customer Reference (H)] ),
        ALLEXCEPT (
            'Sales Orders Snapshots',
            'Sales Orders Snapshots'[Sales Order ID (H)],
            'Sales Orders Snapshots'[Product ID (L)],
            'Sales Orders Snapshots'[Line Creation Sequence Number (L)]
        ),
        TREATAS ( { _MaxDate }, 'Sales Orders Snapshots'[SnapDate] )
    )
RETURN
    _Result

 

This produces only two queries in my testing and neither query had those annoying tuples.
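One caveat worth noting, sketched below in plain Python with toy data: the simplified measure filters to the single latest SnapDate across the whole selection, whereas the original filtered each combination to its own latest SnapDate. The two agree only when every combination has a row on the global latest snapshot date, which holds if the table is a complete daily snapshot.

```python
# Toy illustration of when the simplified measure agrees with the
# original. per_key_latest mirrors the original measure (each key's
# own latest SnapDate); global_latest mirrors the simplification
# (one shared latest SnapDate across the selection).

snapshots = [
    ("SO1", "P1", 1, "2024-09-17", "OLD"),
    ("SO1", "P1", 1, "2024-09-18", "NEW"),
    ("SO2", "P2", 1, "2024-09-16", "ONLY"),  # no row on 2024-09-18
]

def per_key_latest(rows):
    best = {}
    for *key, snap_date, ref in rows:
        k = tuple(key)
        if k not in best or snap_date > best[k][0]:
            best[k] = (snap_date, ref)
    return {k: ref for k, (_, ref) in best.items()}

def global_latest(rows):
    max_date = max(r[3] for r in rows)  # latest date in the selection
    return {tuple(r[:3]): r[4] for r in rows if r[3] == max_date}

print(per_key_latest(snapshots))  # SO2 resolves to "ONLY"
print(global_latest(snapshots))   # SO2 is missing entirely
```

If the snapshot table always contains every open order line on the latest snapshot date, the cheaper single-date filter is safe; otherwise keys absent on that date return blank.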

Thanks for the suggestion @AlexisOlson .

This works fine when I try it on the sample dataset which I shared with you.

 

But the problem appears with the whole table's data: the visual then takes almost 3 minutes to render.

 

This is where the users are all complaining and asking for better report performance.

 

Please help. @lbendlin @AlexisOlson @parry2k 

@Mohan128256 In my experience, there is no magic bullet that's going to let you materialize millions of rows in fractions of a second in a report, no matter how optimized your DAX is. As others have said in this thread, there are simply too many rows and columns in your table visual to render quickly. To test this theory, I'd be curious to see how long your table visual takes to load with a simpler DAX measure, like a SUM over a column in your fact table. It may not be ~3 minutes, but I bet it will still take a while to load.

 

In my opinion, the real solution here is working with your end users to see what they're trying to do with all that raw data. Perhaps you can negotiate with them and see if a different report design, like a drillthrough, tooltip, or single-select slicer, will give them what they need without taking forever to load. That being said, it is our responsibility to give the business what they need per their requirements. So if your users truly need this much raw data, then you should help them understand that the cost of this approach is a slow query, and there's no avoiding that.

 

Sorry to be the bearer of bad news here. I'd be interested in getting other people's opinions on this subject if there's something I'm missing. As someone who works in Finance (and they love their Excel spreadsheets), if there's a way to load a large amount of data very quickly in a table/matrix, then please let me know!

Mohan128256
Helper IV

@lbendlin @parry2k I have shared the file. I request you to please have a look and help me out with suggestions.

I highly appreciate your efforts on this.

 

Thanks,

Mohan V.

Mohan128256
Helper IV

@lbendlin @parry2k here is the pbix file:
https://drive.google.com/file/d/17KcLzaen5rJUB5T4iiBB0sbPi6xAF2c1/view?usp=drive_link

I request you to please go through it and help me out on this.

 

Thanks,

Mohan V.

 

 

There are too many columns in that table visual.  Anytime you see a horizontal scrollbar in a visual you know you have too many columns.

(screenshot of the table visual)

 

Your _MinDate filter is not filtering much. You seemingly spend over 4 seconds recalculating the same SnapDate over 250,000 rows.

CALCULATETABLE(
        ADDCOLUMNS(
            SUMMARIZE(
                'Sales Orders Snapshots',
                'Sales Orders Snapshots'[Sales Order ID (H)],
                'Sales Orders Snapshots'[Product ID (L)],
                'Sales Orders Snapshots'[Line Creation Sequence Number (L)]
            ),
            "@SnapDate", MIN( 'Sales Orders Snapshots'[SnapDate] )
        ),
        ALLSELECTED()
    )
 
And then you throw ALLEXCEPT, KEEPFILTERS and TREATAS all into a pile
 
VAR _Result =
    CALCULATE(
        MAX( 'Sales Orders Snapshots'[Customer Item Number (L)]),
        ALLEXCEPT(
            'Sales Orders Snapshots',
            'Sales Orders Snapshots'[Sales Order ID (H)],
            'Sales Orders Snapshots'[Product ID (L)],
            'Sales Orders Snapshots'[Line Creation Sequence Number (L)]
        ),
        KEEPFILTERS( _FilterSnap )
    )
 
I don't understand what you are trying to achieve with that. Maybe you can explain the business intent?
 
As a side note: that file is way too big. Please provide sample data that fully covers your issue, but no more. Do not include anything unrelated to the issue. Do not include any sensitive information.
Please show the expected outcome based on the sample data you provided.

@lbendlin my heartfelt thanks; I appreciate you spending your personal time to help me out here.

 

Here is what I am trying to do with the measures I have created.

 

Example: the Customer Reference (H) column.

 

(screenshot: snapshot rows for the example order line)

As per the above screenshot,

I have 30 days of data for Sales Order ID = 100S1077426, Product ID = 594963-W, Line Creation Sequence Number = 1.

The Customer Reference (H) New measure calculates the latest value available in the time period filtered by the date range slicer.

In this case, for the max date 18-09-2024, the customer reference value is 6631649637.

 

The Customer Reference (H) Before measure calculates the initial value available in the time period filtered by the date range slicer.

In this case, for the min date 20-08-2024, the customer reference value is PENDING2.

 

So Customer Reference (H) New should show 6631649637, and Customer Reference (H) Before should show PENDING2.

 

But if I change the date range to 10-09-2024 to 18-09-2024, then both should show the same value, as the max date (18-09-2024) and min date (10-09-2024) have the same value.

(two screenshots of the narrowed date range)
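The New/Before behavior described above can be sketched in plain Python (toy data mirroring the example values): "New" reads the value on the max snapshot date in the selected range, "Before" the value on the min snapshot date.

```python
# Toy illustration of the New/Before semantics for one order line:
# "New" = value on the latest snapshot date in the selected range,
# "Before" = value on the earliest snapshot date in the range.

history = {  # snap_date -> customer_ref (illustrative values)
    "2024-08-20": "PENDING2",
    "2024-09-10": "6631649637",
    "2024-09-18": "6631649637",
}

def new_and_before(hist, date_from, date_to):
    dates = sorted(d for d in hist if date_from <= d <= date_to)
    return hist[dates[-1]], hist[dates[0]]  # (new, before)

# Wide range: values differ, so conditional formatting would highlight.
print(new_and_before(history, "2024-08-20", "2024-09-18"))
# -> ('6631649637', 'PENDING2')

# Narrow range: both ends carry the same value, so no highlight.
print(new_and_before(history, "2024-09-10", "2024-09-18"))
# -> ('6631649637', '6631649637')
```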

As you see, the measures I have created give the right result, and they work fine when I filter the data down to a limited number of records.

But in reality the table has 100+ million records, and I have to show the columns with their New and Before values, with conditional formatting to highlight any row where New and Before differ.

 

I have limited the data.

Please check the same file over the drive.

 

I hope I gave you the details you need to look into this.

 

Let me know if you need anymore details.

 

Thanks,

Mohan V.

 

@lbendlin this all works fine when I show individual measures along with the required columns in a table visual.

But when I show all the required measures, with New and Before, in the table visual along with conditional formatting, the visual takes almost 3 minutes to render.

I request you to look at the actual file and suggest what I can change to make it more efficient.

 

Thanks,

Mohan V.

@AlexisOlson @lbendlin @parry2k could you please help me with this?

 

I am still looking for a better solution.

 

 

I can only optimize a file that I can actually tinker with. Do you have something you can share that demonstrates the issue? (You said the sample file works fine.)

lbendlin
Super User

Look at the query plan and find rows with an excessive number of records. That's where the Cartesian products happen that you need to eliminate.

 

Break your measure down into smaller parts and evaluate them part by part to see where it becomes expensive.

parry2k
Super User

@Mohan128256 It will be easier if you share pbix file using one drive/google drive with the expected output. Remove any sensitive information before sharing.



