Hi
I'm using PBI with DirectQuery to SSAS Tabular 2016 standard.
Scenario: Costs per project, sliceable by time, for current value and total project value, up to the chosen date.
Current cost:
SUM(FactTable[Costs])
Cumul cost:
CALCULATE (
[Current cost];
FILTER (
ALL ( 'Dim_Date'[PK_Date] );
'Dim_Date'[PK_Date] <= MAX ( 'Dim_Date'[PK_Date] )
)
)
Running the 'Current cost' measure for all projects (18k): <0.5sec
Running the 'Cumul cost' measure for all projects: 5sec
Fact table has about 10m rows.
This seems odd to me, since all I'm doing is restricting the date filter to dates up to the maximum in the date dimension.
I'm running into the issue that if I use this logic for more complex base measures, I face loading times of 30 seconds to a minute, where the base formula takes about 2 to 3 seconds.
Also, RAM usage seems to build up. I can't see why this is necessary, as no intermediate calculations are made.
Are there any more performant solutions?
As long as you have some sort of time filter (Year, Month), [Current Cost] will actually run on a smaller dataset, because it is evaluated only in that filter context (e.g. 2018 only).
the other measure:
1) determines the max date in the given filter context (coming from slicers, visuals etc.)
2) overwrites that filter context with ALL and applies the filter from 1), so everything before or equal to 2018. Since FILTER is an iterator, it performs the comparison for each row of Dim_Date[PK_Date].
have a read here
https://www.sqlbi.com/articles/filter-vs-calculatetable-optimization-using-cardinality-estimation/
You could try this. My hope is that it saves time by not evaluating MAX ( 'Dim_Date'[PK_Date] ) in each iteration, but I'm not sure how effective this would be:
Cumul cost =
VAR _MaxDate = MAX ( 'Dim_Date'[PK_Date] )
RETURN
    CALCULATE (
        [Current cost];
        FILTER (
            ALL ( 'Dim_Date'[PK_Date] );
            'Dim_Date'[PK_Date] <= _MaxDate
        )
    )
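If the explicit FILTER over ALL ( 'Dim_Date'[PK_Date] ) stays slow, another pattern worth trying is DATESBETWEEN, which expresses the same "everything up to the max visible date" filter without writing the iterator yourself. This is only a sketch under the assumption that 'Dim_Date'[PK_Date] is the date column of a proper date table; measure the two variants against each other before committing to either:

Cumul cost (alt) =
CALCULATE (
    [Current cost];
    DATESBETWEEN (
        'Dim_Date'[PK_Date];
        BLANK ();                        -- no lower bound: start at the first date
        MAX ( 'Dim_Date'[PK_Date] )     -- up to the max date in the current filter context
    )
)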
Thanks for the explanation.
I have extensively tried the variable suggestion; unfortunately it does not result in faster times. Not worse either, though: exactly the same.
It seems that even though it evaluates each row, it does not recompute MAX() each time, hence no improvement from the variable.
This video is quite helpful with regard to optimization:
https://www.sqlbi.com/tv/dax-optimization-examples/
A bit over 1h, but very informative, with hands-on examples.