Hi
I'm having huge performance issues implementing a scenario with this setup:
- Fact table with ~14 million rows and 5 columns
- Proper star schema implemented
- Import mode used
- File size in Power BI Desktop: ~250 MB
The data model itself is quite simple. The requirement is the following:
The final goal is a measure that returns a specific calculated value, which must be correct both at row level (Area) and at the total level.
To get the correct result at the total level, I have to calculate the values at row level first and aggregate them with SUMX() in the final measure.
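For example (simplified numbers): two rows, each with a Value of 100, where one is 50% elapsed as of the snapshot date and the other 20%, earn 50 + 20 = 70 in total. Evaluating the same formula once at the total level instead (MAX of the dates across both rows, Value = 200) returns a different figure, so the total is only correct when it is built as the sum of the row-level results.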
When I visualize the data in a table for just one Area, the performance is not the best, but (almost) acceptable. But when I select all areas (~170), the performance is so bad that I run into the error "There's not enough memory to complete this operation. Please try again later when there may be more memory available." - in Power BI Desktop as well as in the service.
Here's what my calendar table and DAX measures look like:
Calendar =
VAR _calendar = CALENDAR("2023/01/01","2023/12/31")
RETURN
    ADDCOLUMNS(
        _calendar,
        "Snapshot Date", [Date]
    )

ValueOriginal = SUM(Fact[Value])

CountDays =
VAR _snapShotDate = [Snapshot Date]
VAR _startDate = MAX('Fact'[StartDate])
VAR _endDate = MAX('Fact'[EndDate])
RETURN
    DATEDIFF(
        _startDate,
        IF(_snapShotDate < _endDate, _snapShotDate, _endDate),
        DAY
    )

DeltaDays =
VAR _dateDiff =
    DATEDIFF(
        MAX('Fact'[StartDate]),
        MAX('Fact'[EndDate]),
        DAY
    ) + 1
RETURN
    IF(
        ISBLANK(_dateDiff),
        BLANK(),
        _dateDiff
    )

FinalDays =
IF(
    [CountDays] < 0 || ISBLANK([CountDays]),
    BLANK(),
    [CountDays] + 1
)

ValueEarned =
VAR _Risk_Delta = [DeltaDays]
VAR _Risk_Days_Final = [FinalDays]
RETURN
    [ValueOriginal] * DIVIDE(_Risk_Days_Final, _Risk_Delta)

ValueEarnedTotal =
SUMX (
    VALUES ( Fact[UniqueID] ),
    [ValueEarned]
)
This last measure, [ValueEarnedTotal], is the actual bottleneck; this is where I get the memory exception because of the SUMX().
I don't see any other solution than calculating the values at row level first to fulfill this requirement, or reducing the number of rows, but I need all of them at this specific granularity.
Additionally, I'm not able to precalculate the values in the source or in Power Query, because the snapshot date is selected dynamically within the report.
Is there any chance to improve the DAX code and/or change the calculation to get good performance without memory exceptions? ~14 million rows don't seem like that much for today's technology, or am I wrong?
I've already tried various things to improve the performance, but without much impact.
Appreciate any help - thx!
Best regards
PBI-Enthusiast
You are doing a SUMX without any filters. Is that intentional?
Instead of VALUES, try using SUMMARIZE or SUMMARIZECOLUMNS with ADDCOLUMNS.
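For example, something roughly along these lines (untested sketch, reusing the column names from your measures; it assumes [Snapshot Date] is a measure returning the selected snapshot date and that each UniqueID has a single StartDate/EndDate):

ValueEarnedTotal v2 =
VAR _snapshotDate = [Snapshot Date]    // evaluate the selected snapshot date once, outside the iteration
RETURN
    SUMX (
        ADDCOLUMNS (
            SUMMARIZE ( Fact, Fact[UniqueID], Fact[StartDate], Fact[EndDate] ),
            "@Value", CALCULATE ( SUM ( Fact[Value] ) )
        ),
        // inline the per-row arithmetic instead of calling the nested measures
        VAR _delta = DATEDIFF ( Fact[StartDate], Fact[EndDate], DAY ) + 1
        VAR _elapsed =
            DATEDIFF (
                Fact[StartDate],
                IF ( _snapshotDate < Fact[EndDate], _snapshotDate, Fact[EndDate] ),
                DAY
            )
        VAR _final = IF ( _elapsed < 0 || ISBLANK ( _elapsed ), BLANK (), _elapsed + 1 )
        RETURN [@Value] * DIVIDE ( _final, _delta )
    )

The idea is to materialize StartDate, EndDate and the summed Value once per UniqueID and do the date arithmetic inline with variables, so the engine doesn't have to perform a context transition and evaluate five separate measures for every single UniqueID.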
Hi @lbendlin
Thanks for your response.
It's basically intentional, because I need every single record to get the correct values for the total, and I'm not aware of any way to summarize/group the data from a business point of view.
But I will definitely take a look at the suggested functions!
Best regards,
PBI-Enthusiast