Hi, I have a dataset that has 11M rows, and I am trying to add a column that runs a SUMX which will then be used in a different column, but this calculation keeps running out of memory. The calculation won't even run on a third of the dataset.
@aaelansr Try something like this:
VAR _Table =
    SELECTCOLUMNS(
        FILTER(
            'Master',
            'Master'[Observer] <= _CurrentYear
                && 'Master'[Observer] > _MinYear
                && 'Master'[Building-Asset] = _Asset
        ),
        "Amount to be Depreciated", 'Master'[Amount to be Depreciated],
        "Percentage", 'Master'[Percentage],
        "Period of Depreciation", 'Master'[Period of Depreciation],
        "Total Number of Months", 'Master'[Total Number of Months]
    )
VAR _Return =
    SUMX(
        _Table,
        DIVIDE(
            [Amount to be Depreciated] * [Percentage] * [Period of Depreciation],
            [Total Number of Months]
        )
    )
RETURN
    _Return
Another trick is to use SUMMARIZE, which can make things crazy fast.
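For reference, a rough, untested sketch of what a SUMMARIZE version might look like, reusing the same 'Master' columns and the _CurrentYear / _MinYear / _Asset variables from the snippet above. Note that SUMMARIZE returns only the distinct combinations of the grouped columns, so if several filtered rows share identical values for all four columns you would need to add a row count back into the SUMX, otherwise the result will differ from the original:

VAR _Summary =
    SUMMARIZE(
        FILTER(
            'Master',
            'Master'[Observer] <= _CurrentYear
                && 'Master'[Observer] > _MinYear
                && 'Master'[Building-Asset] = _Asset
        ),
        // group down to the distinct combinations the calculation actually needs
        'Master'[Amount to be Depreciated],
        'Master'[Percentage],
        'Master'[Period of Depreciation],
        'Master'[Total Number of Months]
    )
VAR _Return =
    // iterate the (much smaller) summary table instead of the raw 11M rows
    SUMX(
        _Summary,
        DIVIDE(
            'Master'[Amount to be Depreciated]
                * 'Master'[Percentage]
                * 'Master'[Period of Depreciation],
            'Master'[Total Number of Months]
        )
    )
RETURN
    _Return

The idea is that the iterator only touches the distinct combinations rather than every row, which is usually where the memory savings come from.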
@Greg_Deckler Great! That worked beautifully and enabled me to process more than twice as many rows, but it still didn't get me all the way. Can you help me with what this would look like using SUMMARIZE? Would SUMMARIZE also use less memory?
Thank you!