I've created a function that returns the running total across all dates. I want to break the calculation for each date and then pick the row that corresponds to the first Cumm G above 100, similar to the output table below:
let
    Source = (RT_Name1 as text, RT_Name2 as text, MyTable as table, RT_ColumnName1 as text, RT_ColumnName2 as text) =>
        let
            Source2 = MyTable,
            BufferedValues1 = List.Buffer(Table.Column(MyTable, RT_ColumnName1)),
            BufferedValues2 = List.Buffer(Table.Column(MyTable, RT_ColumnName2)),
            RunningTotalList1 = List.Generate(
                () => [RunningTotal1 = BufferedValues1{0}, Counter = 0],
                each [Counter] < List.Count(BufferedValues1),
                each [RunningTotal1 = [RunningTotal1] + BufferedValues1{[Counter] + 1}, Counter = [Counter] + 1],
                each [RunningTotal1]
            ),
            RunningTotalList2 = List.Generate(
                () => [RunningTotal2 = BufferedValues2{0}, Counter = 0],
                each [Counter] < List.Count(BufferedValues2),
                each [RunningTotal2 = [RunningTotal2] + BufferedValues2{[Counter] + 1}, Counter = [Counter] + 1],
                each [RunningTotal2]
            ),
            Consolidation = Table.FromColumns(
                Table.ToColumns(Source2) & {Value.ReplaceType(RunningTotalList1, type {Int64.Type})} & {Value.ReplaceType(RunningTotalList2, type {Int64.Type})},
                Table.ColumnNames(Source2) & {RT_Name1} & {RT_Name2}
            )
        in
            Consolidation
in
    Source
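For readers following along, the two-column running-total construction the function performs can be sketched in Python (a list of dicts stands in for the table; all names here are illustrative, not part of the M code):

```python
from itertools import accumulate

def add_running_totals(rows, col1, col2, name1, name2):
    """Append running-total columns for col1/col2, mirroring the
    single accumulating pass that List.Generate makes above."""
    totals1 = accumulate(r[col1] for r in rows)
    totals2 = accumulate(r[col2] for r in rows)
    return [{**r, name1: t1, name2: t2}
            for r, t1, t2 in zip(rows, totals1, totals2)]
```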
Just copy and paste the code below into the Advanced Editor.
It is also more efficient in terms of time to calculate the running total.
let
    Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("Zc9BCsQgDAXQqxTXBU200W1bmOWsC8X7X6P5pjCSWRgUH9/vfYf9CGvgGilFTlz0UDYdaay+Kjg9YFxS1ZFN/EVsECzYmbi8yEOwDpkyZBYZXfAKsZHdE0JTgaO3ydeT0VWaFf59ZyYFAdUCX/PxBg81pCzUQu8P", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type nullable text) meta [Serialized.Text = true]) in type table [Type = _t, Run_Date = _t, G = _t, O = _t, #"Sort F" = _t]),
    #"Changed Type" = Table.TransformColumnTypes(Source, {{"Type", type text}, {"Run_Date", type date}, {"G", Int64.Type}, {"O", Int64.Type}, {"Sort F", Int64.Type}}),
    // One nested table per Run_Date, each with a 1-based index column "in"
    #"Grouped Rows" = Table.Group(#"Changed Type", {"Run_Date"}, {{"Added Index", each Table.AddIndexColumn(_, "in", 1)}}),
    // Smallest row count whose running total of G reaches 100
    #"Added Custom" = Table.AddColumn(#"Grouped Rows", "Selected row", each
        [
            a = List.Buffer([Added Index][G]),
            b = List.Last(List.Generate(() => 1, each List.Sum(List.FirstN(a, _)) < 100, each _ + 1)) + 1
        ][b]),
    // Attach the cumulative G and O sums, then keep a single row per date
    #"Added Custom1" = Table.AddColumn(#"Added Custom", "Custom", each
        Table.Range(
            Table.AddColumn(
                Table.AddColumn([Added Index], "Cumm G", (x) => List.Sum(List.FirstN(_[Added Index][G], [Selected row]))),
                "Cumm O", (x) => List.Sum(List.FirstN(_[Added Index][O], [Selected row]))),
            [Selected row], 1)),
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom1", {"Added Index", "Selected row", "Run_Date"}),
    #"Expanded Custom" = Table.ExpandTableColumn(#"Removed Columns", "Custom", {"Type", "Run_Date", "G", "O", "Sort F", "in", "Cumm G", "Cumm O"}, {"Type", "Run_Date", "G", "O", "Sort F", "in", "Cumm G", "Cumm O"})
in
    #"Expanded Custom"
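The per-date selection the question asks for (find the first row whose running total of G reaches 100, and report the cumulative G and O up to that row) can be sketched in Python; the helper name and dict keys are illustrative, and the sample rows come from the table posted in this thread:

```python
from itertools import accumulate

def first_row_reaching(rows, threshold=100):
    """Given one date's rows in order, return (row_index, cumm_g, cumm_o)
    for the first row whose running total of G reaches the threshold.
    row_index is 0-based; raises StopIteration if the threshold is never met."""
    g_totals = list(accumulate(r["G"] for r in rows))
    n = next(i for i, t in enumerate(g_totals) if t >= threshold)
    cumm_o = sum(r["O"] for r in rows[: n + 1])
    return n, g_totals[n], cumm_o
```

For the 27/10/2024 rows (G = 45, 20, 50, 30) the running totals are 45, 65, 115, so the third row is selected with Cumm G = 115 and Cumm O = 0 + 17 + 26 = 43.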
You can filter for values greater than or equal to 100 in the G column, then use Group By on Run_Date and pick All Rows as the operation. Then, in the third argument of the resulting formula, replace each _ with each Table.FirstN(_, 1) so that it returns the desired table.
If you face a problem, please provide your data as a table here.
Omid,
I can't filter for values >= 100 before I run the function. I need a modified function that calculates the running total (Cumm G) and then filters >= 100 for each date.
This is the table:
Type | Run_Date | G | O | Sort F |
AB | 27/10/2024 | 45 | 0 | 0 |
AC | 27/10/2024 | 20 | 17 | 3 |
AB | 27/10/2024 | 50 | 26 | 5 |
AX | 27/10/2024 | 30 | 22 | 6 |
AB | 26/10/2024 | 33 | 47 | 12 |
AA | 26/10/2024 | 15 | 63 | 13 |
AN | 26/10/2024 | 20 | 68 | 17 |
AC | 26/10/2024 | 43 | 724 | 17 |
AF | 26/10/2024 | 5 | 88 | 18 |
Omid, thank you for your support. The code works fine, but only for a small set of data.
I have nearly 300K+ rows, and it is taking a very long time to run.
Is there a way you can speed up the running-total calculation, e.g. by using buffering?
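For context on why buffering alone may not be enough: the grouped query recomputes List.Sum(List.FirstN(...)) for every candidate row, which is quadratic in the group size, whereas the function at the top of the thread carries the total forward in a single List.Generate pass over a buffered list. The difference can be sketched in Python (function names are illustrative):

```python
def running_total_quadratic(values):
    # Re-summing a growing prefix for every row, the way repeated
    # List.Sum(List.FirstN(...)) calls do: O(n^2) additions overall.
    return [sum(values[: i + 1]) for i in range(len(values))]

def running_total_linear(values):
    # One accumulating pass, the way List.Generate carries
    # RunningTotal forward over a buffered list: O(n) additions.
    out, total = [], 0
    for v in values:
        total += v
        out.append(total)
    return out
```

On 300K+ rows, the difference between the two shapes dominates any gain from buffering alone.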