
Revolution
Frequent Visitor

Running Total From Transaction Data

I'm trying to chart historical stock levels by starting from the first transaction (Qty In/Out) in 2012. I have seen many examples of running totals, but none that quite fit my dataset:

 

Date | TransactionID | SKU | Warehouse | Qty In | Qty Out

 

I have grouped this table by date, SKU and Warehouse and then added an index column:

[screenshot: grouped table with added Index column]

 

RTIn & RTOut are running totals of Qty In & Qty Out each day, e.g.:

= Table.AddColumn(#"Added Index", "RTOut", each List.Sum(List.Range(BufferedOut, 0, [Index])))

 

I need these columns to be specific to each SKU and Warehouse. The difference between the two columns should then be the stock level for the SKU in a particular warehouse on that date. I'm having a lot of trouble trying out potential solutions because it is a large dataset (over 1 million rows). Every time I make a change I wait 10-20 minutes, or Power BI freezes altogether.
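For reference, one way to make the running totals specific to each SKU and Warehouse is to group on those columns, compute the running totals once per group with List.Generate (linear time, rather than the quadratic cost of calling List.Range for every row), and expand back. This is only a rough sketch — step and column names like #"Grouped Rows" are assumptions based on the post and would need adjusting to the actual query:

```
// Sketch: per-SKU/Warehouse running totals. Step and column names are
// assumptions; adjust to match the real query.
let
    Source = #"Grouped Rows",

    // O(n) running total of a list, instead of List.Range per row (O(n^2)).
    RunningTotal = (values as list) as list =>
        let
            buffered = List.Buffer(values),
            result = List.Generate(
                () => [i = 0, total = buffered{0}],
                each [i] < List.Count(buffered),
                each [i = [i] + 1, total = [total] + buffered{[i] + 1}],
                each [total]
            )
        in
            result,

    // One nested table per SKU/Warehouse combination.
    Grouped = Table.Group(Source, {"SKU", "Warehouse"},
        {{"Rows", each _, type table}}),

    // Add RTIn/RTOut inside each nested table, sorted by date.
    AddRT = Table.TransformColumns(Grouped, {"Rows", (t) =>
        let
            sorted = Table.Sort(t, {{"Date", Order.Ascending}})
        in
            Table.FromColumns(
                Table.ToColumns(sorted)
                    & {RunningTotal(sorted[Qty In]), RunningTotal(sorted[Qty Out])},
                Table.ColumnNames(sorted) & {"RTIn", "RTOut"})}),

    Expanded = Table.ExpandTableColumn(AddRT, "Rows",
        {"Date", "Qty In", "Qty Out", "RTIn", "RTOut"})
in
    Expanded
```

List.Buffer keeps each group's list in memory so List.Generate doesn't re-evaluate the source on every step, which is usually the main cost in running-total queries.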

 

Would measures be a better option? What can I do to optimise memory usage to update changes faster? I am very new to Power BI, any advice would be much appreciated.
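On the measures question: if the raw transactions are loaded into the model, the stock level can instead be computed at report time with a running-sum measure, which avoids recalculating anything in Power Query when the report changes. A hedged sketch only — the table name Transactions and a related 'Date' table are assumptions:

```
Stock Level =
CALCULATE (
    SUM ( Transactions[Qty In] ) - SUM ( Transactions[Qty Out] ),
    FILTER (
        ALL ( 'Date'[Date] ),
        'Date'[Date] <= MAX ( 'Date'[Date] )
    )
)
```

Because SKU and Warehouse stay in the filter context, the same measure gives the per-SKU, per-warehouse stock level automatically when those fields are on a visual or slicer.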

 

Is this the right solution for me?

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @Revolution ,

In Power Query, the most effective way to improve performance is still to use Group By to reduce the amount of data each query has to process. You mentioned that you are already using Group By, yet the query is still slow; I'm afraid your dataset is simply very large, and even the List.Generate function mentioned in your link may not improve performance significantly.
My suggestion is to aggregate the data in advance: wherever the report does not need fine-grained data, aggregate it on the data source side before it reaches Power Query. Also, if your data source supports query folding, share the load by letting the source process all the steps that can fold, rather than doing them in Power Query.
Query folding guidance in Power BI Desktop - Power BI | Microsoft Learn
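To illustrate the pre-aggregation idea: where daily detail isn't needed, the transactions can be collapsed to month level before any running-total logic, and both Table.Group and Date.StartOfMonth typically fold to a SQL-type source. A sketch under assumptions — the source step and column names are taken from the question, not from a real query:

```
// Sketch: reduce granularity at the source. Column names are assumptions.
let
    Source = #"Previous Step",
    WithMonth = Table.AddColumn(Source, "Month",
        each Date.StartOfMonth([Date]), type date),
    Monthly = Table.Group(
        WithMonth,
        {"SKU", "Warehouse", "Month"},
        {
            {"Qty In",  each List.Sum([Qty In]),  type number},
            {"Qty Out", each List.Sum([Qty Out]), type number}
        }
    )
in
    Monthly
```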

Best Regards,
Dino Tao
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.


