Hi all,
I receive data on a weekly basis in CSV format, which I save to a folder and combine in Power BI. However, as I have well over three years of weekly data (roughly 100 MB per week), my refresh is extremely slow. I'm after ideas on how best to manage this data, and whether there is a way to refresh only the latest file while keeping all of my historical data.
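For context, the combine step in my query looks roughly like this (a minimal sketch; the folder path is a placeholder for my real one):

let
    // Pick up every weekly CSV dropped into the folder
    Source = Folder.Files("C:\Data\WeeklyFiles"),
    CsvOnly = Table.SelectRows(Source, each Text.Lower([Extension]) = ".csv"),
    // Parse each file and promote its first row to headers
    Parsed = Table.AddColumn(CsvOnly, "Data", each Table.PromoteHeaders(Csv.Document([Content]))),
    // Append all of the weekly tables into one table
    Combined = Table.Combine(Parsed[Data])
in
    Combined

So every refresh re-reads all of the files, which is why it keeps getting slower.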
Any ideas would be greatly appreciated.
Hi,
Sounds like you need incremental refresh. Have a look at https://www.fourmoo.com/2020/06/10/how-you-can-incrementally-refresh-any-power-bi-data-source-this-e...
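The gist of it, as a rough sketch: define the RangeStart/RangeEnd datetime parameters Power BI uses for incremental refresh, then filter the folder on file metadata before parsing, so each refresh only opens the files in its date range. The folder path is a placeholder, and filtering on Date modified is just one option; a date column inside the data works too:

let
    Source = Folder.Files("C:\Data\WeeklyFiles"),
    // Filter on file metadata BEFORE parsing, so a refresh only
    // opens the files that fall inside the partition's window
    InRange = Table.SelectRows(Source, each [Date modified] >= RangeStart and [Date modified] < RangeEnd),
    Parsed = Table.AddColumn(InRange, "Data", each Table.PromoteHeaders(Csv.Document([Content]))),
    Combined = Table.Combine(Parsed[Data])
in
    Combined

With that in place you set the incremental refresh policy on the table (e.g. store four years, refresh the last week), and the service only reprocesses the newest partition while keeping the historical partitions stored.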
That said, can you give a little more detail:
1) Do you have a single file that contains 3 years of data, or are you merging each week's file?
2) Do you have a PPU or Premium licence?
3) Do you have a SQL Server?
Hi,
I get a file with one week's data at a time and then merge them together in Power BI, so at the moment I have a folder with over 180 CSVs.
I have a PPU licence, and no, I don't have a SQL Server.
I will have a read through the article you sent; that might be my answer.
Hope that helps.
My background is heavily SQL, so my default suggestion would be to ingest each week's file into a SQL database; you could spin one up in Azure.
The other way to go is to look at a data lake in Azure, which I suspect is cheaper. I'm no expert in that area, but the following might give you a starting point:
Create a storage account for Azure Data Lake Storage Gen2 | Microsoft Docs
Drop the CSV files directly in there.
You could then hook the dataflow storage layer directly into it:
Configuring dataflow storage to use Azure Data Lake Gen 2 - Power BI | Microsoft Docs
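If it helps to picture it, a query against the lake ends up looking much like a local folder combine; a rough sketch, assuming the standard AzureStorage.DataLake connector (the storage account URL and filesystem name are placeholders):

let
    // ADLS Gen2 endpoint: https://<account>.dfs.core.windows.net/<filesystem>
    Source = AzureStorage.DataLake("https://youraccount.dfs.core.windows.net/weeklydata"),
    CsvOnly = Table.SelectRows(Source, each Text.EndsWith(Text.Lower([Name]), ".csv")),
    Parsed = Table.AddColumn(CsvOnly, "Data", each Table.PromoteHeaders(Csv.Document([Content]))),
    Combined = Table.Combine(Parsed[Data])
in
    Combined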