Hi,
Is there a way to do incremental loads from CSV files? I have CSV files delivered weekly, and each new file currently overwrites the previous one.
I want to know:
1. Can we store the previously loaded data so it isn't overwritten when I load the new file into Power BI? The file name remains the same, so I would have to add the week, month, and year in Power Query.
2. If we can't do the above, I can get the team to append the week number and year to the file name. Could we then load just the new data file rather than reloading all the previous ones? (A rough sketch of the folder filtering I have in mind is below.)
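To make that second question concrete, here is a minimal Power Query (M) sketch of what I'm imagining, assuming the files land in one folder and are named with the year and a zero-padded week number, e.g. sales_2026_W08.csv (the path and naming pattern are just examples). It picks up only the newest file; on its own, a normal import refresh would still replace the whole table, which is why I'm asking about keeping the history.

let
    // Hypothetical folder path; point this at wherever the weekly files land
    Source = Folder.Files("C:\Data\WeeklyExtracts"),
    CsvOnly = Table.SelectRows(Source, each [Extension] = ".csv"),
    // With names like sales_2026_W08.csv, sorting by name descending puts the latest delivery first
    Sorted = Table.Sort(CsvOnly, {{"Name", Order.Descending}}),
    Newest = Table.FirstN(Sorted, 1),
    // Parse only that single file
    Csv = Csv.Document(Newest{0}[Content], [Delimiter = ",", Encoding = 65001]),
    Promoted = Table.PromoteHeaders(Csv, [PromoteAllScalars = true])
in
    Promoted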
Thanks a lot
Proud to be a Super User!
Paul on LinkedIn.
According to this article, no:
https://docs.microsoft.com/en-us/power-bi/service-premium-incremental-refresh
Currently, for composite models, incremental refresh is supported for SQL Server, Azure SQL Database, SQL Data Warehouse, Oracle, and Teradata data sources only.
Sometimes the documentation is out of date, though.
Yes, @Greg_Deckler is correct...
BUT, you can use the same M code and create a Dataflow. Dataflows DO enable incremental refresh on sources like .csv flat files.
When the dataflow refreshes, it will only pick up the newest data, perform the transformations, and load that newly shaped data into the Azure Data Lake.
If you do all of your transformations in the dataflow, Power BI just has to read from the Data Lake to pull data into the model. That read is usually much faster than reading from .csv files in SharePoint and performing the transformations there.
I'm pretty sure you can also enable incremental refresh for pulling data from the dataflow.
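For reference, incremental refresh boils down to filtering a DateTime column between two parameters. Below is a minimal sketch of that pattern over a folder of .csv files, assuming RangeStart and RangeEnd are defined as DateTime parameters (that is the convention Power BI expects for incremental refresh on a dataset; for a dataflow you configure the refresh window in the entity's incremental refresh settings instead). The folder path and the use of the file's modified date are assumptions for illustration.

let
    // Hypothetical folder of weekly .csv deliveries
    Source = Folder.Files("C:\Data\WeeklyExtracts"),
    CsvOnly = Table.SelectRows(Source, each [Extension] = ".csv"),
    // Use each file's modified date (or a date parsed from the file name)
    // as the column the refresh window filters on
    WithDate = Table.AddColumn(CsvOnly, "FileDate", each [Date modified], type datetime),
    // Only files falling inside the current refresh window get picked up and reprocessed
    InWindow = Table.SelectRows(WithDate, each [FileDate] >= RangeStart and [FileDate] < RangeEnd),
    Parsed = Table.AddColumn(InWindow, "Data", each Table.PromoteHeaders(
        Csv.Document([Content], [Delimiter = ",", Encoding = 65001]), [PromoteAllScalars = true])),
    Combined = Table.Combine(Parsed[Data])
in
    Combined

Once the window is narrow (say, the last week or two), each refresh only touches the newest deliveries instead of reparsing every historical file.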
Best,
~ Chris