Hello,
As in the topic title: how do I do an incremental refresh using OneLake tables in Fabric?
I have tables coming from Azure Blob Storage into OneLake in Fabric.
How do I set up incremental refresh? I have delta parquet tables with year, month, and day in the name...
Best,
Jacek
Solved! Go to Solution.
Hi @jaryszek,
Sorry for the delay in responding. Good point, and you're right: in Direct Lake mode Power Query isn't available, so you can't set up incremental refresh the usual way. In that case the trick is to manage incrementality at the data source / lakehouse level.
You can try the approaches below.
With Direct Lake: keep your data partitioned properly and let Fabric read the latest partitions (see the sketch after this list).
With Import: use Power Query with an incremental refresh policy.
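For the Direct Lake route, here is a minimal PySpark sketch of what managing incrementality at the lakehouse level can look like, assuming a Fabric notebook attached to the lakehouse; the landing path, the sales table, and the order_date column are hypothetical names, not from this thread:

```python
# Sketch only: assumes a Fabric notebook, where `spark` is the built-in session.
# The landing path, `sales` table, and `order_date` column are hypothetical.
from pyspark.sql import functions as F

# Read just the newest files landed from Azure Blob Storage into OneLake.
new_rows = (
    spark.read.parquet("Files/landing/sales/2026/01/15/")  # hypothetical daily drop
    .withColumn("year", F.year("order_date"))
    .withColumn("month", F.month("order_date"))
    .withColumn("day", F.dayofmonth("order_date"))
)

# Append only that slice into a delta table partitioned by year/month/day.
# A Direct Lake model reads the delta log directly, so it picks up the new
# partitions on the next refresh without reprocessing history.
(
    new_rows.write
    .format("delta")
    .mode("append")
    .partitionBy("year", "month", "day")
    .saveAsTable("sales")
)
```

If the source can re-deliver files, a keyed MERGE into the table is a safer choice than a plain append.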
Thanks for calling this out; it's an important distinction between Direct Lake and Import.
Thanks,
Akhil.
Hi @jaryszek ,
Thanks for raising this. Since your data is already in delta parquet with year/month/day folders, you can leverage Fabric's incremental refresh at the semantic model level. Create RangeStart / RangeEnd parameters in Power Query, filter on your date column, and then configure incremental refresh in the dataset settings. Fabric will push the filters down to your OneLake delta table so only new partitions are scanned. This way you avoid reloading the full history every time and only process the latest data. Thanks to our super users for sharing these best practices earlier; they really make it easier to set up.
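The RangeStart / RangeEnd filter itself is authored in Power Query, but the window it produces is easy to picture. As an illustration only, here is the equivalent filtered read expressed in PySpark against a hypothetical lakehouse table (the real filter lives in the M query, and the table and column names are assumptions):

```python
# Illustration of the date window an incremental refresh policy generates.
# The real filter is written in Power Query M; names here are hypothetical.
from datetime import date
from pyspark.sql import functions as F

range_start = date(2026, 1, 14)  # stand-in for the RangeStart parameter
range_end = date(2026, 1, 15)    # stand-in for the RangeEnd parameter

recent = (
    spark.read.table("sales")    # hypothetical lakehouse delta table
    # Half-open interval (>= RangeStart and < RangeEnd), matching the
    # convention incremental refresh uses so boundary rows are never
    # loaded twice.
    .where(
        (F.col("order_date") >= F.lit(range_start))
        & (F.col("order_date") < F.lit(range_end))
    )
)

# Delta's per-file min/max statistics let the reader skip files that fall
# entirely outside the window, so only the newest data is actually scanned.
recent.show()
```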
Regards,
Akhil.
Ok, but the issue is that I cannot transform any data (Power Query is not working) when I am connecting to OneLake:
Power Query is not available; it is not the SQL endpoint there but Direct Lake...
What now? 🙂
Best,
Jacek