Hello,
As in the topic title: how do I do incremental refresh using OneLake tables in Fabric?
I have tables coming from Azure Blob Storage into OneLake in Fabric.
How do I set up incremental refresh? I have delta parquet tables with year, month, and day in the name...
Best,
Jacek
Hi @jaryszek,
Sorry for the delay in responding. Good point: you're right that in Direct Lake mode Power Query isn't available, so you can't set up incremental refresh the usual way. In that case the trick is to manage incrementality at the data source / lakehouse level.
You can try the following approaches:
With Direct Lake: keep your data partitioned properly and let Fabric read the latest partitions (see the sketch below).
With Import: use Power Query plus an incremental refresh policy.
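For the Direct Lake path, here is a minimal sketch of the lakehouse-side approach, assuming a Fabric notebook (where `spark` is provided by the runtime) and hypothetical names not taken from this thread: landed files under Files/landing/<year>/<month>/<day>/ and a target Delta table called sales:

```python
# Minimal sketch: append only the newest slice of landed parquet files
# into a Delta table, so Direct Lake picks up new data without any
# semantic-model refresh policy. `spark` comes from the Fabric notebook
# runtime; the paths and table name are hypothetical.
from datetime import date, timedelta

d = date.today() - timedelta(days=1)  # yesterday's slice
landing_path = f"Files/landing/{d.year:04d}/{d.month:02d}/{d.day:02d}"

new_rows = spark.read.parquet(landing_path)

(new_rows.write
    .format("delta")
    .mode("append")  # append just the new slice, not full history
    .saveAsTable("sales"))
```

If the same day can land more than once, a Delta MERGE on a business key avoids duplicates, but the append pattern above is the simplest starting point.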
Thanks for calling this out; it's an important distinction between Direct Lake and Import.
Thanks,
Akhil.
Hi @jaryszek,
Thanks for raising this. Since your data is already in delta parquet with year/month/day folders, you can leverage Fabric's incremental refresh at the semantic model level. Just create RangeStart / RangeEnd parameters in Power Query, filter on your date column, and then configure incremental refresh in the dataset settings. Fabric will push filters down to your OneLake delta table so only new partitions are scanned. This way you avoid reloading full history every time and only process the latest data. Thanks to our super users for sharing these best practices earlier; they really make it easier to set up.
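If the year/month/day currently live only in folder or file names, the date filter needs a real column to prune on. A minimal lakehouse-side sketch, assuming a Fabric notebook with `spark` available and hypothetical names (a `sales_raw` table with integer year/month/day columns, a target table `sales`, a date column `OrderDate`):

```python
# Minimal sketch: derive a proper DateType column and partition the Delta
# table by it, so the RangeStart/RangeEnd filter in Power Query can be
# pushed down to partition pruning. All names here are hypothetical.
from pyspark.sql import functions as F

raw = spark.read.table("sales_raw")

(raw
    .withColumn("OrderDate", F.make_date("year", "month", "day"))
    .write
    .format("delta")
    .mode("overwrite")
    .partitionBy("OrderDate")  # one OneLake folder per day
    .saveAsTable("sales"))
```

With that in place, the Power Query step just filters OrderDate between RangeStart and RangeEnd, and the incremental refresh policy handles the rest.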
Regards,
Akhil.
Ok, but the issue is that I cannot transform any data (Power Query is not working) where I am connecting to OneLake:
Power Query is not available; there is no SQL endpoint there, just Direct Lake...
What now? 🙂
Best,
Jacek