Hello,
As the title says: how can I set up incremental refresh for OneLake tables in Fabric?
I have tables coming from Azure Blob Storage into OneLake, stored as Delta Parquet tables with year, month, and day in the name...
How do I set up incremental refresh on them?
Best,
Jacek
Hi @jaryszek ,
Sorry for the delay in responding. Good point: you're right that in Direct Lake mode Power Query isn't available, so you can't set up incremental refresh the usual way. In that case the trick is to manage incrementality at the data source / lakehouse level.
You can try the approaches below.
With Direct Lake: keep your data properly partitioned and let Fabric read only the latest partitions.
With Import: use Power Query plus an incremental refresh policy.
Thanks for calling this out; it's an important distinction between Direct Lake and Import.
Thanks,
Akhil.
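The "read only the latest partitions" idea can be sketched generically in Python. This is a hypothetical illustration, not a Fabric API: the `year=/month=/day=` folder layout, the function name, and the watermark concept are all assumptions; with real Delta tables you would apply the same pruning through partition filters in your query engine.

```python
from datetime import date
from pathlib import Path


def partitions_to_refresh(root: Path, watermark: date) -> list[Path]:
    """Return only partition folders newer than the last-processed date.

    Assumes a hypothetical year=YYYY/month=MM/day=DD folder layout;
    adjust the glob pattern to whatever naming the lakehouse uses.
    """
    fresh = []
    for p in sorted(root.glob("year=*/month=*/day=*")):
        # Parse the partition date out of the last three folder names.
        y, m, d = (int(part.split("=")[1]) for part in p.parts[-3:])
        if date(y, m, d) > watermark:
            fresh.append(p)
    return fresh
```

Processing only the partitions this returns, instead of rescanning the whole table, is exactly what a good partition scheme buys you in Direct Lake mode.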
Hi @jaryszek ,
Thanks for raising this. Since your data is already in Delta Parquet with year/month/day folders, you can leverage Fabric's incremental refresh at the semantic model level. Just create RangeStart / RangeEnd parameters in Power Query, filter on your date column, and then configure incremental refresh in the dataset settings. Fabric will push the filters down to your OneLake Delta table so only the new partitions are scanned. This way you avoid reloading the full history every time and only process the latest data. Thanks to our super users for sharing these best practices earlier; they really make it easier to set up.
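A minimal M sketch of that filter step, assuming RangeStart and RangeEnd already exist as datetime parameters. The workspace, lakehouse, table, and column names here are placeholders, and the exact navigation fields of the OneLake connector may differ in your environment:

```m
let
    // Hypothetical navigation into a Fabric lakehouse; field names
    // (workspaceName, lakehouseName, Id, ItemKind) are assumptions.
    Source = Lakehouse.Contents(null),
    Workspace = Source{[workspaceName = "MyWorkspace"]}[Data],
    MyLakehouse = Workspace{[lakehouseName = "MyLakehouse"]}[Data],
    Sales = MyLakehouse{[Id = "sales", ItemKind = "Table"]}[Data],
    // Keep only rows inside the refresh window; incremental refresh
    // requires exactly this RangeStart-inclusive / RangeEnd-exclusive shape.
    Incremental = Table.SelectRows(
        Sales,
        each [OrderDate] >= RangeStart and [OrderDate] < RangeEnd
    )
in
    Incremental
```

With this query in place, the incremental refresh policy in the dataset settings takes over partitioning the model by date.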
Regards,
Akhil.
OK, but the issue is that I cannot transform any data (Power Query is not working) when I am connecting to OneLake:
Power Query is not available; it is not the SQL endpoint there but Direct Lake...
What now? 🙂
Best,
Jacek