Hi,
My colleague and I have noticed an issue with a few pipelines recently.
For context, the pipeline's architecture is made up of four dataflows in a linear flow; the first two extract data from a PostgreSQL database and load it into a Bronze Lakehouse.
Dataflows 3 and 4 move the data from Bronze to Silver and Silver to Gold respectively.
What we are seeing is that the initial extracts (dataflows 1 and 2) run fine, but when the third one runs it behaves as though it is reading a cached version of the Lakehouse that does not contain the newly populated data.
We have tried adding Wait activities between the dataflows, which has not worked.
We have also tried removing dataflows 3 and 4 from the pipeline and scheduling them to run directly via their own settings, which also did not work.
Is there a current workaround for this?
Hi @Jester_3
Insert a lightweight Notebook between the dataflows that checks the latest Delta commit and only lets the pipeline continue once the new data is visible.
What you’re hitting is the lag between physical writes and metadata visibility in Fabric’s SQL analytics endpoint. Dataflow 3 is likely reading stale metadata even though the Bronze‑to‑Silver write has completed.
Your commit‑check workaround is solid. A few refinements worth noting for others:
- Poll Delta version rather than just the date, so multiple commits in a day don’t collide.
- Tune timeouts: most syncs finish in minutes, so shorter waits with retries can reduce pipeline latency.
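To illustrate the two refinements above, here is a minimal sketch of version-based polling with a tunable timeout and retry interval. The `get_version` callable and the default timings are illustrative assumptions, not Fabric APIs; in practice the version would come from the Delta table's history.

```python
import time

def wait_for_version(get_version, expected_version, timeout_s=300, poll_s=15):
    """Poll until the reader sees a Delta version >= expected_version.

    get_version: callable returning the Delta table version currently
    visible to the reader (illustrative stand-in for a real history check).
    Returns True once the reader has caught up, False on timeout.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if get_version() >= expected_version:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_s)

# Simulated reader that catches up on the third poll:
versions = iter([3, 3, 4])
print(wait_for_version(lambda: next(versions), 4, timeout_s=5, poll_s=0.1))
```

Because it checks a monotonically increasing version rather than a date string, this loop cannot be fooled by a second commit landing on the same day.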
In short, explicit synchronization is the safest way to ensure downstream dataflows operate on the latest state until Fabric improves metadata propagation.
Thanks for this suggested fix; I think I'll have to apply it to all my pipelines, as this issue has been happening more frequently this past week.
A longer description of the fix may be beneficial for others, so I've gone into detail below.
I ended up adding an Until loop with a one-hour timeout to my pipeline:
The conditional check on the Until loop was:
@equals(
activity('DELTA_CHECK').output.result.exitValue,
formatDateTime(utcNow(), 'yyyy-MM-dd')
)
The notebook content:
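The original notebook wasn't reproduced here, but a minimal sketch of what a `DELTA_CHECK` notebook could look like follows. The table name `silver_table` is an illustrative assumption, and the Spark/`mssparkutils` lines are shown as comments because they only run inside a Fabric notebook session; the date formatting matches the pipeline's `formatDateTime(utcNow(), 'yyyy-MM-dd')` comparison.

```python
from datetime import datetime, timezone

def latest_commit_date(commit_timestamps):
    """Format the newest Delta commit timestamp as yyyy-MM-dd, so it can
    be compared against formatDateTime(utcNow(), 'yyyy-MM-dd')."""
    return max(commit_timestamps).strftime("%Y-%m-%d")

# Inside the Fabric notebook, the equivalent would be roughly:
#   row = spark.sql("DESCRIBE HISTORY silver_table LIMIT 1").first()
#   mssparkutils.notebook.exit(latest_commit_date([row["timestamp"]]))
# The Until loop then keeps re-running the notebook until the newest
# visible commit is from today, i.e. the fresh data has landed.

print(latest_commit_date([datetime(2026, 3, 26, 9, 30, tzinfo=timezone.utc)]))
```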