Environment summary
• One “main” Lakehouse in workspace DataOps holds all refined tables.
• A Fabric pipeline (Copy Data + notebook) runs every two hours – 06:30, 08:30, etc.
• At the end of each run the notebook executes an Oracle query 'SELECT SYSDATE AS date_time FROM dual' and overwrites a single-row table called date_time_2h.
• Each business unit has a dedicated workspace with its own Lakehouse named shortcut_<BU> that contains shortcuts to the authorized tables, including date_time_2h.
• A PBIX published to each BU workspace connects to those shortcuts in import mode. The dataset is scheduled to refresh 30 minutes after the pipeline (for example, refresh at 07:00 after the 06:30 pipeline run). The report shows the timestamp from date_time_2h as “Last data update”.
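For context, the notebook's final step boils down to a delete-then-insert of one row. Here is a minimal local sketch of that pattern, using sqlite3 purely as a stand-in for the Lakehouse table (all names hypothetical):

```python
# Local simulation of the notebook's last step: each pipeline run overwrites
# the single-row date_time_2h table with the run timestamp. sqlite3 replaces
# the Lakehouse here for illustration only.
import sqlite3
from datetime import datetime, timezone

def overwrite_run_timestamp(conn: sqlite3.Connection, run_time: datetime) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS date_time_2h (date_time TEXT)")
    conn.execute("DELETE FROM date_time_2h")  # the table always holds one row
    conn.execute("INSERT INTO date_time_2h VALUES (?)",
                 (run_time.strftime("%Y-%m-%d %H:%M"),))
    conn.commit()

conn = sqlite3.connect(":memory:")
overwrite_run_timestamp(conn, datetime(2025, 6, 16, 6, 30, tzinfo=timezone.utc))
latest = conn.execute("SELECT date_time FROM date_time_2h").fetchone()[0]
print(latest)  # 2025-06-16 06:30
```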
Expected behavior
When the pipeline finishes at 06:30 on 16 June, the table should hold 2025-06-16 06:30, so the 07:00 dataset refresh should display exactly that value.
Actual intermittent issue
Roughly once a day, the report still shows the previous timestamp. Example:
• Pipeline finishes successfully at 06:30 on 16 June (verified in logs).
• Dataset refresh occurs at 07:00 as scheduled.
• Report still displays 2025-06-15 18:30 – the timestamp from the prior pipeline run.
What I have already checked
• Immediately after 06:30 I query date_time_2h in Lakehouse Explorer and it already shows the correct 06:30 value.
• A manual dataset refresh later in the day always picks up the right timestamp.
• Behavior reproduces across several BU workspaces.
• No incremental-refresh or dynamic partitioning rules are applied to this table. It is literally one row, overwritten each run.
What should I do to fix this? As it stands, I can't tell whether this is the only table with the problem or whether all the other tables are also failing to update.
Hi @Pedro_Rosa,
I hope you had a chance to review the solution shared by @GilbertQ, @BhavinVyas3003, and @Poojara_D12. If it addressed your question, please consider accepting it as the solution — it helps others find answers more quickly.
If you're still facing the issue, feel free to reply, and we’ll be happy to assist further.
Thank you.
Hi @Pedro_Rosa
You're facing an intermittent issue in Microsoft Fabric: although your pipeline successfully updates the date_time_2h table in the main Lakehouse every two hours (e.g., at 06:30), the connected Power BI datasets in each business unit (BU) workspace sometimes fail to pick up the update during their scheduled refresh (e.g., at 07:00). The report then still shows the previous timestamp (e.g., 2025-06-15 18:30), even though querying the table directly in Lakehouse Explorer confirms that the new timestamp (e.g., 2025-06-16 06:30) is present immediately after the pipeline completes.

Since each BU connects via shortcuts and uses import mode, the refresh should re-import the latest data from date_time_2h, but that isn't always happening. The fact that a manual refresh later in the day always retrieves the correct value points to a caching delay or metadata-synchronization lag between the Lakehouse and Power BI's semantic model at the moment of the scheduled refresh. Because no incremental-refresh or partitioning logic could skip reading this one-row table, the issue most likely lies in the timing and internal propagation of the Lakehouse update.

As a potential solution, consider introducing a longer buffer (e.g., an extra 10–15 minutes) between pipeline completion and dataset refresh so that all Lakehouse metadata changes fully propagate. Additionally, loading the data through Fabric's Dataflow Gen2 instead of importing directly from the Lakehouse shortcuts may offer better control and observability over the refresh process.

Finally, if the issue persists and may affect other tables, set up logging for a small set of critical tables that captures their snapshot at refresh time; that will tell you the scope of the problem.
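The logging idea above can be as simple as comparing, after each scheduled refresh, the timestamp the semantic model reports against the time the pipeline actually finished. A minimal sketch (all names and tolerances are illustrative, not part of any Fabric API):

```python
# Hypothetical staleness check: flag a refresh as stale when the dataset still
# shows a timestamp older than the most recent pipeline completion.
from datetime import datetime, timedelta

def is_stale(pipeline_finished_utc: datetime,
             reported_timestamp_utc: datetime,
             tolerance: timedelta = timedelta(minutes=1)) -> bool:
    """True when the refreshed dataset still shows an older pipeline run."""
    return reported_timestamp_utc < pipeline_finished_utc - tolerance

runs = [
    (datetime(2025, 6, 16, 6, 30), datetime(2025, 6, 16, 6, 30)),   # fresh
    (datetime(2025, 6, 16, 6, 30), datetime(2025, 6, 15, 18, 30)),  # stale
]
for finished, reported in runs:
    status = "STALE" if is_stale(finished, reported) else "ok"
    print(f"{finished:%H:%M} run -> reported {reported:%Y-%m-%d %H:%M}: {status}")
```

Running this check for each critical table after every scheduled refresh, and writing the result to a log table, would quickly show whether date_time_2h is the only table affected.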
The issue is most likely caused by a short delay in OneLake shortcut metadata syncing after your Fabric pipeline updates the date_time_2h table. Even though the data has landed, Power BI may read stale values during the scheduled import. To reduce the risk, make sure the metadata has propagated before the import starts — for example, by adding a wait step or forcing the SQL analytics endpoint to re-sync after the pipeline's final write. Then either schedule the Power BI dataset refresh a few minutes later or, more robustly, trigger it directly from the pipeline using the Power BI REST API so the refresh only ever starts after the write has completed.
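Triggering the refresh from the pipeline uses the documented "Datasets - Refresh Dataset In Group" REST call. A sketch of building that request in Python, with placeholder workspace/dataset IDs and token acquisition (AAD/Entra auth) out of scope:

```python
# Build the Power BI REST API request that triggers a dataset refresh.
# group_id, dataset_id, and token below are placeholders.
import json
import urllib.request

def build_refresh_request(group_id: str, dataset_id: str,
                          token: str) -> urllib.request.Request:
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}"
           f"/datasets/{dataset_id}/refreshes")
    body = json.dumps({"notifyOption": "MailOnFailure"}).encode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )

req = build_refresh_request("ws-guid", "ds-guid", "TOKEN")
print(req.get_full_url())
# The actual call would be urllib.request.urlopen(req) -- not executed here,
# since it needs a real workspace and a valid access token.
```

Calling this as the last pipeline activity (or from the notebook) removes the race between the 06:30 write and the fixed 07:00 schedule entirely.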
Hi @Pedro_Rosa
I'm not 100% sure this relates to your issue, but please note that all the servers in the Power BI / Fabric service use UTC as their time zone. If the time zone you are in differs from UTC, that offset could be why you are seeing a date/time discrepancy — you may want to adjust the timestamp in your query to account for your local time zone.
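To illustrate the hypothesis above: if the SYSDATE value is written in UTC but interpreted as local time (or vice versa), the report appears shifted by the UTC offset. A quick check, using a hypothetical +01:00 local offset:

```python
# Demonstrate how a UTC-vs-local mismatch shifts the displayed timestamp.
# The +01:00 offset is a hypothetical local zone, not taken from the post.
from datetime import datetime, timezone, timedelta

local_zone = timezone(timedelta(hours=1))  # assumed local offset

written_utc = datetime(2025, 6, 16, 6, 30, tzinfo=timezone.utc)
shown_local = written_utc.astimezone(local_zone)
print(shown_local.strftime("%Y-%m-%d %H:%M"))  # 2025-06-16 07:30
```

Note that a pure time-zone offset would shift every reading by a constant number of hours, whereas the report here intermittently shows the previous run's value (12 hours earlier), so this is worth ruling out rather than a likely root cause.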