Nicolas_Mattos
New Member

Lakehouse > Shortcut Delay

Hello everyone,
 

I’d like to explain my situation:

 

We have a main Lakehouse that contains all our client’s data. We use Shortcuts to implement OLS (Object-Level Security). Each sector of our client’s company has its own workspace, and each workspace has a Lakehouse that contains only shortcut tables, restricting access to the data that users in that workspace are allowed to work with.
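For reference, this pattern creates each sector Lakehouse's shortcuts against the main Lakehouse via the OneLake shortcuts REST API. A minimal sketch of the request body follows (endpoint shape per the Fabric "Create Shortcut" API; every GUID and name here is a placeholder, not a value from this thread):

```python
def build_shortcut_payload(name, src_workspace_id, src_item_id, src_path):
    """Body for the Fabric 'Create Shortcut' REST call
    (POST /v1/workspaces/{ws}/items/{lakehouse}/shortcuts).
    All IDs below are placeholders."""
    return {
        "path": "Tables",  # create the shortcut under Tables/
        "name": name,
        "target": {
            "oneLake": {
                "workspaceId": src_workspace_id,
                "itemId": src_item_id,
                "path": src_path,
            }
        },
    }

# Hypothetical example: a sector workspace shortcutting the shared table.
payload = build_shortcut_payload(
    "date_time_2h",
    "<main-workspace-guid>",
    "<main-lakehouse-guid>",
    "Tables/date_time_2h",
)
print(payload["name"])  # date_time_2h
```

Sending this payload (with a bearer token) to the shortcuts endpoint of the sector Lakehouse creates a read-through shortcut, which is what makes the OLS restriction possible: users only see the tables shortcut into their workspace.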

 

The main Lakehouse has a table called "date_time_2h", which is intended to record the last data refresh every two hours.

 

The problem is that some dashboards are refreshing with the wrong timestamp. For example, a dashboard refreshes at 8:30 PM but displays the last refresh as 6:00 PM. All dashboards are scheduled to refresh 30 minutes after each pipeline run.
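To make the gap concrete, the observed lag from the example above can be worked out directly (the date is arbitrary; only the times come from the post):

```python
from datetime import datetime, timedelta

# Times from the example above (date is arbitrary).
dashboard_refresh = datetime(2026, 1, 15, 20, 30)   # dashboard refreshed at 8:30 PM
displayed_timestamp = datetime(2026, 1, 15, 18, 0)  # but shows 6:00 PM

lag = dashboard_refresh - displayed_timestamp
print(lag)  # 2:30:00

# With a 2-hour pipeline cadence and a 30-minute dashboard offset, the
# expected lag is at most 30 minutes; a 2.5-hour lag means the dashboard
# is effectively reading the previous cycle's value through the shortcut.
expected_max = timedelta(minutes=30)
print(lag > expected_max)  # True
```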

 

We've also noticed that some shortcuts cause more issues than others.

 

I understand that a slight delay between shortcuts and Lakehouses in Fabric is expected, but I believe a 30-minute delay shouldn't be happening.

 

Thanks for your help!

1 ACCEPTED SOLUTION
nilendraFabric
Super User

Hi @Nicolas_Mattos 

 

Here is the main reason for the delay:

 

Shortcuts rely on periodic metadata synchronization (typically every few minutes), which combines with pipeline execution and dashboard refresh schedules to create compounding delays. Updates to the `date_time_2h` table must propagate through multiple synchronization checkpoints:

 

 

Main Lakehouse → OneLake Cache → Shortcut Metadata → Dashboard Query

 

Your OLS implementation using cross-workspace shortcuts introduces additional validation layers at each workspace boundary. Each security checkpoint can add roughly 2-3 minutes to data propagation.
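Assuming a few minutes per stage, as the rough estimates in this reply suggest, the delays along that chain compound like this (illustrative, assumed numbers, not measurements):

```python
# Illustrative per-stage delays in minutes, based on the rough
# estimates in this reply -- assumed, not measured.
stage_delays = {
    "Lakehouse write -> OneLake cache": 3,
    "OneLake cache -> shortcut metadata sync": 5,
    "cross-workspace security checkpoint": 3,
    "shortcut -> dashboard query": 5,
}
worst_case = sum(stage_delays.values())
print(f"compounded worst-case delay: ~{worst_case} min")
```

Even with single-digit delays per stage, the total can approach the 30-minute window between a pipeline run and the dashboard refresh, which is when dashboards start picking up the previous cycle's timestamp.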

 

OneLake maintains cached snapshots of Lakehouse data for query performance, with refresh cycles that prioritize stability over immediacy. Timestamp tables like `date_time_2h` are particularly susceptible to this latency.


You can try scheduling a metadata refresh, say every 5 minutes or whatever interval fits your scenario:

 

 

# As posted in the original reply; treat this as a sketch -- the
# semantic-link package is normally imported as `sempy.fabric`, so
# verify that this helper exists in your environment before relying on it.
from semantic_link import refresh_metadata
refresh_metadata(lakehouse="main_lh", tables=["date_time_2h"])
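As a complementary safeguard, the dashboard refresh could be gated on the timestamp actually being fresh rather than fired on a fixed schedule. A minimal sketch, where `read_last_refresh` is a hypothetical stand-in for however you query `date_time_2h` (e.g. Spark SQL against the shortcut table):

```python
import time
from datetime import datetime, timedelta, timezone

def read_last_refresh():
    # Hypothetical stand-in: in practice, read the latest value from
    # the date_time_2h shortcut table (e.g. with Spark SQL). The stub
    # below returns a timestamp 10 minutes old.
    return datetime.now(timezone.utc) - timedelta(minutes=10)

def wait_until_fresh(max_age=timedelta(minutes=30),
                     poll_every=60, timeout=900):
    """Poll until date_time_2h looks fresh, or give up after `timeout` s."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        age = datetime.now(timezone.utc) - read_last_refresh()
        if age <= max_age:
            return True   # timestamp is recent enough to trust
        time.sleep(poll_every)
    return False

print(wait_until_fresh())  # True with the 10-minute-old stub above
```

Triggering the dashboard refresh only after `wait_until_fresh()` returns `True` would avoid displaying a stale cycle's timestamp, at the cost of a slightly variable refresh time.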

 

See if this helps.

Thanks.

 


4 REPLIES 4
v-hashadapu
Community Support

Hi @Nicolas_Mattos , hope your issue is solved. If it is, please consider marking the answer 'Accept as solution' so others with similar issues can find it easily. If it isn't, please share the details.
Thank you.

 

Thank you for your prompt response @nilendraFabric .


 
