Hi,
As mentioned in the subject, when the stored procedure is executed in the pipeline run, it does not fetch the updated table from the Lakehouse. When I execute the stored procedure manually, it works properly. I have been stuck on this issue for the past week. Instead of fetching the updated table, it retrieves data from the old table, and because of this the data is not being updated/inserted properly. It happens only through the pipeline run. Can someone help? Has anyone faced the same issue?
Hi @Alwin-Raj ,
@giupegiupe has shared a valid link, and everything is functioning correctly now. Could you please review it and confirm?
If the issue is resolved, we kindly request you to share the solution or key insights here to assist others in the community. If we don’t receive an update, we will proceed with closing this thread.
For any future assistance, feel free to reach out via the Microsoft Fabric Community Forum and create a new thread. We would be happy to help.
Thank you for your cooperation and participation.
The solution for me:
Fix SQL Analytics Endpoint Sync Issues in Microsoft Fabric – Data Not Showing? Here's the Solution!
The thing I find "disarming" is that the same thing does not happen between lakehouses in the same workspace, but it does happen between a lakehouse and a warehouse connected by shortcuts.
I have not tried with Gen2 and Python to see whether the same kind of problem can occur, but I imagine that if the lakehouse remains in "process", these two "solutions" could also fail if one does not wait for the lakehouse to be "consistent".
Hi, we have tried many ways to tackle the issue, but nothing really helped. We have also implemented a fix suggested by the MS support team; it has significantly reduced the issue but has not solved the problem fully, and sometimes the issue still occurs. I can see a link related to the fix shared by you, but the link seems to be broken. Could you share the link again?
Hi Alwin-Raj
Sorry... I will try to post the link again.
The code:
Fix SQL Analytics Endpoint Sync Issues in Microsoft Fabric
and the video:
https://youtu.be/toTKGYwr278?feature=shared
I am still checking the situation; I have also had days when the datalake update process (still on free capacity) took 9 minutes. I am trying to figure out whether the shortcut link between a warehouse and a lakehouse has this kind of problem and whether it is therefore better to handle the warehouse update by other methods.
Hello @Alwin-Raj ,
we wanted to check in as we haven't heard back from you. Did our solution work for you? If you need any more help, please don't hesitate to ask. Your feedback is very important to us. We hope to hear from you soon.
Thank You.
Hi @Alwin-Raj ,
As mentioned by @AlexanderPowBI , it appears there may be an issue with the SQL endpoint, which is currently experiencing performance-related problems.
@giupegiupe, you mentioned that you "put in 5 minutes, and it seems to be working, but I'll wait a few days to confirm it." Could you please provide an update on this?
@Alwin-Raj , if you are still encountering any issues, please inform us. If your issue has been resolved, kindly share your solution here to assist other members. Additionally, marking it as the Accepted Solution will make it easier for others with similar issues to find the answer.
Regards,
Yugandhar.
If the problem was the delay (I put in 5 minutes and it seems to be working, but I'll wait a few days to confirm it), then it would mean that there should be something telling me that the lakehouse update is finished (the "History table select" seems to me a rather shallow log for this).
The uncertainty of the update leads to time dilation and is probably due to the asynchronism of activities that remain in the queue, either because of capacity or for internal Fabric (Azure) reasons.
It seems that the termination of a pipeline does not guarantee the termination of the tasks it has requested, particularly on data (whether Lakehouse, Warehouse, or SQL DB), which give the impression of being asynchronous with respect to the pipeline task that invoked them.
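One way to make "is the lakehouse update finished?" concrete is to compare the newest Delta commit (as reported by a `DESCRIBE HISTORY`-style log) against the version the SQL endpoint has synced to. Below is a minimal sketch of that comparison only; it assumes the history records would come from Spark and the endpoint-synced version from a separate query, and the field names are illustrative:

```python
def latest_commit_version(history):
    """Newest commit version from DESCRIBE HISTORY-style records,
    e.g. [{"version": 3, ...}, {"version": 5, ...}]."""
    return max(h["version"] for h in history)

def endpoint_caught_up(history, endpoint_version):
    """The endpoint is safe to query only once it has synced at least
    up to the newest committed Lakehouse version."""
    return endpoint_version >= latest_commit_version(history)
```

Until the check returns True, any downstream read through the endpoint may still see the old table.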
Hi @Alwin-Raj ,
As we haven't heard back from you, we wanted to kindly follow up and check whether the solution we provided worked for you. Please let us know if you need any further assistance.
Your feedback is important to us. Looking forward to your response.
Thank You.
I've had the same issue, and I'm quite sure it has to do with the SQL endpoint issue. I believe I have solved it in the way I have written here.
However, as stated in that script, it's "unsupported, don't use it", etc. I decided to use it in the way described in my post, as I see no other solution until MS comes out with an official API for this.
//Alexander
Hello @Alwin-Raj
Could you please tell me the full flow of your pipeline?
The issue is likely caused by synchronization delays between Lakehouse updates and their reflection in SQL analytics endpoints. Implementing a delay or forcing a metadata refresh are effective ways to ensure that your stored procedure fetches updated data during pipeline runs.
But before I commit to something, please let me know the full flow.
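As a rough sketch of the delay idea above: rather than a fixed Wait activity, a notebook activity before the stored-procedure step could poll until the endpoint actually shows the fresh data. This is only an illustration; `probe` is a hypothetical callable that would, in a real pipeline, query the SQL analytics endpoint for an expected watermark.

```python
import time

def wait_for_sync(probe, timeout_s=600, interval_s=15):
    """Poll `probe()` until it reports the endpoint sees the fresh data,
    giving up after `timeout_s` seconds. Returns True on success."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False
```

The pipeline can then fail fast (or alert) when `wait_for_sync` returns False, instead of running the stored procedure against stale data.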
It is a medallion-architecture-based pipeline where the bronze layer holds raw data in Parquet after some validation and cleansing. This data is then upserted into silver tables, which are consumed by gold after running a stored procedure with the necessary transformations. The problem is that when we run the SP through the pipeline, it is not fetching the updated data from silver.
I have already tried implementing a delay, isolation levels, execution plan recompiles, CTEs to retrieve the latest data from silver, etc., but nothing helped.
Hi @Alwin-Raj ,
Thank you for reaching out to the Microsoft community and providing the details, and thanks to @nilendraFabric for gathering more information from the user. It seems like you're facing an issue where the stored procedure in your pipeline isn't fetching the updated data from the silver layer.
Here are some steps you can try to fix this.
Medallion Architecture Overview:
If my response solved your query, please mark it as the Accepted solution to help others find it easily.
And if my answer was helpful, I'd really appreciate a 'Kudos'.
I have tried everything except your 4th point, but nothing helped. The 4th one I can't do, because it would break the pipeline flow. It was working fine when I was doing the historical load, even with prod, but it stopped working when I started the incremental load.
Hi @Alwin-Raj ,
Thanks for your response. Since this issue started after switching to incremental load, it’s possible that the stored procedure is running before the silver table updates are fully committed.
Here are some additional checks you can try.
If the issue persists, could you provide more details on how the incremental load is structured? That will help in pinpointing the root cause more accurately.
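One pattern worth considering for the incremental load (a sketch, not an official Fabric feature): have the silver upsert stamp each run with a batch id in a small control table, and gate the gold stored procedure on that id being visible through the endpoint. The function below captures only the gating logic; `visible_batch_id` would come from querying the hypothetical control table via the endpoint.

```python
def should_run_gold(expected_batch_id, visible_batch_id):
    """Run the gold stored procedure only when the batch id the pipeline
    just wrote to silver is visible through the SQL endpoint.
    `visible_batch_id` is None when the control row has not synced yet."""
    return visible_batch_id is not None and visible_batch_id >= expected_batch_id
```

If the control row is written as part of the same commit as the upsert, seeing it through the endpoint implies the corresponding data has synced as well.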
Regards,
Yugandhar.
The stored procedure does not refresh the data when executed via pipeline; executed manually, it works.
My process involves a stored procedure (multiple stored procedures, actually) updating data, via pipeline, in a warehouse, taking it from a lakehouse in the same workspace.
The table it reads from, at least in my process, is in any case always full, so at worst it should load old data if it were a refresh problem; instead it seems to be pretending to read and write data. A 5-minute wait doesn't help either.
Any ideas besides the refresh?
Hi @giupegiupe ,
Thank you for providing more details on the issue. It seems like there might be some additional factors at play. Let's try a few more steps to troubleshoot and resolve the problem.
Please try these steps and let me know if they help resolve the issue.
Regards,
Yugandhar.
I already tried adding a delay, but the result was just random: sometimes it works properly, sometimes not. For the past 2 days, I haven't encountered the issue. I didn't implement anything new; it's all the same as before. I think the issue was related to the metadata sync in the SQL endpoint of the Lakehouse, and it looks like they have now fixed that issue.