Hi,
I am testing the Dataverse shortcut functionality for a single table.
I was under the impression the shortcut provided a direct read (unless caching is enabled), so I am struggling to understand what is going on here or how to fix it.
See the attached screenshots.
I researched some similar threads reporting the same Dataverse shortcut data latency problem. It seems there is still some synchronization delay with shortcuts. These delays can have several causes, such as the background metadata sync job being halted.
One possible mitigation is to manually refresh the Lakehouse or the shortcuts in the web Lakehouse Explorer, which triggers an on-demand refresh of the metadata. The data should then be updated.
Another possible option is to scale up the capacity. This may shorten the latency, but I can't guarantee by how much; you will need to observe and tune this through your own testing.
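As a small aid while observing that latency, a notebook cell can poll the shortcut's row count until it catches up with the source. This is only a sketch: `get_count` is a placeholder for whatever returns the current row count in your environment (for example a `SELECT COUNT(*)` over the shortcut table), and the timeout and interval values are arbitrary.

```python
import time

def poll_until_synced(get_count, expected, timeout_s=600, interval_s=30):
    """Poll a row-count callable until it reaches the expected value.

    Returns the number of seconds waited, or raises TimeoutError if the
    shortcut never catches up within timeout_s.
    """
    start = time.monotonic()
    while True:
        count = get_count()
        waited = time.monotonic() - start
        if count >= expected:
            return waited
        if waited >= timeout_s:
            raise TimeoutError(
                f"still {expected - count} rows behind after {timeout_s}s"
            )
        time.sleep(interval_s)
```

In a Fabric notebook, `get_count` could be something like `lambda: spark.sql("SELECT COUNT(*) FROM my_shortcut_table").first()[0]` (hypothetical table name).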
I haven't found a solid solution to this yet. If you get any insight or solution from the support ticket, could you share it with the community? Thank you in advance.
Best Regards,
Jing
Community Support Team
Raised support ticket.
Hello,
Have you solved the problem? I have the same behavior.
Thanks
Best regards
Hi @Franck_SR , The problem has been solved, with some confirmations from MS support. I haven't yet closed the ticket as I still have some internal testing to do, but the short version of my findings so far is:
Hope this helps for now. Feel free to ask more specifics if needed and if I can help I will.
Thanks @DanielAmbler for your quick reply !
In the end, what did you do on your side?
Did you destroy and recreate the Fabric Link?
Or did you go through Synapse Link?
Can you now see the changes via the shortcuts?
Hi @Franck_SR
I need to test syncing only a couple of tables, so (bearing in mind this is a POC system 🙂 ) I
I still have some testing to do, but in principle this is working for me.
Before the use case for only a couple of tables presented itself, this also worked for me.
What I did to 'break' it was
I could still see the Lakehouse and the synced data, but I did not appreciate that this 'Dataverse shortcut' was actually a synchronisation. I have not yet retried setting up the Fabric Link, purely due to time pressure from other things.
TBH, given the way I now plan to structure Fabric for the team's data projects, I will probably use the Fabric Link.
Update: Marking this as the solution. N.B. My original case study was to synchronise one table. After testing, I think the Fabric Link is a much more robust solution, as it utilises the delta change feed. My setup will use shortcuts via a second Lakehouse item to surface individual tables.
Cheers
Dan
Thank you very much Daniel 😉 I'll test it
Cheers
Franck
And interestingly, I cannot do this the other way using the Dataverse -> Fabric Link, which worked yesterday.
Hi @DanielAmbler ,
I have the same issue as you. I had linked to Fabric successfully before, and now I receive this error when trying to unlink the Fabric Link and re-link again. Did you resolve this issue?
Hi @hoanghiepng92 - see my full answer in this thread. Regarding the unlink/relink, I had to wait for a period of approximately 24 hours before I was able to relink successfully. I don't know if this is by design or whether I just got lucky.
Kind regards
Dear @DanielAmbler
In my case, after multiple attempts, the issue has been resolved. This is also the solution from MS Support. They said that many of their customers encounter this error, and that after 2-3 tries the issue resolves itself.
Tested the same in a new workspace/lakehouse/dataverse shortcut - get the same result of only 6 records.
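To put a number on that kind of mismatch, one option is to pull per-table (or per-group) counts from both the source and the shortcut and diff them. A minimal sketch, assuming you can already fetch the counts into plain dicts (the `"account"` table name below is made up for illustration):

```python
def sync_gap(source_counts, shortcut_counts):
    """Return {name: rows_missing} wherever the shortcut lags the source.

    Both arguments map a table (or grouping key) to its row count.
    """
    return {
        name: src - shortcut_counts.get(name, 0)
        for name, src in source_counts.items()
        if shortcut_counts.get(name, 0) < src
    }

# Example mirroring the observation above: 17 rows at the source,
# only 6 visible through the shortcut -> 11 rows missing.
gap = sync_gap({"account": 17}, {"account": 6})
```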