Hey all, I'm hoping someone here can help me. I've already opened a support ticket and been going back and forth with support, but I'm getting nowhere 😞.
On a daily basis, I have a few reports that I will go into where the visuals are all errored out. The detail of the error message says "Error fetching data for this visual. Failed to complete the command because the underlying location does not exist." Then it has the specific table and a few other details.
Here's the crazy part - the table exists in the lakehouse AND when I view the table in the lakehouse (pull it up to see the rows of data), it works fine. EVEN WEIRDER - I can then go back to the report, refresh the visuals, and everything works fine!!! This is a workaround I found even before I put the support ticket in. The odd part is that the table mentioned in the error is my date table, which is refreshed weekly, yet the error happens daily. On occasion, I see the error for other tables too, and the same workaround works; those tables tend to be refreshed daily or more frequently.
What's even more weird - this same exact table is accessed in many other lakehouses and many other workspaces that don't have this issue at all. It's isolated to a few workspaces (less frequently used) and their lakehouses and reports...
ALL of the tables that are mentioned in the error messages are used in other workspaces and many other reports which do not exhibit this behavior.
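Since the manual workaround is just "read the table once, then refresh the visuals," one stopgap (not an official fix, and the table/schema names below are placeholders) is to schedule a lightweight query against each affected table on the lakehouse's SQL endpoint before users open the reports. A minimal sketch of generating those warm-up statements:

```python
def warmup_statements(tables, schema="dbo"):
    """Build lightweight queries that touch each table on the SQL endpoint.

    Reading any row appears to have the same effect as previewing the
    table in the lakehouse UI: it nudges the endpoint to re-sync, so the
    report's DirectQuery calls stop failing.
    """
    return [f"SELECT TOP 1 1 FROM [{schema}].[{t}];" for t in tables]

# Run these on a schedule (pipeline or notebook) against the lakehouse's
# SQL endpoint connection string, e.g. with pyodbc:
#   conn = pyodbc.connect(endpoint_connection_string)
#   for stmt in warmup_statements(["DimDate", "FactSales"]):
#       conn.execute(stmt)

print(warmup_statements(["DimDate"]))
```

This doesn't address the root cause, but it removes the manual "open the table, then refresh" step for the tables that error most often.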
More on my structure:
The report pulls data from a Lakehouse using direct query. The table is a shortcut to another workspace where I manage the data that comes in. I've built out a medallion architecture and then create a lakehouse for each user group in their own workspace. The lakehouse in the user's workspace links back (via shortcut) to a lakehouse that I use to serve the data - it basically is a bunch of shortcuts back to the most refined version of each table. The date table that is usually the issue is created in a notebook and saved in a lakehouse.
The date table shortcut goes back to the lakehouse where the data is actually stored.
The other table shortcuts go back to a lakehouse which has a shortcut to the data warehouse where the data is stored.
We have an F64 sku. I've built up a bunch of pipelines to process our data and provide it for reporting. I've been using Fabric since it was in preview.
For the date table, that means:
Shared Tables Lakehouse --|new workspace|-> End User Lakehouse (via Shortcut to the Serving Lakehouse) ---> Semantic Model via direct query
For most of my other tables, this means:
Staging LH ---> Raw Datawarehouse ---> Clean Datawarehouse ---> Refined Datawarehouse ---> Serving Lakehouse (via shortcut to DW) --|new workspace|-> End User Lakehouse (via Shortcut to the Serving Lakehouse) ---> Semantic Model via direct query
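For anyone wanting to script the per-user-group lakehouses in a chain like the one above, shortcuts can be created programmatically via the Fabric OneLake Shortcuts REST API rather than through the UI. A sketch that just builds the request (all IDs are placeholders, and the exact API shape should be checked against the current Fabric REST reference):

```python
FABRIC_API = "https://api.fabric.microsoft.com/v1"

def shortcut_request(ws_id, lakehouse_id, name, target_ws_id, target_item_id,
                     target_path="Tables", path="Tables"):
    """Build the POST URL and body for creating a OneLake shortcut.

    Creates, in lakehouse `lakehouse_id` (workspace `ws_id`), a shortcut
    named `name` pointing at the same-named table in the serving lakehouse
    (`target_item_id` in `target_ws_id`).
    """
    url = f"{FABRIC_API}/workspaces/{ws_id}/items/{lakehouse_id}/shortcuts"
    payload = {
        "path": path,   # folder in this lakehouse where the shortcut appears
        "name": name,   # shortcut (table) name
        "target": {
            "oneLake": {
                "workspaceId": target_ws_id,
                "itemId": target_item_id,
                "path": f"{target_path}/{name}",
            }
        },
    }
    return url, payload

# POST `payload` to `url` with a bearer token (e.g. via requests) to create
# the shortcut; repeat per table when provisioning an end-user lakehouse.
```

Scripting this makes it cheap to tear down and recreate the shortcut layer for a single workspace when troubleshooting, without recreating the workspace itself.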
What I've tried:
I'm trying to avoid completely recreating the Workspace. I'm also trying to avoid copying the data from the source lakehouse to the end user's lakehouses - that would undermine the value of Fabric.
Has anyone else seen this and, if you have, found a fix?
Hi @GregMarbais,
Thank you @lbendlin, for your insights.
Make sure the lakehouse shortcuts are up to date, rerun the necessary ingestion pipelines, and, if needed, run a metadata initialization query or switch to Direct Lake to minimize latency and schema mismatches. This should make the reports accessible without manual steps.
Troubleshoot healthcare data solutions - Microsoft Cloud for Healthcare | Microsoft Learn
Thank you.
@v-saisrao-msft I'm not sure what you mean by updating the lakehouse shortcuts - I've definitely recreated them multiple times - at all points along the chain of lakehouses. The ingestion pipelines run regularly so they've definitely re-run many times since this behavior started. I'm pretty sure the issue is local to the workspace and lakehouse - the same tables showing the issue are working fine for other reports in other workspaces, pulling from other lakehouses.
I've never done a metadata initialization query so I'll have to look into that. And will look into Direct Lake as an option as well.
Hi @GregMarbais,
We haven't heard back from you in a while regarding your issue. Let us know if it has been resolved or if you still require support.
Thank you.
Hi @GregMarbais,
Could you let me know if the issue has been resolved, and whether you have tried the Direct Lake option?
Thank you.
@v-saisrao-msft The issue hasn't been resolved, but I've been told by the Tech Support team that the product team is working on a long-term fix. The issue has to do with how frequently the SQL endpoint refreshes its metadata for lakehouses that aren't used often.
I have to change things around in the lakehouse and report before I can use direct lake so I'm going to try that but I'm not going to get a chance to do that for a bit.
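If the root cause is the SQL endpoint lagging behind the lakehouse, another interim option is to trigger the metadata sync on demand. Fabric exposes a REST call for refreshing the SQL analytics endpoint's metadata (it has been in preview; the exact path below is my assumption and should be verified against the current Fabric REST API reference). A minimal sketch:

```python
FABRIC_API = "https://api.fabric.microsoft.com/v1"

def refresh_metadata_url(workspace_id, sql_endpoint_id):
    """Build the URL for an on-demand SQL endpoint metadata sync.

    POSTing to this URL (with a valid bearer token) asks the SQL analytics
    endpoint to re-sync its metadata with the lakehouse, instead of waiting
    for the background sync that rarely-used workspaces seem to miss.
    Path is an assumption; check the Fabric REST API docs for your tenant.
    """
    return (f"{FABRIC_API}/workspaces/{workspace_id}"
            f"/sqlEndpoints/{sql_endpoint_id}/refreshMetadata?preview=true")

# Example call with the requests library:
#   requests.post(refresh_metadata_url(ws_id, endpoint_id),
#                 headers={"Authorization": f"Bearer {token}"})
```

Scheduling this for the affected workspaces could keep the endpoints in sync until the product team's fix lands.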
Hi @GregMarbais,
Just checking back in: do you know whether the product team's fix has been rolled out to your tenant yet, and whether things are working correctly now on your side?
Thank you.
The report pulls data from a Lakehouse using direct query.
I don't think that is possible - did you mean to say from the lakehouse's SQL endpoint? There are many stories here about latency and schema mismatches when lakehouse updates have to be synchronized to the SQL endpoint.
Have you considered using Direct Lake instead?
@lbendlin, Yep, the DirectQuery connection is to the SQL endpoint. I haven't played with Direct Lake at all yet - these reports pre-date its launch. I'll give it a shot, and maybe it gets around whatever is happening under the hood in these lakehouses.