I have multiple tables in a lakehouse, and since yesterday they have suddenly started showing this error.
A few tables in the lakehouse show this error, and they do not even appear in the SQL endpoint. But when I read the same tables directly from the lakehouse using a Spark notebook instead of the SQL endpoint, I can read them successfully. I have Admin access to the workspace.
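For reference, a direct Spark read along these lines succeeds (the table name is a placeholder):

```python
# Minimal repro of the read that works: going through Spark directly
# instead of the SQL analytics endpoint ("my_table" is a placeholder).
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.table("my_table")  # same table that errors on the SQL endpoint
df.show(5)
```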
Things tried so far:
* I tried creating a shortcut in another lakehouse, in a completely different workspace, pointing to the same location, but it still shows the same error.
* To check whether it is a data issue, I also copied the data to another lakehouse and created a shortcut on top of that; that works fine.
It seems like some issue with the lakehouse itself, but if that were the case, it should have affected the other tables as well, not just a few.
Hi @YashRaj5
It looks like a SQL Endpoint sync / OneLake permissions issue, not a data issue. The tables still exist in the Lakehouse (since Spark can read them), but the SQL Analytics Endpoint is unable to index or access them, which results in the 403 Forbidden error and missing tables in the endpoint.
Recommended Fixes / Workarounds
Validate OneLake folder permissions
Make sure your identity has at least ReadAll on the specific table folders.
The SQL endpoint relies on these permissions even if you are a Workspace Admin.
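One way to verify is to probe the table folder through OneLake's ADLS Gen2-compatible API; a minimal sketch, assuming your identity can authenticate with DefaultAzureCredential (workspace, lakehouse, and table names are placeholders):

```python
# Probe read access on a specific table folder via OneLake's ADLS Gen2 API.
# "MyWorkspace", "MyLakehouse", and "my_table" are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("MyWorkspace")  # the filesystem is the workspace

# If this listing raises a 403, the identity lacks read access on the folder,
# which would explain why the SQL endpoint cannot index the table.
for p in fs.get_paths("MyLakehouse.Lakehouse/Tables/my_table"):
    print(p.name)
```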
Trigger Lakehouse → SQL Endpoint metadata refresh
Refresh the SQL Analytics Endpoint in the Fabric UI
Or open the Lakehouse and run a small Spark notebook write (this often forces a sync)
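A minimal sketch of such a write, assuming a notebook attached to the affected lakehouse (the table name is a throwaway placeholder):

```python
# Perform a tiny Delta write to nudge the Lakehouse -> SQL endpoint sync.
# "sync_probe" is a throwaway placeholder table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.createDataFrame([(1,)], ["id"]).write.mode("overwrite").saveAsTable("sync_probe")
spark.sql("DROP TABLE IF EXISTS sync_probe")  # clean up the probe table
```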
Check table folder structure
Confirm the problematic tables live under the lakehouse's Tables/ folder (for example, Tables/<table_name>).
If they were manually written under /Files, the SQL endpoint will not pick them up.
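A quick way to check from a notebook, assuming the Fabric-provided mssparkutils helper and a default lakehouse attached:

```python
# List what sits under the lakehouse Tables/ folder; anything outside it
# (e.g. under Files/) will not surface in the SQL endpoint.
from notebookutils import mssparkutils  # available in Fabric notebooks

for entry in mssparkutils.fs.ls("Tables/"):
    print(entry.name, entry.isDir)
```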
Recreate the table metadata
As you tested, copying the data to another lakehouse and creating a shortcut on top of it works, which points to corrupted table metadata rather than a data problem.
You can also try recreating the table in the same lakehouse:
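A minimal sketch, assuming the Delta files are still intact under the table folder (the table names are placeholders); reading the folder directly and re-registering it under a new name avoids touching the original files:

```python
# Re-register the table from its existing Delta files under a new name.
# "my_table" / "my_table_v2" are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.read.format("delta").load("Tables/my_table")
df.write.format("delta").mode("overwrite").saveAsTable("my_table_v2")
```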
If the issue persists, log a Fabric support ticket
This issue matches an active Fabric bug where a few Delta tables fail to sync to SQL endpoints while others work fine.
Hi @YashRaj5
Thank you for reaching out to the Microsoft Fabric Forum Community.
@AmiGarala Thanks for the inputs
I hope the information provided above was helpful. If you still have questions, please don't hesitate to reach out to the community.
Thanks for the prompt response.
However, all the steps you mentioned above had already been tried, and I am also an Admin on the workspace where I am facing the issue, so I am not sure the permission fixes would apply in that case.
Eventually, we found an option called 'Data Access Mode' at the SQL endpoint level that was somehow causing the issue; after some trial and error, it started working. Thanks!