Anonymous
Not applicable

Ghost tables showing in lakehouse SQL endpoint that no longer exist?

Hi all, is anyone else experiencing that tables backed up through a "create table [new schema].[tablename] as select * from old_table" routine end up being impossible to delete in the SQL endpoint afterwards?
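For context, the backup routine in question looks roughly like this (schema and table names are placeholders, and the syntax assumes a schema-enabled lakehouse notebook cell):

```sql
-- Run in a notebook (%%sql cell) against a schema-enabled lakehouse.
-- 'backup' and 'my_table' are placeholder names.
CREATE SCHEMA IF NOT EXISTS backup;
CREATE TABLE backup.my_table AS SELECT * FROM dbo.my_table;
```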

 

I have tried to drop the tables and the schema, but nothing will budge. I cannot select from the tables (since they don't exist), and I cannot drop them either.

I suspect the metadata is stuck in an older state, but after a week's wait and endless refreshes (logging in and out, refresh buttons and all), they will not go away.

 

When querying sys.tables, I can see the tables, but deleting them there is of course prohibited.
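For reference, a query along these lines (run in the SQL endpoint) lists every table together with its schema, which makes the ghost entries easy to spot:

```sql
-- Run in the SQL endpoint (read-only). Lists all tables with their schema,
-- so orphaned entries under a backup schema show up here.
SELECT s.name AS schema_name, t.name AS table_name, t.create_date
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;
```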

 

The message when trying to drop them through SQL is:

 

The external policy action 'Microsoft.Sql/Sqlservers/Databases/Schemas/Tables/Drop' was denied on the requested resource.
 Cannot drop the table 'mytablename', because it does not exist or you do not have permission.

 

Just for clarity, I am on a trial version and I have admin access to the workspace.

 

Any good ideas are much appreciated. Thanks all!

4 REPLIES
Anonymous
Not applicable

Hi AndyDDC, thanks for reaching out. The read-only point is a very crucial detail. However, in the SQL endpoint I am able to create schemas, so most likely the schema preview feature is enabled, as you mentioned (I am not the one who created the workspace).

 

I now remember what I did:

 

I first created a notebook cell and used

 

%%sql

 

create table ip_adresse_final3 as select * from ip_adresse;

 

This went well and I could see the new table in the SQL endpoint under the dbo schema.

 

I wanted to clear away the clutter of having backup tables in the dbo schema, so I created a new schema in SQL endpoint mode:

 

create schema nmd_log_backup;

 

Then I transferred the table ip_adresse_final3 from dbo to the nmd_log_backup schema:

 

alter schema nmd_log_backup transfer dbo.ip_adresse_final3;

 

This went well and I could see the content in the preview straight after. However, the next time I wanted to get the content, the table was unreachable and would not respond to select top 1000 * from ....

 

So now I cannot drop ip_adresse_final3 or reach it, and if I try to overwrite it in the notebook, it actually creates a new table with the same name, but under the dbo schema. I would guess it should not be possible to have two identical table names under the same lakehouse(?).

 

A noisy screenshot, sorry about that, but check out the table named ip_adresse_final3, now under two different schemas, dbo and nmd_log_backup:

 

[Screenshot: ip_adresse_final3 listed under both the dbo and nmd_log_backup schemas]

 

 

Anyway, the workaround is to transfer the orphaned table from the nmd_log_backup schema back to dbo, overwrite it in a notebook cell with a dummy dataframe, and THEN I am able to drop it.

 

And finally, when all tables are transferred out of the backup schema, I am able to drop the schema.

 

Either this is a bug, or maybe I am missing something(?)

 

Update: I am actually able to reproduce it all now.

 

First I create a table from a dataframe; it is then placed in the dbo schema.

Then I transfer the table, in SQL endpoint mode, to another schema, for instance the backup schema.

I go to the notebook and run a %%sql select * from the table; that works.

Then I run a %%sql drop table. It all looks like a success; however, the table is now gone in the lakehouse but not in the SQL endpoint, so it has not been deleted entirely, it has just gotten rid of the reference to it.
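The four steps above can be sketched as follows (the names are the ones from this thread, and the nmd_log_backup schema is assumed to already exist):

```sql
-- Step 1: notebook (%%sql) - create the table; it lands in the dbo schema.
CREATE TABLE ip_adresse_final3 AS SELECT * FROM ip_adresse;

-- Step 2: SQL endpoint mode - move the table into the backup schema.
ALTER SCHEMA nmd_log_backup TRANSFER dbo.ip_adresse_final3;

-- Step 3: notebook (%%sql) - reading the table still works at this point.
SELECT * FROM ip_adresse_final3;

-- Step 4: notebook (%%sql) - the drop looks successful, but the table only
-- disappears from the lakehouse; it lives on in the SQL endpoint.
DROP TABLE ip_adresse_final3;
```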

 

In SQL endpoint mode, I now transfer the table, which I can still preview in this mode, back to the dbo schema.

 

[Screenshot: SQL endpoint mode after transferring the table back to dbo]

 

I would think I could read it now and that it would appear in the lakehouse tree structure, but it's not present:

[Screenshot: lakehouse tree structure without ip_adresse_final3]

 

If I try to drop it or select it, it's not reachable from Spark any more.

 

So, bottom line: if you transfer a table to another schema in the SQL endpoint and then drop it while the table is not in the dbo schema, the notebook drop command only removes the reference to it, and the table will live on in some form of state, no longer reachable from a Spark notebook.

The only way to drop the table (since the SQL endpoint is read-only) is to transfer it back to the dbo schema and overwrite it from a notebook cell, so that it appears again in the lakehouse and can then be dropped with a notebook drop command.
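Sketched out, the workaround described above amounts to the following (the CREATE OR REPLACE step stands in for "overwrite it with a dummy dataframe"; syntax assumed for a schema-enabled lakehouse):

```sql
-- Step 1: SQL endpoint mode - move the orphaned table back to dbo.
ALTER SCHEMA dbo TRANSFER nmd_log_backup.ip_adresse_final3;

-- Step 2: notebook (%%sql) - overwrite it with dummy content so it
-- reappears in the lakehouse tree.
CREATE OR REPLACE TABLE dbo.ip_adresse_final3 AS SELECT 1 AS dummy;

-- Step 3: notebook (%%sql) - now the drop actually removes the table.
DROP TABLE dbo.ip_adresse_final3;

-- Step 4: once the backup schema is empty, it can be dropped too.
DROP SCHEMA nmd_log_backup;
```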

 

Phew, long story long... sorry about that.

 

 

Kind regards

Anonymous
Not applicable

Hi @Anonymous 

 

I did some tests, and here are my thoughts and guesses:

 

1. There is indeed some synchronization delay between the Lakehouse and the SQL endpoint.

 

I manually created a schema in the Lakehouse, but no matter how many times I refreshed the SQL endpoint, I couldn't see the new schema. However, when I wrote a table into the new schema using a notebook, the new schema and the new table appeared in the SQL endpoint.
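That notebook step can be sketched as follows (logs_test is a made-up schema name, standing in for the schema created manually in the Lakehouse):

```sql
-- Notebook (%%sql): writing a table into the manually created schema.
-- After this, both the schema and the table show up in the SQL endpoint.
CREATE TABLE logs_test.sync_test AS SELECT 1 AS id;
```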

 

2. The schemas in the Lakehouse and the SQL endpoint are not entirely equivalent.

 

Before the introduction of schema-enabled Lakehouses, it was possible to create new schemas directly in the SQL endpoint. You can also try testing with a Lakehouse that does not have schema enabled, switch to its SQL endpoint, and create new schemas there. You will find that creating new schemas in the SQL endpoint mode is always supported. 

 

Another observation is that I deleted some created schemas (without tables) from the Lakehouse, but no matter how many times I refreshed the SQL endpoint, these previously synchronized schemas still existed and were not deleted.

 

Then I tried changing the schema of a table from dbo to new_logs in the SQL endpoint mode, and it was successful. However, when I switched to the Lakehouse mode, the table was still under the dbo schema. Additionally, the new schema new_logs created in the SQL endpoint was not synchronized to the Lakehouse mode. Thus, the same table exists under different schemas in the two modes. 

[Screenshot: the same table shown under different schemas in SQL endpoint mode and Lakehouse mode]

 

These observations highlight the differences in schema handling between the SQL endpoint and the Lakehouse. It seems that changes made in the SQL endpoint do not always reflect in the Lakehouse, and vice versa. This could be an important consideration for managing your data and schemas across these environments. 

 

As the SQL endpoint of a lakehouse is read-only, to modify data in lakehouse Delta tables or delete Delta tables, you have to switch to Lakehouse mode and use Apache Spark. When you use a notebook, it runs on the Spark kernel.

 

Therefore, in notebooks, it is recommended to use the schemas from the Lakehouse to read or reference tables, rather than using the schemas from the SQL endpoint. This is because the schemas are not fully synchronized between the two modes, and each mode operates in different kernel environments.
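Concretely, that recommendation means addressing tables through their Lakehouse-side schema in notebook cells, for example (my_table is a placeholder):

```sql
-- Notebook (%%sql): use the schema the Lakehouse shows (here dbo),
-- not a schema that exists only on the SQL endpoint side.
SELECT * FROM dbo.my_table;
```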

 

Hope this would be helpful. If you have any more observations and insights, feel free to share!

 

Best Regards,
Jing

I have had this same issue. If you create a table shortcut in the Lakehouse, transfer the table to another schema in the SQL endpoint, and delete the original table shortcut, you end up with these orphaned schemas/tables in the SQL endpoint and no way to delete them. I would recommend that other users avoid custom schemas, or you will just accumulate ghost tables over time.

AndyDDC
Super User

Hi @Anonymous, is this in a lakehouse with schemas enabled? I have tried to recreate the issue, but I can drop the new table in the custom schema, and it is dropped in the SQL endpoint view.

 

You won't be able to drop it from the SQL endpoint, as that's read-only.

 

Does this happen with a new table you create/drop, or is it stuck on this one particular table? If the latter, then yes, something has gone awry with the metadata sync between the lakehouse and the SQL endpoint; you'll need to contact MS support.
