KimTutein
Advocate II

"ALTER TABLE RENAME TO" does not work with tables in upper casee

Hi Community

 

I think I have discovered a bug in the ALTER TABLE RENAME TO command. It does not seem to be able to handle capital letters when the Spark property "spark.sql.caseSensitive" is set to "true". In my company we have tables in upper case, and therefore we have set this property to "true" in our Spark environment.
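For reference, the property can also be set per session in a PySpark cell (a sketch; in our case it is configured at the Spark environment level instead):

```python
# Enable case-sensitive identifier resolution for the current Spark session.
# (In our setup this is set in the Spark environment rather than per session.)
spark.conf.set("spark.sql.caseSensitive", "true")

# Verify the current value of the property.
print(spark.conf.get("spark.sql.caseSensitive"))
```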

We are able to work with our tables using upper case letters (create tables / read and write). However, ALTER TABLE TABLE_A RENAME TO TABLE_B does not work in upper case; it only works in lower case.

 

Has anyone had the same issue and found a solution for renaming to an upper case table name? As I see it, the command has a bug.

 

Below I include two examples from a Spark notebook (remember to set spark.sql.caseSensitive = true in the environment). Appendix A will fail, renaming an upper case table name to a new upper case table name. Appendix B will run OK, renaming an upper case table name to a new lower case table name.

 

Appendix A – this code will fail.

%%sql

/*

  1. having set spark.sql.caseSensitive = True (in the Spark environment)
  2. this does NOT work when we rename using capital letters
  3. you will get the error DELTA_TABLE_NOT_FOUND
  4. you will see the table A_TEST_RENAME_1 (in upper case) under managed tables,
     and at the same time a_test_rename (in lower case) under Unidentified

 

*/

 

create table A_TEST (number_1 decimal(10,0));

 

insert into A_TEST values(1);

 

select * from A_TEST;

 

alter table A_TEST rename to A_TEST_RENAME_1;

 

select * from A_TEST_RENAME_1;

 

 

Appendix B – this code will run

%%sql

/*

  1. having set spark.sql.caseSensitive = True (in the Spark environment)
  2. this works fine when we rename to a lower case table name

*/

 

create table A_TEST2 (number_1 decimal(10,0));

 

insert into A_TEST2 values(1);

 

select * from A_TEST2;

 

alter table A_TEST2 rename to a_test2_rename_2;

 

select * from a_test2_rename_2;

1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @KimTutein ,

 

I can reproduce your error. Although the command appears to rename the table to another uppercase table name, it also creates an unidentified folder, and querying the renamed table then reports an error saying the table doesn't exist.

 

This issue may be related to Delta Lake's behavior when dealing with uppercase table names. Specifically, the way Delta Lake handles table names can lead to inconsistent metadata in some cases, especially when the spark.sql.caseSensitive property is set to true.


 

To rename an uppercase table name to another uppercase table name, you can manually copy the data from the old table to the new table, make sure the data is written correctly in the new table, and then delete the old table. This method avoids the problems that can result from using the ALTER TABLE RENAME TO command.

 

The steps are as follows:

First, create a table named in uppercase letters and insert data.


 

Next, use the following command to rename and delete the original table.

# read the original table
df = spark.sql("SELECT * FROM A_TEST")
# write it out as a new table
df.write.format("delta").mode("overwrite").saveAsTable("A_TEST_RENAME_1")
# drop the original table
spark.sql("DROP TABLE IF EXISTS A_TEST")

 

As you can see, it runs fine. Finally, perform a query on the data.

%%sql
SELECT * FROM A_TEST_RENAME_1


 

If you have any other questions please feel free to contact me.

 

Best Regards,
Yang
Community Support Team

 

If any post helps, then please consider accepting it as the solution to help the other members find it more quickly.
If I misunderstand your needs or you still have problems with it, please feel free to let us know. Thanks a lot!


2 REPLIES 2

Hi @Anonymous 

Thank you for your reply - it is much appreciated.

The reason I wanted to do the renaming is that I need to be able to change metadata on a field in a table (I thought I would rename the original table as a backup and then create a new table with the old name). We have tables defined using spark.sql with, for instance, varchar(100). The core system sometimes adds length to the fields, and we then need to add that length to the field as well. As PySpark only defines fields using, for instance, a generic string type, we decided to create a new table with the new definition, read that table into a Spark dataframe, and overwrite the existing table with the schema overwrite option set to true. I then get a table with the new metadata (for instance varchar(150)) and preserve time travel on the table. My understanding is that having the correct metadata limitation on a field helps the system use an optimal execution plan when querying data via the SQL endpoint.
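One plausible reading of the workflow above can be sketched in PySpark roughly as follows. This is only a sketch under stated assumptions: the table and column names (A_TEST, A_TEST_TMP, customer_name) are hypothetical, and it assumes a notebook session with Delta Lake support.

```python
# Hypothetical sketch of the schema-widening workaround (names are made up).

# 1. Define a temporary table with the widened column via Spark SQL,
#    since the DataFrame API only exposes a generic StringType.
spark.sql("CREATE TABLE A_TEST_TMP (customer_name VARCHAR(150))")

# 2. Copy the existing rows into it.
spark.sql("INSERT INTO A_TEST_TMP SELECT * FROM A_TEST")

# 3. Read it back and overwrite the original table in place, allowing the
#    schema change; overwriting in place keeps A_TEST's time-travel history.
df = spark.table("A_TEST_TMP")
(df.write.format("delta")
   .mode("overwrite")
   .option("overwriteSchema", "true")
   .saveAsTable("A_TEST"))

# 4. Clean up the temporary table.
spark.sql("DROP TABLE IF EXISTS A_TEST_TMP")
```

This avoids ALTER TABLE RENAME TO entirely, at the cost of rewriting the data once.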

 

Friendly regards

Kim
