
Scott_Powell
Advocate III

Notebooks writing tables with names lowercased - how to fix?

Hi, I have a Spark notebook that's applying changes to a table in a lakehouse. Everything works fine until I run this last line of code:

 

final_df.write.format("delta").mode("overwrite").saveAsTable(target_table)
 
Here's the issue: my original table is called ThisIsASample, and I've verified that the target_table variable holds "ThisIsASample". But when the line above runs, it recreates the table as "thisisasample". This mucks up everything downstream - I need the table name written respecting the case I specified.
 
Any ideas on why the notebook is doing this, and what I can do to correct it?
 
Thanks,
Scott
1 ACCEPTED SOLUTION
Anonymous
Not applicable

Hi @Scott_Powell ,

Spark is not case sensitive by default.

 

There is a way to handle this by setting a Spark config on the SparkSession object (named spark in Fabric notebooks):

 

spark.conf.set('spark.sql.caseSensitive', True)

 

By default it is False.

To check the current value of this setting:

 

print(spark.conf.get('spark.sql.caseSensitive'))

 


Can you please try this and let me know if it resolves your issue?

Hope this is helpful. Please let me know in case of further queries.
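Putting the pieces together, a minimal sketch of a notebook cell might look like this. It assumes a Fabric/PySpark notebook where spark is the pre-created SparkSession, and where final_df and target_table come from the earlier cells in this thread:

```python
# Enable case-sensitive identifier handling before writing the table.
# Assumes `spark` is the SparkSession that Fabric notebooks provide,
# and `final_df` / `target_table` were defined in earlier cells.
spark.conf.set('spark.sql.caseSensitive', True)

# Confirm the setting took effect.
print(spark.conf.get('spark.sql.caseSensitive'))

# Re-run the original write; the table name case should now be preserved.
final_df.write.format("delta").mode("overwrite").saveAsTable(target_table)
```

Note that the config must be set before the saveAsTable call runs, and it only applies to the current Spark session, so it needs to be set in each notebook session that writes the table.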


7 REPLIES
Anonymous
Not applicable

Hi @Scott_Powell ,

Thanks for using Fabric Community.

Yes, this is a limitation of the Hive metastore: it stores table names in all lowercase.

A table name can contain only lowercase alphanumeric characters and underscores and must start with a lowercase letter or underscore.

Hope this will help. Please let us know if any further queries.

The very odd thing, though, is that I have other lakehouses where this works fine. See the image below. It's using the exact same code, but you can see the output table is properly named IP_Addresses, not ip_addresses.

 

The lakehouse where I'm seeing this error is quite old - it was created at least 3 or 4 months ago. The one shown below, where case is being properly respected, is new. I wonder if something changed?

 

Thanks,

Scott

 

[Image: Scott_Powell_0-1710169477286.png]

 


Thanks this works!

@Anonymous this seems to work perfectly - thank you! I'm not very comfortable with Spark stuff yet - is there a way to set this option "globally" either across all of Fabric, or maybe at the workspace level, so that we don't have to remember to put this code into every notebook?

 

This helped me a ton - thank you!

Scott

Anonymous
Not applicable

Hi @Scott_Powell ,

Unfortunately, we don't have any option to do this "globally".

Glad to know your query got resolved. Please continue using Fabric Community for your further queries.


StrategicSavvy
Resolver II

hi @Scott_Powell 

 

As of now, Spark enforces lowercase table names on write, and there is no way to change it.

 
