Hi, I have a spark notebook that's applying changes to a table in a lakehouse. Everything works fine until I try to run this last line of code:
Hi @Scott_Powell ,
Spark is not case sensitive by default. You can work around this by setting a Spark configuration option on the SparkSession object named spark:
spark.conf.set('spark.sql.caseSensitive', True)
By default it is False. To check the current configuration status:
print(spark.conf.get('spark.sql.caseSensitive'))
Can you please try this and let me know if it resolves your issue?
Hope this is helpful. Please let me know in case of further queries.
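Putting the two snippets above together, a minimal notebook cell might look like the sketch below. It assumes it runs inside a Fabric notebook where a live SparkSession is already bound to the name `spark`; the table name `IP_Addresses` is illustrative only.

```python
# Enable case-sensitive identifier handling for this session.
# Note: this is a per-session setting, so it must run before the
# write it is meant to affect, in every notebook that needs it.
spark.conf.set('spark.sql.caseSensitive', True)

# Confirm the setting took effect for this session.
print(spark.conf.get('spark.sql.caseSensitive'))

# A table written after this point keeps its mixed-case name, e.g.:
# df.write.format('delta').saveAsTable('IP_Addresses')
```

Because the setting lives on the session, restarting the notebook session resets it to the default.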
Hi @Scott_Powell ,
Thanks for using Fabric Community.
Yes, this is a limitation of the Hive metastore: it stores the schema of a table in all lowercase. A table name can contain only lowercase alphanumeric characters and underscores, and must start with a lowercase letter or underscore.
Hope this helps. Please let us know if you have any further queries.
The very odd thing, though, is that I have other lakehouses where this works fine. See image below. It uses the exact same code, but you can see the output table is properly named IP_Addresses, not ip_addresses.
The lakehouse where I'm seeing this error is quite old; it was created at least 3 or 4 months ago. The one shown below, where case is properly respected, is new. I wonder if something changed?
Thanks,
Scott
Thanks, this works!
@Anonymous this seems to work perfectly - thank you! I'm not very comfortable with Spark stuff yet - is there a way to set this option "globally" either across all of Fabric, or maybe at the workspace level, so that we don't have to remember to put this code into every notebook?
This helped me a ton - thank you!
Scott
Hi @Scott_Powell ,
Unfortunately, we don't have any option to set this "globally".
Glad to know your query got resolved. Please continue using Fabric Community for your further queries.