We created a lakehouse with schemas enabled and noticed that we were getting exceptions in a Notebook while trying to read from some schemas but not others. All of the schemas worked normally elsewhere, and the notebook could read from any schema if we used spark.read instead of spark.sql.
The pattern seems to be that only schemas with lowercase names work with spark.sql. I verified this by creating two schema shortcuts to the same schema, with the only difference being that one shortcut was named in all uppercase and the other in all lowercase. The former gave exceptions from spark.sql while the latter worked normally.
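For reference, the two access paths compared above look roughly like this (a sketch using the identifiers from this thread; the original post doesn't say which spark.read form was used, so the spark.read.table call is an assumption):

# Fails with AnalysisException when the schema name is uppercase:
df = spark.sql("SELECT * FROM Faculty_Success.ODS_UC.TTSV_ACAD_PERIOD_CURRENT_LOV LIMIT 1000")

# Reads the same table regardless of schema-name case
# (assumed spark.read.table form; a path-based spark.read.load is another possibility):
df = spark.read.table("Faculty_Success.ODS_UC.TTSV_ACAD_PERIOD_CURRENT_LOV")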
This is what the exceptions look like, using the code snippet generated by dragging a table from the left pane into the notebook:
AnalysisException Traceback (most recent call last)
Cell In[8], line 1
----> 1 df = spark.sql('SELECT * FROM Faculty_Success.ODS_UC.TTSV_ACAD_PERIOD_CURRENT_LOV LIMIT 1000')
2 display(df)
File /opt/spark/python/lib/pyspark.zip/pyspark/sql/session.py:1631, in SparkSession.sql(self, sqlQuery, args, **kwargs)
1627 assert self._jvm is not None
1628 litArgs = self._jvm.PythonUtils.toArray(
1629 [_to_java_column(lit(v)) for v in (args or [])]
1630 )
-> 1631 return DataFrame(self._jsparkSession.sql(sqlQuery, litArgs), self)
1632 finally:
1633 if len(kwargs) > 0:
File ~/cluster-env/trident_env/lib/python3.11/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
1316 command = proto.CALL_COMMAND_NAME +\
1317 self.command_header +\
1318 args_command +\
1319 proto.END_COMMAND_PART
1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
1323 answer, self.gateway_client, self.target_id, self.name)
1325 for temp_arg in temp_args:
1326 if hasattr(temp_arg, "_detach"):
File /opt/spark/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py:185, in capture_sql_exception.<locals>.deco(*a, **kw)
181 converted = convert_exception(e.java_exception)
182 if not isinstance(converted, UnknownException):
183 # Hide where the exception came from that shows a non-Pythonic
184 # JVM exception message.
--> 185 raise converted from None
186 else:
187 raise
AnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `Faculty_Success`.`ODS_UC`.`TTSV_ACAD_PERIOD_CURRENT_LOV` cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema() output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF EXISTS.; line 1 pos 14;
'GlobalLimit 1000
+- 'LocalLimit 1000
+- 'Project [*]
+- 'UnresolvedRelation [Faculty_Success, ODS_UC, TTSV_ACAD_PERIOD_CURRENT_LOV], [], false
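One way to check what the SQL layer actually registered is to list schemas and tables through the standard PySpark catalog API (a diagnostic sketch, not from the original post; the schema name is the one from this thread):

# Print every schema (database) name the catalog exposes, to compare
# its casing against what spark.sql is being asked to resolve:
for db in spark.catalog.listDatabases():
    print(db.name)

# List the tables inside one schema, using the casing printed above:
for t in spark.catalog.listTables("ODS_UC"):
    print(t.name)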
Put this at the top of the notebook:
spark.conf.set("spark.sql.caseSensitive", "true")
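In context, a first cell might look like this (a sketch: spark.sql.caseSensitive is a standard Spark setting that makes identifier resolution case-sensitive, but whether it resolves this particular Fabric lookup is untested here):

# Make Spark treat identifiers case-sensitively before any queries run:
spark.conf.set("spark.sql.caseSensitive", "true")

# Then retry the generated query:
df = spark.sql("SELECT * FROM Faculty_Success.ODS_UC.TTSV_ACAD_PERIOD_CURRENT_LOV LIMIT 1000")
display(df)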
Hi @nrogers,
I couldn't reproduce this. Uppercase schema names worked fine in my testing; I tried a new schema, a new schema shortcut, and a new lakehouse, and all of them worked well.
What are your service version and data region? Mine are 13.0.24611.34 and UK South. Issues like this sometimes occur only in specific regions or on specific service versions, and if a fix exists, it may take some time to be deployed to all regions.
Best Regards,
Jing
Community Support Team