BiJoe
Helper II

[VS Code notebook]: Dropping a delta table using Spark SQL fails

In a Spark notebook running online in the Fabric lakehouse, this works just fine:

 

spark.sql("drop table SCHEMA.TABLE")

 

In my VS Code Spark notebook, attached to the same lakehouse, Spark SQL commands like

 

df_raw = spark.sql("select * from SCHEMA.TABLE")
df_raw.show(5)

 

also work just fine, even though for each Spark command I get this error message in the Problems window:

 

"spark" is not defined

 

Trying to drop that specific table (before dropping it in the online notebook, of course) results in:

 

Py4JJavaError                             Traceback (most recent call last)
Cell In[29], line 1
----> 1 spark.sql("drop table SCHEMA.TABLE")

File c:\ProgramData\anaconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\pyspark\sql\session.py:1440, in SparkSession.sql(self, sqlQuery, args, **kwargs)
   1438 try:
   1439     litArgs = {k: _to_java_column(lit(v)) for k, v in (args or {}).items()}
-> 1440     return DataFrame(self._jsparkSession.sql(sqlQuery, litArgs), self)
   1441 finally:
   1442     if len(kwargs) > 0:

File c:\ProgramData\anaconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\py4j\java_gateway.py:1321, in JavaMember.__call__(self, *args)
   1315 command = proto.CALL_COMMAND_NAME +\
   1316     self.command_header +\
   1317     args_command +\
   1318     proto.END_COMMAND_PART
   1320 answer = self.gateway_client.send_command(command)
-> 1321 return_value = get_return_value(
   1322     answer, self.gateway_client, self.target_id, self.name)
   1324 for temp_arg in temp_args:
   1325     temp_arg._detach()

File c:\ProgramData\anaconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\pyspark\errors\exceptions\captured.py:169, in capture_sql_exception.<locals>.deco(*a, **kw)
    167 def deco(*a: Any, **kw: Any) -> Any:
    168     try:
--> 169         return f(*a, **kw)
    170     except Py4JJavaError as e:
    171         converted = convert_exception(e.java_exception)

File c:\ProgramData\anaconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\py4j\protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--> 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))

Py4JJavaError: An error occurred while calling o32.sql.
: org.apache.spark.SparkException: [INTERNAL_ERROR] Found the unresolved operator: 'UnresolvedIdentifier [SCHEMA, TABLE], true
== SQL(line 1, position 1) ==
drop table SCHEMA.TABLE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

	at org.apache.spark.SparkException$.internalError(SparkException.scala:77)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$54(CheckAnalysis.scala:755)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$54$adapted(CheckAnalysis.scala:750)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:295)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:294)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:294)
	at scala.collection.Iterator.foreach(Iterator.scala:943)
	at scala.collection.Iterator.foreach$(Iterator.scala:943)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:294)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:750)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:160)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:191)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:156)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:146)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:191)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:214)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:211)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:120)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:120)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:288)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:642)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:288)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:287)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:120)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:118)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:110)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:640)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:630)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:662)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.lang.Thread.run(Thread.java:750)

 

Is my Spark Conda environment somehow corrupt? I had a lot of problems installing PySpark, and had to install the Spark runtime 1.2 environment manually like this:

 

pip install https://tridentvscodeextension.blob.core.windows.net/spark-lighter-lib/spark34/spark_lighter_lib-34.0.0.3-py3-none-any.whl
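
Since DROP TABLE goes through identifier resolution in the catalog while the SELECT above clearly resolves fine, one thing worth checking is whether the local session and the online session see the same catalog configuration. A diagnostic sketch, using only standard Spark settings (what Fabric actually sets for these keys is something I have not verified):

# Run this in both the VS Code session and the online notebook and
# diff the output; all of these are standard Spark configuration keys.
print(spark.version)
for key in (
    "spark.sql.catalogImplementation",
    "spark.sql.defaultCatalog",
    "spark.sql.extensions",
):
    print(key, "=", spark.conf.get(key, "<unset>"))

# Confirm the table is visible to this session's catalog at all.
for t in spark.catalog.listTables():
    print(t.name, t.tableType)

And as an untested guess rather than a confirmed fix: since the analyzer is choking on an unresolved identifier, fully qualifying the name and guarding with IF EXISTS might behave differently.

# Hypothetical three-part identifier; LAKEHOUSE is a placeholder for
# the actual lakehouse name.
spark.sql("DROP TABLE IF EXISTS LAKEHOUSE.SCHEMA.TABLE")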

 

2 REPLIES
v-nikhilan-msft
Community Support

Hi @BiJoe
Thanks for using the Microsoft Fabric Community.

This might require a deeper investigation from our engineering team, and they can guide you better.

Please go ahead and raise a support ticket to reach our support team:

https://support.fabric.microsoft.com/support

Please provide the ticket number here so we can keep an eye on it.

 

Thanks

Hi @BiJoe
We haven't heard from you since the last response and I was just checking back to see if you got a chance to create a support ticket. If so, please provide the details here. Otherwise, please respond with more details and we will try to help.
Thanks
