<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: [VS Code notebook]: Dropping a delta table using Spark SQL fails in Data Engineering</title>
    <link>https://community.fabric.microsoft.com/t5/Data-Engineering/VS-Code-notebook-Dropping-a-delta-table-using-Spark-SQL-fails/m-p/3683707#M202</link>
    <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/451446"&gt;@BiJoe&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;We haven’t heard back from you since the last response, and we were just checking to see whether you got a chance to create a support ticket. If so, please provide the details here; otherwise, please reply with more details and we will try to help.&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
    <pubDate>Tue, 06 Feb 2024 17:03:35 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2024-02-06T17:03:35Z</dc:date>
    <item>
      <title>[VS Code notebook]: Dropping a delta table using Spark SQL fails</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/VS-Code-notebook-Dropping-a-delta-table-using-Spark-SQL-fails/m-p/3669740#M200</link>
      <description>&lt;P&gt;In a Spark notebook in the Fabric lakehouse online, this works just fine:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;spark.sql("drop table SCHEMA.TABLE")&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In my VS Code Spark notebook, in the same Lakehouse, Spark SQL commands like&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;df_raw = spark.sql("select * from SCHEMA.TABLE")
df_raw.show(5)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;also work just fine, even though for each Spark command I get this error message in the Problems window:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;"spark" is not defined&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Trying to drop that specific table (before dropping it in the online notebook, of course) results in:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;Py4JJavaError                             Traceback (most recent call last)
Cell In[29], line 1
----&amp;gt; 1 spark.sql("drop table SCHEMA.TABLE")

File c:\ProgramData\anaconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\pyspark\sql\session.py:1440, in SparkSession.sql(self, sqlQuery, args, **kwargs)
   1438 try:
   1439     litArgs = {k: _to_java_column(lit(v)) for k, v in (args or {}).items()}
-&amp;gt; 1440     return DataFrame(self._jsparkSession.sql(sqlQuery, litArgs), self)
   1441 finally:
   1442     if len(kwargs) &amp;gt; 0:
File c:\ProgramData\anaconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\py4j\java_gateway.py:1321, in JavaMember.__call__(self, *args)
   1315 command = proto.CALL_COMMAND_NAME +\
   1316     self.command_header +\
   1317     args_command +\
   1318     proto.END_COMMAND_PART
   1320 answer = self.gateway_client.send_command(command)
-&amp;gt; 1321 return_value = get_return_value(
   1322     answer, self.gateway_client, self.target_id, self.name)
   1324 for temp_arg in temp_args:
   1325     temp_arg._detach()
File c:\ProgramData\anaconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\pyspark\errors\exceptions\captured.py:169, in capture_sql_exception.&amp;lt;locals&amp;gt;.deco(*a, **kw)
    167 def deco(*a: Any, **kw: Any) -&amp;gt; Any:
    168     try:
--&amp;gt; 169         return f(*a, **kw)
    170     except Py4JJavaError as e:
    171         converted = convert_exception(e.java_exception)
File c:\ProgramData\anaconda3\envs\fabric-synapse-runtime-1-2\lib\site-packages\py4j\protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
    324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325 if answer[1] == REFERENCE_TYPE:
--&amp;gt; 326     raise Py4JJavaError(
    327         "An error occurred while calling {0}{1}{2}.\n".
    328         format(target_id, ".", name), value)
    329 else:
    330     raise Py4JError(
    331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
    332         format(target_id, ".", name, value))

Py4JJavaError: An error occurred while calling o32.sql.
: org.apache.spark.SparkException: [INTERNAL_ERROR] Found the unresolved operator: 'UnresolvedIdentifier [SCHEMA, TABLE], true
== SQL(line 1, position 1) ==
drop table SCHEMA.TABLE
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

	at org.apache.spark.SparkException$.internalError(SparkException.scala:77)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$54(CheckAnalysis.scala:755)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis0$54$adapted(CheckAnalysis.scala:750)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:295)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1(TreeNode.scala:294)
	at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$foreachUp$1$adapted(TreeNode.scala:294)
	at scala.collection.Iterator.foreach(Iterator.scala:943)
	at scala.collection.Iterator.foreach$(Iterator.scala:943)
	at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
	at scala.collection.IterableLike.foreach(IterableLike.scala:74)
	at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
	at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
	at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:294)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0(CheckAnalysis.scala:750)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis0$(CheckAnalysis.scala:160)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis0(Analyzer.scala:191)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis(CheckAnalysis.scala:156)
	at org.apache.spark.sql.catalyst.analysis.CheckAnalysis.checkAnalysis$(CheckAnalysis.scala:146)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:191)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:214)
	at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:330)
	at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:211)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$analyzed$1(QueryExecution.scala:120)
	at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:120)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:288)
	at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:642)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:288)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:287)
	at org.apache.spark.sql.execution.QueryExecution.analyzed$lzycompute(QueryExecution.scala:120)
	at org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:118)
	at org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:110)
	at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:97)
	at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:640)
	at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:630)
	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:662)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
	at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
	at java.lang.Thread.run(Thread.java:750)&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is my Spark Conda environment somehow corrupt? I had a lot of problems installing PySpark, and had to install the Spark runtime environment 1.2 manually like this:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;LI-CODE lang="python"&gt;pip install https://tridentvscodeextension.blob.core.windows.net/spark-lighter-lib/spark34/spark_lighter_lib-34.0.0.3-py3-none-any.whl&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 31 Jan 2024 09:08:57 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/VS-Code-notebook-Dropping-a-delta-table-using-Spark-SQL-fails/m-p/3669740#M200</guid>
      <dc:creator>BiJoe</dc:creator>
      <dc:date>2024-01-31T09:08:57Z</dc:date>
    </item>
    <item>
      <title>Re: [VS Code notebook]: Dropping a delta table using Spark SQL fails</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/VS-Code-notebook-Dropping-a-delta-table-using-Spark-SQL-fails/m-p/3679133#M201</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/451446"&gt;@BiJoe&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;Thanks for using Microsoft Fabric Community.&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;This might require a deeper investigation from our engineering team, who can guide you better.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Please go ahead and raise a support ticket to reach our support team:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://support.fabric.microsoft.com/support" target="_blank" rel="noopener nofollow noreferrer"&gt;https://support.fabric.microsoft.com/support&lt;/A&gt;&lt;BR /&gt;Please provide the ticket number here so we can keep an eye on it.&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Thanks&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 05 Feb 2024 04:38:57 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/VS-Code-notebook-Dropping-a-delta-table-using-Spark-SQL-fails/m-p/3679133#M201</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2024-02-05T04:38:57Z</dc:date>
    </item>
    <item>
      <title>Re: [VS Code notebook]: Dropping a delta table using Spark SQL fails</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/VS-Code-notebook-Dropping-a-delta-table-using-Spark-SQL-fails/m-p/3683707#M202</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/451446"&gt;@BiJoe&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;We haven’t heard back from you since the last response, and we were just checking to see whether you got a chance to create a support ticket. If so, please provide the details here; otherwise, please reply with more details and we will try to help.&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 06 Feb 2024 17:03:35 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/VS-Code-notebook-Dropping-a-delta-table-using-Spark-SQL-fails/m-p/3683707#M202</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2024-02-06T17:03:35Z</dc:date>
    </item>
  </channel>
</rss>

