<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Anyone having luck with pyspark workloads in Fabric?  Getting assorted error messages. in Data Engineering</title>
    <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4356479#M5985</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;Thank you for reaching out to the MS Fabric community forum.&lt;BR /&gt;&lt;BR /&gt;I understand that you are encountering unfamiliar errors. Let's go through each of the errors you've mentioned:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;Error 1: invalid_grant: Error(s): 501481 &lt;/STRONG&gt;This&amp;nbsp;occurs&amp;nbsp;when opening logs from the Spark UI. It&amp;nbsp;means&amp;nbsp;that&amp;nbsp;the code verifier and code challenge in the authorization request&amp;nbsp;do not match. Ensure&amp;nbsp;that&amp;nbsp;these values are correctly configured and try regenerating them.&lt;STRONG&gt;&lt;BR /&gt;Error 2: LD_PRELOAD warnings&amp;nbsp;Warnings:&amp;nbsp;&lt;/STRONG&gt;that&amp;nbsp;a&amp;nbsp;shared object file&amp;nbsp;cannot be preloaded&amp;nbsp;(/opt/gluten/dep/libjemalloc.so.2).&amp;nbsp;Known issue,&amp;nbsp;just&amp;nbsp;ignore&amp;nbsp;it. To&amp;nbsp;avoid&amp;nbsp;these&amp;nbsp;warnings,&amp;nbsp;set&amp;nbsp;LD_PRELOAD settings in your Spark configuration.&lt;STRONG&gt;&lt;BR /&gt;Error 3: java.lang.reflect.UndeclaredThrowableException &lt;/STRONG&gt;This&amp;nbsp;error&amp;nbsp;is encountered&amp;nbsp;when&amp;nbsp;trying&amp;nbsp;to create&amp;nbsp;a connection in the Livy notebook&amp;nbsp;because&amp;nbsp;of&amp;nbsp;a ClassNotFoundException for org.apache.spark.shuffle.sort.ColumnarShuffleManager.&amp;nbsp;Make&amp;nbsp;sure&amp;nbsp;all&amp;nbsp;the&amp;nbsp;necessary dependencies are&amp;nbsp;in your Spark environment and add the missing library or jar file.&lt;STRONG&gt;&lt;BR /&gt;Error 4: ExecutorMonitor threw an&amp;nbsp;exception. &lt;/STRONG&gt;The&amp;nbsp;null pointer exception in&amp;nbsp;the&amp;nbsp;Executor monitor&amp;nbsp;is&amp;nbsp;an issue with&amp;nbsp;the&amp;nbsp;dynamic allocation of executors.&amp;nbsp;Remove&amp;nbsp;dynamic allocation or&amp;nbsp;upgrade&amp;nbsp;your&amp;nbsp;Spark version.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;I hope this helps resolve the issues you're experiencing. Should the problems continue, please consider raising a Microsoft support ticket for further assistance. Here is the link: &lt;A href="https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket" target="_blank"&gt;https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;If this helps, please &lt;STRONG&gt;Accept it as a solution&lt;/STRONG&gt; and drop a "&lt;STRONG&gt;Kudos&lt;/STRONG&gt;" so other members can find it more easily.&lt;BR /&gt;Thanks.&lt;/P&gt;</description>
    <pubDate>Thu, 09 Jan 2025 06:47:24 GMT</pubDate>
    <dc:creator>v-ssriganesh</dc:creator>
    <dc:date>2025-01-09T06:47:24Z</dc:date>
    <item>
      <title>Anyone having luck with pyspark workloads in Fabric?  Getting assorted error messages.</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4353888#M5947</link>
      <description>&lt;P&gt;Has anyone tried to open a "professional" support ticket for pyspark?&amp;nbsp; &amp;nbsp;I think there are some growing pains.&amp;nbsp; The fabric pyspark and the support for pyspark may both be a work-in-progress.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am encountering some very unfamiliar messages in the Fabric spark environment.&amp;nbsp; The errors are proprietary to Microsoft and I &lt;STRONG&gt;haven't seen these in other spark implementations&lt;/STRONG&gt; (Databricks, Synapse, HDI, or OSS).&amp;nbsp; I'm pretty sure these errors would turn up in my google results if they were related to the OSS spark from apache.&amp;nbsp; If anyone recognizes any of these errors, please let me know.&amp;nbsp; They were encountered in various parts of the pyspark experience on Fabric.&amp;nbsp; I'm not aware of any degradation in the service or any known outages, so I'm assuming these are just snafu bugs in Fabric.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Error 1.&amp;nbsp; This one is from the Fabric spark UI:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;invalid_grant: Error(s): 501481 - Timestamp: 2025-01-01 - Description: AADSTS501481: The&lt;STRONG&gt; Code_Verifier does not match the code_challenge supplied in the authorization request&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;... it happens when trying to open the logs from the spark UI.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Error 2.&amp;nbsp; From Livy notebook yesterday (aka "e01"):&amp;nbsp;LD_PRELOAD:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2025-01-06 16:24:37,490 WARN YarnAllocator [Reporter]: Container from a bad node: container_1736180637918_0001_01_000003 on host: vm-95921137. &lt;STRONG&gt;Exit status: 1. 
Diagnostics: t be preloaded (cannot open shared object file): ignored.&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;ERROR: ld.so: object '/opt/gluten/dep/libjemalloc.so.2' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.&lt;/STRONG&gt;&lt;BR /&gt;ERROR: ld.so: object '/opt/gluten/dep/libjemalloc.so.2' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.&lt;BR /&gt;ERROR: ld.so: object '/opt/gluten/dep/libjemalloc.so.2' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.&lt;BR /&gt;Error files: stderr, stderr-active.&lt;BR /&gt;Last 4096 bytes of stderr :&lt;BR /&gt;11842@vm-95921137&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;... no idea what any of this means.&amp;nbsp; It looks scary and is repeated thru-out the stderr of the driver.&amp;nbsp; It uses the "warn" severity, and says the problem can be "ignored".&amp;nbsp;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Error 3:&amp;nbsp; From Livy notebook yesterday (aka "e01"): successfully created connection, despite exception&amp;nbsp;java.lang.reflect.UndeclaredThrowableException&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2025-01-06 16:24:36,743 INFO TransportClientFactory [netty-rpc-connection-0]: Successfully created connection to vm-beb30181/10.0.160.9:45461 after 3 ms (0 ms spent in bootstraps)&lt;BR /&gt;&lt;STRONG&gt;Exception in thread "main" java.lang.reflect.UndeclaredThrowableException&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1923)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:61)&lt;/STRONG&gt;&lt;BR /&gt;at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:471)&lt;BR /&gt;at 
org.apache.spark.executor.YarnCoarseGrainedExecutorBackend$.main(YarnCoarseGrainedExecutorBackend.scala:83)&lt;BR /&gt;at org.apache.spark.executor.YarnCoarseGrainedExecutorBackend.main(YarnCoarseGrainedExecutorBackend.scala)&lt;BR /&gt;Caused by: java.lang.ClassNotFoundException: org.apache.spark.shuffle.sort.ColumnarShuffleManager&lt;BR /&gt;at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)&lt;BR /&gt;at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)&lt;BR /&gt;at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:527)&lt;BR /&gt;at java.base/java.lang.Class.forName0(Native Method)&lt;BR /&gt;at java.base/java.lang.Class.forName(Class.java:398)&lt;BR /&gt;at org.apache.spark.util.SparkClassUtils.classForName(SparkClassUtils.scala:41)&lt;BR /&gt;at org.apache.spark.util.SparkClassUtils.classForName$(SparkClassUtils.scala:36)&lt;BR /&gt;at org.apache.spark.util.Utils$.classForName(Utils.scala:94)&lt;BR /&gt;at org.apache.spark.util.Utils$.instantiateSerializerOrShuffleManager(Utils.scala:2557)&lt;BR /&gt;at org.apache.spark.SparkEnv$.create(SparkEnv.scala:326)&lt;BR /&gt;at org.apache.spark.SparkEnv$.createExecutorEnv(SparkEnv.scala:215)&lt;BR /&gt;at org.apache.spark.executor.CoarseGrainedExecutorBackend$.$anonfun$run$9(CoarseGrainedExecutorBackend.scala:520)&lt;BR /&gt;at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:62)&lt;BR /&gt;at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:61)&lt;BR /&gt;at java.base/java.security.AccessController.doPrivileged(Native Method)&lt;BR /&gt;at java.base/javax.security.auth.Subject.doAs(Subject.java:423)&lt;BR /&gt;at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1907)&lt;BR /&gt;... 
4 more&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Error 4:&amp;nbsp; From Livy notebook yesterday (aka "e01") :&amp;nbsp;ExecutorMonitor threw an exception&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;2025-01-06 16:24:59,020 ERROR AsyncEventQueue [spark-listener-group-executorManagement]: Listener ExecutorMonitor threw an exception&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;java.lang.NullPointerException&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.spark.scheduler.dynalloc.ExecutorMonitor.getRemovedExecutor(ExecutorMonitor.scala:466)&lt;/STRONG&gt;&lt;BR /&gt;at org.apache.spark.scheduler.dynalloc.ExecutorMonitor.onExecutorRemoved(ExecutorMonitor.scala:483)&lt;BR /&gt;at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:65)&lt;BR /&gt;at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)&lt;BR /&gt;at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)&lt;BR /&gt;at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)&lt;BR /&gt;at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:120)&lt;BR /&gt;at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:104)&lt;BR /&gt;at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:127)&lt;BR /&gt;at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:127)&lt;BR /&gt;at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)&lt;BR /&gt;at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)&lt;BR /&gt;at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:121)&lt;BR /&gt;at 
org.apache.spark.scheduler.AsyncEventQueue$$anon$3.$anonfun$run$4(AsyncEventQueue.scala:117)&lt;BR /&gt;at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1356)&lt;BR /&gt;at org.apache.spark.scheduler.AsyncEventQueue$$anon$3.run(AsyncEventQueue.scala:117)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;... Sorry for the assorted errors.&amp;nbsp; I will monitor to see which are most common and try to focus on them first.&amp;nbsp; As it is now, I'm just trying to keep up with the pace of these unfamiliar issues.&amp;nbsp; They are coming at us pretty fast!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please let me know if any of these are familiar.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 07 Jan 2025 18:21:45 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4353888#M5947</guid>
      <dc:creator>dbeavon3</dc:creator>
      <dc:date>2025-01-07T18:21:45Z</dc:date>
    </item>
    <item>
      <title>Re: Anyone having luck with pyspark workloads in Fabric?  Getting assorted error messages.</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4356216#M5982</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/120263"&gt;@dbeavon3&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From&lt;STRONG&gt; Error-2,&amp;nbsp;&lt;/STRONG&gt;I see you are using native engine on spark. Did you try turning it off and running the same notebook? (&lt;A href="https://learn.microsoft.com/en-us/fabric/data-engineering/native-execution-engine-overview?tabs=sparksql#enable-for-a-notebook-or-spark-job-definition" target="_blank"&gt;https://learn.microsoft.com/en-us/fabric/data-engineering/native-execution-engine-overview?tabs=sparksql#enable-for-a-notebook-or-spark-job-definition&lt;/A&gt;)&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From all the error messages, I can offer you a speculative explanation. When Spark runs, VMs get spun up with specific configuration which run as Executors. I believe for some reason a VM had crashed (most likely because of running a unsupported query on native engine). And once the VM crashed, Spark application also crashed. Usually VM failures are automatically managed by Spark, but I guess with native engine integration, it still needs improvement from Microsoft.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jan 2025 03:55:39 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4356216#M5982</guid>
      <dc:creator>govindarajan_d</dc:creator>
      <dc:date>2025-01-09T03:55:39Z</dc:date>
    </item>
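The reply above suggests turning the native execution engine off for a single notebook. A minimal sketch of that override, assuming the spark.native.enabled property described in the linked Fabric documentation (the cell must be the first one executed in the session):

```python
%%configure -f
{
    "conf": {
        "spark.native.enabled": "false"
    }
}
```

If the LD_PRELOAD and ColumnarShuffleManager messages disappear with this setting, that points at the native engine (Gluten) integration rather than the workload itself.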
    <item>
      <title>Re: Anyone having luck with pyspark workloads in Fabric?  Getting assorted error messages.</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4356479#M5985</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;Thank you for reaching out to the MS Fabric community forum.&lt;BR /&gt;&lt;BR /&gt;I understand that you are encountering unfamiliar errors. Let's go through each of the errors you've mentioned:&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P class="lia-align-left"&gt;&lt;STRONG&gt;Error 1: invalid_grant: Error(s): 501481 &lt;/STRONG&gt;This&amp;nbsp;occurs&amp;nbsp;when opening logs from the Spark UI. It&amp;nbsp;means&amp;nbsp;that&amp;nbsp;the code verifier and code challenge in the authorization request&amp;nbsp;do not match. Ensure&amp;nbsp;that&amp;nbsp;these values are correctly configured and try regenerating them.&lt;STRONG&gt;&lt;BR /&gt;Error 2: LD_PRELOAD warnings&amp;nbsp;Warnings:&amp;nbsp;&lt;/STRONG&gt;that&amp;nbsp;a&amp;nbsp;shared object file&amp;nbsp;cannot be preloaded&amp;nbsp;(/opt/gluten/dep/libjemalloc.so.2).&amp;nbsp;Known issue,&amp;nbsp;just&amp;nbsp;ignore&amp;nbsp;it. To&amp;nbsp;avoid&amp;nbsp;these&amp;nbsp;warnings,&amp;nbsp;set&amp;nbsp;LD_PRELOAD settings in your Spark configuration.&lt;STRONG&gt;&lt;BR /&gt;Error 3: java.lang.reflect.UndeclaredThrowableException &lt;/STRONG&gt;This&amp;nbsp;error&amp;nbsp;is encountered&amp;nbsp;when&amp;nbsp;trying&amp;nbsp;to create&amp;nbsp;a connection in the Livy notebook&amp;nbsp;because&amp;nbsp;of&amp;nbsp;a ClassNotFoundException for org.apache.spark.shuffle.sort.ColumnarShuffleManager.&amp;nbsp;Make&amp;nbsp;sure&amp;nbsp;all&amp;nbsp;the&amp;nbsp;necessary dependencies are&amp;nbsp;in your Spark environment and add the missing library or jar file.&lt;STRONG&gt;&lt;BR /&gt;Error 4: ExecutorMonitor threw an&amp;nbsp;exception. &lt;/STRONG&gt;The&amp;nbsp;null pointer exception in&amp;nbsp;the&amp;nbsp;Executor monitor&amp;nbsp;is&amp;nbsp;an issue with&amp;nbsp;the&amp;nbsp;dynamic allocation of executors.&amp;nbsp;Remove&amp;nbsp;dynamic allocation or&amp;nbsp;upgrade&amp;nbsp;your&amp;nbsp;Spark version.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;I hope this helps resolve the issues you're experiencing. Should the problems continue, please consider raising a Microsoft support ticket for further assistance. Here is the link: &lt;A href="https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket" target="_blank"&gt;https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;If this helps, please &lt;STRONG&gt;Accept it as a solution&lt;/STRONG&gt; and drop a "&lt;STRONG&gt;Kudos&lt;/STRONG&gt;" so other members can find it more easily.&lt;BR /&gt;Thanks.&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jan 2025 06:47:24 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4356479#M5985</guid>
      <dc:creator>v-ssriganesh</dc:creator>
      <dc:date>2025-01-09T06:47:24Z</dc:date>
    </item>
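One of the suggestions in the reply above is to disable dynamic allocation of executors. A minimal sketch of that configuration, assuming standard OSS Spark properties; the application name and executor count are illustrative, not taken from the thread:

```python
from pyspark.sql import SparkSession

# Turn off dynamic allocation and pin a fixed executor count instead
# (the values below are illustrative).
spark = (
    SparkSession.builder
    .appName("fixed-executors-example")
    .config("spark.dynamicAllocation.enabled", "false")
    .config("spark.executor.instances", "2")
    .getOrCreate()
)
```

Note that in Fabric notebooks the session is usually pre-created, so these properties may need to be set through %%configure or the environment's Spark settings rather than SparkSession.builder.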
    <item>
      <title>Re: Anyone having luck with pyspark workloads in Fabric?  Getting assorted error messages.</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4357766#M6008</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/644992"&gt;@govindarajan_d&lt;/a&gt;&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;I'm new to the Fabric implementation of&amp;nbsp; Spark and was worried about all these proprietary components and error messages.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've only been using if one week.&amp;nbsp; I only had one day where these unfamiliar errors were appearing.&amp;nbsp; However it was on &lt;STRONG&gt;my very first day with Spark in Fabric&lt;/STRONG&gt; ...&amp;nbsp; so that is what made me concerned.&amp;nbsp; &amp;nbsp;Since then, we have not seen it repeated.&amp;nbsp; However, I wanted to ask for tips in preparation for the next time we see these errors.&amp;nbsp; Else I will be no better off than I was on day one.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a speculative explanation as well.&amp;nbsp; The weird/unusual thing about Spark in Fabric is the integration with Entra ID user credentials.&amp;nbsp; In the other Spark environments which I've used, the cluster was always running as a system-level account (OS account or service principal).&amp;nbsp; However I think that the Spark notebooks in Fabric are &lt;STRONG&gt;constantly referring to Entra ID&lt;/STRONG&gt; in order to validate the PBI user's prior credentials, or retrieve new credentials.&amp;nbsp; This introduces code that may be (1) a weak link, and (2) very proprietary and very different than what is found in the OSS spark implementation.&amp;nbsp;&lt;BR /&gt;&lt;BR /&gt;... this theory would also explain the presence of these strange error messages which aren't able to be found in google.&amp;nbsp; I may be the first person ever to post their error messages on the Internet!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 09 Jan 2025 19:52:32 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4357766#M6008</guid>
      <dc:creator>dbeavon3</dc:creator>
      <dc:date>2025-01-09T19:52:32Z</dc:date>
    </item>
    <item>
      <title>Re: Anyone having luck with pyspark workloads in Fabric?  Getting assorted error messages.</title>
      <link>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4358085#M6012</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/120263"&gt;@dbeavon3&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You are right. Microsoft has implemented &lt;SPAN&gt;proprietary&amp;nbsp;&lt;/SPAN&gt;code on top of OSS Spark and that makes it a source for error messages that are uncommon for people who worked with OSS Spark. Microsoft has to add more informative error messages!&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 10 Jan 2025 02:46:54 GMT</pubDate>
      <guid>https://community.fabric.microsoft.com/t5/Data-Engineering/Anyone-having-luck-with-pyspark-workloads-in-Fabric-Getting/m-p/4358085#M6012</guid>
      <dc:creator>govindarajan_d</dc:creator>
      <dc:date>2025-01-10T02:46:54Z</dc:date>
    </item>
  </channel>
</rss>

