aajain
New Member

Facing error while running a Spark Job Definition in Microsoft Fabric

Hi,

 

I was trying out a basic Scala program as a Spark Job Definition in Microsoft Fabric. What I am observing is that my main class executes perfectly, but the session fails.

 

Main method:

import org.apache.spark.sql.SparkSession

// Build the session and query the Delta table by its ABFSS path
val spark = SparkSession.builder.appName("Sample Spark Session").master("local[*]").getOrCreate()
val sqlContext = spark.sqlContext
val df = sqlContext.sql("SELECT * FROM delta.`<abfss path>`")
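For completeness, the main method sits inside a standard Scala entry point, roughly like the sketch below; the object name is a placeholder and only the snippet above is the exact code I ran:

import org.apache.spark.sql.SparkSession

// Illustrative entry-point shape for a Fabric Spark Job Definition (names are placeholders)
object SampleSparkJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("Sample Spark Session")
      .getOrCreate()                 // the cluster master is typically supplied by the Fabric runtime

    val df = spark.sql("SELECT * FROM delta.`<abfss path>`")
    df.show()

    spark.stop()                     // shut the session down cleanly before the application ends
  }
}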

 

In the post step, I am seeing the following error in stderr:

 

2024-02-08 08:40:09,339 ERROR Logger [spark-listener-group-shared]: Failed to flush.
java.lang.NullPointerException
at com.microsoft.azure.synapse.diagnostic.SparkObservabilityBus.flush(SparkObservabilityBus.java:300)
at com.microsoft.azure.synapse.diagnostic.SparkObservabilityBus.flushSparkListenerEvent(SparkObservabilityBus.java:230)
at org.apache.spark.listeners.SparkObservabilityListener.onApplicationEnd(SparkObservabilityListener.scala:34)
at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:57)
at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:120)
at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:104)
at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:127)
at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:127)
at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:121)
at org.apache.spark.scheduler.AsyncEventQueue$$anon$3.$anonfun$run$4(AsyncEventQueue.scala:117)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1471)
at org.apache.spark.scheduler.AsyncEventQueue$$anon$3.run(AsyncEventQueue.scala:117)

 

Has anyone seen this before, or does anyone have an idea what might be causing it?

 

Thanks 

1 REPLY
Anonymous
Not applicable

Hi @aajain ,

If I understand correctly, the issue is that you encountered this error while running a Spark Job Definition. Please try the following methods and check whether they solve your problem:

1. Verify that the resources specified in the Spark job are correctly configured.

Create an Apache Spark job definition - Microsoft Fabric | Microsoft Learn

 

2. Ensure that all necessary dependencies are accessible (see the build sketch after the links below).

 

3. You can also review the following links for more information:

Run an Apache Spark job definition - Microsoft Fabric | Microsoft Learn

Solved: Spark Job Definition: Spark_Ambiguous_NonJvmUserAp... - Microsoft Fabric Community
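Regarding point 2, if you package the job with sbt, a minimal build.sbt sketch could look like the following. The Scala and Spark versions here are illustrative and should match the Fabric runtime you selected; Spark itself should be marked "provided" so it is not bundled into your jar:

name := "sample-spark-job"
scalaVersion := "2.12.17"            // illustrative; match the Scala version of your Fabric runtime

// Spark is supplied by the Fabric runtime, so mark it "provided" rather than bundling it
libraryDependencies += "org.apache.spark" %% "spark-sql" % "3.4.1" % "provided"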

 

Best Regards,

Wisdom Wu

If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
