situ042
Frequent Visitor

Issues executing notebook using an uploaded custom Databricks library

I have been trying to process XML content using PySpark and dataframes, as per the solution in the post https://community.fabric.microsoft.com/t5/Data-Engineering/Spark-XML-does-not-work-with-pyspark/td-p...

 

I am encountering some execution errors in the notebook. As per the solution, the first code element in the notebook is:

 

 

%%configure -f
{"conf": {"spark.jars.packages": "com.databricks:spark-xml_2-13-0.18.0"}}

 

 

Depending on how I execute this I get two different errors.

 

a) I connect to the Spark instance first in the notebook. This takes 2 to 3 minutes to start up due to loading the custom environment with the Databricks library. Then I execute the code fragment in the notebook:

 

 

SparkCoreError/UnexpectedSessionState: Livy session has failed. Error code: SparkCoreError/UnexpectedSessionState. SessionInfo.State from SparkCore is Error: Encountered an unexpected session state Dead while waiting for session to become Idle.  Error description: Spark_User_Requirements_IllegalArgumentException. Source: System.

 

 

b) I execute the code fragment first, which in turn connects to the Spark instance using the custom environment. After 2 or 3 minutes I get this error:

 

invalidHttpRequestToLivy: [TooManyRequestsForCapacity] This spark job can't be run because you have hit a spark compute or API rate limit. To run this spark job, cancel an active Spark job through the Monitoring hub, choose a larger capacity SKU, or try again later. HTTP status code: 430.

 

 

Is there a workaround? I can't imagine capacity is the real problem.

 

Any thoughts appreciated.

1 ACCEPTED SOLUTION
v-jingzhan-msft
Community Support

Hi @situ042 

 

A simple workaround is to use Pandas to read the data from the XML file into a Pandas dataframe, then convert the Pandas dataframe into a Spark dataframe. For example:

[Screenshot of example code omitted]
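The screenshot isn't reproduced here, but a minimal sketch of the suggested approach might look like the following (the Lakehouse file path and the repeating <record> element are illustrative placeholders; pandas.read_xml relies on the lxml parser, and spark is the session object predefined in a Fabric notebook):

import pandas as pd

# Read the XML file into a Pandas dataframe.
# The file path and xpath are placeholders for your own data.
pdf = pd.read_xml("/lakehouse/default/Files/sample.xml", xpath=".//record")

# Convert the Pandas dataframe into a Spark dataframe.
df = spark.createDataFrame(pdf)
df.show()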

 

Best Regards,
Jing
If this post helps, please Accept it as Solution to help other members find it. Appreciate your Kudos!

Perfect, works perfectly in my test case... now to try it in my real world scenarios
