Anushka96
Frequent Visitor

I am getting Livy Http Request Failure in Fabric Notebooks.

I am trying to migrate some files to ADLS Gen2 using a high-concurrency notebook in a pipeline, on Fabric F64 Trial capacity. The pipeline failed after 1 day and 20 hours, and I am getting the following error:
LivyHttpRequestFailure: Something went wrong while processing your request. Please try again later. HTTP status code: 500.
What can be the reason for this? 

1 ACCEPTED SOLUTION
v-ssriganesh
Community Support

Hello @Anushka96,

Based on your scenario using a high-concurrency notebook to migrate files, here are some possible causes:

  • F64 Trial capacity has resource limits and may timeout long-running jobs.
  • Making REST API calls inside Spark UDFs can lead to memory issues, driver bottlenecks, and Livy session failures.

Best practices that may help prevent this issue (see the sketch after this list):

  • Avoid row-by-row API calls in UDFs; they are inefficient and unstable at scale.
  • Use rdd.mapPartitions() to process data in batches and reuse sessions.
  • Create a single requests.Session() per partition to avoid connection overhead.
  • Add retry logic and error handling to prevent executor crashes.
  • Filter and batch data before making API calls, and split large jobs into smaller chunks.
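
To make the mapPartitions, session-reuse, and retry points concrete, here is a minimal PySpark sketch, not the exact migration code. It assumes a Spark DataFrame df with a file_url column; the column name and the migration_log table are illustrative, and a Fabric notebook's predefined Spark session is assumed.

```python
import requests
from requests.adapters import HTTPAdapter, Retry

def process_partition(rows):
    # One requests.Session per partition: TCP connections are reused
    # across rows instead of being reopened for every API call.
    session = requests.Session()
    retries = Retry(total=3, backoff_factor=2,
                    status_forcelist=[429, 500, 502, 503, 504])
    session.mount("https://", HTTPAdapter(max_retries=retries))

    for row in rows:
        try:
            resp = session.get(row.file_url, timeout=60)
            resp.raise_for_status()
            yield (row.file_url, "ok", len(resp.content))
        except requests.RequestException as exc:
            # Per-row error handling: one bad file should not crash the executor.
            yield (row.file_url, f"failed: {exc}", 0)

# Process in partition-sized batches and keep a per-file status log,
# so a long-running migration can be audited and resumed in chunks.
results = df.rdd.mapPartitions(process_partition).toDF(["file_url", "status", "bytes"])
results.write.mode("overwrite").saveAsTable("migration_log")
```

Writing the per-file status out also makes it easy to split the job: filter the log for failed rows and rerun only those in a smaller batch.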


If this information is helpful, please "Accept as solution" and give a "Kudos" to assist other community members in resolving similar issues more efficiently.
Thank you.

8 REPLIES
v-ssriganesh
Community Support

Hi @Anushka96,
I hope this information is helpful. Please let me know if you have any further questions or if you'd like to discuss this further. If this answers your question, please accept it as a solution and give it a 'Kudos' so other community members with similar problems can find a solution faster.
Thank you.

v-ssriganesh
Community Support

Hi @Anushka96,

May I ask if you have resolved this issue? If so, kindly share the insights, as this will assist other community members in resolving similar issues more efficiently.

Thank you.

v-ssriganesh
Community Support
Community Support

Hi @Anushka96,
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.

v-ssriganesh
Community Support

Hi @Anushka96,

Thank you for posting your query in the Microsoft Fabric Community Forum, and thanks to @Srisakthi for sharing valuable insights.

I am following up to see if you had a chance to review Srisakthi's previous response and provide the requested information. This will enable us to assist you further.

Thank you.

Anushka96
Frequent Visitor

Hi @Srisakthi, I am not using any transformation logic; the notebook just calls the SharePoint REST API and writes the files to ADLS Gen2 Blob storage.

Hi @Anushka96 ,

Is there any specific requirement to use notebooks? Have you tried a data pipeline to land the files in OneLake?

Regards,

Srisakthi

Srisakthi
Super User

Hi @Anushka96 ,

By looking at the error message alone, we cannot pinpoint the exact root cause; essentially, Spark was unable to handle the request (possibly due to insufficient CUs).

What is the size of your files?

How many files are you migrating, and what kind of transformation logic are you using?

What Spark pool configuration have you set up?

Are you using the latest runtime version?

Can you install the Microsoft Fabric Capacity Metrics app and check the CU consumption?

https://learn.microsoft.com/en-us/fabric/enterprise/metrics-app-install?tabs=1st

Regards,

Srisakthi
