SivaReddy24680
Frequent Visitor

A connection attempt failed because the connected party did not properly respond

Hi 

We are currently processing approximately 50 tables dynamically with a single notebook in a pipeline that merges data into lakehouse tables. We did not specify a batch count, and we noticed that 20 notebook instances were running concurrently. However, we have been encountering an intermittent failure while executing cells partway through the notebook. Could someone kindly assist us with troubleshooting this problem?

Could this be related to the Livy session expiring during notebook execution, or is it potentially caused by something else?
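For context on the "batch count" mentioned above: the pipeline's ForEach activity exposes a "Batch count" setting that caps how many notebook runs execute in parallel; when it is left empty, the service default applies (20, which matches the concurrency observed here). A minimal sketch of the relevant activity JSON, with hypothetical activity and notebook names (verify the field names against your exported pipeline definition):

```json
{
    "name": "ForEachTable",
    "type": "ForEach",
    "typeProperties": {
        "isSequential": false,
        "batchCount": 5,
        "items": { "value": "@pipeline().parameters.tableList", "type": "Expression" },
        "activities": [
            { "name": "RunMergeNotebook", "type": "TridentNotebook" }
        ]
    }
}
```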

 

Error:

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (jobservice.eastus.trident.azuresynapse.net:443) --> SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

 

Below are the error logs I found in the Monitor tab:

2025-02-27 09:15:56,285 WARN AbstractChannelHandlerContext [RPC-Handler-7]: An exception 'java.lang.IllegalArgumentException: not existed channel:[id: 0xb280c6ed, L:/10.7.32.4:10001 ! R:/10.7.32.4:56412]' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
java.lang.IllegalArgumentException: not existed channel:[id: 0xb280c6ed, L:/10.7.32.4:10001 ! R:/10.7.32.4:56412]
    at org.apache.livy.rsc.rpc.RpcDispatcher.getRpc(RpcDispatcher.java:67)
    at org.apache.livy.rsc.rpc.RpcDispatcher.channelInactive(RpcDispatcher.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:305)

 

2025-02-27 09:16:11,947 INFO SystemSASProviderWithRefresh [abfs-bounded-pool2-t5148]: [SystemSASProviderWithRefresh] Returning SasToken for account spark1triprodeus, container b2161aac-7333-4bfc-a70c-cf0ea5ff3f68, path /app-logs/trusted-service-user/driver-logs/application_1740647118938_0001/jobgroup_58, operation write
2025-02-27 09:16:27,228 WARN ShutdownHookManager [Thread-17]: ShutdownHook '' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
    at org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
 
2025-02-27 09:16:57,228 WARN ShutdownHookManager [Thread-17]: ShutdownHook 'ClientFinalizer' timeout, java.util.concurrent.TimeoutException
java.util.concurrent.TimeoutException
    at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:204)
    at org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)

End of LogType:stderr-active
 
2 ACCEPTED SOLUTIONS
Anonymous
Not applicable

Hi @SivaReddy24680,

Thank you for reaching out in Microsoft Community Forum.

The issue may be caused by Livy session timeouts, high concurrency, or resource limitations. Please follow the steps below to resolve the error:

1. Adjust the session timeout settings or add a keep-alive command to prevent the Livy session from expiring.

2. Set a batch count in your pipeline to limit parallel execution and avoid resource exhaustion.

3. Ensure there are no firewall/proxy restrictions blocking connections to Azure Synapse endpoints.

4. High workloads may exceed CPU/memory limits, so consider scaling up your Spark cluster.
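To illustrate step 2, concurrency can also be capped when a single orchestrator notebook fans out the table merges itself, using `runMultiple` with an explicit `concurrency` value. A minimal sketch, assuming a hypothetical child notebook `nb_merge_table` and a hypothetical table list (verify the DAG field names against your Fabric runtime):

```python
# Sketch: capping parallel notebook sessions with runMultiple in a Fabric notebook.
# Notebook path, table names, and timeout values below are hypothetical examples.

table_names = ["customers", "orders", "products"]  # hypothetical table list

dag = {
    "activities": [
        {
            "name": f"merge_{t}",             # unique activity name per table
            "path": "nb_merge_table",         # hypothetical child notebook
            "timeoutPerCellInSeconds": 1800,  # fail fast on a hung cell
            "args": {"table_name": t},        # parameters passed to the child
        }
        for t in table_names
    ],
    "timeoutInSeconds": 7200,  # overall timeout for the whole batch
    "concurrency": 5,          # cap parallel notebook sessions (instead of 20)
}

# Inside a Fabric notebook this would be executed with:
# notebookutils.notebook.runMultiple(dag)
```

Lowering `concurrency` trades total runtime for fewer simultaneous Livy sessions, which reduces the chance of session timeouts and connection failures under load.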

Please continue using Microsoft community forum.

If you found this post helpful, please consider marking it as "Accept as Solution" and giving it a 'Kudos' to help other members find it more easily.

Regards,
Pavan.


Hi @Anonymous 

 

We reduced the concurrency of the notebooks. As the issue was intermittent, we will monitor for a few days to make sure it is resolved.


8 REPLIES
Anonymous
Not applicable

Hi @SivaReddy24680,

I wanted to follow up since we haven't heard back from you regarding our last response. We hope your issue has been resolved.
If the community member's answer resolved your query, please mark it as "Accept as Solution" and select "Yes" if it was helpful.
If you need any further assistance, feel free to reach out.

Please continue using Microsoft community forum.

Thank you,
Pavan.

Anonymous
Not applicable

Hi @SivaReddy24680,

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please "Accept as Solution" and give a 'Kudos' so other members can easily find it.

Thank you,
Pavan.


Anonymous
Not applicable

Hi @SivaReddy24680,
 

Thank you for reaching out on the Microsoft Community Forum.

We hope your issue has been resolved. If you have any further questions, please feel free to contact us. If your issue is resolved, please "Accept as Solution" so that other community members can find the solution quickly.
 

Please continue using the Microsoft Community Forum.
 

Regards,
Pavan.


Anonymous
Not applicable

Hi @SivaReddy24680,

I hope this information is helpful. Please let me know if you have any further questions or if you'd like to discuss this further. If this answers your question, kindly "Accept as Solution" and give it a 'Kudos' so others can find it easily.

Thank you,
Pavan.


