
stomori
New Member

Fabric Notebook disconnects when running a certain code cell

I am working in a Fabric notebook using PySpark and have recently run into an error when running a specific code cell that normally works. The kernel simply disconnects, and I can't proceed with my code.

 

I am using Fabric within my organization, so I have access to limited metrics, as I do not have admin rights.

 

I suspected it was capacity-related, but a colleague of mine has already tried creating a bigger Spark pool. Unfortunately, this did not resolve the issue.

 

I get the following error right under the code cell:

[Screenshot: Fabric_error_overlined.png, the error shown under the code cell]

 

And under diagnostics, I get the following:

 

Diagnostic ID: e0d1cb8a-823d-48c8-b878-478e7da536d3

Timestamp: 2025-05-26T13:53:44.169Z

Message: [object CloseEvent]

JSON
{
  "type": "close",
  "timeStamp": 583281.6999999285,
  "code": 1000,
  "reason": "{\"reason\":\"Session error or stopped.\",\"state\":\"session-completed\"}",
  "wasClean": false,
  "target": {
    "url": removed,
    "readyState": 3,
    "protocolsProfile": [7, 4588]
  },
  "currentTarget": {
    "url": removed,
    "readyState": 3,
    "protocolsProfile": [7, 4588]
  },
  "isTrusted": true
}

Additional info: InstanceId: 977dc982-d987-43eb-9104-373959539332

 

What causes this, and how can it be resolved?

1 ACCEPTED SOLUTION
burakkaragoz
Community Champion

Hi @stomori ,

 

This looks like a session-level failure that happens before your code even starts running — usually tied to Spark kernel startup or resource allocation issues.

Here’s what might be causing it and what you can try:

  1. Session Timeout or Idle Expiry
    If the notebook was idle for a while, the session might have expired silently. Try restarting the notebook kernel and re-running the cell immediately.

  2. Spark Pool Resource Limits
    Even if your colleague increased the pool size, check if:

    • The max concurrency is being hit (too many notebooks using the same pool)
    • The session quota per user is exceeded
  3. Code Cell Content
    If the cell has heavy operations (e.g. large joins, wide transformations), try:

    • Breaking it into smaller steps
    • Caching intermediate results
    • Logging the Spark plan (df.explain())
  4. Plugin State: Cleanup
    This usually means the session failed during init and Fabric is cleaning up. It could be a transient backend issue — try running the same cell in a new notebook or after a short wait.

  5. Diagnostics
    Since you don’t have admin rights, ask your admin to check:

    • Spark job logs in the Fabric Admin portal
    • Capacity metrics around the time of failure

Let me know if you want help reviewing the code in that cell — sometimes a small tweak can avoid triggering these session-level errors.

If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.


3 REPLIES
v-nmadadi-msft
Community Support

Hi @stomori ,

May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution, so that other community members with similar problems can find it faster.

Thank you.

 

burakkaragoz
Community Champion

(Accepted solution, quoted in full above.)

Hi Burak,

I have now tried step 4, running the cell in a new notebook, and this resolved the issue.

 

Thanks a lot for your response!
