GeetanjaliK
New Member

Notebook Session Not Starting After Workspace Private-Only Configuration

Hi,
We recently changed our Microsoft Fabric workspace setting to “Allow connections only from workspace-level private links.”
After this change, our pipeline started failing only at the Notebook activity, while all other activities such as Lookup, Copy, and ForEach continue to work normally.

The failure message from the Notebook activity is:

[Screenshot: error shown when the Notebook activity tries to start the Spark session]

Observed Behavior

  • Lookup, Copy, and ForEach activities still run successfully.
  • Notebook activity fails immediately (see the screenshot above) when trying to start the Spark session.
  • This issue began only after switching the workspace from public to private-only mode.

Context

  • Our deployment was done while the workspace was still public, and the pipeline is now running while the workspace is private-only.
  • Previously, when Spark sessions failed to start, the issue typically resolved automatically within 3–5 hours.
  • However, after the recent deployment, it has now been over 24 hours and Spark sessions are still not starting, which is unusual compared to the earlier behavior.

Could you please share your insights on this?

 

Regards,

Geetanjali.

1 REPLY
deborshi_nag
Impactful Individual

Hello @GeetanjaliK 

 

In public mode, Spark sessions start on shared starter pools — these are pre‑warmed, Microsoft‑managed clusters available to everyone. But when your workspace blocks public access, Fabric is no longer allowed to use anything on the public network, including those shared pools.

 

Try using a custom Environment and see if that works.

 

1. Go to your Fabric workspace > Environment hub

  • Open Data Engineering / Data Science experience
  • Select Environment → New Environment

2. Enable custom compute

As workspace admin:
Workspace settings > Data Engineering/Science > Pool tab > “Customize compute configurations for items” = ON
This unlocks custom pool selection inside Environments. 

3. Create the environment

Inside the Environment creation panel:

Select a Spark Pool Size:

Fabric provides predefined compute sizes:

  • Small (4 vCores, 32 GB RAM)
  • Medium (8 vCores, 64 GB RAM)
  • Large (16 vCores, 128 GB RAM)
  • XL (32 vCores, 256 GB RAM)
  • XXL (64 vCores, 512 GB RAM)
    These appear as driver/executor node-size options inside the Environment.
    (These node sizes are described in the Microsoft Fabric Spark compute documentation.)

Configure session-level properties:

  • Number of executors
  • Executor memory
  • Driver core count
  • Executor cores
    These settings take effect only after the Spark session starts and remain within Fabric’s pool limits (a quick way to verify them from inside the notebook is sketched below).
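
Once the session is up, you can confirm from inside the notebook that these session-level settings were actually applied. This is only a minimal sketch, assuming a PySpark notebook where Fabric pre-creates the spark session object; the configuration keys are standard Spark properties and the "not set" fallbacks are just placeholders:

  # Run in a notebook cell after the Spark session has started.
  # `spark` is the SparkSession that Fabric injects into PySpark notebooks.
  print(spark.conf.get("spark.driver.cores", "not set"))
  print(spark.conf.get("spark.executor.cores", "not set"))
  print(spark.conf.get("spark.executor.memory", "not set"))
  print(spark.conf.get("spark.executor.instances", "not set"))

If the printed values do not match what you configured in the Environment, the notebook is most likely not attached to that Environment, or the session fell back to a different pool.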

4. Save the Environment

5. Open your notebook

6. Attach Environment

From the Compute selector (bottom-left session panel):

  • Click “Environment”
  • Select the custom Environment you created (a quick smoke test to confirm the session starts is sketched below)
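
After attaching the Environment, a quick smoke test helps confirm that the Spark session now starts on the custom pool. A minimal sketch for a PySpark notebook cell, assuming nothing beyond the runtime-provided spark object:

  # If the session starts successfully under the custom Environment,
  # this trivial job should complete and print 100.
  df = spark.range(100)
  print(df.count())

If this completes, the custom pool is working; if the session still refuses to start, the failure is happening before any notebook code runs, so the problem lies in session/pool provisioning rather than in your code.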

 

 

I trust this will be helpful. If you found this guidance useful, you are welcome to acknowledge with a Kudos or by marking it as a Solution.
