Hi Community,
I have read that it's possible to configure Spark sessions to spin up quickly. At the moment, it's taking up to 8 minutes for a Standard session to start.
Can someone let me know what is needed to make Spark sessions spin up faster?
Hello @carlton7372,
In Microsoft Fabric, every time you run a Notebook, Dataflow Gen2, or any Spark-based job, the service provisions an isolated Spark session on-demand inside your capacity (F SKU).
The "spin-up time" (typically 3–8 minutes) is the time Fabric needs to:
- Allocate Spark compute resources
- Initialize cluster dependencies and libraries
- Mount OneLake storage and set environment variables
- Start the session driver and executors
If you’re seeing 8+ minutes regularly, it usually indicates that Spark pools are not being reused efficiently or capacity resources are constrained.
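To see where the time actually goes, it helps to time each setup step separately (session attach vs. your own %pip installs or mounts). A minimal sketch of that pattern; the wrapped Spark call in the comment assumes the pre-initialized `spark` variable that Fabric notebooks provide:

```python
import time

def time_step(label, fn):
    """Run fn(), print how long it took, and return its result."""
    t0 = time.time()
    result = fn()
    print(f"{label}: {time.time() - t0:.1f} s")
    return result

# In a Fabric notebook you would wrap the first Spark action, e.g.:
#   time_step("session attach", lambda: spark.sql("SELECT 1").collect())
# Standalone demo of the pattern:
time_step("demo sleep", lambda: time.sleep(0.1))
```

If "session attach" dominates, the delay is cold-start provisioning; if your install/mount steps dominate, the causes below are a better place to look.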
Here are the most common causes:
1. Cold start on demand: Fabric currently provisions Spark sessions on demand. If there is no active Spark session in your capacity, the first job triggers a cold start, allocating new containers and initializing runtime images.
2. Idle deallocation: Each Fabric capacity (F SKU) scales down when idle to save resources. After 20–30 minutes of inactivity, Spark runtimes are deallocated, so your next run must spin up new containers again.
3. Session size: Launching a Standard or Large session type allocates more executors and memory, increasing startup latency. Unless you're running heavy transformations or ML jobs, a Small session usually suffices and spins up faster.
4. Capacity contention: If multiple users or Dataflows are consuming your F SKU at the same time, Fabric's resource scheduler may queue new sessions until enough capacity units (CUs) are available.
5. Library installation: If your Notebook installs additional packages using %pip install or mounts external data sources, that also adds to the effective "ready-to-run" time.
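If you do want to shrink the session rather than use the default size, Fabric notebooks support the %%configure cell magic to set session properties before the session starts. A sketch; the specific values below are illustrative assumptions, not tuned recommendations:

```
%%configure -f
{
    "driverMemory": "4g",
    "driverCores": 4,
    "executorMemory": "4g",
    "executorCores": 4,
    "numExecutors": 2
}
```

Run this as the first cell; the -f flag forces a restart if a session is already running, so the smaller footprint applies immediately.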
Source:
- https://learn.microsoft.com/en-us/fabric/data-engineering/spark-compute
Hope this helps!
Best regards,
Antoine