tan_thiamhuat
Post Patron

High Concurrency for Notebooks

How does this work?

tan_thiamhuat_0-1748250081562.png

And I tried two notebooks in the same workspace:

tan_thiamhuat_1-1748250122910.png

tan_thiamhuat_2-1748250210538.png

The second notebook always goes to a standard session and fails. Do I need to use the code below

tan_thiamhuat_3-1748250287439.png

 

for the second notebook?


7 REPLIES
v-tsaipranay
Community Support

Hi @tan_thiamhuat ,

Thanks for reaching out to the Microsoft Fabric Community. 

 

Although both notebooks are in the same workspace and lakehouse, the error indicates they are not sharing the same Spark session, which is why the temporary view created in one notebook is not visible in the other.

To resolve this, please follow the steps below:

  • In the notebook UI toolbar, manually select and join the same high concurrency Spark session for your second notebook instead of letting it default to a standard session. This is crucial because temporary views are session-scoped.
  • Confirm that both notebooks are running on the same Spark pool/environment configured for high concurrency, with matching configurations.
  • If sharing temporary views proves complicated, consider creating a persistent table instead of a temporary view, which will be visible across all sessions:
# Persist the DataFrame as a lakehouse table, replacing any existing copy
df.write.mode("overwrite").saveAsTable("people_test")
# Any session attached to the same lakehouse can now query the table
df_shared = spark.sql("SELECT * FROM people_test")
df_shared.show()
  • You can also print spark.sparkContext.applicationId in both notebooks to confirm whether they are connected to the same Spark session (see the sketch below).
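For example, here is a minimal check, assuming both are Fabric PySpark notebooks where spark is predefined. Run the following cell in each notebook; if the printed IDs match, the notebooks share the same session:

# Prints the ID of the Spark application backing this notebook's session;
# matching IDs across notebooks mean they share one high concurrency session
print(spark.sparkContext.applicationId)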

Also, thank you @suparnababu8 and @burakkaragoz for actively participating in the community forum and for the solutions you have been sharing.

 

Hope this helps. Please reach out for further assistance.

If this post helps, please consider accepting it as the solution to help other members find it more quickly; kudos would also be appreciated.

 

Thank you.

 

df.write.mode("overwrite").saveAsTable("people_test")

If it is written to a table, then any non-high-concurrency notebook can read from that table, can't it?

Hi @tan_thiamhuat ,

 

Yes, writing to a persistent table with saveAsTable makes the data accessible to any notebook connected to the same lakehouse and Spark environment, including both high concurrency and standard sessions. This is useful for sharing data broadly across sessions.
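As a minimal sketch, reusing the people_test table name from the earlier reply and assuming df is any existing DataFrame:

# Notebook A (any session type): persist the DataFrame as a lakehouse table
df.write.mode("overwrite").saveAsTable("people_test")

# Notebook B (standard or high concurrency session, same lakehouse):
# reads the table back without sharing a Spark session with Notebook A
df_shared = spark.read.table("people_test")
df_shared.show()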

However, temporary views are limited to the same Spark session, so to share them between notebooks, those notebooks must join the same high concurrency session. If you need quick, session-scoped sharing, use temporary views with shared sessions; for wider access across different sessions, persistent tables are the best option.
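To illustrate the session scoping, here is a sketch using a hypothetical view name, people_temp:

# In a notebook attached to a high concurrency session:
df.createOrReplaceTempView("people_temp")  # people_temp is a hypothetical name

# A second notebook attached to the SAME session can query the view:
spark.sql("SELECT * FROM people_temp").show()

# A notebook running in a DIFFERENT (e.g. standard) session raises
# "Table or view not found" for the same query, matching the error described above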

 

If this post helps, please give us kudos and consider accepting it as a solution to help other members find it more quickly.

 

Thank you.

tan_thiamhuat
Post Patron

Any thoughts on why my two notebooks do not work with high concurrency?

suparnababu8
Super User

Hi @tan_thiamhuat 

 

If you want to run multiple notebooks from the same workspace, you have to tag both notebooks with a high concurrency session, but there are specific conditions that must be met. The notebooks need to:

  • Be attached to the same lakehouse
  • Be run by the same user
  • Have the same library configurations

So, by enabling high concurrency mode, you can attach multiple notebooks to an existing Spark session, allowing for greater session utilization and faster execution.

 

Read more about high concurrency mode here: Configure high concurrency mode for notebooks - Microsoft Fabric | Microsoft Learn

 

Thank you!

 

Did I answer your question? Mark my post as a solution!

Proud to be a Super User!

It does not work for me; both notebooks are in the same lakehouse and workspace.

tan_thiamhuat_3-1748263855297.png

 

tan_thiamhuat_4-1748263873174.png

 

 

burakkaragoz
Community Champion

Hi @tan_thiamhuat ,


Good question. Even though high concurrency is enabled in the workspace settings, each notebook still needs to explicitly join an available high concurrency session if you want them to share the same Spark app.

So yes, for your second notebook, you should select the available session like HC_Notebook_1_6782 manually — otherwise, it defaults to a standard session, which is why you're seeing it fail.

Also make sure:

  • Both notebooks are in the same workspace.
  • The high concurrency session is not at capacity (e.g. Joined: 1/5 means 4 more can join).
  • You're not mixing different Spark pool types or configurations between notebooks.

Let me know if you want help setting up a shared session programmatically.

If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
