roger_fabric
New Member

Remote and persistent Jupyter kernel in Microsoft Fabric

Is there a way to connect to a Microsoft Fabric Jupyter kernel remotely from a local VS Code desktop, such that the session is aware of the local filesystem? The reason for this setup is that my source code is local during development, so I can import my modules as usual (or do something like `pip install -e .`) and any changes to the code are effectively propagated in real time. With GCP (and presumably with Azure Virtual Machines too) it's possible to create a cloud VM and connect to it over SSH. Essentially, the requirements are:

 

1. The notebook session has access to the local filesystem, so I can do something like `from src.utils import my_model`

2. When a notebook cell is executed, the computation runs on the cloud VM

 

I have tried the "Open in VS Code" button in the web portal notebook and connecting to the PySpark (Fabric VS Code Collection) kernel; however, it is not aware of the local filesystem.

(Screenshot attached: roger_fabric_0-1751194414915.png)

 

I know it's possible to build a wheel, upload it to Fabric, and `pip install` it, but that loop takes too long for iterative development.

 

If it's not possible, is there another way to achieve the two conditions?

 

And if there's absolutely no way, what does an effective data science workflow look like on Microsoft Fabric?

3 Replies
v-tsaipranay
Community Support

Hi @roger_fabric ,

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions.

 

Thank you.

v-tsaipranay
Community Support

Hi @roger_fabric ,

Could you please confirm if you've submitted this as an idea in the Ideas Forum? If so, sharing the link here would be helpful for other community members who may have similar feedback.

If we don’t hear back, we’ll go ahead and close this thread. For any further discussions or questions, please start a new thread in the Microsoft Fabric Community Forum, and we’ll be happy to assist.

 

Thank you for being part of the Microsoft Fabric Community.

 

v-tsaipranay
Community Support

Hi @roger_fabric ,

Thank you for reaching out to the Microsoft Fabric Community Forum.

 

Currently, Microsoft Fabric’s Spark compute nodes run in a secure, shared environment, which means a PySpark kernel can’t directly access code stored only on your local machine. Even though “Open in VS Code” gives you a live connection to the kernel, your laptop and the Fabric cluster still have separate file systems. So, something like `from src.utils import my_model` won’t work unless you’ve uploaded that code to Fabric first.

 

For quick iteration, it’s best to keep your project in Git, open it in VS Code with the Fabric extension, and use the Publish button. This syncs just the updated files to the notebook’s Resources folder, making them instantly available to the remote kernel with no need to build a wheel.
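Once files land in the notebook's Resources folder, making them importable is just a matter of putting that folder on `sys.path`. A minimal sketch of the technique, simulated here with a temporary directory because the actual resource mount path varies by workspace (the `src`/`utils`/`my_model` names are taken from the question; the folder layout is an assumption):

```python
import os
import sys
import tempfile

# Simulate a synced Resources folder locally. In a Fabric notebook you
# would use the real resources path instead of this temp dir -- the exact
# mount location is workspace-specific, so check your environment.
resources = tempfile.mkdtemp()
pkg_dir = os.path.join(resources, "src")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(pkg_dir, "utils.py"), "w") as f:
    f.write("def my_model():\n    return 'model loaded'\n")

# Make the synced code importable without building a wheel.
sys.path.insert(0, resources)

from src.utils import my_model
print(my_model())  # -> model loaded
```

Re-running the Publish step and restarting the kernel (or re-importing with `importlib.reload`) picks up edited files, which approximates the `pip install -e .` loop the question asks about.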

When your code is ready to be reused across notebooks or pipelines, you can package it as a wheel (or just a set of .py files) and add it as a custom library in a Fabric Environment. Attaching that environment automatically loads your package every time a Spark session starts. If you’re just experimenting, you can also drag and drop a wheel file into the notebook and run `%pip install` to use it right away.
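It can help to see why a single uploaded file is enough: a pure-Python wheel is essentially a zip archive of your modules plus metadata. The stdlib sketch below builds a bare zip (not a full wheel, no dist-info) and imports straight out of it via Python's zipimport machinery; the `mypkg` name and its contents are hypothetical:

```python
import os
import sys
import tempfile
import zipfile

# Build a minimal zip of a package -- the same shape a pure-Python wheel
# carries your code in. "mypkg" and its modules are illustrative only.
workdir = tempfile.mkdtemp()
archive = os.path.join(workdir, "mypkg_bundle.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("mypkg/__init__.py", "VERSION = '0.1.0'\n")
    zf.writestr("mypkg/utils.py", "def greet():\n    return 'hello from mypkg'\n")

# zipimport: put the archive itself on sys.path and import normally.
sys.path.insert(0, archive)

import mypkg
from mypkg.utils import greet
print(mypkg.VERSION, greet())  # -> 0.1.0 hello from mypkg
```

In practice you would build a real wheel locally (for example with `python -m build`), upload it, and let `%pip install` handle metadata and dependencies; the sketch only shows why the code travels well as one file.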

 

Since Fabric clusters are managed cloud resources, features like SSH access or mounting local drives aren’t currently supported. However, there’s already a community request open for a “Spark-Connect-style” live file sync; feel free to vote for it, or create a new idea using the link below: Fabric Ideas - Microsoft Fabric Community

In the meantime, the recommended workflow is:

  • Develop locally and use Git for version control
  • Publish small changes while iterating
  • Package your code as a wheel when it’s stable

Hope this helps. Please reach out for further assistance.

If this post helps, please consider clicking Accept as Solution to help other members find it more quickly; kudos would also be appreciated.

 

Thank you.

 
