
Feature Request: Python notebooks should support custom Environments, similar to PySpark notebooks.

Fabric currently supports custom Environments for PySpark notebooks, but Python notebooks (preview) cannot use them. In Python notebooks, the only available kernels are Python 3.10 and 3.11, and the only way to install packages is via %pip install, which is session‑scoped, not reproducible, and not aligned with Fabric’s managed environment model.

This creates a major gap for teams who want consistent dependency management across both PySpark and Python workflows.

Request

Please add support for:

  • Custom Python environments (similar to virtual environments)

  • Attaching those environments to Python notebooks

  • Managing Python‑only dependencies through the Environment UI

This would bring Python notebooks to parity with PySpark notebooks and enable reliable, repeatable development without %pip workarounds.

Example

In PySpark notebooks with an Environment, this works:

python
from faker import Faker

fake = Faker()
fake.name()

In Python notebooks, it only works after:

%pip install faker

…and must be reinstalled every session.
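Until custom environments are supported, the common workaround is a defensive install at the top of every notebook. A minimal sketch of that per-session pattern (the `ensure_package` helper is illustrative, not a Fabric API):

```python
import importlib.util
import subprocess
import sys

def ensure_package(name: str) -> None:
    """Install a package with pip if it isn't already importable.

    Because inline installs are session-scoped, this has to run at the
    start of every notebook session, which is exactly the repetition a
    custom Environment would eliminate.
    """
    if importlib.util.find_spec(name) is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])

ensure_package("json")  # stdlib example; in the notebook you would call ensure_package("faker")
```

This keeps re-runs cheap (no install when the package is already present), but it still pins nothing and is not shared across notebooks, so it is no substitute for a managed Environment.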

Enabling custom environments for Python notebooks would solve this cleanly and improve the overall Fabric development experience.

Status: New
Comments
deborshi_nag
Resident Rockstar
I agree, Environments in Fabric should work for both PySpark and Python notebooks. The concept is very similar to maintaining a virtual environment for a Python project, so it's hard to see why it works for PySpark notebooks and not Python ones.
mrbartuss
Advocate I
One follow-up for production use: if a pure Python notebook is scheduled in a Data Factory pipeline, is inline %pip install completely safe? I just want to make sure automated runs won't hit any of the stability issues mentioned here: https://learn.microsoft.com/en-us/fabric/data-engineering/library-management#python-inline-installat... which warns against inline pip (even though I know those articles mostly focus on Spark).