abhidotnet
Advocate II

Faker Works in PySpark… but Not Python

I ran into an issue while trying to use the Faker library in both PySpark and Python notebooks in Microsoft Fabric. I’m hoping this helps others, and I’d love confirmation from the product team on whether this is expected behavior.

Summary of what I discovered

  • I created a custom Environment in my workspace (called TestforTrial) and added the faker library under Libraries from external repositories.

  • When I open a PySpark notebook, I can attach this environment and use Faker without any issues.

  • When I open a Python notebook, the environment does not appear in the kernel dropdown. I only see Python 3.10 and Python 3.11.

  • It turns out that Python notebooks (preview) currently do not support attaching custom Environments.

  • They only support %pip install inside the notebook session.

This explains why my environment only shows Spark runtimes (Spark 3.4, 3.5, etc.) and why I couldn’t select it in a Python notebook.
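One quick way to see this difference from inside a notebook is to probe whether the library is importable in the current kernel. A minimal, stdlib-only sketch (the `faker` check only reports True once the package is actually available in that kernel, whether via an attached Environment or a session-scoped `%pip install`):

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` is importable in the current kernel."""
    return importlib.util.find_spec(package) is not None

# In a PySpark notebook with the custom environment attached this should
# report True; in a plain Python notebook it stays False until you run
# `%pip install faker` in the session.
print("faker available:", is_installed("faker"))
```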

If this is the intended behavior for now, it would be great to have it documented more clearly, because it’s easy to assume Environments apply to both notebook types.

Is Python‑runtime support for Environments planned?

 

7 REPLIES
deborshi_nag
Resident Rockstar

Hi @abhidotnet  

 

What you've discovered is correct: Python notebooks run on a pure Python kernel. They are in Public Preview and don’t integrate with Fabric Environments. Instead, they support inline package installs via %pip/%conda within the notebook session, and custom libraries can be dropped into the notebook’s Resources folder.
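For the Resources-folder route, the usual pattern is to make that folder importable by adding it to `sys.path`. A sketch, assuming the folder is reachable at a local path (`./builtin` here is a placeholder; check your notebook UI for the actual location):

```python
import sys

def add_to_path(resources_dir: str) -> bool:
    """Prepend `resources_dir` to sys.path so that .py modules dropped
    there become importable; returns True if the path was newly added."""
    if resources_dir not in sys.path:
        sys.path.insert(0, resources_dir)
        return True
    return False

# Placeholder path -- substitute the real Resources folder location.
add_to_path("./builtin")
```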
 

 

 
There isn’t a public roadmap item that explicitly confirms when attaching Fabric Environments to Python notebooks will be supported.
 
Hope this helps - please appreciate by leaving a Kudos or accepting as a Solution
 
v-echaithra
Community Support

Hi @abhidotnet ,

Thank you for reaching out to Microsoft Community.
This is a current limitation of Microsoft Fabric rather than a configuration issue: Python notebooks don’t yet support attaching custom Environments, while PySpark notebooks do. For now, you must use %pip install in Python notebooks, or switch to a PySpark notebook if you need to attach an Environment.

Since this affects dependency management and consistency across notebook types, I’d recommend raising a feature request on Fabric Ideas so the product team can track demand and prioritize Python-runtime support for Environments. It’s likely the best way to get visibility and an official response on roadmap timing.

You’re welcome to post this in the Ideas forum here: Fabric Ideas - Microsoft Fabric Community

That’s where enhancement suggestions go. The Power BI team actively reviews and prioritizes ideas based on community feedback and votes.

Thank you.

@deborshi_nag @v-echaithra 
One follow-up for production use: if this pure Python notebook is scheduled in a Data Factory pipeline, is inline %pip install completely safe? I just want to make sure automated runs won't hit any of the stability issues mentioned here: https://learn.microsoft.com/en-us/fabric/data-engineering/library-management#python-inline-installat... which warns against inline pip (even though I know those articles mostly focus on Spark).

Hi @mrbartuss ,

The warning in the documentation you referenced mainly applies to Spark notebooks, where inline installation can trigger Spark session restarts and impact long-running jobs. For pure Python notebooks, %pip install is currently the supported approach for managing dependencies while the Python runtime experience is still in preview.
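For scheduled runs it can also help to fail fast with a clear message when the session-scoped install was skipped (for example, if the `%pip` cell was accidentally removed). A small stdlib-only sketch of that guard:

```python
import importlib

def require(module: str, hint: str):
    """Import `module`, raising a descriptive error if it is missing
    from the current notebook session."""
    try:
        return importlib.import_module(module)
    except ImportError as exc:
        raise RuntimeError(
            f"{module!r} is not installed in this session; {hint}"
        ) from exc

# Example: faker must have been installed earlier in the session.
# fake = require("faker", "run `%pip install faker` in the first cell").Faker()
```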

Hope this helps.
Chaithra E.

Hello @mrbartuss, the %pip command restarts the Python interpreter, so as long as you keep that statement as the first line of code in your notebook, it should be fine for production workloads scheduled through data pipelines.

 

I would also recommend pinning a specific version (or a range) of a Python library when using %pip. This reduces the risk that a newly released version of the library breaks your production pipelines.

 

%pip install numpy==1.26.4


%pip install "pandas>=1.5,<2.0"

 

I trust this will be helpful. If you found this guidance useful, you are welcome to acknowledge with a Kudos or by marking it as a Solution.
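Pins like the ones above can also be asserted at runtime, so a scheduled run fails loudly if the session somehow resolved a different version. A naive sketch (handles simple dotted versions only; pre-release tags are ignored, so use the `packaging` library for anything stricter):

```python
import importlib.metadata

def parse(version: str) -> tuple[int, ...]:
    """Naive version parser: '1.26.4' -> (1, 26, 4).
    Non-numeric parts (e.g. 'rc1') are dropped."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def in_range(installed: str, minimum: str, below: str) -> bool:
    """True if `installed` falls in the half-open range [minimum, below)."""
    return parse(minimum) <= parse(installed) < parse(below)

# Example guard for a pandas pin of ">=1.5,<2.0":
# assert in_range(importlib.metadata.version("pandas"), "1.5", "2.0")
```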

Hi @abhidotnet ,

Thank you for submitting this as a feature request and sharing the link. The product team will review and evaluate this. We appreciate you taking the time to help improve the Fabric experience.

Thanks again for your contribution.
