I'm currently working in an organization that is using Fabric for data engineering; we use pipelines and notebooks, with a team of about six data engineers.
We are running into a lot of difficulties around capacity at the moment: we keep maxing it out, and data engineers are being blocked because the capacity is at its limit, even though the kind of pipelines and notebooks we're running shouldn't be consuming all of it.
We originally had an F4 capacity for our dev environment and an F64 for our other environments. We had to bump the dev capacity to F8 today because engineers were being blocked, and we're already almost maxing that out again.
I've been looking into ways to reduce the amount of resources each engineer takes up and have landed on configuring the Spark pools differently, but on top of that I've also been considering local development. Ideally engineers should be able to do their work locally, which I don't think is possible for pipelines but should be for notebooks.
Is it feasible to develop a Python package for reusable code and build it through plain Python scripts? From my perspective that would shift some development off the capacity, and also allow us to implement testing frameworks more robustly.
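To illustrate what I have in mind, here's a minimal sketch; the `fabric_utils` package name and `clean_column_names` function are just made-up examples, not anything we've built yet:

```python
# fabric_utils/transforms.py -- made-up module holding reusable logic
def clean_column_names(columns):
    """Lowercase, strip, and snake_case a list of column names."""
    return [c.strip().lower().replace(" ", "_") for c in columns]
```

```python
# tests/test_transforms.py -- runs locally with pytest, consuming no Fabric capacity
from fabric_utils.transforms import clean_column_names

def test_clean_column_names():
    assert clean_column_names([" First Name ", "AGE"]) == ["first_name", "age"]
```

The package itself never needs a Fabric session to be developed or tested, which is where I'd hope the capacity savings come from.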
On top of that, is there a feasible way to develop Fabric notebooks locally, so engineers can run them using their own machines' resources instead of Fabric capacity?
It would be great if anyone has other suggestions for decoupling development from Fabric capacity, ways we can reduce our consumption, or to tell me I'm barking up completely the wrong tree.
Thanks in advance to anyone who responds!
Hi @nnshr,
I would also like to take a moment to thank @deborshi_nag for actively participating in the forum and for the solutions you've been sharing with the community. Your contributions make a real difference.
I wanted to check whether you have had the opportunity to review the information provided. Please feel free to contact us if you have any further questions.
Hi @nnshr,
I hope the information provided above helps you resolve the issue. If you have any additional questions or concerns, please do not hesitate to contact us; we are happy to provide any further assistance you may need.
Hello @nnshr
Yes, you can absolutely follow an approach where engineers develop reusable Python code locally and make that code available in Fabric. To do that, you use a Fabric Environment item: you can upload custom Python libraries to an Environment item and set that item as the default environment in your Spark settings.

The Spark settings can be found in Workspace Settings > Data Engineering/Science.
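As a rough sketch of the workflow: build a wheel from your package locally (for example with `python -m build`), upload the .whl file to the Environment item, and attach that environment to your notebooks. The notebook then imports it like any other installed library; the package and table names below are hypothetical:

```python
# In a Fabric notebook with the custom environment attached, the uploaded
# wheel imports like any other installed library.
# (fabric_utils and the table name are hypothetical examples.)
from fabric_utils.transforms import clean_column_names

df = spark.read.table("sales_raw")  # spark is predefined in Fabric notebooks
df = df.toDF(*clean_column_names(df.columns))
display(df)
```

Because the library is built and tested locally, only the final wheel ever touches the capacity, and every notebook session using the environment shares the same build.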
Hope this helps - if it does, please leave a Kudos or accept this reply as a Solution!
Hi @nnshr,
Right now there is no way to create a local Fabric environment. I agree that this would be a very helpful feature.
There is an idea submitted for this that I recommend you vote for:
Fabric Desktop: A Local Development Experience for... - Microsoft Fabric Community
Proud to be a Super User!
Hiya! Agreed that a local environment for Fabric itself would be very helpful indeed. I was more suggesting that if we built our functionality in a Python package which we can use in Fabric notebooks, that would in theory drive costs down, as development and testing of the package can be done outside of Fabric.