Hey
I have a problem with using NotebookUtils. My design is that I have two workspaces with the following items:
-BRONZE WORKSPACE-
Notebook: Orchestrator - default lakehouse is Bronze
Notebook: Process Bronze - default lakehouse is Bronze
Lakehouse: Bronze
-SILVER WORKSPACE-
Notebook: Process Silver - default lakehouse is Silver
Lakehouse: Silver
From the Orchestrator I run these two notebooks, but when executing Process Silver it seems like the default lakehouse is inherited from the calling notebook: when I print the tables in the default lakehouse from Process Silver, it shows the Bronze tables.
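For reference, a minimal sketch of the orchestration described above, assuming notebookutils.notebook.run is used (the notebook names, timeout and the workspace id placeholder are illustrative, not the actual values):

# Orchestrator notebook (Bronze workspace) - illustrative sketch
notebookutils.notebook.run("Process Bronze", 600)
notebookutils.notebook.run("Process Silver", 600, {}, "<silver-workspace-id>")  # notebook lives in the Silver workspace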
I didn't know that .run keeps the same default lakehouse, but I can see it makes sense. The default lakehouse is set at Spark session start (you can parameterise it, though).
.run doesn't create a new Spark session, but reuses the existing one ("The notebook being referenced runs on the Spark pool of the notebook that calls this function."), see:
https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities
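If a notebook does start its own Spark session, the default lakehouse can be set (or parameterised) in the first cell with the %%configure magic; a rough sketch, where the lakehouse name and ids are placeholders:

%%configure
{
    "defaultLakehouse": {
        "name": "Silver",
        "id": "<silver-lakehouse-id>",
        "workspaceId": "<silver-workspace-id>"
    }
}

This doesn't help inside a .run call though, since the child notebook reuses the caller's session.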
What we do is explicitly use the ABFSS path rather than default lakehouses. (We also separate the notebooks/pipelines into a completely separate workspace, so we have to use ABFSS paths to specify lakehouses.)
So spark.read.format('delta').load('abfss://<silverworkspace>@onelake.dfs.fabric.microsoft.com/<silverlakehouse>/Tables/...')
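Writing works the same way with an explicit ABFSS path, for example (workspace, lakehouse and table names are placeholders):

df.write.format('delta').mode('overwrite').save('abfss://<silverworkspace>@onelake.dfs.fabric.microsoft.com/<silverlakehouse>/Tables/<table_name>')

That way the code is independent of whichever default lakehouse the Spark session happens to have.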