Hey
I have a problem with using NotebookUtils. My design is that I have two workspaces with the following items:
-BRONZE WORKSPACE-
Notebook: Orchestrator - default lakehouse is Bronze
Notebook: Process Bronze - default lakehouse is Bronze
Lakehouse: Bronze
-SILVER WORKSPACE-
Notebook: Process Silver - default lakehouse is Silver
Lakehouse: Silver
From the Orchestrator I run both notebooks, but when Process Silver executes, it seems to inherit the default lakehouse from the calling notebook: when I print the tables of the default lakehouse from Process Silver, it shows the Bronze tables.
I didn't know that .run keeps the same default lakehouse, but I can see it makes sense. The default lakehouse is set at Spark session start (you can parameterise it, though).
.run doesn't create a new Spark session; it reuses the existing one ("The notebook being referenced runs on the Spark pool of the notebook that calls this function."), per:
https://learn.microsoft.com/en-us/fabric/data-engineering/notebook-utilities
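On the "you can parameterise it" point: in a Fabric notebook the default lakehouse can be set at Spark session start with a `%%configure` cell. A minimal sketch, with placeholder GUIDs (substitute your own lakehouse and workspace IDs):

```
%%configure -f
{
    "defaultLakehouse": {
        "name": "Silver",
        "id": "<silver-lakehouse-guid>",
        "workspaceId": "<silver-workspace-guid>"
    }
}
```

Run it as the first cell of the notebook; the `-f` flag forces the new configuration to take effect by restarting the session if one is already running.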
What we do is reference lakehouses explicitly by ABFSS path rather than relying on default lakehouses. (We also keep the notebooks/pipelines in a completely separate workspace, so we have to use ABFSS paths to specify lakehouses anyway.)
So: spark.read.format('delta').load('abfss://<silverworkspace>@onelake.dfs.fabric.microsoft.com/<silverlakehouse>/Tables/...')
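To make that pattern concrete, here is a minimal sketch of a small path-builder helper. The function name and the workspace/lakehouse names are hypothetical, not part of the original answer:

```python
def onelake_table_path(workspace: str, lakehouse: str, table: str) -> str:
    """Build an ABFSS URI for a table in a OneLake lakehouse.

    workspace and lakehouse can be GUIDs; display names also work, but
    names containing spaces are safer passed as GUIDs (and with a display
    name the lakehouse segment may need a '.Lakehouse' suffix).
    """
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}/Tables/{table}"
    )

# Hypothetical names for illustration:
silver_customers = onelake_table_path("SilverWorkspace", "Silver", "customers")

# In a Fabric notebook you would then read the table regardless of the
# session's default lakehouse, e.g.:
# df = spark.read.format("delta").load(silver_customers)
```

Keeping the path construction in one helper also means a wrong default lakehouse inherited from the calling notebook can never silently redirect a read or write.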