We have the following notebooks:
- Notebook A, with no default lakehouse
- Notebook B, with a default lakehouse
Is it an expected scenario that when calling Notebook B from Notebook A, the default lakehouse is taken from Notebook A (in this case none)?
In fact, while executing the scenario above through mssparkutils.notebook.run(), we get the following error:
AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Spark SQL queries are only possible in the context of a lakehouse. Please attach a lakehouse to proceed.
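For reference, the call from Notebook A looks roughly like this (a minimal sketch; the notebook name and timeout are placeholders):

from notebookutils import mssparkutils

# Notebook A has no default lakehouse attached; calling Notebook B from here
# raises the error above, even though Notebook B has its own default lakehouse.
result = mssparkutils.notebook.run("Notebook B", 90)
print(result)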
Maybe it can work to use this code in the first cell in Notebook B?
%%configure -f
{
    "defaultLakehouse": {
        "name": "<lakehouseName>",
        "id": "<lakehouseID>",
        "workspaceId": "<workspaceID>"
    }
}
This might force Notebook B to use its own default lakehouse, even when it's called from Notebook A.
(I don't have any knowledge regarding your question: Is it an expected scenario that when calling Notebook B from Notebook A, the default lakehouse is taken from Notebook A (in this case none)?
I hope someone else can answer that.)
Maybe this article is somehow relevant?
https://fabric.guru/how-to-attach-a-default-lakehouse-to-a-notebook-in-fabric
However, I don't have experience with calling a notebook from another notebook.
So I hope someone else can answer 😄
Btw, I think you can run Spark SQL in Notebook B even if it doesn't have a default Lakehouse.
However, I think it means you would need to use temporary views in your code. At least that worked for me. See one of the last comments here: https://community.fabric.microsoft.com/t5/Data-Engineering/Mounting-NB-on-the-fly/m-p/4057128#M3222
Or you could maybe use plain PySpark (without using Spark SQL).
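For illustration, a rough sketch of the temporary-view idea, assuming the data can be read directly by its OneLake path (the abfss path, table, and view names below are placeholders):

# Read a Delta table directly by path, so no default lakehouse is required.
# The abfss path is a placeholder for your own lakehouse table location.
df = spark.read.format("delta").load(
    "abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Tables/<table>"
)

# Register a temporary view so Spark SQL can query it without a lakehouse context.
df.createOrReplaceTempView("my_temp_view")

display(spark.sql("SELECT COUNT(*) AS row_count FROM my_temp_view"))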
Hi @maxkent ,
Thanks for the reply from @frithjof_v. The code provided is very good!
I created three Notebooks for testing, where Notebook6 and Notebook8 are associated with a lakehouse, while Notebook9 does not have a default lakehouse.
When I call notebook8 in notebook6 it works fine and when I call notebook9 in notebook6 some error occurs.
from notebookutils import mssparkutils

# Call Notebook 9 (which has no default lakehouse) with a 60-second timeout.
result = mssparkutils.notebook.run("Notebook 9", 60)
display(result)
Since Notebook A does not have a default lakehouse attached, this (empty) lakehouse context is passed to Notebook B, resulting in the error you encountered.
You can use the %%configure method @frithjof_v mentioned to set a default lakehouse for the notebook.
You can also use the non-coding method in the screenshot below to assign a lakehouse context to the notebook.
You can see that it runs well after assigning the lakehouse context.
If you have any other questions please feel free to contact me.
Best Regards,
Yang
Community Support Team
If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
Yes, I am using Spark SQL in Notebook B.
The point is that we need Notebook B to use its own default lakehouse, which it does when run individually, but does NOT when called from Notebook A.
As per your suggestion:
It seems that when you call a notebook from another notebook, the default lakehouse of the called notebook (Notebook B) is overwritten with that of the caller (Notebook A). In the screenshot above there is none because, as I wrote in the post itself, Notebook A in our case has no default lakehouse.
Are you using Spark SQL in Notebook A or Notebook B?
I think it is possible to use Spark SQL in a Notebook without having a default lakehouse.
Possibly a related thread: Mounting NB on-the-fly - Microsoft Fabric Community
If you want to examine the default lakehouse in a Notebook run, I think you can use mssparkutils.fs.mounts() inside the Notebook, and look for a lakehouse with scope = 'default_lh' and then look at the source path to identify which Lakehouse is the default lakehouse.
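For example, something along these lines (a sketch; the exact attribute names on the mount objects may vary by runtime version):

from notebookutils import mssparkutils

# List all mounts in the current session and pick out the default lakehouse.
for mount in mssparkutils.fs.mounts():
    if mount.scope == "default_lh":
        # The source path shows which lakehouse is attached as the default.
        print(mount.mountPoint, mount.source)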