maxkent
Frequent Visitor

Recursive notebook calls and the default lakehouse

We have the following notebooks:

 

- Notebook A, with no default lakehouse

- Notebook B, with a default lakehouse

 

Is it an expected scenario that when calling Notebook B from Notebook A, the default lakehouse is taken from Notebook A (in this case none)? 

 

In fact, when executing the scenario above through mssparkutils.notebook.run(), we get the following error:

 

 

AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Spark SQL queries are only possible in the context of a lakehouse. Please attach a lakehouse to proceed.

 

 


4 REPLIES
Anonymous
Not applicable

Hi @maxkent ,

 

Thanks for the reply from @frithjof_v. The code provided is very good!

 

I created three Notebooks for testing, where Notebook6 and Notebook8 are associated with a lakehouse, while Notebook9 does not have a default lakehouse.

 

When I call Notebook8 from Notebook6 it works fine, but when I call Notebook9 from Notebook6 the following error occurs.

from notebookutils import mssparkutils

# Run the child notebook (which has no default lakehouse) with a 60-second timeout
result = mssparkutils.notebook.run("Notebook 9", 60)

display(result)

[Screenshot: error when calling Notebook9 from Notebook6]

 

Since Notebook A does not have a default lakehouse, this context is passed to Notebook B, resulting in the error you encountered.

 

You can use the "%%configure" method frithjof_v mentioned to set up a default lakehouse for the notebook.

 

You can also use the no-code method shown in the screenshot below to assign a lakehouse context to the notebook.

[Screenshot: assigning a lakehouse to the notebook through the notebook UI]

 

You can see that it runs well after assigning the lakehouse context.

[Screenshot: the notebook runs successfully after the lakehouse is assigned]

 

If you have any other questions, please feel free to contact me.

 

Best Regards,
Yang
Community Support Team

 

If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!

maxkent
Frequent Visitor

Yes, I am using Spark SQL in Notebook B.

 

The point is that we need Notebook B to use its own default lakehouse, which it does when run individually, but does NOT when called from Notebook A.

 

As per your suggestion:

[Screenshot: mssparkutils.fs.mounts() output showing no default lakehouse]

It seems that when you call a notebook from another notebook, the default lakehouse of the called notebook (Notebook B) is overwritten by that of the calling notebook (Notebook A). In the screenshot above there is none because, as I wrote in the original post, Notebook A has no default lakehouse.

frithjof_v
Super User

Maybe it would work to use this code in the first cell of Notebook B?

 

 

 

 

%%configure -f
{
    "defaultLakehouse": {  
        "name": "<lakehouseName>",
        "id": "<lakehouseID>",
        "workspaceId": "<workspaceID>"
    }
}

 

 

 

 

The idea is to force Notebook B to use this default lakehouse even when it is called from Notebook A.
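As an additional, untested sketch: Notebook A could instead pass the lakehouse details to Notebook B as run arguments, so that Notebook B builds its paths from parameters rather than relying on a default lakehouse. The notebook name and parameter names below are placeholders.

# In Notebook A (sketch only; notebook name and parameter names are hypothetical)
from notebookutils import mssparkutils

result = mssparkutils.notebook.run(
    "Notebook B",
    90,
    {"lakehouse_id": "<lakehouseID>", "workspace_id": "<workspaceID>"},
)

# In Notebook B, a parameters cell would declare lakehouse_id and workspace_id with
# default values, and the code would build OneLake paths from them.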

 

(I don't have an answer to your other question, i.e. whether it is expected that the default lakehouse is taken from Notebook A (in this case none) when calling Notebook B from Notebook A.

I hope someone else can answer that.)

 

Maybe these articles are somehow relevant? 

https://learn.microsoft.com/en-us/fabric/data-engineering/configure-high-concurrency-session-noteboo...

and

https://fabric.guru/how-to-attach-a-default-lakehouse-to-a-notebook-in-fabric

 

However, I don't have experience with calling a notebook from another notebook.

So I hope someone else can answer 😄

 

 

Btw, I think you can run Spark SQL in Notebook B even if it doesn't have a default Lakehouse.

However, I think it means you would need to use temporary views in your code. At least, that worked for me. Ref. one of the last comments here: https://community.fabric.microsoft.com/t5/Data-Engineering/Mounting-NB-on-the-fly/m-p/4057128#M3222

 

Or you could maybe use plain PySpark (without using Spark SQL).
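A minimal sketch of the temporary-view approach, assuming Notebook B loads a Delta table directly from OneLake by path (the workspace, lakehouse and table names are placeholders):

# Notebook B, with no default lakehouse attached (sketch; the path parts are placeholders)
abfss_path = (
    "abfss://<workspaceName>@onelake.dfs.fabric.microsoft.com/"
    "<lakehouseName>.Lakehouse/Tables/my_table"
)

# Load the Delta table by its OneLake path instead of by lakehouse table name
df = spark.read.format("delta").load(abfss_path)

# Register a temporary view so Spark SQL can run without a default lakehouse
df.createOrReplaceTempView("my_table")
display(spark.sql("SELECT COUNT(*) AS row_count FROM my_table"))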

frithjof_v
Super User

Are you using Spark SQL in Notebook A or Notebook B?

I think it is possible to use Spark SQL in a Notebook without having a default lakehouse.

 

Possibly a related thread: Mounting NB on-the-fly - Microsoft Fabric Community

 

If you want to examine the default lakehouse in a Notebook run, I think you can use mssparkutils.fs.mounts() inside the Notebook: look for a mount with scope = 'default_lh', then check its source path to identify which lakehouse is the default.

 

[Screenshot: mssparkutils.fs.mounts() output]
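A rough sketch of that check (the attribute names on the mount entries are assumptions based on the description above):

from notebookutils import mssparkutils

# Look through the mounts of the current session and print the default lakehouse, if any.
# The 'scope', 'mountPoint' and 'source' attribute names are assumed here.
for mount in mssparkutils.fs.mounts():
    if mount.scope == "default_lh":
        print("Default lakehouse:", mount.mountPoint, "->", mount.source)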

 
