smpa01
Super User

Mounting NB on-the-fly

I am running some code in a standalone notebook, and it is currently giving me the following error:

Spark SQL queries are only possible in the context of a lakehouse. Please attach a lakehouse to proceed

 

How can I programmatically mount/add a lakehouse to a NB prior to executing that code? I tried the following:

notebookutils.fs.mount('abfss://ws_id@onelake.dfs.fabric.microsoft.com/lh_id','/lakehouse/default')

and then I tried running the Spark code, which still gave me the same error as above.
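For anyone hitting the same thing, a quick sanity check (my sketch, not from the original post; it assumes notebookutils.fs.mounts() and notebookutils.fs.getMountPath() behave like their documented mssparkutils counterparts) is to confirm the mount actually registered and see where it resolves locally:

# Sketch: verify the mount registered and find its local path
print(notebookutils.fs.mounts())                            # the '/lakehouse/default' entry should appear here
print(notebookutils.fs.getMountPath('/lakehouse/default'))  # local file-system path backing the mount

Note that, as the rest of the thread suggests, a successful mount only gives file-system access; it does not by itself set a default lakehouse for Spark SQL.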

 

Is it doable?

 

10 REPLIES
smpa01
Super User

@frithjof_v thanks for all this. Give me a day or two to get back.

frithjof_v
Super User

Hopefully something in this blog post can help achieve what you are asking for: https://fabric.guru/how-to-mount-a-lakehouse-and-identify-the-mounted-lakehouse-in-fabric-notebook

This post describes how to mount a lakehouse to a NB, which is exactly what I have already provided (good so far),

but it still does not execute code that a Notebook manually attached to a Lakehouse would have executed successfully.

 

The objective of programmatic mounting is to see whether the succeeding code executes correctly, as it would have with a manually attached lakehouse.


I see, I will do a test on my side if I find the time.

 

Btw I see you are using notebookutils whereas the blog post is using mssparkutils.

 

Will it work if you use mssparkutils instead of notebookutils?

@frithjof_v I will check and let you know.


I must admit I am not sure what it means to Attach or Mount a lakehouse to a Notebook. Are Attach and Mount the same thing?

 

Anyway, I am unsure if the mssparkutils mount method is able to make a lakehouse the default lakehouse for the Notebook.

Okay, it can mount one or more lakehouses to the Notebook, which per my current understanding makes it possible to use relative directory paths (e.g. useful for os module and Pandas).

However, if you want to use Spark SQL, I guess we would need to be able to mount a lakehouse as the default Lakehouse of the Notebook? I am not sure if the mssparkutils mount method can do that.
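To illustrate the file-path side of mounting, here is a minimal sketch (my addition; it follows the mssparkutils mount API used in the blog post, and the mount point and file name are hypothetical):

import os
import pandas as pd

# Mount a lakehouse for file access (placeholder IDs, hypothetical mount point)
mssparkutils.fs.mount(
    "abfss://<workspaceID>@onelake.dfs.fabric.microsoft.com/<lakehouseID>",
    "/my_lakehouse"
)

# Resolve the local path of the mount point and use it with plain Python I/O
local_path = mssparkutils.fs.getMountPath("/my_lakehouse")
print(os.listdir(f"{local_path}/Files"))                # browse the lakehouse Files area
df = pd.read_csv(f"{local_path}/Files/some_file.csv")   # hypothetical file name

This covers the os/Pandas use case, but it does not make the lakehouse the default lakehouse for Spark SQL.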

 

It seems I am able to mount a Lakehouse as the default Lakehouse by using the solution to this forum thread: Solved: Re: How to set default Lakehouse in the notebook p... - Microsoft Fabric Community

 

However, I need to use this code (the -f argument needs to be added):

 

%%configure -f
{
    "defaultLakehouse": {  
        "name": "<lakehouseName>",
        "id": "<lakehouseID>",
        "workspaceId": "<workspaceID>"
    }
}

 

 

It means the Livy session will need to restart. I don't know exactly what the Livy session is, but it seems all variables get lost by doing that.

 

I also didn't find out how to insert the lakehouseName, lakehouseID and workspaceID as variables.
So I had to hardcode those values in the JSON structure in the %%configure cell.
I guess it should be possible to pass variables into the JSON structure, however at the moment I don't know how to do that.
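As a possible lead (my note, not something I have tested): Fabric documents a parameterized form of %%configure for notebooks run from a pipeline, where a value is replaced by a parameterName/defaultValue pair that a pipeline notebook activity can override. Two caveats: it is meant for pipeline runs rather than interactive variables, and whether it applies to the defaultLakehouse fields at all is my assumption:

%%configure -f
{
    "defaultLakehouse": {
        "name": {
            "parameterName": "lakehouseName",
            "defaultValue": "<lakehouseName>"
        },
        "id": {
            "parameterName": "lakehouseID",
            "defaultValue": "<lakehouseID>"
        },
        "workspaceId": {
            "parameterName": "workspaceID",
            "defaultValue": "<workspaceID>"
        }
    }
}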

 

Other than that, it seemed to work when I hardcoded the values.
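A quick way to verify that the default lakehouse really is attached after the %%configure cell (a sketch; it assumes the lakehouse contains at least one table) is to ask the Spark catalog what it can now resolve:

# With the default lakehouse attached, this should list its tables
for t in spark.catalog.listTables():
    print(t.name)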

Here is a new blog post with some information regarding the %%configure approach to attaching a default lakehouse programmatically:

 

How to Attach a Default Lakehouse to a Notebook in Fabric

 

 

It also states that you cannot attach a default lakehouse by using the mssparkutils.fs.mounts() approach.

This worked for me in a Notebook which has no default lakehouse. The default lakehouse gets attached programmatically in the %%configure code cell.
I hardcoded the values for lakehouseName, lakehouseID and workspaceID (as mentioned in previous comment).

 

%%configure -f
{
    "defaultLakehouse": {  
        "name": "<lakehouseName>",
        "id": "<lakehouseID>",
        "workspaceId": "<workspaceID>"
    }
}

 

 

 

%%sql
CREATE OR REPLACE TEMPORARY VIEW Dim_Product_temp_vw
AS 
SELECT *, CURRENT_TIMESTAMP AS load_timestamp 
FROM Dim_Product 

 

 

 

df = spark.sql("SELECT * FROM Dim_Product_temp_vw")

df.write.mode("append").saveAsTable("Dim_Product_2")

 

 

However, if you want to switch to another default Lakehouse on-the-fly, the temporary view will not be available after the switch,

because the session state (variables, temporary views) is lost when running the %%configure -f code cell.

 

So I guess you would need to solve that using a workaround.
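One possible workaround is to persist anything you still need before switching; a minimal sketch (my addition; the staging table name is hypothetical, and I assume writing to a Tables/ path over abfss works as in the example further down):

# Persist the intermediate result as a table before re-running %%configure -f,
# since the session restart clears temp views and variables.
df_stage = spark.sql("SELECT * FROM Dim_Product_temp_vw")
df_stage.write.format("delta").mode("overwrite").save(
    "abfss://<workspaceID>@onelake.dfs.fabric.microsoft.com/<lakehouseID>/Tables/Dim_Product_staging"
)

After switching the default lakehouse, the staged table can be read back with spark.read.load on the same path.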

@smpa01 
Could you explain in more detail what you want to do with the data?
Moving data from one workspace to another?

There could be easier ways to accomplish this than attaching a Lakehouse programmatically to the Notebook.

Here is a solution which doesn't involve mounting or attaching Lakehouses to the Notebook:

 

Perhaps you can use temporary views to be able to work with Spark SQL without having a default Lakehouse for your Notebook.

 

In this example I have a Notebook without any default Lakehouse and without any mounted/attached lakehouses.

 

I create a dataframe to connect to data from a lakehouse using the abfss path, and create a temporary view based on that dataframe. 

 

 

df = spark.read.load("abfss://<workspaceID>@onelake.dfs.fabric.microsoft.com/<lakehouseID>/Tables/<tableName>")

df.createOrReplaceTempView("MyView_Read")

 

 

I use Spark SQL to do some modifications to the temporary view, and save it as another temporary view.

 

%%sql
CREATE OR REPLACE TEMPORARY VIEW MyView_Write AS SELECT *, Current_Timestamp as LoadTime FROM MyView_Read;

 

 

I create a new dataframe from the new temporary view.

I then write this new dataframe to another lakehouse table in another workspace.

 

df_write = spark.sql("SELECT * FROM MyView_Write")

df_write.write.mode("append").save("abfss://<anotherWorkspaceID>@onelake.dfs.fabric.microsoft.com/<anotherLakehouseID>/Tables/<anotherTableName>")

 

 

This seems to work fine for me.
