AdamFry
Advocate I

Writing to a Lakehouse in a different workspace from a Notebook

Hi there, I've been struggling to find good documentation on how to read data from a lakehouse in one workspace and, after applying some transformations, write it to a different lakehouse in a different workspace. Is this possible? I have the following workspaces:

WORKSPACE_BRONZE that contains LAKEHOUSE_BRONZE

WORKSPACE_SILVER that contains LAKEHOUSE_SILVER

 

LAKEHOUSE_BRONZE has a CSV file in the Files section. I have a notebook and have added both lakehouses to it. I have some code like this to read the file:

 

FILE_TO_PROCESS = "MYFILE.csv"
BASE_PATH_TO_FOLDER = "MY/FOLDER/PENDING/"
# Relative paths like this resolve against the notebook's default lakehouse (LAKEHOUSE_BRONZE here)
df = spark.read.format("csv").option("header", "true").load(BASE_PATH_TO_FOLDER + FILE_TO_PROCESS)
 
After applying some schema validations and transformations, like adding a column for the file name, changing data types from strings to their actual types, and renaming the columns to remove spaces, my dataframe is looking good, and now I'd like to append it to a delta table in my silver lakehouse.
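For illustration, a minimal sketch of the kind of transformations described; the column names here are hypothetical placeholders, not from the actual file:

from pyspark.sql import functions as F

df = (
    df.withColumn("source_file", F.input_file_name())          # add a column for the file name
      .withColumn("quantity", F.col("quantity").cast("int"))   # cast strings to their actual types
      .withColumnRenamed("Customer Name", "customer_name")     # rename columns to remove spaces
)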
 
When I do the following: 
df.write.format("delta").mode("append").option("delta.columnMapping.mode", "name").saveAsTable("my_special_table")
 
It writes to the bronze (default) lakehouse. I've tried prefixing the table name with LAKEHOUSE_SILVER, but I get an error that the schema is not found:
 
df.write.format("delta").mode("append").option("delta.columnMapping.mode", "name").saveAsTable("LAKEHOUSE_SILVER.my_special_table")
 
One thing I tried was making the silver lakehouse the default lakehouse and then providing the full abfss file path when reading the file from bronze. That actually works, but there could be scenarios where I have multiple lakehouse sources from multiple workspaces, and I won't be able to solve those by managing the default lakehouse. So, in general, it would be nice to understand how I can explicitly write to a given lakehouse in a given workspace, but I am struggling to find the syntax. Can anyone point me to documentation or help me understand the syntax?
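For reference, the workaround read looks roughly like this; the path is built from the workspace and file names used in this post:

# Fully qualified OneLake path; resolves the same way regardless of the default lakehouse
bronze_path = (
    "abfss://WORKSPACE_BRONZE@onelake.dfs.fabric.microsoft.com/"
    "LAKEHOUSE_BRONZE.Lakehouse/Files/MY/FOLDER/PENDING/MYFILE.csv"
)
df = spark.read.format("csv").option("header", "true").load(bronze_path)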
 
Thank you very much in advance if anyone can shed some light here!
1 ACCEPTED SOLUTION
frithjof_v
Community Champion

You can use the fully qualified path to write to a Lakehouse in another workspace. 

 

Please see this article; it helped me:

https://murggu.medium.com/databricks-and-fabric-writing-to-onelake-and-adls-gen2-671dcf24cf33

 

So, to write to a table (new or existing) in a Lakehouse in another workspace, I think you can write it like this:

df.write.format("delta").mode("append").save(f"abfss://{workspace_name}@onelake.dfs.fabric.microsoft.com/{lakehouse_name}.Lakehouse/Tables/{table_name}")
 
or, if your object names contain special characters or whitespace, you could use the IDs instead:
 
df.write.format("delta").mode("append").save(f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/{lakehouse_id}/Tables/{table_name}")
 
 
For reading you could also use the fully qualified path, as you have already done. Then I think the whole process should be independent of the default lakehouse.
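Putting the two together, a minimal end-to-end sketch under these assumptions; the workspace, lakehouse, and table names are the placeholders from this thread:

# Read from bronze and append to silver, both via fully qualified OneLake paths,
# so the notebook's default lakehouse never matters
src = "abfss://WORKSPACE_BRONZE@onelake.dfs.fabric.microsoft.com/LAKEHOUSE_BRONZE.Lakehouse/Files/MY/FOLDER/PENDING/MYFILE.csv"
dst = "abfss://WORKSPACE_SILVER@onelake.dfs.fabric.microsoft.com/LAKEHOUSE_SILVER.Lakehouse/Tables/my_special_table"

df = spark.read.format("csv").option("header", "true").load(src)
df.write.format("delta").mode("append").save(dst)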


5 REPLIES

Thank you so much!

Anonymous
Not applicable

Hi @AdamFry ,

Glad to hear that your issue got resolved. Please continue using the Fabric Community for any further queries.

Element115
Power Participant

For syntax and stuff, try asking Copilot or ChatGPT. I usually get pretty good feedback.

AdamFry
Advocate I

Apologies for not using the code block for the code in my post. I tried editing my post to add it, but I got an invalid HTML error, so hopefully this is OK posted as-is.
