
lsoriaga
Frequent Visitor

How can I save a delta table from lakehouse1 stored in wk1 to lakehouse2 in wk2 using a NB pyspark?

I have a notebook in wk1 that reads table1 from lakehouse1, performs a series of transformations, and then saves the new delta table to lakehouse2 in wk2. However, I get an error saying the lakehouse cannot be found in the catalog, even when it is added to the notebook as an item. How can I make it visible? How should I use the save or saveAsTable statement to make this work? The lakehouses are created without schemas.

2 ACCEPTED SOLUTIONS
Zanqueta
Super User

Hi @lsoriaga,

 

To achieve this in Microsoft Fabric using PySpark in a Notebook, there are a few important points to consider:

Why the Error Occurs

  • When you attach a Lakehouse to a Notebook, it becomes visible in the Spark catalog only for that session.
  • If you try to reference another Lakehouse (from a different workspace) without attaching it, Spark cannot find it because it is not mounted in the current session.
  • Lakehouses in Fabric do not use schemas by default, so you must reference them by their catalog name when attached.

 

Solution: Attach Both Lakehouses to the Notebook

  1. Open your Notebook in wk1.
  2. In the Items pane, attach:
    • lakehouse1 (source)
    • lakehouse2 (destination)
  3. After attaching, both Lakehouses will appear in the Spark catalog under their names.
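
Once both are attached, the end-to-end flow can be sketched as below (a minimal sketch; `qualified_name` is a hypothetical helper, and the lakehouse/table names are the ones from the question):

```python
def qualified_name(lakehouse: str, table: str) -> str:
    """Two-part catalog name for a table in a schema-less Fabric lakehouse."""
    return f"{lakehouse}.{table}"

# In the notebook (Fabric provides the `spark` session), the flow would be:
# df = spark.table(qualified_name("lakehouse1", "table1"))
# df_transformed = df  # ...your transformations here...
# (df_transformed.write.format("delta")
#     .mode("overwrite")
#     .saveAsTable(qualified_name("lakehouse2", "new_table")))
```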

  

# Read the Delta table from lakehouse1
# (a relative "Tables/..." path resolves against the notebook's default lakehouse)
df = spark.read.format("delta").load("Tables/table1")

If you want to use the catalog reference:

df = spark.table("lakehouse1.table1")

 

Write to Lakehouse2

You can save the transformed DataFrame as a Delta table in lakehouse2:
 
 
# Save as Delta in lakehouse2
# Note: a relative "Tables/..." path writes to the notebook's *default* lakehouse,
# so pin lakehouse2 as the default first, or use saveAsTable / the full abfss
# path of lakehouse2 instead
df_transformed.write.format("delta").mode("overwrite").save("Tables/new_table")

Or register it as a table in the Spark catalog for lakehouse2:

df_transformed.write.format("delta").mode("overwrite").saveAsTable("lakehouse2.new_table")

 

Question for you:
Do you want the table in lakehouse2 to replace the existing one each time (overwrite), or append new data for historical snapshots?
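
For reference, the two options differ only in the write mode; sketched here with a hypothetical `write_table` helper:

```python
def write_table(df, name: str, mode: str = "overwrite") -> None:
    # "overwrite" replaces the table on every run;
    # "append" adds the new rows, keeping earlier loads as history.
    if mode not in ("overwrite", "append"):
        raise ValueError(f"unsupported mode: {mode}")
    df.write.format("delta").mode(mode).saveAsTable(name)

# write_table(df_transformed, "lakehouse2.new_table")            # replace each run
# write_table(df_transformed, "lakehouse2.new_table", "append")  # keep snapshots
```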
 

If this response was helpful in any way, I’d gladly accept a 👍, much like the joy of seeing a DAX measure work the first time without needing another FILTER.

Please mark it as the correct solution. It helps other community members find their way faster (and saves them from another endless loop 🌀).

 

 


Nabha-Ahmed
Super User

Hi @lsoriaga 

When a notebook is in Workspace 1, it cannot automatically see a Lakehouse from Workspace 2 unless you explicitly attach it as a Lakehouse item inside the notebook’s workspace.
Adding a Lakehouse to the catalog inside the code cell is not enough — it must be attached as an item at the notebook level.

To make Lakehouse2 visible:

  1. Open the notebook in wk1.
  2. Click Add > Lakehouse.
  3. Select lakehouse2 from wk2.
  4. After attaching it, you will see it in the left catalog panel and Spark can reference it.

Saving the transformed table into Lakehouse2

Once Lakehouse2 is attached, use:

# Write as Delta file into Lakehouse2
df.write.format("delta").mode("overwrite").save("Tables/my_new_table")

or to register a table in the Lakehouse catalog:

df.write.format("delta").mode("overwrite").saveAsTable("lakehouse2.my_new_table")

(No schema prefix is needed, because these Lakehouses were created without schemas.)

Important:

saveAsTable only works after the Lakehouse is attached to the notebook; if it is not attached, Spark throws the “Lakehouse not found in catalog” error.
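
To fail fast with a clearer message, you can check the catalog before writing (a sketch; `require_lakehouse` is a hypothetical helper, and `spark.catalog.databaseExists` is available in recent Spark versions):

```python
def require_lakehouse(spark, name: str) -> None:
    # Attached lakehouses appear as databases in the Spark catalog;
    # raise a clear error instead of hitting "Lakehouse not found in catalog".
    if not spark.catalog.databaseExists(name):
        raise RuntimeError(
            f"Lakehouse '{name}' is not attached to this notebook; "
            "add it via Add > Lakehouse before writing."
        )

# In the notebook:
# require_lakehouse(spark, "lakehouse2")
```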


If this solved your issue, please mark the answer as “Accepted”; kudos are always appreciated.

Best regards 

Nabha Ahmed


5 REPLIES
v-nmadadi-msft
Community Support

Hi @lsoriaga 

May I check whether this issue has been resolved? If not, please feel free to reach out with any further questions.


Thank you


v-nmadadi-msft
Community Support

Hi @lsoriaga ,


I wanted to check whether you had the opportunity to review the valuable information provided by @Zanqueta and @deborshi_nag. Please feel free to contact us if you have any further questions.


Thank you.

deborshi_nag
Impactful Individual

From your notebook, use the following command to write to lakehouse2 in wk2 directly:

 

df.write.format("delta").mode("overwrite").save("abfss://<guid1>@onelake.dfs.fabric.microsoft.com/<guid2>/Tables/my_delta_table")

 

You can get the ABFSS path of lakehouse2 from the Properties pane of the table you plan to store data into.
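
The path can also be assembled programmatically; a sketch with a hypothetical helper (the `<guid1>`/`<guid2>` placeholders stand for the workspace and lakehouse IDs from the Properties pane, which this helper does not fill in):

```python
def onelake_table_path(workspace: str, lakehouse: str, table: str) -> str:
    # `workspace` and `lakehouse` are the IDs shown in the table's Properties
    # pane; OneLake also accepts display names, in which case the lakehouse
    # segment needs a ".Lakehouse" suffix.
    return (f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
            f"{lakehouse}/Tables/{table}")

# df.write.format("delta").mode("overwrite").save(
#     onelake_table_path("<guid1>", "<guid2>", "my_delta_table"))
```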

 

 

I trust this will be helpful. If you found this guidance useful, you are welcome to acknowledge with a Kudos or by marking it as a Solution.
