I have a notebook in workspace wk1 that reads table1 from lakehouse1, performs a series of transformations, and then saves the resulting Delta table to lakehouse2 in workspace wk2. However, I get an error saying the lakehouse cannot be found in the catalog, even though it is included in the notebook as an item. How can I make it visible, and how should I use save or saveAsTable to make this work? The lakehouses do not use schemas.
Solved! Go to Solution.
Hi @lsoriaga,
You can read the source table with a relative path (relative paths resolve against the notebook's default lakehouse, so lakehouse1 should be the default or attached):
# Read the Delta table from lakehouse1
df = spark.read.format("delta").load("Tables/table1")
If you want to use the catalog reference:
df = spark.table("lakehouse1.table1")
# Save as Delta into lakehouse2. Note: a relative "Tables/..." path resolves
# against the notebook's default lakehouse, so this writes to lakehouse2 only
# if lakehouse2 is set as the default; otherwise use its full ABFSS path.
df_transformed.write.format("delta").mode("overwrite").save("Tables/new_table")
Or register it as a table in the Spark catalog for lakehouse2:
df_transformed.write.format("delta").mode("overwrite").saveAsTable("lakehouse2.new_table")
If this response was helpful in any way, I'd gladly accept a 👍, much like the joy of seeing a DAX measure work first time without needing another FILTER.
Please mark it as the correct solution. It helps other community members find their way faster (and saves them from another endless loop 🌀).
Hi @lsoriaga
When a notebook lives in Workspace 1, it cannot automatically see a Lakehouse from Workspace 2; you must explicitly attach that Lakehouse to the notebook as an item (via the notebook's Lakehouse explorer).
Referencing the Lakehouse in a code cell is not enough — it must be attached at the notebook level.
Once Lakehouse2 is attached, use:
# Write as a Delta table into Lakehouse2
df.write.format("delta").mode("overwrite").save("Tables/my_new_table")
or, to register a table in the Lakehouse catalog:
df.write.format("delta").mode("overwrite").saveAsTable("lakehouse2.my_new_table")
(You don't need schema prefixes because these Lakehouses are schema-less.)
saveAsTable only works after the Lakehouse is attached to the notebook; if it is not attached, Spark will throw "Lakehouse not found in catalog".
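The attachment rule can be sketched as a plain-Python predicate (hypothetical names for illustration; in a real notebook you would check spark.catalog.databaseExists("lakehouse2") instead):

```python
# Illustration only: a two-part name passed to saveAsTable resolves just when
# the lakehouse half is attached to the notebook (i.e., visible in the catalog).
def can_save_as_table(target: str, attached_lakehouses: set[str]) -> bool:
    lakehouse, _, table = target.partition(".")
    return bool(table) and lakehouse in attached_lakehouses

print(can_save_as_table("lakehouse2.my_new_table", {"lakehouse1"}))
# Only lakehouse1 attached: False -> "Lakehouse not found in catalog"
print(can_save_as_table("lakehouse2.my_new_table", {"lakehouse1", "lakehouse2"}))
# After attaching lakehouse2 as an item: True
```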
If this solved your issue, please mark the answer as "Accepted" and give it a kudo.
Best regards
Nabha Ahmed
Hi @lsoriaga
May I check if this issue has been resolved? If not, please feel free to contact us with any further questions.
Thank you
Hi @lsoriaga ,
I wanted to check if you had the opportunity to review the valuable information provided by @Zanqueta and @deborshi_nag. Please feel free to contact us if you have any further questions.
Thank you.
Alternatively, from your notebook you can write to lakehouse2 in wk2 directly using the full ABFSS path:
df.write.format("delta").mode("overwrite").save("abfss://<guid1>@onelake.dfs.fabric.microsoft.com/<guid2>/Tables/my_delta_table")
You can get the ABFSS path of lakehouse2 by opening the Properties pane of the table you plan to store data into.
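To avoid hand-assembling the URL, the path pattern can be captured in a small helper (a sketch; onelake_table_path is a hypothetical name, and the GUID arguments are the placeholders you copy from the Properties pane):

```python
# Hypothetical helper: assemble the OneLake ABFSS path for a table in another
# workspace. workspace_id and lakehouse_id are the GUIDs shown in the
# lakehouse/table Properties pane in Fabric.
def onelake_table_path(workspace_id: str, lakehouse_id: str, table: str) -> str:
    return (
        f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse_id}/Tables/{table}"
    )

# In the notebook, the cross-workspace write then becomes:
# path = onelake_table_path("<guid1>", "<guid2>", "my_delta_table")
# df.write.format("delta").mode("overwrite").save(path)
```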