arnaudlebet
New Member

Read Lakehouse A, write Lakehouse B, using Python only (not Spark)

Hello all,

I've been struggling with a very simple and common use case which is reading from one Lakehouse (A) and writing to another Lakehouse (B) using a Notebook using Python environment only (not Spark). Both Lakehouses are within the same workspace.

 

I have some files in a Lakehouse that I need in order to run parameterized pipelines. It seems that my notebook always uses the "Default" lakehouse. So within the same notebook, I cannot read the configuration file from Lakehouse A and, in the same process, write something to another Lakehouse B using direct references like "lakehouse_A" and "lakehouse_B".

 

Is there a way to perform these cross-lakehouse operations in Fabric without much manual intervention?

4 REPLIES
v-tsaipranay
Community Support

Hi @arnaudlebet , 

 

We haven't received an update from you in some time. Could you please let us know if the issue has been resolved?
If you still require support, let us know; we are happy to assist you.

 

Thank you.

v-tsaipranay
Community Support

Hi @arnaudlebet ,

Thanks for reaching out to the Microsoft Fabric community forum.

 

Could you please let us know if the issue has been resolved? I wanted to check whether you have had a chance to review the information provided by @SaiTejaTalasila and @OnurOz. If you still require support, let us know; we are happy to assist you.

 

Thank you.

SaiTejaTalasila
Super User

Hi @arnaudlebet ,

 

As of now, Microsoft Fabric notebooks run on a Spark-powered runtime, even when you're using the "Python" environment. That environment is essentially a lightweight Spark session optimized for Python workloads, not a standalone Python kernel like you'd find in traditional Jupyter setups.

 

You can refer to the article below by Sandeep Pawar on programmatically changing the default lakehouse of a notebook.

https://fabric.guru/programmatically-removing-updating-default-lakehouse-of-a-fabric-notebook
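Related to that article: Fabric notebooks also document a `%%configure` magic that can set the default lakehouse for the session. A sketch (the name/id values are placeholders you must replace with your own items, and the cell must be run before anything else in the session; note this is documented for Spark sessions, so verify it applies to your runtime):

```
%%configure
{
    "defaultLakehouse": {
        "name": "<lakehouse_name>",
        "id": "<lakehouse-id>",
        "workspaceId": "<workspace-id>"
    }
}
```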

 

I hope this helps.

 

Thanks,

Sai Teja 

OnurOz
Advocate I

Hi Arnaudlebet,

 

Microsoft Fabric currently allows you to set only one default Lakehouse for your notebook session, which means all standard path-based operations (like simple Pandas file reads/writes) are directed to this default Lakehouse. However, there are methods to work with data across multiple Lakehouses within a single notebook in Python-only environments, though they do require some manual handling of file system paths.

 

You can add more than one Lakehouse to your notebook via the Lakehouse Explorer pane. However, only one Lakehouse can be set as "default" at a time, which controls where relative file paths operate.
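To make the "default" behavior concrete: the default Lakehouse is mounted into the notebook's local file system under `/lakehouse/default/`, which is why plain relative or local paths only ever reach that one Lakehouse. A minimal sketch (the `config_path` variable is just for illustration):

```python
import os

# Only the *default* lakehouse gets this local mount in a Fabric notebook;
# any other attached lakehouse must be addressed by its ABFS path instead.
DEFAULT_MOUNT = "/lakehouse/default"
config_path = os.path.join(DEFAULT_MOUNT, "Files", "config.csv")

# In a Fabric session you could now read it with ordinary file APIs,
# e.g. pd.read_csv(config_path).
```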

 

To work directly with files from different Lakehouses in the same notebook, use the ABFS (Azure Blob File System) absolute path for the files you want to read from or write to in the non-default Lakehouse. You can copy this path from the Lakehouse Explorer or the file's context menu.

 

When using Pandas (not Spark), you can reference these ABFS paths directly in your code when reading or writing files. For example:

 

import pandas as pd

# Read a configuration file from Lakehouse A using its OneLake ABFS path
config_df = pd.read_csv(
    "abfss://<workspace_name>@onelake.dfs.fabric.microsoft.com/<lakehouse_A_name>.Lakehouse/Files/config.csv"
)

# Process your data as required...
result_df = config_df  # placeholder for your actual transformation

# Write the result to Lakehouse B using its OneLake ABFS path
result_df.to_csv(
    "abfss://<workspace_name>@onelake.dfs.fabric.microsoft.com/<lakehouse_B_name>.Lakehouse/Files/result.csv",
    index=False,
)

 

Substitute <workspace_name>, <lakehouse_A_name>, and <lakehouse_B_name> with your actual workspace and Lakehouse names (GUIDs can also be used in place of the names, in which case the ".Lakehouse" suffix is omitted).
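Since both URIs share everything except the item name, a small helper keeps them consistent. A sketch (the function name `build_onelake_path` and the workspace/lakehouse names are made up for illustration):

```python
def build_onelake_path(workspace: str, lakehouse: str, relative_path: str) -> str:
    """Assemble a OneLake ABFS URI for a file inside a lakehouse's Files area."""
    return (
        f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/"
        f"{lakehouse}.Lakehouse/{relative_path}"
    )

# The config file in Lakehouse A and the output file in Lakehouse B:
src = build_onelake_path("MyWorkspace", "lakehouse_A", "Files/config.csv")
dst = build_onelake_path("MyWorkspace", "lakehouse_B", "Files/result.csv")
```

You can then pass `src`/`dst` straight to `pd.read_csv` / `DataFrame.to_csv` as in the example above.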

 

Also check this thread: https://community.fabric.microsoft.com/t5/Fabric-platform/Switching-Lakehouses-in-a-notebook/m-p/384...

 

Hope that helps.

Onur


😊 If this post helped you, feel free to give it some Kudos! 👍

And if it answered your question, please mark it as the accepted solution.

