In a Python notebook, is there a way to read a Delta table from the default lakehouse, which is already mounted, instead of using an absolute path with the workspace?
I would prefer to use relative paths. The code snippets for writing to a lakehouse table have this option, but the snippet for reading from a table seems to only offer the full path 😕
Yes, you can read Delta tables in Microsoft Fabric notebooks using relative paths, similar to how you write them. The key is to reference the mounted Lakehouse path, typically under /lakehouse/default for the default Lakehouse.
If the Lakehouse is already mounted in the notebook (as it usually is), you can use a relative path like:
from deltalake import DeltaTable

# Relative path to the Delta table inside the default lakehouse
relative_table_path = "/lakehouse/default/Files/tables/my_table"

# Read the Delta table
dt = DeltaTable(relative_table_path)
df = dt.to_pyarrow_dataset().to_table().to_pandas()
display(df)
Do not use the full Fabric workspace URL (abfss://...) unless required for advanced use cases or cross-Lakehouse access. For Tables instead of Files, use /lakehouse/default/Tables/my_table if that is where your Delta tables are written.
Thanks! For this Delta table, why does it need several intermediate functions to load into a dataframe:
dt.to_pyarrow_dataset().to_table().to_pandas()
Is there a better way to read a Delta table into a DataFrame to manipulate in a Python notebook?
What is "to_pyarrow_dataset"?
Yeah, you can do this as well and convert to Pandas if needed:
df = spark.read.format("delta").load("/lakehouse/default/Tables/my_table")
df.show()
The reason why PyArrow was used:
to_pyarrow_dataset() — loads the Delta table as a PyArrow dataset, a fast, columnar format. This enables efficient filtering, column pruning, and scanning of large data volumes.
.to_table() — converts the Arrow dataset into an in-memory Arrow Table.
.to_pandas() — finally, converts the Arrow Table into a Pandas DataFrame.
This chain exists because:
The Delta Lake Python bindings (delta-rs) are optimized for interoperability with PyArrow, not Pandas.
They expect users to control the intermediate stages, for example filtering data before loading it into memory via Arrow.