I am creating a DataFrame from data in the silver lakehouse, dev_silver_lk, which is the notebook's default lakehouse.
But I want to write a dimension Delta table into the gold lakehouse, dev_gold_lk.
I have also mounted the gold lakehouse to the notebook (both lakehouses have the Schemas option ticked, which is in public preview).
And I just can't seem to do it.
Surely this should be possible?
I suspect it might be because my lakehouses have preview schemas enabled. I tried this in a PySpark notebook and it didn't work.
Try providing the full ABFSS path:
from delta.tables import DeltaTable

DELTA_TABLE_PATH = "abfss://(workspace id)@onelake.dfs.fabric.microsoft.com/(lakehouse id)/Tables/(tablename)"
delta_table = DeltaTable.forPath(spark, DELTA_TABLE_PATH)
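One thing worth checking when building that path: for a schema-enabled lakehouse, tables live under a schema subfolder (Tables/&lt;schema&gt;/&lt;table&gt;), so the schema name has to appear in the ABFSS path. A minimal sketch of assembling such a path, assuming the default "dbo" schema and using placeholder GUIDs (the real workspace and lakehouse IDs come from your own tenant):

```python
def onelake_table_path(workspace_id: str, lakehouse_id: str,
                       table: str, schema: str = "dbo") -> str:
    """Assemble a OneLake ABFSS path to a table in a schema-enabled lakehouse.

    Schema-enabled lakehouses keep tables under Tables/<schema>/<table>,
    so the schema folder must be part of the path.
    """
    return (f"abfss://{workspace_id}@onelake.dfs.fabric.microsoft.com/"
            f"{lakehouse_id}/Tables/{schema}/{table}")

# Placeholder IDs for illustration only; substitute your real GUIDs.
path = onelake_table_path("workspace-guid", "lakehouse-guid", "dimCustomer")
print(path)

# Hypothetical usage in the notebook (requires a Spark session with Delta):
# dfBlockersKey_reordered.write.format("delta").mode("overwrite").save(path)
```

Writing by path like this sidesteps the default-lakehouse binding entirely, which is why it is often suggested for cross-lakehouse writes.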
...
Hi @DebbieE ,
Thanks for reaching out to the Microsoft Fabric community forum.
Currently, schema-enabled lakehouses have some limitations because they are still in public preview.
Source: Lakehouse schemas (Preview) - Microsoft Fabric | Microsoft Learn
Please try performing the operation with schemas disabled to see if the issue still occurs.
If this post helps, please consider accepting it as a solution so other members can find it more quickly, and consider giving a kudos. Feel free to reach out if you need further assistance.
Thank you
You could try saveAsTable:
dfBlockersKey_reordered.write \
    .format("delta") \
    .mode("overwrite") \
    .saveAsTable("dev_timesheet_gold_lh.dimCustomer")
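With schemas enabled on the target lakehouse, the table reference passed to saveAsTable generally needs the schema as well (lakehouse.schema.table rather than lakehouse.table). A minimal sketch, assuming the default "dbo" schema and the lakehouse/table names from this thread; the helper just assembles the qualified identifier, and the Spark call is shown as a comment since it needs a live notebook session:

```python
def qualified_table_name(lakehouse: str, table: str, schema: str = "dbo") -> str:
    """Build a lakehouse.schema.table identifier for a schema-enabled lakehouse."""
    return f"{lakehouse}.{schema}.{table}"

target = qualified_table_name("dev_gold_lk", "dimCustomer")
print(target)

# Hypothetical usage in the Fabric notebook (requires a Spark session):
# dfBlockersKey_reordered.write \
#     .format("delta") \
#     .mode("overwrite") \
#     .saveAsTable(target)
```

If the two-part name fails against a schema-enabled lakehouse, trying the three-part form is a cheap next step before disabling schemas.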