I am creating a dataframe with data from the silver lakehouse, dev_silver_lk, which is the default lakehouse for the notebook.
But I want to create a dimension Delta Parquet table in the gold lakehouse, dev_gold_lk.
I have also mounted this lakehouse to the notebook (both lakehouses are in public preview, as I have Schemas ticked).
And I just can't seem to do it.
Surely this should be possible?
I suspect it might be because my lakehouses have preview schemas? I tried this and it didn't work in a PySpark notebook.
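Roughly the kind of write I mean, with placeholder names (someSilverTable, dfDimCustomer, dimCustomer):

# dfDimCustomer is built from a table in the default silver lakehouse (dev_silver_lk)
dfDimCustomer = spark.read.table("someSilverTable")

# write it out as a Delta table in the gold lakehouse (dev_gold_lk), which is attached but not the default
dfDimCustomer.write \
    .format("delta") \
    .mode("overwrite") \
    .saveAsTable("dev_gold_lk.dimCustomer")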
Hi @DebbieE ,
Thanks for reaching out to the Microsoft Fabric community forum.
Lakehouses with schemas enabled currently have some limitations, as the feature is still in public preview.
Source: Lakehouse schemas (Preview) - Microsoft Fabric | Microsoft Learn
Please try performing the operation with schemas disabled to see whether the issue still occurs.
If this post helps, please consider accepting it as the solution so that other members can find it more quickly, and consider giving a kudos. Feel free to reach out if you need further assistance.
Thank you
I have given up on Lakehouse with Schema enabled. Another fail for me right now
Hi @DebbieE
As we haven't heard back from you, we wanted to kindly follow up and check whether the solution provided by the community members worked for you. If our response addressed your issue, please mark it as 'Accept as solution' and click 'Yes' if you found it helpful.
Thanks and regards
Hi @DebbieE
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If our responses have addressed your query, please accept one as the solution and give a 'Kudos' so other members can easily find it.
Thank you.
Hi @DebbieE
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution, so other community members with similar problems can find it faster.
Thank you.
Try providing the full ABFSS path:

from delta.tables import DeltaTable

# full OneLake ABFSS path to the table in the target lakehouse
DELTA_TABLE_PATH = "abfss://(workspace id)@onelake.dfs.fabric.microsoft.com/(lakehouse id)/Tables/(tablename)"
delta_table = DeltaTable.forPath(spark, DELTA_TABLE_PATH)
...
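If the aim is to write the dataframe rather than open an existing table, the same full path can be passed straight to a Delta write. A minimal sketch, assuming df is the dataframe built from the silver lakehouse (and noting that a schema-enabled lakehouse may expect the schema folder in the path, e.g. Tables/(schema)/(tablename)):

# write df to the gold lakehouse table location via its full ABFSS path
df.write \
    .format("delta") \
    .mode("overwrite") \
    .save(DELTA_TABLE_PATH)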
You could try saveAsTable:
dfBlockersKey_reordered.write \
.format("delta") \
.mode("overwrite") \
.saveAsTable("dev_timesheet_gold_lh.dimCustomer")
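If the gold lakehouse keeps schemas enabled, a schema-qualified name may be needed instead of the two-part one above. A minimal sketch assuming the default dbo schema (whether this works in practice is subject to the preview limitations mentioned earlier in the thread):

# lakehouse.schema.table naming; dbo is assumed as the default schema here
dfBlockersKey_reordered.write \
    .format("delta") \
    .mode("overwrite") \
    .saveAsTable("dev_timesheet_gold_lh.dbo.dimCustomer")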