Hi.
I am trying to write some data to a lakehouse.
Setup
An error occurs when writing to the lakehouse.
Reproducible example (a notebook):
# The Semantic Link package is used to retrieve data.
import sempy
import sempy.fabric as fabric
workspaces_df = fabric.list_workspaces()
delta_table_name = "Workspaces"
workspaces_df.write.mode("overwrite").format("delta").saveAsTable(delta_table_name)
returns the error message:
AttributeError: 'DataFrame' object has no attribute 'write'
I have tried:
- converting the dataframe to a Spark dataframe
but have not found a solution so far.
Does anyone have a working solution for saving the output from Semantic Link to a lakehouse file or table?
If so, it would be much appreciated.
Kind regards
You definitely need to convert to a Spark DataFrame before writing. Can you show the code you used where you converted to a Spark DataFrame? I used spark.createDataFrame.
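For context, fabric.list_workspaces() returns a pandas DataFrame, which has no .write attribute; that is what the AttributeError above is pointing at. A minimal sketch of the conversion, assuming a Fabric notebook where the spark session is predefined:

import sempy.fabric as fabric

# list_workspaces() returns a pandas DataFrame, so it has no .write attribute
workspaces_df = fabric.list_workspaces()
# Convert to a Spark DataFrame, which does support .write
workspaces_spark = spark.createDataFrame(workspaces_df)

Depending on the column names returned, the write itself may still need the column handling shown in the reply below.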
Hi. Thanks for the reply @blopez11.
It works using the conversion to a Spark DataFrame as you suggested.
The code below works:
import sempy
import sempy.fabric as fabric

workspaces = fabric.list_workspaces()
# For simplicity, keep only columns without spaces in their names,
# to avoid an "Invalid column name" error when writing
workspaces_subset = workspaces[['Type', 'Id']]
# Convert the pandas DataFrame to a Spark DataFrame
workspaces_subset_spark = spark.createDataFrame(workspaces_subset)
# Write to the default lakehouse
table_name = "workspaces_subset"
workspaces_subset_spark.write.format("delta").mode("overwrite").saveAsTable(table_name)
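As an alternative to dropping the columns with spaces in their names, the columns could be renamed first; a sketch (the underscore replacement and the table name workspaces_all are just illustrative choices):

# Replace spaces in column names so every column can be written
workspaces_renamed = workspaces.rename(columns=lambda c: c.replace(" ", "_"))
workspaces_renamed_spark = spark.createDataFrame(workspaces_renamed)
workspaces_renamed_spark.write.format("delta").mode("overwrite").saveAsTable("workspaces_all")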
The error I experienced was probably because the dataframe was unintentionally converted to a Series before I tried to convert it to a Spark DataFrame.
The unintentional conversion to a Series was due to selecting a single column from the dataframe, i.e.
# Data type unintentionally changes from pandas DataFrame to Series
workspaces_series = workspaces_df['Id']
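For reference, selecting with double brackets keeps the result as a pandas DataFrame, which spark.createDataFrame accepts; a minimal sketch:

# Double brackets return a one-column DataFrame instead of a Series
workspaces_id_df = workspaces_df[['Id']]
workspaces_id_spark = spark.createDataFrame(workspaces_id_df)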
Thanks for the help!