Hi everyone,
We are working on Fabric Preview, and today we are trying to import data from a DataFrame in a notebook (PySpark) into a table in a Lakehouse.
Here is the code we have used:
# Write the DataFrame as a Delta table under the Lakehouse Tables/ folder
table_name = "dds_estimated"
spark_df.write.mode("overwrite").format("delta").option("mergeSchema", "true").save(f"Tables/{table_name}")
print(f"Done loading {table_name}")
Then we ran into a problem with the table format and were advised to run the following command to change the table properties:
spark.sql(f'''
ALTER TABLE dds_estimated SET TBLPROPERTIES (
'delta.columnMapping.mode' = 'name',
'delta.minReaderVersion' = '2',
'delta.minWriterVersion' = '5'
)
''')
The notebook now runs to completion, but nothing is imported into our table; it is still blank and shows this notice:
Table uses column mapping which is not supported.
What should we do to import the DataFrame into a table in the Lakehouse?
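For context, the "column mapping which is not supported" notice typically appears because `'delta.columnMapping.mode' = 'name'` produces a table that the Lakehouse SQL endpoint cannot read. Column mapping is usually only needed when column names contain characters Delta disallows (spaces, commas, parentheses, etc.), so a common workaround is to rename the columns before writing and skip the `ALTER TABLE` entirely. A minimal sketch, assuming the helper name `sanitize` is ours and the forbidden-character set follows Delta's default rules:

```python
import re

def sanitize(name: str) -> str:
    """Replace characters Delta disallows in column names (' ,;{}()\\n\\t=')
    with underscores, so column mapping is not needed."""
    return re.sub(r"[ ,;{}()\n\t=]", "_", name)

# Hypothetical usage with the Spark DataFrame from the question:
# clean_df = spark_df.toDF(*[sanitize(c) for c in spark_df.columns])
# clean_df.write.mode("overwrite").format("delta").save(f"Tables/{table_name}")

print(sanitize("Estimated Value (USD)"))  # → Estimated_Value__USD_
```

This is only a sketch of the renaming approach, not a confirmed fix; if column mapping is genuinely required, the table may need to be consumed outside the SQL endpoint instead.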