I am having an issue creating Delta tables through a notebook and having them populate the SQL endpoint.
Below is my code to create the Delta table:
SalesDocs_DF.write.format('delta').mode('overwrite').save('Tables/Sales_Inventory')
When I try to query the table through the SQL endpoint, I get the following error:
Table uses column mapping which is not supported.
Corrective Action: Recreate the table without column mapping property.
I even tried querying through a data warehouse, and I still cannot access the Delta table I created in the notebook. I am not sure what I am doing wrong here.
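For context, Delta Lake enables the column mapping property automatically when column names contain characters it otherwise disallows, which is what makes the table unreadable from the SQL endpoint. A minimal, Spark-free sketch of a check for such names (the function name is illustrative, and the character set is the one Delta Lake's own invalid-character error message reports):

```python
# Characters Delta Lake disallows in column names without column mapping
# (taken from Delta Lake's invalid-character error message).
INVALID_CHARS = set(' ,;{}()\n\t=')

def needs_column_mapping(columns):
    """Return the column names that would force Delta to enable column mapping."""
    return [c for c in columns if any(ch in INVALID_CHARS for ch in c)]

print(needs_column_mapping(["SalesDoc", "Qty (units)", "Margin %"]))
# → ['Qty (units)', 'Margin %']
```

If this returns a non-empty list for your dataframe's columns, renaming those columns before writing should avoid the error.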
I appreciate the help from you all. I actually ended up figuring out how to make it work.
I found the following code through some research and it ended up working. I think my columns had some spaces or special characters in them.
from pyspark.sql import DataFrame
from pyspark.sql.functions import col

def remove_bda_chars_from_columns(df: DataFrame) -> DataFrame:
    # Alias each column, replacing spaces with underscores and stripping
    # the characters Delta Lake rejects without column mapping
    return df.select([col(x).alias(x.replace(" ", "_").replace("/", "").replace("%", "pct").replace("(", "").replace(")", "")) for x in df.columns])

SalesDocs_DF = SalesDocs_DF.transform(remove_bda_chars_from_columns)
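The renaming chain itself is plain Python string manipulation, so its effect can be checked without Spark. A small illustration of what the alias expressions produce (the column names here are made up):

```python
def clean(name: str) -> str:
    # The same replacement chain the notebook applies to each column name
    return (name.replace(" ", "_")
                .replace("/", "")
                .replace("%", "pct")
                .replace("(", "")
                .replace(")", ""))

print(clean("Sales %"))      # → Sales_pct
print(clean("Qty (units)"))  # → Qty_units
```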
This is awesome! Saved my life today, great solution, thank you!
Adding on to this for anyone who has a similar issue: this allows me to write a Delta table without the column mapping property, so I can access the table through the SQL endpoint.
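As a more general variant of the same idea, a single regex can strip every character Delta disallows instead of enumerating replacements one by one. A hedged sketch (the function name is illustrative and not from this thread; note it passes `%` through unchanged, since `%` is not in Delta's disallowed set):

```python
import re

def sanitize_column(name: str) -> str:
    # Collapse whitespace runs to underscores, then drop any character from
    # the set Delta Lake reports as invalid in its column-mapping error.
    name = re.sub(r"\s+", "_", name.strip())
    return re.sub(r"[ ,;{}()\n\t=]", "", name)

print(sanitize_column("Net Sales (USD)"))  # → Net_Sales_USD
```

Whether you keep explicit replacements or a regex is a style choice; the explicit chain makes each substitution obvious, while the regex stays correct if you later add more disallowed characters in one place.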
Hi @ghernandezmf ,
Could you help me understand what transformation operations you performed on the dataframe?
If possible, please share a screenshot of the issue.
If you can provide more details, I will try to guide you.