I am having an issue building Delta tables through a notebook and having them populate the SQL Endpoint.
Below is my code to create the Delta table:
SalesDocs_DF.write.format('delta').mode('overwrite').save('Tables/Sales_Inventory')
When I try to query it through the SQL Endpoint, I get the following error:
Table uses column mapping which is not supported.
Corrective Action: Recreate the table without column mapping property.
I even tried to query it through a data warehouse, and I cannot access the Delta table I made in the notebook. I am not sure what I am doing wrong here.
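For context (this is an assumption about the cause, not from the original post): Delta typically enables column mapping automatically when column names contain characters that Parquet cannot store directly, such as spaces, parentheses, or other special characters, and that property is what the SQL Endpoint rejects. A minimal pure-Python sketch for spotting offending names before writing (the character set and the `find_invalid_columns` helper are illustrative, not an official API):

```python
import re

# Characters commonly rejected in Parquet/Delta column names
# (space , ; { } ( ) newline tab =) -- an approximate set for illustration.
INVALID_CHARS = re.compile(r'[ ,;{}()\n\t=]')

def find_invalid_columns(columns):
    """Return the column names that would likely force Delta column mapping."""
    return [c for c in columns if INVALID_CHARS.search(c)]

# Hypothetical column names similar to the ones in this thread
print(find_invalid_columns(["Sales Doc", "Qty", "Net(USD)"]))
# ['Sales Doc', 'Net(USD)']
```

Running this on `df.columns` before the `.save()` call makes it easy to see whether renaming is needed.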
I appreciate the help from you all. I actually ended up figuring out how to make it work.
I found the following code through some research, and it ended up working. I think my columns had some spaces or other special characters in them.
from pyspark.sql import DataFrame
from pyspark.sql.functions import col

def remove_bda_chars_from_columns(df: DataFrame) -> DataFrame:
    # Rename every column, replacing or stripping characters
    # that the SQL Endpoint cannot handle
    return df.select([
        col(x).alias(
            x.replace(" ", "_")
             .replace("/", "")
             .replace("%", "pct")
             .replace("(", "")
             .replace(")", "")
        )
        for x in df.columns
    ])

SalesDocs_DF = SalesDocs_DF.transform(remove_bda_chars_from_columns)
This is awesome! Saved my life today, great solution, thank you!
Adding on to this for anyone who has a similar issue: this allows me to write a Delta table without column mapping, so I can access the table through the SQL Endpoint.
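As a quick sanity check, the replacement chain from the accepted solution can be exercised outside Spark. The column names below are hypothetical examples, not from the original dataset:

```python
def clean_name(x: str) -> str:
    # Same replacement chain used in remove_bda_chars_from_columns above
    return (x.replace(" ", "_")
             .replace("/", "")
             .replace("%", "pct")
             .replace("(", "")
             .replace(")", ""))

cols = ["Sales Doc", "Qty (EA)", "Margin %", "Plant/Region"]
print([clean_name(c) for c in cols])
# ['Sales_Doc', 'Qty_EA', 'Margin_pct', 'PlantRegion']
```

This makes it easy to confirm that the cleaned names contain none of the characters that would trigger column mapping.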
Hi @ghernandezmf,
Can you please help me understand what types of transformation operations you performed on the dataframe? If possible, please share a screenshot of the issue. If you can provide more details, I will try to guide you.