
Reply
ghernandezmf
Frequent Visitor

Lakehouse to SQL Endpoint keeps giving error

I am having an issue building delta tables through a notebook and getting them to show up in the SQL Endpoint.

 

Below is my code to create the delta table

 

SalesDocs_DF.write.format('delta').mode('overwrite').save('Tables/Sales_Inventory')

When I try to access the table through the SQL Endpoint, I get the following error:

Table uses column mapping which is not supported.

Corrective Action: Recreate the table without column mapping property.

 

I even tried querying it through a Data Warehouse, and I still cannot access the delta table I created in the notebook. I am not sure what I am doing wrong here.
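For background (not from the thread itself): Delta Lake enables column mapping automatically when a table is written with column names containing characters the format cannot store natively, such as spaces or parentheses, and the SQL Endpoint cannot read tables with that property. A minimal, Spark-free sketch for spotting the offending columns before writing; the invalid-character set follows the Delta protocol, and the sample column names are purely illustrative:

```python
# Characters that Delta Lake cannot store in a column name without
# enabling column mapping (per the Delta protocol).
INVALID_DELTA_CHARS = set(' ,;{}()\n\t=')

def columns_needing_mapping(columns):
    """Return the column names that would force column mapping on."""
    return [c for c in columns if any(ch in INVALID_DELTA_CHARS for ch in c)]

# Hypothetical column names for illustration:
print(columns_needing_mapping(["Sales Doc", "Qty", "Net Value (%)"]))
# -> ['Sales Doc', 'Net Value (%)']
```

Running a check like this on `SalesDocs_DF.columns` before the write would flag exactly which columns need renaming.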

1 ACCEPTED SOLUTION
ghernandezmf
Frequent Visitor

I appreciate the help from you all. I actually ended up figuring out how to make it work.

 

I found the following code through some research and it ended up working. I think my column names had some spaces or special characters in them.

 

from pyspark.sql.functions import col
from pyspark.sql import DataFrame

def remove_bda_chars_from_columns(df: DataFrame) -> DataFrame:
    # Rename every column, replacing or stripping characters the SQL Endpoint cannot handle
    return df.select([col(x).alias(x.replace(" ", "_").replace("/", "").replace("%", "pct").replace("(", "").replace(")", "")) for x in df.columns])

SalesDocs_DF = SalesDocs_DF.transform(remove_bda_chars_from_columns)
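The replacement chain inside the alias() call can be sanity-checked on plain strings, without a Spark session (the sample column name below is made up):

```python
def clean_name(x: str) -> str:
    # Same chain of replacements used in the alias() call above
    return (x.replace(" ", "_")
             .replace("/", "")
             .replace("%", "pct")
             .replace("(", "")
             .replace(")", ""))

print(clean_name("Net Value (%)"))  # -> Net_Value_pct
```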


6 REPLIES

This is awesome! Saved my life today, great solution, thank you!

Adding on to this for anyone who has a similar issue: this allows me to write a delta table without column mapping, so I can access the table through the SQL Endpoint.

Anonymous
Not applicable

Hi @ghernandezmf ,

Glad to know that you were able to resolve your issue.

R1k91
Super User

try to use 

SalesDocs_DF.write.format('delta').mode('overwrite').saveAsTable('Sales_Inventory')

--
Riccardo Perico
BI & Power BI Engineer @ Lucient Italia

Blog | GitHub

If this post helps, then please consider accepting it as the solution to help the other members find it more quickly.
Anonymous
Not applicable

Hi @ghernandezmf ,

Can you please help me understand?
What transformation operations are performed on the dataframe?
If possible, please share a screenshot of the issue.

If you can provide more details, I will try to guide you.
