shivaazure
New Member

Loading data from Lakehouse to Snowflake using Copy data in Data Pipeline

I have a source Lakehouse table and a target Snowflake table, and I am using the Copy data activity in a Data Pipeline.

When I try to preview the source Lakehouse data, I get this error:

The data type is not supported in Delta format. Reason: Cannot find supported logical type for column name Extended_Status, delta type void

I am writing a Spark DataFrame into the Lakehouse table; a few columns contain nested JSON.
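
For reference, a minimal sketch of how such a void column can arise (names illustrative; my real DataFrame comes from parsed JSON):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# A literal None (or a JSON field that is null in every record) is
# inferred as void (NullType), which Delta cannot represent.
df = spark.range(2).withColumn("Extended_Status", F.lit(None))
df.printSchema()  # Extended_Status: void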

2 REPLIES
Anonymous
Not applicable

Hi @shivaazure ,

 

I am just following up to ask whether the problem has been solved.

If so, could you accept the helpful reply as the solution, or share your own solution, to help other members find it faster?

 

Thank you very much for your cooperation!

 

Best Regards,
Yang
Community Support Team

 

If any post helped, please consider accepting it as the solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!

Anonymous
Not applicable

Hi @shivaazure ,

 

The error means that the column Extended_Status was inferred as the Spark void type (NullType), which happens when every value in the column is null, for example a JSON field that is always null. Delta does not support void, so the pipeline preview fails. I have the following suggestions:
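
To confirm which columns are affected, you can check the schema in the notebook first (a sketch; substitute your own DataFrame):

from pyspark.sql.types import NullType

# List every column whose type was inferred as void (NullType)
void_cols = [f.name for f in df.schema.fields if isinstance(f.dataType, NullType)]
print(void_cols)  # e.g. ['Extended_Status']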

 

Before writing to the Delta table, you can convert the data type of the column in the notebook:

from pyspark.sql.functions import col
from pyspark.sql.types import StringType

# Cast the void (all-null) column to a supported type, e.g. string
df = df.withColumn("Extended_Status", col("Extended_Status").cast(StringType()))

 

Then use the following code to create the Delta table:

df.write.mode("overwrite").format("delta").saveAsTable("your_table_name")
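
If several columns may come through as void, a generalized variant of the same fix (a sketch; adjust the target type per column if string is not appropriate):

from pyspark.sql.functions import col
from pyspark.sql.types import NullType, StringType

# Cast every void (NullType) column to string before writing to Delta
for f in df.schema.fields:
    if isinstance(f.dataType, NullType):
        df = df.withColumn(f.name, col(f.name).cast(StringType()))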

 

Try again to see if the error still occurs.

 

Best Regards,
Yang
Community Support Team

 

If any post helped, please consider accepting it as the solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
