shivaazure
New Member

Loading data from Lakehouse to Snowflake using Copy data in Data Pipeline

I have a source Lakehouse table and a target Snowflake database, and I am using the Copy data activity in a Data Pipeline.

When I try to preview the source Lakehouse data, I get this error:

The data type is not supported in Delta format. Reason: Cannot find supported logical type for column name Extended_Status, delta type void

 

I am writing a Spark DataFrame into the Lakehouse table; a few of the columns contain nested JSON.
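For context, a minimal sketch of how a column can end up with the void type (an assumed illustration, not my exact code):

from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder.getOrCreate()

# A column built from a bare null (or a JSON field that is null in
# every record) is inferred as NullType, which Delta reports as void.
df = spark.createDataFrame([(1,), (2,)], ["id"])
df = df.withColumn("Extended_Status", lit(None))
print(df.schema["Extended_Status"].dataType)  # NullType()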

2 REPLIES
Anonymous
Not applicable

Hi @shivaazure ,

 

I am following up to ask whether the problem has been solved.

If it has, could you accept the correct answer as the solution, or share your own solution to help other members find it faster?

 

Thank you very much for your cooperation!

 

Best Regards,
Yang
Community Support Team

 

If any post helps, please consider accepting it as the solution to help the other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!

Anonymous
Not applicable

Hi @shivaazure ,

 

The error message indicates that the void data type is not supported in Delta format. Spark infers void (NullType) for a column whose values are all null, which commonly happens when a nested JSON field contains only nulls or when a column is created from a bare None. I have the following suggestions:

 

Before writing to the Delta table, you can convert the data type of the column in the notebook:

from pyspark.sql.functions import col
from pyspark.sql.types import StringType

# Cast the void column to string so Delta can store it
df = df.withColumn("Extended_Status", col("Extended_Status").cast(StringType()))

 

Then use the following code to create the Delta table:

df.write.mode("overwrite").format("delta").saveAsTable("your_table_name")
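If more than one column was inferred as void, here is a sketch (reusing the same df) that casts every top-level NullType column to string before writing:

from pyspark.sql.functions import col
from pyspark.sql.types import NullType, StringType

# Cast every column Spark inferred as void (NullType) to string,
# since Delta cannot store NullType columns.
void_cols = [f.name for f in df.schema.fields if isinstance(f.dataType, NullType)]
for c in void_cols:
    df = df.withColumn(c, col(c).cast(StringType()))

Note that this only inspects top-level columns; a void field nested inside a struct would need the struct rebuilt, or the struct serialized as shown below.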

 

Try again to see if the error still occurs.
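Since a few of your columns contain nested JSON, another option worth trying (an assumption on my part, since your schema isn't shown) is to serialize the nested columns to JSON strings before the Copy activity moves them to Snowflake, where they can land in VARCHAR or VARIANT columns:

from pyspark.sql.functions import to_json, col
from pyspark.sql.types import StructType, ArrayType, MapType

# Convert struct/array/map columns to JSON text so the Copy activity
# can map them to plain string columns on the Snowflake side.
nested_cols = [f.name for f in df.schema.fields
               if isinstance(f.dataType, (StructType, ArrayType, MapType))]
for c in nested_cols:
    df = df.withColumn(c, to_json(col(c)))

Apply the void-column cast first, so to_json never encounters a NullType field.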

 

Best Regards,
Yang
Community Support Team

 

If any post helps, please consider accepting it as the solution to help the other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
