shivaazure
New Member

Loading data from Lakehouse to Snowflake using Copy data in Data Pipeline

I have a source Lakehouse table and a target Snowflake destination, and I am using the Copy data activity in a Data Pipeline.

When I try to preview the source Lakehouse data, I get this error:

The data type is not supported in Delta format. Reason: Cannot find supported logical type for column name Extended_Status, delta type void

 

I am writing a Spark DataFrame into the Lakehouse table; a few columns contain nested JSON.
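For reference, here is a minimal sketch of how a column can end up as void; the data below is illustrative, not my actual pipeline:

from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder.getOrCreate()

# An untyped None literal (or a JSON field that is null in every record)
# is inferred as NullType, which the schema displays as "void".
df = spark.range(3).withColumn("Extended_Status", lit(None))
df.printSchema()  # Extended_Status: void (nullable = true)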

3 REPLIES
Anonymous
Not applicable

Hi @shivaazure ,

 

I am just following up to ask whether the problem has been solved.

 

If it has, could you accept the correct answer as the solution, or share your own solution, to help other members find it faster?

 

Thank you very much for your cooperation!

 

Best Regards,
Yang
Community Support Team

 

If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!


Anonymous
Not applicable

Hi @shivaazure ,

 

The error message indicates that the void type is not supported in Delta format. A column is inferred as void (NullType) when Spark cannot determine its type, typically because every value in it is null. I have the following suggestions:

 

Before writing to the Delta table, you can cast the column to a supported type in the notebook:

from pyspark.sql.functions import col
from pyspark.sql.types import StringType

# Cast the void (NullType) column to string; existing null values are preserved
df = df.withColumn("Extended_Status", col("Extended_Status").cast(StringType()))
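To confirm the cast took effect before writing (an optional check):

df.printSchema()  # Extended_Status should now be string, not void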

 

Then use the following code to create the Delta table:

df.write.mode("overwrite").format("delta").saveAsTable("your_table_name")
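If other nested JSON fields are also inferred as void, a more general sketch (this handles top-level columns only; struct fields would need extra work) casts every NullType column before writing:

from pyspark.sql.functions import col
from pyspark.sql.types import NullType, StringType

# Find every top-level column inferred as NullType ("void") and cast it
# to string so the written Delta table contains only supported types.
void_cols = [f.name for f in df.schema.fields if isinstance(f.dataType, NullType)]
for name in void_cols:
    df = df.withColumn(name, col(name).cast(StringType()))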

 

Then try the source preview in the Copy data activity again to see whether the error still occurs.

 

Best Regards,
Yang
Community Support Team

 

If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I have misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
