IAMCS
Helper I

Common issue when writing data to warehouse from notebook

I was writing a DataFrame to a warehouse, where I created a column as shown in the first image. In the second image you can see that this column's schema is (nullable = false), while all the other columns are (nullable = true). How can I make this column nullable = true as well?

IAMCS_0-1759218325751.png

IAMCS_1-1759218537935.png

This matters because, when backfilling this table with historical data, I am using an Excel file as the source with the same schema, and writing the historical data to the same table fails with:

IAMCS_2-1759218664219.png

4 Replies
v-achippa
Community Support

Hi @IAMCS,

 

Thank you for reaching out to Microsoft Fabric Community.

 

Thank you @rohit1991 for the prompt response. 

 

As we haven’t heard back from you, we wanted to kindly follow up and check whether the solution provided above resolved the issue. Let us know if you need any further assistance.

 

Thanks and regards,

Anjan Kumar Chippa

rohit1991
Super User

Hi @IAMCS 

 

The error happens because the column you created in Spark was marked as not nullable (nullable = false), while in your Excel source the same column is nullable (nullable = true). Because the schemas are inconsistent, Spark blocks the write when you try to load the historical data into the same table.

To fix this, make sure both schemas match. The simplest approach is to recreate the column in Spark with nullable = true, so it can accept empty values just like your Excel data. Once the schemas are aligned, the write will succeed without errors.

As a general rule, check the schema before writing, because Spark is strict about column types and nullability. In short, your table column needs to be made nullable to match your source file so the backfill can run without issues.


Did it work? ✔ Give a Kudo • Mark as Solution – help others too!

The column is not present in the Excel sheet; it is a column I create later, at the end, after reading the data from Excel. So I don't think it has anything to do with the Excel sheet.

Hi @IAMCS 

 

You’re correct that this isn’t about the Excel sheet; the real cause is how Spark handles new columns. When you add a column with withColumn using a literal (F.lit(...)), Spark marks it as not nullable (nullable = false), because a literal value can never be null. That’s why this specific column shows up as not nullable while the columns read from Excel stay nullable = true.

To fix this, you need to explicitly allow nulls when creating the column. One way is to rebuild the DataFrame against a schema where that field is set to nullable = true, for example by defining a StructType with nullable=True, or by recreating the column with an expression Spark treats as nullable. This aligns the new column with the rest of your data, avoids the schema mismatch error, and lets you backfill or write historical data without issues.


Did it work? ✔ Give a Kudo • Mark as Solution – help others too!
