gdb729
Regular Visitor

Notebook Merge Only Inserting Not Updating Matched Rows

See my code below:

from delta.tables import DeltaTable

# Get the target Delta table.
target_table = DeltaTable.forName(spark, "targettable")

# Load the source DataFrame from the staging Delta table.
source_table_path = "abfss://stagetable"
source_table = spark.read.format("delta").load(source_table_path)

# Run the merge operation.
(
    target_table.alias("target")
    .merge(
        source_table.alias("source"),
        condition="target.id = source.id AND target.transaction = source.transaction AND target.createdate = source.createdate"  # Replace with your matching condition
    )
    .whenMatchedUpdateAll()  # Update all columns if matched
    .whenNotMatchedInsertAll() # Insert if not matched
    .execute()
)

 

As expected, when I run the code the first time, before any of the staged records exist in the target table, it inserts the records.

The second time I run the code, it inserts the same records again, which makes it seem like the merge is ignoring the condition clause, since it should find matches for the rows it already inserted. I'm struggling to understand why. Any help would be appreciated.
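
One way to sanity-check that reasoning is to run the same condition as a plain join and count the matches; if it returns 0 even though the rows look identical, the condition itself never evaluates to true. A minimal sketch, reusing source_table from the code above:

from pyspark.sql.functions import expr

# Count source rows that satisfy the exact condition the merge uses.
# Zero matches means every merge run falls through to the
# whenNotMatchedInsertAll branch and re-inserts the rows.
target_df = spark.read.table("targettable")
match_count = (
    source_table.alias("source")
    .join(
        target_df.alias("target"),
        expr("target.id = source.id AND target.transaction = source.transaction AND target.createdate = source.createdate"),
        "inner",
    )
    .count()
)
print(match_count)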

1 ACCEPTED SOLUTION
v-agajavelly
Community Support

Hi @gdb729 ,

It sounds like you're really close, but what you're describing usually points to the merge condition not evaluating to a true match, even though the data looks identical at first glance. A few things can trip this up:

  1. Data type mismatches: For example, if target.id is an int and source.id is a string, the comparison can fail to match even when the values look the same.
  2. Null values: A regular = comparison fails when either side is null; in Spark, null = null evaluates to null rather than true. You'll want the null-safe equality operator (<=>) instead (see the sketch after this list).
  3. Whitespace / case issues: Especially on string columns like transaction, even a trailing space can cause a mismatch.
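
A minimal sketch of a null-safe version of your condition, reusing the aliases and key columns from your post (the BIGINT cast and trim() calls only illustrate points 1 and 3; your data may not need them):

(
    target_table.alias("target")
    .merge(
        source_table.alias("source"),
        # <=> treats null <=> null as true, so rows with a null key
        # match instead of being re-inserted. The casts guard against
        # type mismatches, and trim() against stray whitespace.
        condition="""
            CAST(target.id AS BIGINT) <=> CAST(source.id AS BIGINT)
            AND trim(target.transaction) <=> trim(source.transaction)
            AND target.createdate <=> source.createdate
        """
    )
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)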

Regards,
Akhil.


gdb729
Regular Visitor

Thanks for the quick response. It ended up being nulls: when I swapped in the null-safe equality operator, which I didn't think I needed as that field shouldn't be null, the merge worked correctly.
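
For anyone who hits the same thing, a quick way to spot which key column is hiding nulls (a sketch using the three key columns from the condition above):

from pyspark.sql.functions import col

# Count nulls per merge-key column on both sides; any nonzero count
# marks a column where plain = silently fails and <=> is needed.
target_df = spark.read.table("targettable")
for c in ["id", "transaction", "createdate"]:
    print(c, "nulls in source:", source_table.filter(col(c).isNull()).count())
    print(c, "nulls in target:", target_df.filter(col(c).isNull()).count())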
