
gdb729
Regular Visitor

Notebook Merge Only Inserting, Not Updating Matched Rows

See my code below:

from delta.tables import DeltaTable

# Get the target Delta table.
target_table = DeltaTable.forName(spark, "targettable")

# Define the source DataFrame (e.g., from a CSV file or another table).
source_table_path = "abfss://stagetable"
source_table = spark.read.format("delta").load(source_table_path)

# Run the merge operation.
(
    target_table.alias("target")
    .merge(
        source_table.alias("source"),
        condition="target.id = source.id AND target.transaction = source.transaction AND target.createdate = source.createdate"  # Replace with your matching condition
    )
    .whenMatchedUpdateAll()  # Update all columns if matched
    .whenNotMatchedInsertAll() # Insert if not matched
    .execute()
)


As expected, the first time I run the code, when none of the stage-table records exist in the target table yet, it inserts them.


The second time I run the code, it inserts the same records again, which makes it seem like the condition clause is being ignored, since it should find matches for the rows it already inserted. I'm struggling to understand why this happens. Any help would be appreciated.

1 ACCEPTED SOLUTION
v-agajavelly
Community Support

Hi @gdb729 ,

It sounds like you're really close, but what you're describing usually points to the merge condition not evaluating as a true match, even though the data looks identical at first glance. A few things can trip this up:

  1. Data type mismatches: if target.id is an int and source.id is a string, for example, Spark won't match them even when the values look identical.
  2. Null values: a regular = comparison fails when either side is null. In Spark SQL, null = null evaluates to null rather than true, so the row falls through to the insert branch. Use the null-safe equality operator (<=>) instead; see the sketch after this list.
  3. Whitespace / case issues: especially on string columns like transaction, even a trailing space can cause a mismatch.
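
Here's a minimal sketch of your merge rewritten with null-safe equality, assuming the same table and column names as in your post:

from delta.tables import DeltaTable

target_table = DeltaTable.forName(spark, "targettable")
source_table = spark.read.format("delta").load(source_table_path)

# <=> treats null <=> null as a match, whereas = evaluates to null
# on null inputs and sends the row down the insert branch instead.
(
    target_table.alias("target")
    .merge(
        source_table.alias("source"),
        condition="target.id <=> source.id "
                  "AND target.transaction <=> source.transaction "
                  "AND target.createdate <=> source.createdate"
    )
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)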

Regards,
Akhil.


gdb729
Regular Visitor

Thanks for the quick response. It ended up being a null issue after all: when I swapped in the null-safe equality operator, which I didn't think I needed since that field shouldn't be null, the merge worked correctly.
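
For anyone who hits the same symptom, a quick null count on the join keys would surface this before the merge. A sketch, assuming the source DataFrame and key columns from the original post:

from pyspark.sql import functions as F

key_cols = ["id", "transaction", "createdate"]

# Count nulls per join-key column; any nonzero count means a plain =
# condition can never match those rows, so the merge re-inserts them
# on every run.
source_table.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(f"{c}_nulls") for c in key_cols]
).show()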
