ramankr48
Helper II

Notebook execution failed at Notebook service with http status code - '200'

Hi Team, I am getting errors while loading data into my bronze table through Data Factory. I uploaded some data last Friday, and today I wanted to load some new data, but after running the pipeline I got an error (the same error) in the gold-layer notebook. So I deleted all the data from all the layers and started reloading, beginning with bronze, but I got the same error again:

 

The error is:

Notebook execution failed at Notebook service with http status code - '200', please check the Run logs on Notebook, additional details - 'Error name - UnsupportedOperationException, Error value - [DELTA_MULTIPLE_SOURCE_ROW_MATCHING_TARGET_ROW_IN_MERGE] Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways. By SQL semantics of Merge, when multiple source rows match on the same target row, the result may be ambiguous as it is unclear which source row should be used to update or delete the matching target row. You can preprocess the source table to eliminate the possibility of multiple matches.'

2 ACCEPTED SOLUTIONS
ramankr48
Helper II

Hi Team, the issue got solved; it was an issue related to quoting.


Hi @ramankr48 ,

We really appreciate your efforts and thank you for letting us know the update on the issue. Please accept your reply as the solution to help other community members who may come across this post in the future.

Please continue using the Fabric Community forum for further assistance.
Thank you


10 REPLIES
ramankr48
Helper II

Hi Team, the issue got solved; it was an issue related to quoting.

Hi @ramankr48 ,

We really appreciate your efforts and thank you for letting us know the update on the issue. Please accept your reply as the solution to help other community members who may come across this post in the future.

Please continue using the Fabric Community forum for further assistance.
Thank you

v-nmadadi-msft
Community Support

Hi @ramankr48 

May I ask if you have resolved this issue using the suggestions from @FabianSchut and @nilendraFabric, by verifying that the combination of student_id and course_id is always unique? If so, please mark the helpful reply and accept it as the solution. If some other troubleshooting step worked, please mention that step and accept it as the solution. This will help other community members who have similar problems solve them faster.

Thank you.

nilendraFabric
Super User

Hello @ramankr48 

 


The error occurs when:
1. Multiple rows in the source dataset match the same row in the target Delta table during a `MERGE` operation.
2. This leads to conflicting updates or deletions, as SQL cannot determine which source row should be applied to the target row.
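As the error message itself suggests, the usual remedy is to deduplicate the source on the merge key before running the MERGE; in PySpark this is typically `df.dropDuplicates(["Student_ID", "Course_ID"])` or a `row_number()` window that keeps only the latest row per key. A minimal plain-Python sketch of that keep-latest logic, using hypothetical sample rows (column names borrowed from the code later in this thread):

```python
# Hypothetical source rows; two of them share the same (Student_ID, Course_ID)
# key, which is exactly what makes the Delta MERGE ambiguous.
rows = [
    {"Student_ID": 1, "Course_ID": "C101", "Status": "Active"},
    {"Student_ID": 1, "Course_ID": "C101", "Status": "Completed"},  # same key
    {"Student_ID": 2, "Course_ID": "C102", "Status": "Active"},
]

# Keep the last row seen per merge key: later rows overwrite earlier ones,
# mimicking a row_number()-style "keep latest" dedup before the MERGE.
deduped = {}
for r in rows:
    deduped[(r["Student_ID"], r["Course_ID"])] = r

source_rows = list(deduped.values())
print(len(source_rows))  # → 2
```

After a dedup like this, each target row can match at most one source row, so the `[DELTA_MULTIPLE_SOURCE_ROW_MATCHING_TARGET_ROW_IN_MERGE]` condition can no longer occur.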

 

What are you doing in your copy activity?

 

Do you have any sort of notebook doing a merge?

 

Thanks

I am using a notebook where I have written the merge query.

Hello @ramankr48 , can you try dropping your table and rerunning the full load, if possible?

Hi @nilendraFabric, I have tried that 2-3 times but am still facing this error. I am adding my code snippet here. Yesterday I loaded some data, and today I tried to load more data with new student_id values, but some of the course_id values are the same as in the already-loaded data. In any case, in the merge condition I am using both columns with an AND, so I don't think there should be any issue; the combination works as a unique identifier:

from delta.tables import DeltaTable

fact_student_performance_table_path = f"abfss://{workspace}@onelake.dfs.fabric.microsoft.com/LH_Gold.Lakehouse/Tables/fact_student_performance"
fact_deltastudentperf = DeltaTable.forPath(spark, fact_student_performance_table_path)


fact_deltastudentperf.alias("target").merge(
    df_fact_student_performance.alias("source"), 
    "target.Student_ID = source.Student_ID" and "target.Course_ID = source.Course_ID"
    ).whenMatchedUpdate(set = {
        "Enrollment_Date" : "source.Enrollment_Date",
        "Completion_Date" : "source.Completion_Date",
        "Status" : "source.Status",
        "Final_Grade" : "source.Final_Grade",
        "Attendance_Rate" : "source.Attendance_Rate",
        "Time_Spent_on_Course_hrs" : "source.Time_Spent_on_Course_hrs",
        "Assignments_Completed" : "source.Assignments_Completed",
        "Quizzes_Completed" :  "source.Quizzes_Completed",
        "Forum_Posts" : "source.Forum_Posts",
        "Messages_Sent" : "source.Messages_Sent",
        "Quiz_Average_Score" : "source.Quiz_Average_Score",
        "Assignment_Scores" : "source.Assignment_Scores",
        "Assignment_Average_Score" : "source.Assignment_Average_Score",
        "Project_Score" : "source.Project_Score",
        "Extra_Credit" : "source.Extra_Credit",
        "Overall_Performance" : "source.Overall_Performance",
        "Feedback_Score" : "source.Feedback_Score",
        "Completion_Time_Days" : "source.Completion_Time_Days",
        "Performance_Score" : "source.Performance_Score",
        "Course_Completion_Rate" : "source.Course_Completion_Rate",
        "Processing_Date" : "source.Processing_Date"

    }).whenNotMatchedInsert(values = {
        "Student_ID" : "source.Student_ID",
        "Course_ID" : "source.Course_ID",
        "Enrollment_Date" : "source.Enrollment_Date",
        "Completion_Date" : "source.Completion_Date",
        "Status" : "source.Status",
        "Final_Grade" : "source.Final_Grade",
        "Attendance_Rate" : "source.Attendance_Rate",
        "Time_Spent_on_Course_hrs" : "source.Time_Spent_on_Course_hrs",
        "Assignments_Completed" : "source.Assignments_Completed",
        "Quizzes_Completed" :  "source.Quizzes_Completed",
        "Forum_Posts" : "source.Forum_Posts",
        "Messages_Sent" : "source.Messages_Sent",
        "Quiz_Average_Score" : "source.Quiz_Average_Score",
        "Assignment_Scores" : "source.Assignment_Scores",
        "Assignment_Average_Score" : "source.Assignment_Average_Score",
        "Project_Score" : "source.Project_Score",
        "Extra_Credit" : "source.Extra_Credit",
        "Overall_Performance" : "source.Overall_Performance",
        "Feedback_Score" : "source.Feedback_Score",
        "Completion_Time_Days" : "source.Completion_Time_Days",
        "Performance_Score" : "source.Performance_Score",
        "Course_Completion_Rate" : "source.Course_Completion_Rate",
        "Processing_Date" : "source.Processing_Date"
    }).execute()
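Note that the merge condition above is written as two separate Python strings joined with the Python `and` operator, not as one SQL string. In Python, `and` between two non-empty strings evaluates to the second string, so only the Course_ID predicate ever reaches Delta, and every source row that shares a Course_ID matches the same target rows. That would produce exactly this MERGE error, and it is consistent with the accepted solution ("issue related to quoting"). A minimal demonstration, with the whole predicate written as a single SQL string as the fix:

```python
# Python's `and` between two non-empty strings returns the second string,
# so the Student_ID clause is silently discarded:
buggy_condition = "target.Student_ID = source.Student_ID" and "target.Course_ID = source.Course_ID"
print(buggy_condition)  # → target.Course_ID = source.Course_ID

# The merge key must be expressed as one SQL string, with SQL's own AND:
fixed_condition = "target.Student_ID = source.Student_ID AND target.Course_ID = source.Course_ID"
```

Passing `fixed_condition` to `.merge(...)` makes the (Student_ID, Course_ID) pair the actual match key, as the author intended.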

 

FabianSchut
Super User

Hi, what options do you use when landing the data in the bronze lakehouse? It seems like you use a merge statement, but is it possible that your unique identifier is not unique in the source? 

I am using the combination of student_id and course_id in the fact table; in the bronze and silver layers I used a single column as the unique identifier, student_id and course_id respectively.

Could you check the source, for example with some SQL scripts, to confirm that the combination of student_id and course_id is always unique?
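The suggested SQL check is a GROUP BY over the candidate key with a HAVING filter. A self-contained sketch using Python's built-in sqlite3 as a stand-in for the source table (hypothetical sample data; on Fabric the same query would run against the lakehouse table via Spark SQL):

```python
import sqlite3

# In-memory stand-in for the source table, purely to illustrate the check.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE source (Student_ID INTEGER, Course_ID TEXT)")
con.executemany("INSERT INTO source VALUES (?, ?)",
                [(1, "C101"), (1, "C101"), (2, "C101")])

# Any key returned here has more than one source row and would make
# a MERGE on (Student_ID, Course_ID) ambiguous.
dupes = con.execute("""
    SELECT Student_ID, Course_ID, COUNT(*) AS n
    FROM source
    GROUP BY Student_ID, Course_ID
    HAVING COUNT(*) > 1
""").fetchall()
print(dupes)  # → [(1, 'C101', 2)]
```

An empty result set would confirm the pair is a safe merge key; any returned rows pinpoint the keys that need deduplicating upstream.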
