So, we received data from an outside source that needed to be pulled into a data lakehouse on Fabric.
The initial import of the data from the CSV file worked once the file was cleaned up.
The same process was used to clean an update file meant to be appended to the data set from the import.
When attempting to import the new file, it came back with: [DELTA_FAILED_TO_MERGE_FIELDS] Failed to merge fields 'PERMIT_NUMBER' and 'PERMIT_NUMBER' Error Code: InvalidTable
The field names are the same, and the table does exist, as it can be read both through the website and through a SQL connection to the lakehouse.
Please advise as to why your system is treating the same name as something other than the same name.
Hi @Alex_Mattics,
Thanks for reaching out to the Microsoft Fabric community forum.
It seems that while importing data from an external source into the Lakehouse, the initial import succeeded and the data was written to a Delta table in the Fabric Lakehouse. You applied the same process to a second CSV file meant to append to the existing table, but encountered this error: "[DELTA_FAILED_TO_MERGE_FIELDS] Failed to merge fields 'PERMIT_NUMBER' and 'PERMIT_NUMBER' Error Code: InvalidTable".
This usually happens because, even though the column names look identical (PERMIT_NUMBER and PERMIT_NUMBER), Delta Lake is detecting a conflict between the two fields caused by hidden metadata mismatches: the data types may differ, or one column may allow nulls while the other does not. Either difference can trigger a merge failure.
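One way to surface such a mismatch is to compare the incoming DataFrame's schema with the existing table's schema field by field. A minimal PySpark sketch; the file path and the "permits" table name are placeholders for illustration, not from the original post:

# Load the update file (path is a placeholder).
incoming_df = (spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("Files/permits_update.csv"))

# Read the existing Lakehouse table (name is a placeholder).
existing_df = spark.read.table("permits")

# Compare (data type, nullability) per column name and print any differences.
incoming = {f.name: (f.dataType.simpleString(), f.nullable) for f in incoming_df.schema.fields}
existing = {f.name: (f.dataType.simpleString(), f.nullable) for f in existing_df.schema.fields}
for name in sorted(set(incoming) | set(existing)):
    if incoming.get(name) != existing.get(name):
        print(name, "incoming:", incoming.get(name), "existing:", existing.get(name))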
As @nilendraFabric has already responded to your query, kindly go through his response and check if it solves your issue, then mark the helpful reply as the solution so that other community members can find it easily.
I would also like to take a moment to thank @nilendraFabric for actively participating in the forum and for the solutions you've been sharing. Your contributions make a real difference.
If I have misunderstood your needs or you still have problems, please feel free to let us know.
Best Regards,
Hammad.
Community Support Team
If this post helps, please mark it as a solution so that other members can find it more quickly.
Thank you.
Hi @Alex_Mattics ,
From what you described, it sounds like the import process is not recognizing existing records correctly, which usually points to one of the following:
* Missing or incorrect primary key mapping during import. If the system can't match incoming rows to existing ones based on a unique key, it won't update; it will either skip rows or try to insert duplicates (and fail silently if constraints are in place).
* Import mode behavior. Some systems treat "import" as insert-only unless explicitly told to do an upsert. If you're using a tool or script, check whether there's an "update if exists" or "merge" option.
* Triggers or constraints. As others mentioned, these can block updates silently or redirect logic. Try disabling them temporarily to isolate the issue.
If you're using SQL Server, I'd recommend switching to a MERGE statement (or equivalent upsert logic, such as INSERT ... ON DUPLICATE KEY UPDATE in MySQL) if your import tool supports it.
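Since this thread is about a Fabric Lakehouse, the same upsert idea maps to a Delta Lake merge. A minimal sketch, assuming PERMIT_NUMBER is the unique key, the table is named permits, and the update file is loaded as updates_df (all three are assumptions, not confirmed in the thread):

from delta.tables import DeltaTable

# Target table name and key column are assumptions for illustration.
target = DeltaTable.forName(spark, "permits")

(target.alias("t")
    .merge(updates_df.alias("s"), "t.PERMIT_NUMBER = s.PERMIT_NUMBER")
    .whenMatchedUpdateAll()     # overwrite rows whose key already exists
    .whenNotMatchedInsertAll()  # insert genuinely new rows
    .execute())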
Also, make sure your CSV headers exactly match the column names and that there are no hidden whitespace or encoding issues; those can silently break mapping.
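A quick way to rule out stray whitespace or a UTF-8 BOM in the headers is to normalize the column names right after reading the CSV, for example:

# Strip surrounding whitespace and any leading BOM from each header.
clean_names = [c.strip().lstrip("\ufeff") for c in df.columns]
df = df.toDF(*clean_names)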
If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
In short, you can check these two things:
* Metadata of the source and target columns (are the data types and nullability the same?)
* Constraints you specified on the target (does the source still comply with them?)
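For the metadata check, Spark SQL's DESCRIBE works against the Lakehouse table (the table name here is a placeholder):

# Show each column's name and data type for the target table.
spark.sql("DESCRIBE TABLE permits").show(truncate=False)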
Hi @Alex_Mattics,
As we haven't heard back from you, we're just following up on our previous message. I'd like to confirm whether you've successfully resolved this issue or need further help.
If yes, you are welcome to share your workaround and mark it as a solution so that other users can benefit as well. If you found a particular reply helpful, you can also mark it as the solution.
If you still have any questions or need more support, please feel free to let us know. We are more than happy to continue to help you.
Thank you for your patience; we look forward to hearing from you.
Hi @Alex_Mattics,
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution so that other community members can find it easily.
Thank you.
Hi @Alex_Mattics,
I just wanted to follow up on your thread. If the issue is resolved, it would be great if you could mark the solution so other community members facing similar issues can benefit too.
If not, don't hesitate to reach out; we're happy to keep working with you on this.
Hello @Alex_Mattics
Try this:
# Append with schema merging enabled so Delta reconciles compatible schema differences.
(df.write.format("delta")
    .option("mergeSchema", "true")
    .mode("append")
    .saveAsTable(tablename))
Check the columns as well:
print(df.columns)
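Note that mergeSchema reconciles added columns and compatible type upcasts, but it will not fix a genuine type conflict on an existing column. In that case, cast the incoming column to the target's type before appending; a sketch, assuming the existing table stores PERMIT_NUMBER as a string:

from pyspark.sql.functions import col

# Assumption: the existing table's PERMIT_NUMBER column is a string.
df = df.withColumn("PERMIT_NUMBER", col("PERMIT_NUMBER").cast("string"))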