MirjamPD
Regular Visitor

Sudden concurrency errors when running Fabric notebooks for ETL workloads

Hi everyone,

We’re suddenly experiencing concurrency-related errors when running our ETL pipelines with Microsoft Fabric notebooks. Nothing has changed in our setup or notebook logic, but starting September 15, 2025, we began getting errors such as:

 

"Error occurred. Error type: <class 'delta.exceptions.ConcurrentAppendException'> . Error message: [DELTA_CONCURRENT_APPEND] ConcurrentAppendException: Files were added to the root of the table by a concurrent update. Please try the operation again."

 

We used to be able to perform between 10 and 20 parallel writes to the same table without any issues. Now, the effective concurrency has dropped to just one.

What we have tried

  • VACUUM on the table

  • Truncating/emptying the table

  • Creating a brand-new table and writing there

Questions

  1. Have there been any recent changes to concurrency or session management limits in Fabric notebooks? Or has anything changed recently in Fabric regarding table/transaction lock behavior that would reduce the number of parallel writers to the same table?

  2. Is there a recommended way to manage or queue concurrent notebook executions?

  3. Should we report this as a support case, or is it a known issue under investigation?

 

Any insights from the Fabric team or other users would be greatly appreciated.

Thanks in advance! 🙂

 

Mirjam

7 REPLIES
v-pgoloju
Community Support

Hi @MirjamPD,

 

Just checking in: have you had a chance to open a support ticket, as suggested? If so, we'd love to hear the current status or any updates from that.

If the issue was resolved through the support ticket, it would be great if you could share the solution here as well. It could really help other community members find answers more quickly.

 

Warm regards,
Prasanna Kumar

v-pgoloju
Community Support

Hi @MirjamPD,

 

If the issue still persists, I’d recommend raising a support ticket with Microsoft. The support team can look into the backend and provide more in-depth assistance tailored to your environment.
https://learn.microsoft.com/en-us/power-bi/support/create-support-ticket

 

Thanks & regards,

Prasanna Kumar

v-pgoloju
Community Support

Hi @MirjamPD,

 

Thank you for reaching out to the Microsoft Fabric Forum Community, and special thanks to @tayloramy and @vestergaardj for their prompt and helpful responses.

 

Just following up to see if the responses provided by community members were helpful in addressing the issue. If the issue still persists, feel free to reach out for any further clarification or assistance.

 

Best regards,
Prasanna Kumar

 

Hi,

Thanks for checking in — the issue is still not resolved.


We continue to experience concurrency/locking errors when multiple Fabric notebooks write to the same Delta table.


None of the previous suggestions solved it.


Could someone from the Fabric engineering or support team please confirm if this is a known issue or if there have been recent changes to concurrency behavior in Fabric?

vestergaardj
Most Valuable Professional

Hi @tayloramy 

 

We have already implemented retries, even with jitter of up to 30 seconds, and we still face this.

We receive a lot of files and have no control over when they arrive - we load them as they arrive.

In a separate Fabric instance we even see this with files that are not writing to the same table 🤔🤯

 

Again, we used to be able to have 10-20 files running in parallel, but are now down to 1. (F64)

If you encounter this when files are writing to different tables, then something weird is going on.

 

Here's a maybe gross solution - are you able to add a source filename column (provided your source files each have a unique name), or a combination of filename + load datetime (if they do not have unique names), and then partition the table by this column? That will ensure that each file loads into its own parquet file - see the sketch below.

Not great for performance, but down the line you can make a copy of this table and optimize it before using it for anything. 
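
Something like this rough sketch, assuming the incoming file's DataFrame and name are available as source_df and file_name (both placeholders), and keeping in mind that Delta partitioning is fixed when the table is first created:

```python
from pyspark.sql import functions as F

# Tag every row with the file it came from (or filename + load datetime).
# source_df and file_name are hypothetical placeholders for the incoming file.
staged = source_df.withColumn("source_file", F.lit(file_name))

(staged.write
    .format("delta")
    .mode("append")
    .partitionBy("source_file")     # takes effect when the table is first created
    .saveAsTable("landing_table"))  # hypothetical table name
```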

 

tayloramy
Community Champion

Hi @MirjamPD,

 

I’ve seen this exact error when multiple Spark jobs try to write to the same Delta table (or the same partitions) at roughly the same time. Delta Lake uses optimistic concurrency: if another transaction adds files that overlap with what your job is reading or writing, Spark throws ConcurrentAppendException rather than silently merging the changes.

 

I don't think there's been a recent change here - I experienced this issue many months ago. In your notebooks you could try to catch the ConcurrentAppendException error, pause for 5 seconds, and try again - something like the sketch below. This isn't a great solution, but it's an easy one to implement.
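
A minimal sketch of that catch-and-retry pattern (append_with_retry, the attempt limit, and the jitter amount are placeholders I made up; the jitter is there so colliding writers don't retry in lockstep):

```python
import random
import time

from delta.exceptions import ConcurrentAppendException

def append_with_retry(df, table_name, max_attempts=5):
    """Append df to a Delta table, retrying when a concurrent writer wins the race."""
    for attempt in range(1, max_attempts + 1):
        try:
            df.write.format("delta").mode("append").saveAsTable(table_name)
            return
        except ConcurrentAppendException:
            if attempt == max_attempts:
                raise  # out of attempts - surface the error to the caller
            # pause ~5 seconds plus jitter before trying again
            time.sleep(5 + random.uniform(0, 2))
```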

 

A more robust solution would be for each of your concurrent notebooks to write to its own staging table, and then, after all notebooks are complete, have a final notebook merge all the data into one table - roughly like the sketch below.
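
For example (the staging table names and final_table are hypothetical; spark is the session object Fabric notebooks provide):

```python
from functools import reduce

# Hypothetical naming scheme: each notebook wrote to its own staging table.
staging_tables = ["etl_stage_1", "etl_stage_2", "etl_stage_3"]

frames = [spark.table(name) for name in staging_tables]
combined = reduce(lambda left, right: left.unionByName(right), frames)

# A single writer appends to the final table, so no concurrent-append conflicts.
combined.write.format("delta").mode("append").saveAsTable("final_table")
```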

 

If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.
Taylor Amy.
