I have update statements at the warehouse end which execute from an ADF pipeline. We receive snapshot isolation errors on big tables where we run update statements. I wanted to understand if there are better ways to handle these issues, as they are intermittent and do not happen when we run a retry job of the same procedure; but in the process it starts the whole execution again and again. How can we avoid this in Fabric Warehouse?
Hi @AnmolGan81 , Thank you for reaching out to the Microsoft Community Forum.
You're hitting snapshot isolation errors in Fabric Warehouse because concurrent transactions are trying to update the same rows or overlapping partitions. Fabric uses snapshot-based concurrency with no user-configurable isolation level, so when row versions conflict, one transaction fails. This is expected behavior under heavy update workloads.
Stop using wide UPDATE statements and switch to MERGE with a filter that limits updates to only changed rows. This cuts down lock duration and write contention, reducing the chance of version conflicts. Avoid retrying the full pipeline in ADF. Instead, isolate the update step in its own activity with ADF’s built-in retry. If needed, you can also handle retry in T-SQL using TRY...CATCH with a delay and specific handling for error 3960.
If you're updating large volumes and still hitting contention, split the updates into smaller batches using a key or timestamp. For even heavier loads, consider moving to Delta Lake in a Lakehouse, which offers ACID transactions with better concurrency support via Spark.
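A minimal sketch of both patterns described above, using a hypothetical dbo.DimCustomer target and stg.Customer staging table (the table and column names are illustrative, not from your framework):

```sql
-- MERGE that writes only genuinely changed rows: the extra predicate on the
-- WHEN MATCHED branch skips rows whose values are identical, shrinking the
-- write set and the window for version conflicts.
MERGE dbo.DimCustomer AS t
USING stg.Customer AS s
    ON t.CustomerKey = s.CustomerKey
WHEN MATCHED AND (t.Name <> s.Name OR t.City <> s.City) THEN
    UPDATE SET t.Name = s.Name, t.City = s.City
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerKey, Name, City) VALUES (s.CustomerKey, s.Name, s.City);

-- Batching the update by key range, so each statement touches a bounded slice.
DECLARE @MinKey BIGINT = (SELECT MIN(CustomerKey) FROM stg.Customer),
        @MaxKey BIGINT = (SELECT MAX(CustomerKey) FROM stg.Customer),
        @BatchSize BIGINT = 100000;
WHILE @MinKey <= @MaxKey
BEGIN
    UPDATE t
    SET    t.City = s.City
    FROM   dbo.DimCustomer AS t
    JOIN   stg.Customer AS s ON s.CustomerKey = t.CustomerKey
    WHERE  s.CustomerKey >= @MinKey
      AND  s.CustomerKey <  @MinKey + @BatchSize
      AND  t.City <> s.City;  -- still only changed rows
    SET @MinKey += @BatchSize;
END
```

The batch size is a tuning knob: smaller batches mean shorter transactions and less contention, at the cost of more round trips.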
Transactions in Warehouse Tables - Microsoft Fabric | Microsoft Learn
Performance Guidelines - Microsoft Fabric | Microsoft Learn
MERGE (Transact-SQL) - SQL Server | Microsoft Learn
We cannot stop using updates, as they are part of our existing framework that contains SCD1 & SCD2 logic, nor can we run them separately inside ADF. We have seen this behaviour happening for tables with huge datasets, and we don't have the same procedures being executed from anywhere else while these pipelines are running. Is there any way to check what might be blocking?
Changing the UPDATE statements to MERGE is a huge undertaking, but are you sure this would resolve the issue we are facing?
With respect to TRY...CATCH, can you point me in the right direction with some blogs or examples that can help us implement this and test the stored procedure further?
Hi @AnmolGan81 , Thank you for reaching out to the Microsoft Community Forum.
Even if no other process is running the same procedure, snapshot isolation errors in Fabric Warehouse can still occur when updates touch large volumes of data. These aren't blocking issues; they're caused by Fabric's internal row versioning. Under snapshot isolation (which is always enforced in Fabric), write-write conflicts can arise if the version store is overloaded or when overlapping updates happen, even within a single session.
Fabric doesn't expose lock-level views, but you can monitor execution behavior using sys.dm_pdw_exec_requests and sys.dm_pdw_request_steps to check for slow or stuck queries and get insight into execution stages.
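For example, a quick check along those lines, assuming the Synapse-style DMVs named above are exposed in your environment (the request id is hypothetical):

```sql
-- Requests that are still running or queued, longest-running first
SELECT request_id, status, submit_time, total_elapsed_time, command
FROM   sys.dm_pdw_exec_requests
WHERE  status NOT IN ('Completed', 'Failed', 'Cancelled')
ORDER BY total_elapsed_time DESC;

-- Per-step timing for one suspicious request
SELECT step_index, operation_type, status, total_elapsed_time
FROM   sys.dm_pdw_request_steps
WHERE  request_id = 'QID12345'   -- replace with a request_id from the query above
ORDER BY step_index;
```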
Switching from UPDATE to MERGE only helps if it reduces the number of rows being written, for example, by excluding unchanged rows. If your current UPDATE already filters effectively, switching syntax won’t eliminate the issue.
The most practical mitigation is to add retry logic inside the stored procedure, targeting transient error 3960. Here’s a clean example:
DECLARE @retryCount INT = 0, @maxRetries INT = 3;
WHILE @retryCount < @maxRetries
BEGIN
    BEGIN TRY
        EXEC YourSCDProcedure;
        BREAK; -- success: exit the retry loop
    END TRY
    BEGIN CATCH
        -- 3960 = snapshot isolation update conflict; retry after a short delay
        IF ERROR_NUMBER() = 3960
        BEGIN
            SET @retryCount += 1;
            IF @retryCount = @maxRetries
                THROW; -- retries exhausted: re-raise instead of failing silently
            WAITFOR DELAY '00:00:05';
        END
        ELSE
        BEGIN
            THROW; -- non-transient error: fail immediately
        END
    END CATCH
END
Please refer to the below documentation:
TRY...CATCH (Transact-SQL) - SQL Server | Microsoft Learn
Transactions in Warehouse Tables - Microsoft Fabric | Microsoft Learn
Performance Guidelines - Microsoft Fabric | Microsoft Learn
Read Transactions and Isolation Levels in Fabric W... - Microsoft Fabric Community
Can we add this retry logic inside the ADF Stored Procedure execution step, in order to avoid manually editing all the stored procedures and putting the TRY...CATCH statements into them?
Hi @AnmolGan81 , Thank you for reaching out to the Microsoft Community Forum.
Yes, you can handle snapshot isolation errors in Fabric Warehouse by using ADF’s built-in retry on the Stored Procedure activity, no need to modify your SQL procedures. Just go to the activity’s Settings tab in ADF and set a retry count and interval. This will automatically re-run the procedure if it fails due to transient issues like error 3960, without restarting the whole pipeline.
Pipelines and activities - Azure Data Factory & Azure Synapse | Microsoft Learn
Built-in policy definitions - Azure Data Factory | Microsoft Learn
The problem with adding retry in ADF is that it re-executes the whole stored procedure again, so more time is wasted. Not sure if we have any other options; we will keep retry as an option for now.
Hi @AnmolGan81 , thanks for the update. I don't see any other options either. Please re-try and share your thoughts.
We found the issue which was causing this. We have designed a method to populate an identity column in the warehouse, since identity is not supported in Fabric yet. That method is called each time data is inserted, and it stores the max identity of a particular table. We have multiple ADF pipelines running in parallel that all write the max identity to this one table, so it is being updated by different pipelines at the same time, which throws the snapshot isolation error. Can we add some kind of NOLOCK statement or something so that it does not lock the table while updating it, and different pipelines can update it for different tables at the same time? Is that possible?
Hi @AnmolGan81 , Thank you for reaching out to the Microsoft Community Forum.
NOLOCK won’t help here. It only affects reads, while your issue comes from multiple pipelines writing to the same max id row at the same time. Under Fabric’s snapshot isolation, those write/write conflicts will always throw errors. NOLOCK cannot prevent that.
The fix is to change how IDs are assigned. If Fabric supports it in your environment, the cleanest option is to use a SEQUENCE, which hands out unique numbers safely and without contention. If that isn’t available, use a small allocation table where each pipeline requests a block of IDs in one atomic update. That way, pipelines never clash on the same row.
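A sketch of the block-allocation idea, assuming a hypothetical dbo.IdAllocator table with one row per target table, and assuming variable assignment inside UPDATE is supported in your environment (availability of SEQUENCE and of this syntax in Fabric Warehouse should be verified first):

```sql
-- dbo.IdAllocator holds (TableName, NextId). Each pipeline reserves a block
-- of @BlockSize ids in a single atomic UPDATE, so concurrent writers never
-- receive overlapping ranges.
DECLARE @BlockSize BIGINT = 1000, @FirstId BIGINT;

UPDATE dbo.IdAllocator
SET    @FirstId = NextId,            -- old value = start of this run's block
       NextId   = NextId + @BlockSize
WHERE  TableName = 'DimCustomer';    -- hypothetical target table

-- This run may now assign ids @FirstId .. @FirstId + @BlockSize - 1
-- without touching the allocator table again.
```

Because each pipeline touches the allocator row only once per run, the window for a write-write conflict shrinks from one update per inserted row to one per pipeline execution.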
Thanks for the response. Can you elaborate more on this? We have 20-30 ADF pipelines (not Fabric pipelines) that run in sequence only, but sometimes they overlap and cause the block on the single table we have.
Let me check that one and get back.
Hi @AnmolGan81 , Thanks for the update. Please do get back and share your thoughts.