Hello everybody,
We are transitioning our data pipelines from Synapse to Fabric and have encountered an error that wasn't present in Synapse.
The pipeline we implemented in Fabric uses a copy activity with ADLS Gen2 as the source and an on-prem Oracle database as the destination. If the source data contains more rows than the value specified in the "Write batch size" option, the copy activity fails with the following error:
Failure happened on 'destination' side. ErrorCode=OracleTableNotExistError,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The specified table <OUR_TABLE_NAME> doesn't exist.,Source=Microsoft.DataTransfer.Connectors.OracleV2Core,'
However, before raising the error, the copy activity does write one batch of rows (i.e., the specified batch size) to the Oracle table.
Since we used the same table as the destination in our Synapse pipeline, we can assume that the error is not on the database side.
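For reference, here is a minimal sketch (using the python-oracledb package, with placeholder credentials, DSN, table name, and column names) of how the table's existence and batched inserts can be checked directly against Oracle, outside of Fabric:

```python
# Minimal sketch: verify the destination table is visible and accepts batched
# inserts outside of Fabric. Connection details, table name, and column names
# are placeholders -- adjust to your environment.
import oracledb

conn = oracledb.connect(user="app_user", password="***", dsn="db-host:1521/ORCLPDB1")

with conn.cursor() as cur:
    # Confirm the table is visible to this user.
    cur.execute(
        "SELECT COUNT(*) FROM all_tables WHERE table_name = :t",
        t="OUR_TABLE_NAME",
    )
    print("table visible:", cur.fetchone()[0] == 1)

    # Insert 100 test rows in batches of 15, mirroring the failing pipeline run.
    rows = [(i, f"row {i}") for i in range(100)]
    batch_size = 15
    for start in range(0, len(rows), batch_size):
        cur.executemany(
            "INSERT INTO our_table_name (id, payload) VALUES (:1, :2)",
            rows[start:start + batch_size],
        )
    conn.commit()

conn.close()
```

If batched inserts succeed this way, that further supports the issue being in the connector rather than in the database.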
Here are some screenshots of the test data used to reproduce this behavior. The test dataset has 100 rows:
1) Working pipeline when the batch size is larger than the number of rows in the source data:
2) Error when the batch size is less than the number of rows in the source dataset:
Attempted solutions include:
All of these attempted solutions gave the same error.
As a temporary workaround, we set the batch size to the maximum. However, the batch size appears to be capped at around one million rows, and datasets larger than this threshold still fail with the same error.
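One possible direction (just a rough sketch, assuming CSV source files and pandas; paths and chunk size are placeholders) would be to split large source files into pieces below the ~1M-row cap, so that each copied file fits in a single batch, and drive the copy with a ForEach over the resulting files:

```python
# Rough sketch: split a large source file into chunks that stay below the
# observed ~1M-row batch cap. Paths and CHUNK_ROWS are placeholders.
import pandas as pd

CHUNK_ROWS = 900_000  # stay safely under the observed cap

part = 0
for chunk in pd.read_csv("source/large_dataset.csv", chunksize=CHUNK_ROWS):
    chunk.to_csv(f"source/split/large_dataset_part{part:03d}.csv", index=False)
    part += 1
```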
Does anyone have insights on why this error occurs or how to fix it? Any help would be greatly appreciated!
Thank you!
Have you tried setting a value for the 'write batch timeout' on your destination config?
I don't see a default value listed anywhere, so I'm not sure how it behaves once it has written one batch and is waiting to write the next. Maybe it isn't waiting at all and closes the connection before the entire transfer is complete?
Try setting it to something like 00:01:00 for small batch sizes, or really however long you think it should/could reasonably take to write the number of rows in your batch.
Hello @IntegrateGuru, thank you for your idea. I tried it again using different values for the write batch timeout on the destination: 1 second, 10 seconds, 30 seconds, 1 minute, and 10 minutes, with a batch size of 15 rows. However, for all of these runs, the original error occurred a few seconds after the pipeline started.
Hi @AwadFabric,
I'd like to suggest you take a look at the following document about Data Factory feature limitations to see if any of them apply to your scenario:
Data Factory limitations overview - Microsoft Fabric | Microsoft Learn
Regards,
Xiaoxin Sheng
Hello @Anonymous, thank you for your suggestion. Unfortunately, I cannot see a solution to the problem in there.
Hi @AwadFabric,
Perhaps you can take a look at the following link, which discusses a similar issue, to see if it helps with your scenario:
Copy activity successfully loads more rows than Write Batch Size in Azure pipeline - Microsoft Q&A
Regards,
Xiaoxin Sheng