What are the best practices for handling large file transfers in Azure Data Factory, and how can I optimize performance for big data workflows?
If you are referring to Fabric Data Factory:
Try using the Data Pipeline Copy activity or Dataflow Gen2 Fast Copy.
(If you are referring to Azure Data Factory: that's another forum.)
You could potentially look into Notebooks as well (see the sketch below).
What kind of data source are you going to read from?
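For the notebook option, here is a minimal PySpark sketch, assuming a Fabric notebook attached to a Lakehouse and large CSV files landing in the Files area; the path and table name are placeholders, not anything confirmed in this thread:

```python
# Minimal sketch: read large CSV files from the Lakehouse Files area and land them as a Delta table.
# Runs in a Fabric notebook, where a SparkSession is already available as `spark`.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source_path = "Files/landing/large_transfer/*.csv"   # hypothetical Lakehouse path
target_table = "bronze_large_transfer"               # hypothetical table name

df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "false")   # skip schema inference to avoid an extra pass over big files
    .csv(source_path)
)

df.write.format("delta").mode("overwrite").saveAsTable(target_table)
```

The Data Pipeline Copy activity and Dataflow Gen2 Fast Copy themselves need no code; they are configured in the Fabric UI.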
Hi @rahul_rajesh ,
If it's in Fabric Data Factory, you can test the approaches frithjof_v suggested.
For your second question, about optimizing performance for big data workflows in Azure Data Factory, I think you can check out this official document below:
Best Regards
Yilong Zhou
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
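On the performance part of the question: if the notebook route mentioned above is an option, one common optimization for big data workflows is to parallelize the source read and control output partitioning. This is a hedged sketch, assuming a Fabric notebook and a SQL source reachable over JDBC; every connection detail, table, and column name below is a placeholder:

```python
# Hedged sketch: parallel JDBC read plus controlled output partitioning in a Fabric notebook.
# Do not hard-code credentials in a real notebook; pull them from a secret store instead.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = (
    spark.read
    .format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net;database=<db>")  # placeholder
    .option("dbtable", "dbo.big_source_table")   # hypothetical source table
    .option("user", "<user>")
    .option("password", "<password>")
    .option("partitionColumn", "id")             # assumed numeric, evenly distributed key
    .option("lowerBound", "1")
    .option("upperBound", "100000000")
    .option("numPartitions", "16")               # 16 parallel read slices
    .load()
)

# Repartition before writing so the output is neither one giant file nor thousands of tiny ones.
df.repartition(64).write.format("delta").mode("overwrite").saveAsTable("bronze_big_source")
```

The knobs that matter here are partitionColumn/lowerBound/upperBound/numPartitions (how many parallel slices read the source) and the repartition before the write (how many output files are produced); suitable values depend on the source table and the capacity you run on.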