Hi @nikhilank
This forum is focused on Fabric-related content. Since your question is about ADF, you may get more specialized help at the following link:
Azure Data Factory | Microsoft Community Hub
That said, here are a few suggestions you can consider:
Create multiple pipelines that can run in parallel instead of processing folders sequentially. This can significantly reduce the overall data ingestion time.
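As a rough sketch of that idea, you could trigger one run per folder from the Azure Data Factory Python SDK so the runs execute concurrently. The resource group, factory, pipeline name, and the folderPath parameter below are placeholders, not taken from your setup:

```python
# Sketch: start one pipeline run per historical folder so the runs execute in parallel.
# All names (subscription, resource group, factory, pipeline, parameter) are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

folders = ["raw/2023/01", "raw/2023/02", "raw/2023/03"]  # example folder list

for folder in folders:
    run = adf_client.pipelines.create_run(
        resource_group_name="<resource-group>",
        factory_name="<factory-name>",
        pipeline_name="IngestFolderToSnowflake",  # hypothetical pipeline name
        parameters={"folderPath": folder},        # hypothetical pipeline parameter
    )
    print(f"Started run {run.run_id} for {folder}")
```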
Dynamically build folder paths using parameters in the ADF pipeline. This allows you to loop through the date range without having to explicitly list each folder. Consider using the ForEach activity to process each folder dynamically, rather than using the Lookup activity to retrieve all folders.
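To illustrate the dynamic-path idea, the small sketch below shows the equivalent logic of generating one folder path per day in a date range, which you would otherwise express with ADF dynamic content (for example, formatDateTime and addDays) inside the ForEach. The raw/yyyy/MM/dd layout is only an assumed example:

```python
# Sketch: build one folder path per day in a date range instead of listing folders by hand.
# The "raw/yyyy/MM/dd" layout is an assumed example; adjust it to your container structure.
from datetime import date, timedelta

def folder_paths(start: date, end: date, prefix: str = "raw") -> list[str]:
    days = (end - start).days + 1
    return [f"{prefix}/{(start + timedelta(days=i)):%Y/%m/%d}" for i in range(days)]

paths = folder_paths(date(2023, 1, 1), date(2023, 1, 31))
print(paths[0], "...", paths[-1])  # raw/2023/01/01 ... raw/2023/01/31
```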
Optimize settings in Data Factory, such as increasing parallelism and adjusting batch sizes for data movement activities, and take advantage of Snowflake's bulk loading capabilities to ingest data more efficiently.
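For context on the bulk-loading point, loading Snowflake efficiently comes down to Snowflake's COPY INTO command over staged files. Purely as an illustration (not the ADF connector itself), this sketch runs a COPY INTO with the snowflake-connector-python package; the stage, table, and file-format details are placeholders:

```python
# Sketch: bulk load staged files into Snowflake with COPY INTO.
# The connection details, stage, and table names below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)

try:
    with conn.cursor() as cur:
        cur.execute("""
            COPY INTO HISTORICAL_SALES            -- placeholder target table
            FROM @AZURE_BLOB_STAGE/raw/2023/01/   -- placeholder external stage + folder
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        """)
        print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```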
By leveraging parallel processing, dynamic content, and optimized data movement strategies, you can significantly reduce the time it takes to ingest data from multiple historical folders to Snowflake.
If you have any questions about Fabric Data Factory, feel free to keep using this forum.
Regards,
Nono Chen
If this post helps, please consider accepting it as the solution to help other members find it more quickly.