nikhilank
New Member

Full Load of Common Data Model Folders (model.json) using Azure Data Factory

  • We recently implemented loading data from D365 Finance and Operations using Azure Synapse Link --> Common Data Model (model.json).
  • The data is loaded every hour into a storage account; Data Factory connects to this storage account, reads the data, and ingests it into Snowflake.
  • Question: What is the fastest way to read data from all the historical folders and ingest it into Snowflake using Data Factory, considering that 24 folders are created every day (the Enable Incremental Update Folder Structure option for Synapse Link is set to 60 minutes)?
  • Note: I have already implemented getting all the folders from the storage account using a Lookup activity. This is very time-consuming because 24 folders are created every day. If I need to do a full load of a table from the very beginning after, say, 30 days, I will have to loop through 30 * 24 = 720 folders 😮

    I appreciate your help! Thank you.
v-nuoc-msft
Community Support

Hi @nikhilank 

 

This forum is designed to discuss Fabric-related content. If you have a question about ADF, you can go to the following link for more specialized help:

 

Azure Data Factory | Microsoft Community Hub

 

That said, here are some suggestions you can consider:

 

Create multiple pipelines that can run in parallel instead of processing folders sequentially. This can significantly reduce the overall data ingestion time.
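For the historical backfill specifically, one way to fan out the work is to trigger one pipeline run per day of history from a small script, rather than looping inside a single pipeline. Here is a minimal sketch using the azure-mgmt-datafactory SDK; it assumes a pipeline named ingest_cdm_day that accepts a folderDate parameter, and all resource names below are placeholders you would replace with your own:

```python
# Sketch: fan out one ADF pipeline run per day of history.
# Assumes a pipeline "ingest_cdm_day" with a "folderDate" parameter;
# subscription, resource group, and factory names are placeholders.
from datetime import date, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

start, days = date(2024, 1, 1), 30
for i in range(days):
    day = start + timedelta(days=i)
    # create_run only queues the run, so the 30 day-level runs execute
    # concurrently on the ADF side (subject to your factory's concurrency limits).
    client.pipelines.create_run(
        resource_group_name="<resource-group>",
        factory_name="<factory-name>",
        pipeline_name="ingest_cdm_day",
        parameters={"folderDate": day.isoformat()},
    )
```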

 

Dynamically build folder paths using parameters in the ADF pipeline. This allows you to loop through the date range without having to explicitly list each folder. Consider using the ForEach activity to process each folder dynamically, rather than using the Lookup activity to retrieve all folders.
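Because the incremental folders are created on a fixed hourly schedule, you can compute the folder list for a date range instead of enumerating blobs with a Lookup, and hand that list to the ForEach as a parameter. A rough sketch in Python; the timestamp format in strftime is an assumption, so match it to the folder names Synapse Link actually writes in your container:

```python
# Sketch: compute hourly folder names for a date range locally instead of
# listing them with a Lookup activity. The folder-name format below is an
# assumption -- verify it against the actual Synapse Link folder names.
from datetime import datetime, timedelta

def hourly_folders(start: datetime, end: datetime) -> list[str]:
    """Return one folder name per hour between start (inclusive) and end (exclusive)."""
    folders = []
    current = start
    while current < end:
        folders.append(current.strftime("%Y-%m-%dT%H.00.00Z"))
        current += timedelta(hours=1)
    return folders

# 30 days of hourly folders = 720 items to pass to a ForEach activity
items = hourly_folders(datetime(2024, 1, 1), datetime(2024, 1, 31))
print(len(items))  # 720
```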

 

Optimize settings in Data Factory, such as increasing parallelism and adjusting batch sizes for data movement activities. Take advantage of Snowflake's bulk loading capabilities to ingest data more efficiently.
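On the Snowflake side, a single COPY INTO over a folder prefix on an external stage loads many files in one bulk operation, which is usually much faster than per-folder copies. A sketch using the snowflake-connector-python package; the stage name cdm_stage, the table, the prefix, and all connection details are placeholders, and the file format should match what Synapse Link exports:

```python
# Sketch: bulk-load a whole folder prefix with one COPY INTO, assuming an
# external stage "cdm_stage" already points at the storage account container.
# Table, stage, prefix, and connection details are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="<database>", schema="<schema>",
)
try:
    conn.cursor().execute("""
        COPY INTO my_table
        FROM @cdm_stage/2024-01/
        FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"')
        PATTERN = '.*[.]csv'
    """)
finally:
    conn.close()
```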

 

By leveraging parallel processing, dynamic content, and optimized data movement strategies, you can significantly reduce the time it takes to ingest data from multiple historical folders to Snowflake.

 

If you have any questions about Data Factory in Fabric, you are welcome to continue using this forum.

 

Regards,

Nono Chen

If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
