hallmarke14
Frequent Visitor

Microsoft 365 Business Central Data Ingestion into Fabric - Incremental Refresh

We use Microsoft 365 Business Central and are looking to start using Fabric.  I am trying to find a way to use dynamic filtering during data ingestion.  I have tried the Incremental Refresh feature in Dataflow Gen2 (DFG2), but it is applied after the data has already been ingested.  I have created parameters in DFG2 and wired them into the Pipeline, but I keep getting an error stating that "dynamic data sources not supported."

 

Is there something I am not aware of that would allow us to do an initial load and then incrementally pull smaller sets of data, so it doesn't take forever to load each time?  Is there an alternative solution in Notebooks that might be better?  We are open to using a Lakehouse or Warehouse.  Thanks!

1 ACCEPTED SOLUTION
v-prasare
Community Support

Hi @hallmarke14,

 

To efficiently load data from Microsoft 365 Business Central into Microsoft Fabric and avoid repeated full data loads, the most reliable approach is to use Fabric Notebooks instead of Dataflow Gen2. Dataflow Gen2 applies filters only after loading the full dataset, which can be inefficient for large tables. Fabric Notebooks provide more control, allowing you to filter data before it is loaded, which is ideal for scenarios where you need an initial full load followed by incremental loads.

Start with a one-time full data load using either a Dataflow or a Notebook, depending on what's easier for your team. After the full load, store the most recent value of a column such as ModifiedDateTime in your Lakehouse, which will act as a reference point for future loads. This value will be used as a filter when requesting new or changed records from Business Central.
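As a sketch of that watermark-filtered request: Business Central exposes its data through an OData v4 endpoint, and its tables carry a systemModifiedAt audit field that can be filtered server-side. The tenant, environment, company, entity set, and token below are placeholders for illustration, not values from this thread; adapt them to your own setup.

```python
# Sketch: pull only the rows changed since the last load from the
# Business Central OData v4 API. All URL parts (tenant, environment,
# company, entity set) and the token are placeholders.

def build_incremental_url(base_url: str, last_watermark: str) -> str:
    """OData $filter selecting rows modified after the stored watermark."""
    return f"{base_url}?$filter=systemModifiedAt gt {last_watermark}"

def fetch_changed_rows(base_url: str, last_watermark: str, access_token: str) -> list:
    """Follow @odata.nextLink pages and collect every changed record."""
    import requests  # available in Fabric notebooks

    url = build_incremental_url(base_url, last_watermark)
    headers = {"Authorization": f"Bearer {access_token}"}
    rows = []
    while url:
        resp = requests.get(url, headers=headers, timeout=60)
        resp.raise_for_status()
        payload = resp.json()
        rows.extend(payload.get("value", []))
        url = payload.get("@odata.nextLink")  # None when no more pages
    return rows

# Example endpoint shape (hypothetical tenant/environment/company):
BASE_URL = (
    "https://api.businesscentral.dynamics.com/v2.0/"
    "<tenant-id>/Production/ODataV4/Company('CRONUS')/SalesInvoices"
)
```

Because the filter is applied in the `$filter` query option, Business Central returns only the changed rows, rather than the full table being pulled and filtered afterwards as in Dataflow Gen2.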

Once the filtered data is fetched, convert it to a Spark DataFrame and append it to your existing Lakehouse table. Schedule this Notebook using a Fabric pipeline to run at regular intervals, or configure it to run based on an event or file drop if needed. This setup minimizes load times, ensures data freshness, and avoids issues with unsupported dynamic data sources in Dataflow Gen2.
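The append-and-advance step above can be sketched as notebook code. This assumes the fetched records arrive as a list of dicts, a Lakehouse table named sales_invoices (a placeholder name), and the `spark` session that Fabric notebooks provide; the watermark helper is plain Python.

```python
# Sketch: append the fetched records to a Lakehouse Delta table and
# compute the new high-water mark for the next run. The table name and
# the systemModifiedAt field are placeholders from this example.

def latest_watermark(rows, field="systemModifiedAt"):
    """Highest modified timestamp in this batch (None if batch is empty)."""
    return max(r[field] for r in rows) if rows else None

def append_incremental(spark, rows, table="sales_invoices"):
    """Append changed rows to the Lakehouse table; return the new watermark."""
    if not rows:
        return None  # nothing changed since the last run
    df = spark.createDataFrame(rows)
    df.write.mode("append").saveAsTable(table)  # Delta append in Fabric
    return latest_watermark(rows)

# In a Fabric notebook, `spark` is pre-created, so a run looks like:
#   new_mark = append_incremental(spark, fetched_rows)
#   if new_mark: persist it (e.g. to a small watermarks table or file)
```

Persisting the returned watermark after each successful append is what makes the next scheduled run incremental rather than a full reload.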

Thanks,

Prashanth Are

MS Fabric community support


2 REPLIES
Thank you, Prashanth, for confirming that Notebooks are the best approach.  I have been working on developing a Lakehouse and have already begun the data transformations in Notebooks using Spark SQL.  I will begin working on data ingestion using PySpark, or whichever language works best.
