
hallmarke14
Frequent Visitor

Microsoft 365 Business Central Data Ingestion into Fabric - Incremental Refresh

We use Microsoft 365 Business Central and are looking to start using Fabric. I am trying to find a way to use dynamic filtering during data ingestion. I have tried the Incremental Refresh feature in Dataflow Gen2 (DFG2), but that filter is applied after the data has already been ingested. I have created parameters in DFG2 and referenced them in the pipeline, but I keep getting an error stating that "dynamic data sources not supported."

 

Is there something I am not aware of that would allow us to do an initial full load and then incrementally pull smaller sets of data, so it doesn't take forever to load each time? Is there an alternative solution in Notebooks that might be better? We are open to using a Lakehouse or Warehouse. Thanks!

1 ACCEPTED SOLUTION
v-prasare
Community Support

Hi @hallmarke14,

 

To efficiently load data from Microsoft 365 Business Central into Microsoft Fabric and avoid repeated full data loads, the most reliable approach is to use Fabric Notebooks instead of Dataflow Gen2. Dataflow Gen2 applies filters only after loading the full dataset, which can be inefficient for large tables. Fabric Notebooks provide more control, allowing you to filter data before it is loaded, which is ideal for scenarios where you need an initial full load followed by incremental loads.

Start with a one-time full data load using either a Dataflow or a Notebook, depending on what's easier for your team. After the full load, store the most recent value of a column like lastModifiedDateTime in your Lakehouse, which will act as a reference point for future loads. This value will be used as a filter when requesting new or changed records from Business Central.
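As a minimal sketch of the watermark filter: Business Central's standard API (v2.0) exposes entities over OData, and most of them carry a lastModifiedDateTime property that an OData $filter can compare against. The base URL and where you persist the watermark are placeholders to adapt to your tenant and Lakehouse layout.

```python
# Sketch: build the incremental OData query for Business Central.
# base_url is a placeholder for your entity endpoint, e.g.
# https://api.businesscentral.dynamics.com/v2.0/<env>/api/v2.0/companies(<id>)/customers
from typing import Optional

def build_delta_url(base_url: str, watermark: Optional[str]) -> str:
    """Return the entity URL, filtered to rows changed after the watermark.

    A None watermark means "initial full load", so no filter is applied.
    """
    if watermark is None:
        return base_url
    return f"{base_url}?$filter=lastModifiedDateTime gt {watermark}"
```

On each scheduled run you read the stored watermark from the Lakehouse, build the URL with it, and only the delta comes back from Business Central.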

Once the filtered data is fetched, convert it to a Spark DataFrame and append it to your existing Lakehouse table. Schedule this Notebook using a Fabric pipeline to run at regular intervals, or configure it to run based on an event or file drop if needed. This setup minimizes load times, ensures data freshness, and avoids issues with unsupported dynamic data sources in Dataflow Gen2.
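The fetch-and-append step above might look like the sketch below. fetch_page is injected so the paging logic can be exercised without a live call; in a notebook it would be something like `lambda u: requests.get(u, headers=auth).json()`. Table and column names here are assumptions, not Business Central specifics.

```python
# Sketch: follow OData pagination, compute the next watermark, and append
# the rows to a Lakehouse table from a Fabric notebook.
from typing import Callable, Dict, Iterator, List

def iter_records(url: str, fetch_page: Callable[[str], Dict]) -> Iterator[Dict]:
    """Yield every record, following @odata.nextLink until the last page."""
    while url:
        payload = fetch_page(url)
        yield from payload.get("value", [])
        url = payload.get("@odata.nextLink")

def new_watermark(records: List[Dict]) -> str:
    """Highest lastModifiedDateTime seen: the next run's filter value."""
    return max(r["lastModifiedDateTime"] for r in records)

def append_to_lakehouse(records: List[Dict], table: str) -> None:
    """Inside a Fabric notebook: build a Spark DataFrame and append it."""
    from pyspark.sql import SparkSession  # pre-installed in Fabric notebooks
    spark = SparkSession.builder.getOrCreate()
    spark.createDataFrame(records).write.mode("append").saveAsTable(table)
```

If the source can re-send updated rows, a Delta MERGE keyed on the record id is safer than a plain append, but the append shown matches the flow described above.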


Thanks,

Prashanth Are

MS Fabric community support



Thank you, Prashanth, for confirming that Notebooks is the best approach.  I have been working on developing a Lakehouse and already begun the Data Transformations in Notebooks using Spark SQL.  I will begin working on Data Ingestion using PySpark, or whichever language works best.
