Hello All,
I am currently integrating data from D365 FO into Fabric using Pipelines, specifically connecting data entities. However, I have got stuck on a few points and would greatly appreciate any guidance in resolving them.
Below are the steps I followed:
1. Created a pipeline in my OneLake.
2. Selected the source as Dynamics AX.
3. Provided the D365 URL and connection details.
4. Selected the Data Entity, mapped the fields, and started pushing data.
Challenges:
I pushed data for a particular month, say June 2025. I then extended the Data Entity in D365 FO and added a few fields. Now, when I try to map the new fields in the pipeline's copy activity, they are not available for mapping. What steps should I follow to get the new fields mapped?
I also want to push historical data, around 4 years' worth. How can I do this? I tried pushing the full 4 years, but the pipeline either failed or ran for more than 3 days without completing.
Thanks,
Boopathy
Hi @MKBoopathy ,
We wanted to kindly follow up regarding your query. If you need any further assistance, please reach out.
Thank you.
Hi @MKBoopathy ,
Just checking in to see if your query is resolved and whether any of the responses were helpful.
Otherwise, feel free to reach out for further assistance.
Thank you.
Hi @MKBoopathy ,
Thanks for your question.
As suggested by @jennratten , using the Link to Fabric feature is a more efficient option since it creates tables in the lakehouse without data duplication. This is particularly useful where near real-time data access is needed.
For more details, please follow this link: Link your Dataverse environment to Microsoft Fabric and unlock deep insights - Power Apps | Microsof...
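If it helps to sanity-check the linked entities once they show up, here is a minimal PySpark sketch for a Fabric notebook attached to that lakehouse. The table and column names below are only placeholders, not the actual names Link to Fabric will create in your environment.

```python
# Minimal sketch for a Fabric notebook attached to the lakehouse populated by
# Link to Fabric. "spark" is the session Fabric notebooks provide by default.
# The table and date column names are placeholders -- adjust to your entity.
from pyspark.sql import functions as F

entity_table = "custtranslistentity"        # placeholder: your linked table name

df = spark.read.table(entity_table)         # virtualized table, nothing is copied

df.printSchema()                            # check the expected fields are exposed
print("Rows visible through the link:", df.count())

# Example: compare a single month against the source system
june = df.filter(F.col("TransDate").between("2025-06-01", "2025-06-30"))
print("June 2025 rows:", june.count())
```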
However, if you still prefer to use the copy activity to ingest data into Fabric and you're not seeing the newly added fields from your extended Data Entity, try refreshing the metadata by removing and re-adding the source in your pipeline. In some cases, it might also be necessary to regenerate the entity mappings in D365 FO so that the schema changes are reflected.
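Before re-mapping, one way to confirm that the extended entity actually exposes the new fields is to query the D365 FO OData feed directly and inspect the field names it returns. A rough sketch is below; the environment URL, entity collection name, and token are placeholders for whatever your app registration and environment use.

```python
# Rough sketch: pull one record from the D365 FO OData feed and list the field
# names it returns. Environment URL, entity collection name, and bearer token
# are placeholders -- substitute values from your own environment / app registration.
import requests

ENV_URL = "https://yourenv.operations.dynamics.com"   # placeholder F&O environment URL
ENTITY = "CustomersV3"                                # placeholder public collection name
TOKEN = "<azure-ad-bearer-token>"                     # acquired via your app registration

resp = requests.get(
    f"{ENV_URL}/data/{ENTITY}",
    params={"$top": "1"},
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=60,
)
resp.raise_for_status()

record = resp.json()["value"][0]
print(sorted(record.keys()))   # newly added fields should appear here once the entity change is deployed
```

If the fields do not show up here, the issue is on the D365 FO side (entity not rebuilt or synced yet) rather than in the pipeline mapping.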
For loading large historical data, consider breaking the data load into smaller ranges, such as by year or quarter, instead of a full 4-year dataset in one go. This can help avoid pipeline timeouts and improve reliability.
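As a rough illustration of that approach, the sketch below just generates quarterly windows covering the four-year span, which you could then feed into a parameterized copy activity (for example as start/end filter values). The date range is an assumption based on the scenario described.

```python
# Rough sketch: build half-open quarterly (start, end) windows covering ~4 years,
# so each window can drive its own smaller pipeline run instead of one
# monolithic 4-year copy. The date range below is an assumption.
from datetime import date

def quarterly_windows(start: date, end: date):
    """Yield half-open (window_start, window_end) pairs, one per calendar quarter."""
    current = date(start.year, ((start.month - 1) // 3) * 3 + 1, 1)
    while current <= end:
        next_q = date(current.year + (current.month + 3 > 12),
                      (current.month + 2) % 12 + 1, 1)
        yield current, next_q          # filter rows with start <= date < end per window
        current = next_q

for win_start, win_end in quarterly_windows(date(2021, 7, 1), date(2025, 6, 30)):
    print(f"Load window: {win_start} -> {win_end}")
```

Each window then becomes one pipeline run (or one iteration of a ForEach over the generated ranges), which keeps individual copies small enough to retry cheaply if one fails.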
Hope this helps. Please reach out for further assistance.
Thank you.
Hi v-veshwara-msft,
We preferred to go with the entity in Fabric because:
1. We have planned for future automations.
2. We want integration with Logic Apps, Power Automate, etc.
3. Currently we have only BYOD, and the load on BYOD is high, so we planned to move to Fabric.
Now, as the Power BI reports are already connected to the entity, we are setting up the same in Fabric, but we are facing the difficulties I mentioned initially.
So we need advice and suggestions.
Hi @MKBoopathy ,
Thanks for sharing the background and your specific requirements.
Based on what you described, the Link to Fabric feature actually aligns well with your goals. It allows you to virtualize D365 FO data entities directly into a Fabric lakehouse without physically copying the data.
Since your Power BI reports are already using those entities, this setup ensures continuity. It also avoids load on BYOD and supports integration with services like Logic Apps and Power Automate by exposing the tables through the lakehouse.
That said, if you still prefer to use the copy activity to bring the data into Fabric, the earlier steps still apply:
i. Removing and re-adding the source in your pipeline typically refreshes the metadata.
ii. Regenerating the mappings in D365 FO ensures the newly added fields are exposed.
iii. For historical loads, segmenting by date ranges helps avoid failures and long runtimes (see the sketch below for a quick way to verify each segment afterwards).
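If you go with the segmented approach, a quick way to spot gaps afterwards is to count rows per period in the destination lakehouse table. A minimal sketch for a Fabric notebook; the table and date column names are placeholders for your copy target.

```python
# Minimal sketch: after the segmented loads finish, count rows per quarter in the
# destination lakehouse table to spot missing or partially loaded windows.
# Table name and date column are placeholders -- adjust to your copy target.
from pyspark.sql import functions as F

dest = spark.read.table("d365fo_entity_copy")   # placeholder destination table name

(dest.groupBy(F.year("TransDate").alias("year"),        # placeholder date column
              F.quarter("TransDate").alias("quarter"))
     .count()
     .orderBy("year", "quarter")
     .show())
```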
Hope this helps. Please reach out for further assistance.
Thank you.
Hi @MKBoopathy ,
Just wanted to check if the response provided was helpful. If further assistance is needed, please reach out.
Thank you.
Hello @MKBoopathy - Data does not have to be copied from Dataverse into Fabric. You can add a shortcut to a lakehouse to create virtualized tables. You enable this by using the Link to Fabric feature.
Please see this link for full instructions: Link all Dynamics 365 data to Microsoft Fabric
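Once the link is set up, you can confirm the virtualized tables are visible by listing what the lakehouse exposes from a Fabric notebook. A minimal sketch, assuming the notebook is attached to that lakehouse as its default:

```python
# Minimal sketch: list the tables the Link to Fabric shortcut exposes in the
# attached lakehouse. This reads metadata only; no data is copied.
for t in spark.catalog.listTables():
    print(t.name, t.tableType)
```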