We are going to have Dev, Test, and Production Data Engineering workspaces managed by deployment pipelines. We are dealing with a very large, enterprise-scale telecoms data lakehouse. What are the methods and best practices for selectively replicating the data lakehouse to the Test workspace and to Dev with its multiple feature branches? We do not want to write anything to Prod tables until the pipelines are in Prod.
Thanks
Mike
@v-tsaipranay Thanks for the advice. Two things we need to solve on this are:
Since I believe data does not promote (only definitions and processes promote), can we have a trigger configuration table that sets which pipelines run in which environment?
The deployment process would change which environment column is checked, so that in Dev and Test we only run the pipelines we need. Otherwise Dev and Test would backfill everything on every test run anyway.
Hi @MikeH_SDE,
Thanks for reaching out to the Microsoft Fabric Community forum.
Yes, you can create a trigger configuration table that specifies which pipelines to run in each environment (Dev, Test, and Production). This table can contain flags indicating which pipelines are active per environment. The deployment process should include logic that reads this table and determines which pipelines are enabled for the current deployment context.
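To illustrate, here is a minimal sketch of what such a configuration table could look like, written as PySpark in a Fabric notebook attached to a lakehouse. The table name, column names, and pipeline names are only assumptions for the example; adapt them to your own naming conventions.

```python
# Sketch of a pipeline trigger configuration table (illustrative names only).
# Run once from a Fabric notebook attached to the lakehouse to create and seed it.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS pipeline_trigger_config (
        pipeline_name STRING,   -- name of the data pipeline
        run_in_dev    BOOLEAN,  -- should this pipeline run in Dev?
        run_in_test   BOOLEAN,  -- should this pipeline run in Test?
        run_in_prod   BOOLEAN   -- should this pipeline run in Production?
    )
""")

# Seed a few example rows: only a subset is enabled outside Prod,
# so Dev/Test runs do not backfill the whole lakehouse.
spark.sql("""
    INSERT INTO pipeline_trigger_config VALUES
        ('ingest_cdr_daily',     true,  true,  true),
        ('backfill_cdr_history', false, false, true),
        ('load_customer_dim',    true,  true,  true)
""")
```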
By implementing a flexible, dynamic approach, with configurable pipeline controls and execution logic driven by the deployment environment, you can significantly reduce the risk of unnecessary data backfilling in your Dev and Test environments. This saves capacity and keeps your testing and development environments clearer and more manageable.
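As a follow-on sketch, a pipeline-invoked notebook could perform a check like the one below before doing any work. How the current environment is supplied is an assumption here (a simple notebook variable); in practice it could come from a pipeline or notebook parameter set per stage.

```python
# Sketch of the runtime check at the top of a pipeline-invoked notebook.
# "environment" and "pipeline_name" would typically be injected as parameters
# by the invoking data pipeline; the names used here are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

environment = "Test"                      # e.g. passed in per deployment stage
pipeline_name = "backfill_cdr_history"    # e.g. passed in by the pipeline

# Map the current environment to the flag column in the config table.
flag_column = {
    "Dev": "run_in_dev",
    "Test": "run_in_test",
    "Prod": "run_in_prod",
}[environment]

enabled = (
    spark.table("pipeline_trigger_config")
    .filter(F.col("pipeline_name") == pipeline_name)
    .select(flag_column)
    .first()
)

if enabled is None or not enabled[0]:
    # Pipeline not enabled for this environment: exit without writing anything.
    print(f"{pipeline_name} is disabled in {environment}; skipping run.")
else:
    print(f"{pipeline_name} is enabled in {environment}; proceeding with load.")
    # ... actual transformation / load logic would go here ...
```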
For more detailed information, please refer to the documentation link:
I hope these suggestions give you some good ideas. If you need any further assistance, feel free to reach out.
If this post helps, please give it Kudos and consider accepting it as a solution so that other members can find it more quickly.
Thank you.