Hi all,
I may be misreading things in the documentation, so I wanted some help!
I am setting up a fresh Fabric Tenant. I have set up a new workspace for the ingestion of data from numerous source systems, using folders to organize the source system workloads. I will have separate folders for each one, as well as a separate Lakehouse for each system.
I plan to then push this into a Transform or Silver layer (with its own deployment pipeline), before publishing to separate business units' workspaces as the reporting layers.
My plan was to have a Bronze workspace for all ingestion, mirror it over to a Silver workspace for transformations (stored in a separate Lakehouse), and then push out to several Gold workspaces for PBI reports (some areas share source systems, so I want to reuse items).
I have set up a deployment pipeline, and created the Dev, Test, and Prod workspaces when initiating it. I set up the first source system and pushed it through the pipeline. Now I am on to system number two, but when I set up a new deployment pipeline it will not let me choose the workspace that both systems are in.
Does this mean that a workspace can belong to only one deployment pipeline at a time?
Is it possible to have one deployment pipeline that pushes data from numerous source systems, one at a time, as each is created? And then maintain the changes by pushing individual groups of items through that same deployment pipeline?
Any help on this would be welcome.
Deployment pipelines are a Power BI construct, and an artificial construct at that. Their main purpose is to standardize report development and deployment.
The medallion architecture you mention is a Fabric construct, and is mainly focused on data refining and placement.
So these are different concepts, and they should not be multiplied together. You should implement each of them only once, if at all.
Hi @DemoFour ,
Thank you for reaching out to the Microsoft Fabric Community.
I wanted to check if you had the opportunity to review the information provided by @lbendlin . Please feel free to contact us if you have any further questions. If the response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you
Good morning @v-tsaipranay ,
I am still reading documentation around what my options are. It has partially answered my question, in that deployment pipelines are not really useful in this situation, but it has not answered my question about setting up an efficient tenant. I am now reading "CI/CD workflow options in Fabric - Microsoft Fabric | Microsoft Learn" to help me plan the architecture and workflow.
Hi @DemoFour ,
Thank you for the clarification. You're right: deployment pipelines are mainly built for managing Power BI content, so they're not always the best fit for a full data setup like the one you're working on. Exploring CI/CD workflows is a smart move, especially for handling Notebooks, Data Pipelines, and Lakehouses more smoothly. One approach that might work well is combining Git integration for version control with CI/CD pipelines to manage data processes. For workspaces, you could organize them by business units or by data layers (like Bronze, Silver, and Gold) to keep things clean and easy to manage. Please feel free to contact us if you have any further questions.
Thank you.
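To illustrate the "individual groups of items through the same deployment pipeline" idea from the original question: the deployment pipelines REST API exposes a selective deploy operation, so a CI/CD script can promote only the items belonging to one source system at a time. Below is a minimal, hedged sketch. The pipeline ID, item GUIDs, and the access token are placeholders, and you should verify the endpoint shape against the current Power BI REST API documentation before relying on it:

```python
# Sketch: selectively deploying one source system's items through a single
# deployment pipeline via the Power BI REST API (assumed endpoint:
# POST /v1.0/myorg/pipelines/{pipelineId}/deploy). All IDs and the token
# below are placeholders, not values from this thread.
import json
from urllib import request

API_BASE = "https://api.powerbi.com/v1.0/myorg"

def build_selective_deploy_body(source_stage_order, dataset_ids=(), report_ids=()):
    """Build the JSON body for a selective deploy: only the listed items
    (e.g. the artifacts for one source system) move to the next stage."""
    body = {
        "sourceStageOrder": source_stage_order,  # 0 = Dev -> Test, 1 = Test -> Prod
        "options": {
            "allowCreateArtifact": True,     # create items missing in the target stage
            "allowOverwriteArtifact": True,  # overwrite items that already exist
        },
    }
    if dataset_ids:
        body["datasets"] = [{"sourceId": i} for i in dataset_ids]
    if report_ids:
        body["reports"] = [{"sourceId": i} for i in report_ids]
    return body

def deploy(pipeline_id, token, body):
    """Send the selective deploy request (requires a valid AAD token)."""
    req = request.Request(
        f"{API_BASE}/pipelines/{pipeline_id}/deploy",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    return request.urlopen(req)

if __name__ == "__main__":
    # Build (but do not send) a body promoting one system's dataset from Dev.
    body = build_selective_deploy_body(0, dataset_ids=["<dataset-guid>"])
    print(json.dumps(body, indent=2))
```

Run once per source system (or per group of items) and the same three-stage pipeline serves all systems, which is the behavior asked about above.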
Hi @DemoFour ,
I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.
Hi @DemoFour ,
May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will be helpful for other community members who have similar problems to solve it faster.
Thank you.
Thanks @lbendlin ,
So it would be better to go down the Git integration route for doing version control on the ingestion with notebooks etc.?
I understand the medallion construct. I am still bringing in data as Ingest/Raw, then Transform, then serving it as models for analysts, and trying to do this with the platform's tools.
There is very little detail about doing this properly, so I am learning as I go along, migrating from PBI reporting to wholesale data integration with ELT and ETL, and making sure we can build out an enterprise tenant in Fabric.
Would it not be too much overhead to have a single workspace per system?
There is no cost to a workspace. In fact, there is a penalty for too many items in a workspace (the limit is 1,000 items per workspace).
Your artifact structure should ideally follow the business structure.