I tried using deployment pipelines to move a warehouse with multiple tables and views from the bronze layer to the silver layer. In the silver layer the warehouse got created, but I don't see any underlying items like tables and views in it; it's empty.
Did I miss anything?
Hi @IAMCS,
The reason deployment pipelines in Microsoft Fabric only support metadata-level artifacts (like the warehouse container itself) and not the physical objects inside (like tables, views, or data) comes down to how Fabric is designed to treat data vs. artifacts.
* Deployment pipelines are meant to manage and promote semantic and structural elements: things like Power BI datasets, reports, lakehouses, and warehouses as artifacts, not the data within them. This ensures environments (Dev, Test, Prod) remain clean and isolated, without the risk of accidentally pushing test or dev data into production.
* Data Warehouses are treated as runtime engines. In Fabric, a warehouse is a compute engine backed by Delta Lake storage. The actual physical objects (tables, views) and their contents live inside the storage layer and are not tracked or versioned as artifacts within the workspace. The pipeline doesn't have visibility or control over those objects unless they're separately defined as deployable assets (like SQL scripts or notebooks).
* By not automatically deploying the underlying data or schema, Fabric encourages teams to treat warehouse schema definitions as code, which can then be version-controlled and deployed safely using tools like SQL scripts, notebooks, or DevOps pipelines (see the sketch after this list). This makes the deployment process intentional and auditable.
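As a minimal sketch of that schema-as-code idea (the table and view names here are hypothetical, not from this thread), a version-controlled T-SQL script for a warehouse might look like this and be run against each environment during deployment:

```sql
-- Hypothetical schema-as-code script, kept in source control and executed
-- against each environment (Dev, Test, Prod) as a deployment step.
CREATE TABLE dbo.sales_orders
(
    order_id    INT            NOT NULL,
    order_date  DATE           NOT NULL,
    customer_id INT            NOT NULL,
    amount      DECIMAL(18, 2) NOT NULL
);
GO

CREATE VIEW dbo.vw_daily_sales AS
SELECT order_date, SUM(amount) AS total_amount
FROM dbo.sales_orders
GROUP BY order_date;
GO
```

Because the script, rather than the pipeline, owns the DDL, the same definitions can be reviewed, diffed, and replayed in every environment.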
To summarize: deployment pipelines handle workspace artifacts, not internal warehouse contents, to ensure clean separation, prevent data leakage, and align with modern DataOps practices.
If I've misunderstood your needs or you still have problems, please feel free to let us know.
Best Regards,
Hammad.
Hi @IAMCS,
As we haven't heard back from you, we're just following up on our previous message. I'd like to confirm whether you've successfully resolved this issue or need further help.
If it's resolved, you're welcome to share your workaround and mark it as a solution so other users can benefit as well; if a particular reply helped you the most, marking it as the solution also helps the community. If you're still looking for guidance, feel free to give us an update; we're here for you.
Best Regards,
Hammad.
Yes, right. In my testing I observed that for lakehouses and dataflows, the deployment pipeline creates a blank lakehouse; for warehouses deployed using a deployment pipeline, only the tables with their respective schemas get created in the target, but with no data; and reports and notebooks are deployed as-is, with whatever they contain.
For my own knowledge, can you share the specific reason why deployment pipelines only support metadata and artifacts, not the underlying data or physical objects inside the warehouse, like tables or views?
Hi @IAMCS,
Thanks for reaching out to the Microsoft Fabric Community Forum.
Based on your description, it sounds like the deployment pipeline is only promoting the warehouse structure itself (i.e., the object), but not its contents such as the tables, views, or data. Here are a few key points to check and steps to help resolve this:
* Deployment pipelines only support metadata and artifacts, not the underlying data or physical objects inside the warehouse, like tables or views. This is by design: while the warehouse object is copied, the actual DDL (tables, views, procedures) is not included.
* To move the actual contents (schema objects like tables/views), you will need to manually recreate them in the target environment (silver layer) or automate the process using scripts. Here's how you can proceed:
* Use T-SQL scripts to generate CREATE TABLE, CREATE VIEW, etc., for all objects in your bronze warehouse (a starting point is sketched after these steps).
* Store these scripts in a Lakehouse notebook or pipeline, or use a custom deployment stage in your DevOps process to apply them during deployment.
* Alternatively, use SQL scripts stored as Fabric items and include them in the deployment pipeline to recreate the structure in each environment.
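As a rough starting point for scripting those objects out (assuming the standard INFORMATION_SCHEMA metadata views, which Fabric warehouses expose through the SQL endpoint), the queries below pull view definitions and column metadata from the bronze warehouse; the results can be turned into CREATE VIEW / CREATE TABLE statements for the silver layer:

```sql
-- View definitions, ready to be replayed as CREATE VIEW statements
-- in the target (silver) warehouse.
SELECT TABLE_SCHEMA, TABLE_NAME, VIEW_DEFINITION
FROM INFORMATION_SCHEMA.VIEWS
ORDER BY TABLE_SCHEMA, TABLE_NAME;

-- Column metadata for generating CREATE TABLE statements.
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH, NUMERIC_PRECISION, NUMERIC_SCALE,
       IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;
```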
If you're using Dataflows or Notebooks to populate your bronze warehouse, make sure similar processes exist, or are parameterized, for the silver layer.
If I've misunderstood your needs or you still have problems, please feel free to let us know.
Best Regards,
Hammad.
Community Support Team