I created deployment pipelines to move objects from DEV to PROD. I was able to move lakehouses, but the schemas are not getting moved. Any idea what the issue could be here? I didn't get any error; the movement showed as a successful deployment.
Hi @Ymatole,
The behavior you’re seeing is expected. Deployment pipelines in Fabric move the Lakehouse item (metadata and settings) but do not migrate the schema or underlying data from the SQL endpoint or OneLake. That’s why the deployment shows as successful, but your tables/schemas don’t appear in PROD.
To move schemas and data you’ll need a separate process:
Schema: Export DDL (CREATE TABLE/VIEW, etc.) from DEV and run it in PROD.
Data: Copy the Delta/Files from OneLake using a Data Pipeline, Notebook (COPY INTO or Spark CREATE TABLE ... USING DELTA LOCATION ...), or another copy mechanism; see the sketch below.
Shortcuts: Must be recreated in each environment.
Rules: Use Deployment Rules only for connection/config settings, not schema/data.
In short, deployment pipelines handle the item, but schema and data migration require an additional scripted process.
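For illustration, here is a minimal PySpark sketch of the copy step for a single table. The workspace, lakehouse, schema, and table names are placeholders, not anything from your tenant; adjust them to your environment:

```python
# Minimal sketch (Fabric PySpark notebook): copy one Delta table from a DEV
# lakehouse to a PROD lakehouse via OneLake paths. All names below
# (DEV, PROD, Sales, dbo, orders) are placeholders.

# OneLake table paths follow this pattern:
# abfss://<workspace>@onelake.dfs.fabric.microsoft.com/<lakehouse>.Lakehouse/Tables/<schema>/<table>
src = "abfss://DEV@onelake.dfs.fabric.microsoft.com/Sales.Lakehouse/Tables/dbo/orders"
dst = "abfss://PROD@onelake.dfs.fabric.microsoft.com/Sales.Lakehouse/Tables/dbo/orders"

# Read from DEV and rewrite the Delta table (structure + data) into PROD.
spark.read.format("delta").load(src) \
    .write.format("delta").mode("overwrite").save(dst)

# Optionally register the copied table in the PROD lakehouse catalog;
# run this from a notebook attached to the PROD lakehouse.
spark.sql(f"CREATE TABLE IF NOT EXISTS dbo.orders USING DELTA LOCATION '{dst}'")
```

Because a Delta write carries the table schema with it, this covers both the structure and the data for that table.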
If this post helps, then please consider Accepting it as the solution to help the other members find it more quickly.
I have created the deployment pipelines using guest users and am unable to give access to others. Is there any workaround for this?
Another issue I am facing is a binding issue. I created one of the pipelines manually in the target workspace, and I want to bind this item to the source item. I am not able to do this in the Compare tab because the object is not visible there. How do I do the binding?
Hi @Ymatole,
For your first point on guest users:
Deployment pipelines require full Admin permissions at both the pipeline level and the workspace level. Guest (B2B) users are usually limited to view-only access unless your tenant admin has explicitly enabled the setting "Allow external guest users to edit and manage content in the organization." Because of this restriction, pipelines created by a guest typically cannot be shared or managed by others. The recommended workaround is to have an internal licensed account own the pipeline, then grant your team members access through security groups or direct assignment. This ensures full manageability and avoids guest-account limitations.
For the binding issue:
Pipelines pair items across stages based on type + name (and path). If you manually created an item in the target workspace and it doesn’t exactly match, it won’t appear in the Compare tab and cannot be bound. To resolve this you can:
Rename the target item to match the source item’s name and type, then unassign and reassign the workspace to force a re-evaluation.
If that still doesn’t work, delete the manually created item in the target stage and deploy it from the source through the pipeline so it’s properly linked.
I hope this information is helpful to you. If you need any further clarification or assistance, please feel free to let me know.
Regards,
Community Support Team.
In my case, it copied the schema and table structure too, but only for a few schemas. Similarly, for the warehouse migration, it copied a few schemas and table structures, but not all of them.
Hi @Ymatole,
Thank you for posting your query in the Microsoft Fabric Community Forum. Also, thanks to @Ritaf1983 for the inputs on this thread.
What you are seeing is expected. Deployment pipelines mainly move the Lakehouse or Warehouse item itself; they do not reliably bring over all schemas or table structures. A few may come through, as you observed, but the pipeline cannot be relied on for full schema movement.
If you want the complete schema and tables in PROD, you will need to move them separately, either by running the DDL (CREATE TABLE scripts) or by copying the data using a Data Pipeline or Notebook; a sketch for copying a whole schema follows below. Shortcuts also need to be recreated in each environment.
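If many tables are involved, looping over the catalog avoids copying each one by hand. A minimal sketch, assuming the notebook's default lakehouse is the DEV lakehouse; the PROD workspace, lakehouse, and the dbo schema name are placeholders:

```python
# Minimal sketch: enumerate every table in the DEV lakehouse's 'dbo' schema
# and rebuild each one in the PROD lakehouse. All workspace, lakehouse, and
# schema names are placeholders; assumes this notebook's default lakehouse
# is the DEV lakehouse.

prod_root = "abfss://PROD@onelake.dfs.fabric.microsoft.com/Sales.Lakehouse/Tables/dbo"

for t in spark.catalog.listTables("dbo"):
    df = spark.read.table(f"dbo.{t.name}")
    # A Delta write recreates the table structure along with the data,
    # so no separate DDL step is needed per table.
    df.write.format("delta").mode("overwrite").save(f"{prod_root}/{t.name}")
```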
Kindly refer to the documentation links below for a better understanding:
https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-git-deployment-pipelines
https://learn.microsoft.com/en-us/fabric/cicd/deployment-pipelines/intro-to-deployment-pipelines?tab...
https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-schemas
Hope this helps. If you have any queries, we are happy to assist you further.
Regards,
Harshitha.
Simply put: they don't work at all.