Hello,
I'm implementing a CI/CD solution with Azure DevOps Pipelines to update Fabric across a classic DEV -> TEST -> PREPROD -> PROD setup, where each environment has a dedicated workspace:
- in feature, dev and test I use Git integration
- in preprod and prod I use Fabric deployment pipelines
How should I deal with lakehouses?
- If I have a new table, how do I get it from feature to prod?
- If I have a new column in a table, how do I get it from feature to prod?
The starting point is a source SQL Server database (from which I get the structure and data)... but that's just context; the real question is how to move changes from feature to prod.
I don't see anything useful in the REST API, and we know there are some limitations with Git / Fabric deployment pipelines for this item type.
Thanks for your help.
Regards,
Giulio
Solved! Go to Solution.
Hi @giulio-diluca, first, the answers to your questions on "How to deal with lakehouses?" are as follows (as of now):
#1. If I have a new table, how do I get it from feature to prod? - You need to create/update your schemas in the lakehouses via notebooks. Why? Because the lakehouse structure is not tracked by Git integration, as shown in the picture below, and the API only supports control-plane operations on lakehouses (e.g. create, rename, etc.).
NOTE: I actually tested with lakehouses created with and without schema support; in both cases the schema is not tracked. Deployment pipelines will only re-create the lakehouse when it is missing, or skip it otherwise.
#2. If I have a new column in a table, how do I get it from feature to prod? - Same answer as #1.
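Both cases can be handled by the same "schema as code" notebook pattern. Below is a minimal, hedged sketch: a pure-Python helper that generates the idempotent Spark SQL DDL a Fabric notebook would then run via `spark.sql()`. The table and column names (`sales.orders`, `discount`) are illustrative assumptions, not from the original post; in a real notebook, `existing` would come from inspecting the catalog (e.g. `spark.catalog.listColumns`).

```python
# Hypothetical sketch: generate the DDL a Fabric notebook would execute with
# spark.sql() to sync a lakehouse table's schema. Names are illustrative.

def schema_sync_ddl(table: str, desired: dict, existing: dict) -> list:
    """Return the Spark SQL statements needed to bring `table` up to `desired`.

    `existing` maps current column names to types (empty if the table
    does not exist yet); `desired` is the target schema.
    """
    if not existing:
        cols = ", ".join(f"{name} {dtype}" for name, dtype in desired.items())
        return [f"CREATE TABLE IF NOT EXISTS {table} ({cols}) USING DELTA"]
    missing = {n: t for n, t in desired.items() if n not in existing}
    if not missing:
        return []  # schema already in sync: nothing to run
    cols = ", ".join(f"{name} {dtype}" for name, dtype in missing.items())
    return [f"ALTER TABLE {table} ADD COLUMNS ({cols})"]

# Case #1 - new table: emits a single CREATE statement
print(schema_sync_ddl("sales.orders", {"id": "BIGINT", "amount": "DOUBLE"}, {}))
# Case #2 - new column on an existing table: emits a single ALTER statement
print(schema_sync_ddl("sales.orders",
                      {"id": "BIGINT", "amount": "DOUBLE", "discount": "DOUBLE"},
                      {"id": "BIGINT", "amount": "DOUBLE"}))
```

Because the generated statements are idempotent (create only if missing, alter only for missing columns), the same notebook can be deployed and re-run safely in every workspace from feature to prod.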
Additional notes.
Until just recently, only a very limited number of items were supported by Git integration; we have huge progress with the announcement that all items are now supported. Unfortunately, lakehouse schemas are still not tracked... So, to summarize my answer: you can only deploy the lakehouses themselves via deployment pipelines; the rest has to be re-created/updated in each workspace: schemas via notebooks, and data via your chosen ingestion method (Data Factory pipelines, notebooks, dataflows, etc.).
Hope you find this information useful; ping me if you need additional clarifications... Meanwhile, I would appreciate a kudos, and please mark this as a solution if deemed appropriate. All the best
Hi @giulio-diluca, thank you for reaching out to the Microsoft Community Forum.
@svenchio is correct about the fundamental limitation. Fabric still doesn't track the internal structure of a lakehouse through Git or Deployment Pipelines. Even with schema support turned on, Git only captures the lakehouse metadata JSON: not the actual Delta tables, not the folders and not the columns. Deployment Pipelines behave the same way: they can create the lakehouse if it's missing, but they won't carry a new table or a changed column from one stage to the next. So, the behaviour they described is accurate.
In a feature -> dev -> test -> preprod -> prod flow, the lakehouse itself never moves its schema downstream. The only thing that flows reliably is code. Notebooks, SQL files or pipelines are what Git tracks, and those are what get deployed into the next workspace. That means any new table, altered table or added column has to be expressed as code. When that notebook or pipeline is deployed in test or prod, you run it there and it recreates the same structure in that environment. The schema doesn't travel; the code that builds the schema does, and your ingestion or ETL process brings the data.
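The "code travels, schema doesn't" idea can be sketched as follows. A single DDL script is what Git and the deployment pipeline carry through every stage; running it in each workspace is what materialises the schema there. The stage names and statements are illustrative assumptions.

```python
# Minimal sketch of "the code travels, the schema doesn't": the identical
# DDL script is deployed to every workspace; running it per stage is what
# recreates the schema. Statements and stage names are illustrative.

DDL = [
    "CREATE TABLE IF NOT EXISTS sales.orders (id BIGINT, amount DOUBLE) USING DELTA",
    # column added in the feature branch; it reaches prod only when this
    # script is deployed there and executed (guard against re-runs in practice)
    "ALTER TABLE sales.orders ADD COLUMNS (discount DOUBLE)",
]

STAGES = ("dev", "test", "preprod", "prod")

def plan_for_stage(stage: str) -> list:
    """Return the statements to run in a given environment's workspace.

    In a real Fabric notebook each statement would go through spark.sql(),
    with the notebook's attached default lakehouse supplying the target.
    """
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage}")
    return DDL  # identical code per stage -> identical schema per stage

print(plan_for_stage("prod"))
```

The design point is that the environments differ only in which workspace the script runs against, never in the script itself, which keeps all four stages structurally identical.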
Introduction to CI/CD in Microsoft Fabric - Microsoft Fabric | Microsoft Learn
Overview of Fabric deployment pipelines - Microsoft Fabric | Microsoft Learn
Thank you @svenchio for your valuable response.