jochenj
Advocate III

Best Practice for Lakehouse Table & View Deployments?

I'm looking for best practices on how to track and deploy changes to LH-managed tables and LH SQL views between workspaces.
How do you handle this task dynamically and with minimal maintenance effort?


Situation:

In Fabric Warehouses and Databases, the Git integration tracks all DB objects like tables, views, procs... With this Git integration we can use Fabric deployment pipelines to transfer changes from DEV (Workspace A) to PROD (Workspace B).

In Lakehouses we are missing object-level tracking in Git, as stated here: Lakehouse deployment pipelines and git integration - Microsoft Fabric | Microsoft Learn:

 Important
Only the Lakehouse container artifact is tracked in git in the current experience. Tables (Delta and non-Delta) and Folders in the Files section aren't tracked and versioned in git.


Solution Approaches:

  1. Manually created/maintained notebooks
     Script out all LH tables and LH SQL views (which depend on LH tables). Update that notebook on every change, deploy it regularly from DEV > PROD, and then run it manually.
  2. Dynamic notebook
     Dynamically read all table and view definitions, write them to a transportable file, and then execute the notebook against the target environment. View definitions can be extracted with INFORMATION_SCHEMA.VIEWS, and I think LH table definitions should also be retrievable. Maybe use get_lakehouse_tables in the sempy_labs library for the job?! (not tried) A rough sketch follows this list.
  3. Leverage Azure DevOps pipelines
     Basically the same as approach 2, but orchestrate the process with a DevOps pipeline.
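To illustrate approach 2, here is a rough, untested sketch of what such a notebook cell could look like. It assumes the notebook is attached to the source Lakehouse (so the predefined spark session sees its tables), that pyodbc is available in the environment, and that the Lakehouse uses the default schema. <your-sql-endpoint>, <YourLakehouse> and the token audience are placeholders you would need to adapt; this is not a verified solution.

# Hypothetical sketch for approach 2: script out Lakehouse table DDL with Spark
# and pull view definitions from the SQL analytics endpoint, then write
# everything into one transportable .sql file.
import os
import struct
import pyodbc
from notebookutils import mssparkutils

ddl_statements = []

# 1) Lakehouse (Delta) tables: Spark can script the CREATE TABLE statement
#    for every table registered in the attached Lakehouse.
for tbl in spark.catalog.listTables():
    ddl = spark.sql(f"SHOW CREATE TABLE `{tbl.name}`").collect()[0][0]
    ddl_statements.append(ddl + ";")

# 2) SQL endpoint views: T-SQL views are not visible to Spark, so query
#    INFORMATION_SCHEMA.VIEWS on the SQL analytics endpoint via pyodbc.
token = mssparkutils.credentials.getToken("https://database.windows.net/").encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token)}s", len(token), token)
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-sql-endpoint>.datawarehouse.fabric.microsoft.com;"
    "DATABASE=<YourLakehouse>",
    attrs_before={1256: token_struct},  # 1256 = SQL_COPT_SS_ACCESS_TOKEN
)
cursor = conn.cursor()
cursor.execute("SELECT TABLE_SCHEMA, TABLE_NAME, VIEW_DEFINITION FROM INFORMATION_SCHEMA.VIEWS")
for schema_name, view_name, definition in cursor.fetchall():
    ddl_statements.append(definition.rstrip().rstrip(";") + ";")

# 3) Write one deployable script into the Files section of the Lakehouse.
os.makedirs("/lakehouse/default/Files/deploy", exist_ok=True)
with open("/lakehouse/default/Files/deploy/lakehouse_objects.sql", "w") as f:
    f.write("\n".join(ddl_statements))

Replaying the resulting script (tables from a notebook in the target Lakehouse, views against the target SQL endpoint) would then be the actual deployment step, which approach 3 could orchestrate from an Azure DevOps pipeline.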

 

Does anyone have other options and/or practical experience with what works best, or maybe even a working notebook solution? I can't be the first with this requirement?!

Spoiler: my background is SQL data warehousing, not Lakehousing, so maybe my warehousing approach needs to be adapted to the Lakehouse world?! We are currently implementing a Lakehouse solution that has hundreds of LH tables and views. The solution relies on quite extensive SQL endpoint views (much of the transformation logic sits inside views; these views act as the input for a notebook MERGE into silver-layer LH tables). Tracking all these tables/views with a manually maintained notebook is not really feasible and is time-intensive.

Side note: we moved to the Lakehouse approach in the hope that it is more capacity-efficient than our previous approach of building a DWH on a Fabric Database, because we found out that the DB consumption was eating our complete F64 capacity on data loads. But now the challenge is how to apply a working CI/CD concept to the Lakehouse.

1 ACCEPTED SOLUTION
v-menakakota
Community Support

Hi @jochenj ,

Thanks for reaching out to the Microsoft Fabric community forum.

Consider this as a workaround and try it once.
Create a notebook in your development workspace that automatically extracts metadata for all:

  • Lakehouse tables (Delta format) – use the semPy labs function get_lakehouse_tables() to retrieve the table definitions
  • SQL views (from the Lakehouse SQL endpoint) – use the SQL query SELECT * FROM INFORMATION_SCHEMA.VIEWS to get the view definitions

You can serialize the output (e.g., as .json or .sql files) and store them in the Files section of your Lakehouse or in a dedicated folder within OneLake. This makes the metadata portable and ready for deployment to other environments.
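A minimal sketch of that serialization step, assuming the semantic-link-labs (sempy_labs) package is installed and the notebook is attached to the source Lakehouse; the exact columns returned by get_lakehouse_tables() may vary by library version, and the file path is just an example:

# Rough sketch of the suggested workaround, not production code.
import os
from sempy_labs.lakehouse import get_lakehouse_tables

# Table metadata for the Lakehouse attached to this notebook
# (a pandas DataFrame with one row per table).
tables_df = get_lakehouse_tables()

# Serialize to the Files section so the metadata travels with the Lakehouse
# and can be picked up by a deployment pipeline or DevOps task.
os.makedirs("/lakehouse/default/Files/deploy", exist_ok=True)
tables_df.to_json(
    "/lakehouse/default/Files/deploy/lakehouse_tables.json",
    orient="records",
    indent=2,
)

The view definitions could be captured the same way, using the INFORMATION_SCHEMA.VIEWS query against the SQL endpoint, and stored alongside the table metadata as .sql files.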

If I have misunderstood your needs or you still have problems, please feel free to let us know.

Best Regards, 
Menaka. 
Community Support Team  


4 REPLIES

Hi  @jochenj ,

May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will help other community members with similar problems find the solution faster.

 

Thank you. 

Hi  @jochenj ,

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution so that other community members can find it easily. 

 
Thank you. 

