michaelgambling
Frequent Visitor

Silver Layer - Additional Columns

Hi All,

 

I have a metadata-driven Fabric pipeline in place that implements a watermark strategy and does the following:

 

1) Reads the metadata control table.

2) Runs a ForEach activity that does the following:

    - Looks up the old watermark stored in the metadata control table.

    - Looks up the latest watermark from the source.

    - Runs a Copy Data activity with an append table action to copy data to a 'staging' table.

    - Runs a notebook whose sole purpose is to move only changed data from the 'staging' table to the 'bronze' table.

    - Runs a stored procedure as the final task, which updates the watermark column in my metadata control table.
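The notebook step above ("move only changed data from staging to bronze") boils down to an upsert that skips unchanged rows. A minimal sketch in plain Python, assuming a hypothetical `id` key column; in an actual Fabric notebook this would be a PySpark Delta `MERGE` against the bronze table:

```python
# Sketch of the "move only changed data" notebook step.
# In Fabric this would be a Delta MERGE, roughly:
#   DeltaTable.forName(spark, "bronze").alias("b") \
#       .merge(staging_df.alias("s"), "b.id = s.id") \
#       .whenMatchedUpdateAll().whenNotMatchedInsertAll().execute()
# Table and column names ("id", "value") are hypothetical.

def merge_changed_rows(bronze, staging, key="id"):
    """Upsert staging rows into bronze, writing only rows that are new
    or whose contents differ. Returns the number of rows written."""
    changed = 0
    for row in staging:
        k = row[key]
        if bronze.get(k) != row:   # new row, or contents changed
            bronze[k] = row
            changed += 1
    return changed

bronze = {1: {"id": 1, "value": "a"}}
staging = [
    {"id": 1, "value": "a"},   # unchanged -> skipped
    {"id": 2, "value": "b"},   # new -> inserted
]
changed = merge_changed_rows(bronze, staging)
```

Only one row is written here (the new `id 2`); the unchanged row is skipped, which is what keeps the bronze load incremental.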

 

The pipeline is largely based on this article: https://learn.microsoft.com/en-us/fabric/data-factory/tutorial-incremental-copy-data-warehouse-lakeh... The only addition is that the pipeline also uses a metadata-driven approach.
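The watermark lookups and the final stored-procedure update implement a standard incremental-extract pattern: pull only rows changed since the stored watermark, then advance the watermark. A minimal sketch, assuming a hypothetical `modified` change-tracking column (the real pipeline does this with Lookup activities and a stored procedure):

```python
# Sketch of the watermark pattern: extract only rows newer than the old
# watermark, then compute the new watermark to store back in the control table.
# The "modified" column name is hypothetical.

def incremental_extract(source_rows, old_watermark):
    """Return only rows changed since the last run, plus the new watermark."""
    delta = [r for r in source_rows if r["modified"] > old_watermark]
    new_watermark = max((r["modified"] for r in delta), default=old_watermark)
    return delta, new_watermark

rows = [
    {"id": 1, "modified": "2025-01-01"},
    {"id": 2, "modified": "2025-02-01"},
]
delta, wm = incremental_extract(rows, "2025-01-15")
```

Note the `default=old_watermark`: if nothing changed, the watermark stays put rather than being reset, which keeps the next run correct.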

 

From here I need the ability to move data from the Bronze lakehouse to the Silver lakehouse. How would people recommend I handle adding additional columns within the Silver tables? I am quite unsure what would be 'best practice': should I be handling this in my pipeline, or should I give Power BI engineers direct access to the Silver lakehouse to do this themselves?

 

My only concern is that whenever the pipeline copies from the Bronze lakehouse to the Silver lakehouse, the data would be overwritten, so any added columns would be deleted and need re-creating every time?

 

I am unsure what the best solution is. The reason I have had to set something like this up is that you cannot create additional columns within tables at the semantic model layer, and it appears the only other way to do it is within the lakehouse directly...

 

I am fairly new to data engineering, so forgive me if I'm overthinking this!

1 ACCEPTED SOLUTION
v-venuppu
Community Support

Hi @michaelgambling ,

Thank you for reaching out to Microsoft Fabric Community.

Thank you @Vinodh247 for the prompt response.

Here are a few pieces of documentation that will help:

1. Medallion Lakehouse Architecture in Fabric (Bronze/Silver/Gold)

Implement medallion lakehouse architecture in Fabric - Microsoft Fabric | Microsoft Learn

2. Lakehouse End-to-End Tutorial (Fabric)

Lakehouse end-to-end scenario: overview and architecture - Microsoft Fabric | Microsoft Learn

3. Handle Schema Drift (Dataflow Gen2)

How to handle schema drift in Dataflow Gen2 - Microsoft Fabric | Microsoft Learn

4. Schema Drift Concepts (Mapping Dataflows)

Schema drift in mapping data flow - Azure Data Factory & Azure Synapse | Microsoft Learn

5. Lab Exercise: Build Medallion Lakehouse

Create a medallion architecture in a Microsoft Fabric lakehouse | mslearn-fabric

6. Incremental Copy Patterns (Watermarking)

Incrementally load data from Data Warehouse to Lakehouse - Microsoft Fabric | Microsoft Learn

View solution in original post

6 REPLIES
v-venuppu
Community Support

Hi @michaelgambling ,

I hope the information provided is helpful. I wanted to check whether you were able to resolve the issue with the provided solutions. Please let us know if you need any further assistance.

Thank you.

v-venuppu
Community Support

Hi @michaelgambling ,

May I ask if you have resolved this issue? Please let us know if you have any further issues, we are happy to help.

Thank you.

v-venuppu
Community Support

Hi @michaelgambling ,

I wanted to check if you had the opportunity to review the information provided and resolve the issue. Please let us know if you need any further assistance. We are happy to help.

Thank you.


Vinodh247
Solution Sage

This is a common point of confusion when designing Bronze/Silver/Gold layers in Fabric. The best practice is to treat the Silver layer as a curated, standardized, business-ready data layer that sits between your raw Bronze and consumption-focused Gold. That means transformations, enrichment, and the addition of new columns (for business logic, calculated fields, derived metrics, or standardized formats) should happen as part of your data engineering pipeline, not be left to Power BI engineers.

If you allow Power BI engineers to add columns directly in the Silver lakehouse, you risk losing consistency, governance, and repeatability, because their changes will not survive pipeline refreshes.

To avoid overwriting issues, design your Bronze-to-Silver pipeline to be idempotent and schema-aware: either handle schema drift (auto-detect and merge new columns) or manage transformations centrally in notebooks or dataflows that explicitly add those additional columns during the load. That way, your Silver tables become the single, trusted layer that always exposes the right schema, while Power BI engineers focus on modeling and visualization in the semantic layer.

In short, keep schema evolution and column additions inside your pipeline, not ad hoc in Power BI or via manual edits in the lakehouse.
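The key idea here, "idempotent and schema-aware", means the extra Silver columns are re-derived by the transform on every run, so a full overwrite of the Silver table can never lose them. A minimal plain-Python sketch, assuming hypothetical column names (`amount`, `amount_gbp`, `load_date`); in a Fabric notebook the same shape would be a PySpark transform writing with Delta's schema-evolution option (`.option("mergeSchema", "true")`):

```python
# Sketch of an idempotent Bronze-to-Silver transform: the "additional"
# Silver columns are derived inside the pipeline on every load, so
# overwriting the Silver table always recreates them with the same schema.
# Column names and the 0.79 conversion rate are hypothetical.

from datetime import date

def bronze_to_silver(bronze_rows):
    silver = []
    for row in bronze_rows:
        out = dict(row)                            # standardized base columns
        out["amount_gbp"] = row["amount"] * 0.79   # derived business column
        out["load_date"] = date.today().isoformat()
        silver.append(out)
    return silver

bronze = [{"order_id": 1, "amount": 100.0}]

# Running the transform twice yields the same schema: nothing is lost
# between refreshes, because nothing was added by hand.
run1 = bronze_to_silver(bronze)
run2 = bronze_to_silver(bronze)
```

Contrast this with a column added manually in the lakehouse: the next overwrite would drop it, which is exactly the overwrite problem raised in the original question.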

 


Please 'Kudos' and 'Accept as Solution' if this answered your query.

Regards,
Vinodh
Microsoft MVP [Fabric]

Thanks for the feedback, is there any documentation that might assist in how to set something like this up? I'm a sysadmin with no background in data engineering; any articles you can suggest that may help me make sense of this would be greatly appreciated. Thanks so much for your feedback!
