Hi All,
We are building a new Power BI project (PBIP) and as part of this we will build multiple datasets (probably around 20) in DirectQuery mode. Although most of these datasets are different and used by different teams, their master data is usually similar (a few tables differ here and there).
So we want to build a solution where, if we need to add a new table to a dataset (or a new column to an existing table), we do not have to update all the dataset definitions manually one by one.
Is there a way we can build some sort of config that all the datasets can pick up automatically?
Thanks in advance.
Regards,
Mohit Leekha
Hi @mohitleekha1 ,
Guides for TOM & Deployment Pipelines:
Tabular Object Model (TOM) | Microsoft Learn: conceptual overview of the object model.
Programming Power BI semantic models with the Tabular Object Model (TOM) | Microsoft Learn: step-by-step guide on applying TOM to Power BI datasets via XMLA endpoints.
Overview of Fabric deployment pipelines - Microsoft Fabric | Microsoft Learn: explains how to manage and promote schema changes across environments (Dev → Test → Prod).
TOM with PBIP/TMDL vs BIM
TOM works with both the older single-file BIM format and the newer PBIP/TMDL format, and combining it with deployment pipelines gives you a scalable way to manage schema changes across multiple datasets.
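As a rough illustration of the file-based route with PBIP + TMDL (a sketch, not an official tool): each table lives in its own `.tmdl` file under the semantic model's `definition/tables` folder, so a shared master-data table can be kept in one place and synced into every dataset by a small script. The folder layout below assumes the default PBIP structure; the function name and paths are my own, so adjust them to your project.

```python
import shutil
from pathlib import Path

def sync_shared_tables(shared_dir: Path, dataset_dirs: list[Path]) -> list[Path]:
    """Copy every shared .tmdl table file into each dataset's
    definition/tables folder, overwriting any stale copies."""
    copied = []
    for dataset in dataset_dirs:
        tables_dir = dataset / "definition" / "tables"
        tables_dir.mkdir(parents=True, exist_ok=True)
        for tmdl in shared_dir.glob("*.tmdl"):
            target = tables_dir / tmdl.name
            shutil.copy2(tmdl, target)  # overwrite so all datasets stay in sync
            copied.append(target)
    return copied
```

A script like this could run as a pre-commit hook or a pipeline step before deployment, so all 20 datasets pick up the latest master-data definitions from one source folder.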
Hope this helps. Please feel free to reach out with any further questions.
Thank you.
Hi @mohitleekha1 ,
Thank you for reaching out to the Microsoft Fabric community forum.
I understand the issue you are facing. When working with multiple semantic datasets, maintaining consistency in master data tables can become time consuming if every dataset has to be updated individually. Out of the box, Power BI does not provide a built-in configuration file that automatically pushes schema changes (new tables or new columns) to all datasets.
The recommended approach is to centralize your master data into a single semantic model and let your other datasets connect to it through composite models. This way, any structural change you make in the master model, such as adding a new table or column, will automatically flow through to all dependent datasets without requiring manual updates.
If you need more automation across existing datasets, another option is to use Tabular Editor with TOM scripting or deployment pipelines. With this approach, you can define a script or template and apply schema changes programmatically to multiple datasets in one step, instead of editing each one manually.
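To make the "apply schema changes programmatically" idea concrete: if any of the child models still use the single-file BIM format, `model.bim` is plain JSON with the tables under `model.tables`, so a shared table definition can be merged into many files in one step. This is a simplified sketch of that pattern, not the TOM API itself; the function name and file paths are illustrative.

```python
import json
from pathlib import Path

def add_table_to_models(table_def: dict, bim_paths: list[Path]) -> int:
    """Insert table_def into each model.bim's model.tables array,
    skipping any model that already has a table with that name.
    Returns the number of files actually updated."""
    updated = 0
    for path in bim_paths:
        model = json.loads(path.read_text(encoding="utf-8"))
        tables = model.setdefault("model", {}).setdefault("tables", [])
        if any(t.get("name") == table_def["name"] for t in tables):
            continue  # already present - leave this model untouched
        tables.append(table_def)
        path.write_text(json.dumps(model, indent=2), encoding="utf-8")
        updated += 1
    return updated
```

The same loop-over-datasets idea applies with real TOM (or a Tabular Editor C# script) against the XMLA endpoint; the file-based version above is just the easiest way to see the mechanics.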
Please find the link below for your reference:
Use composite models in Power BI Desktop - Power BI | Microsoft Learn
Hope this helps. Feel free to reach out with any further questions.
Thank you.
Hi
Thanks for your suggestions. Quick follow-up questions:
1.) Could you please point me to some guides on using TOM and deployment pipelines for this purpose?
2.) Is it possible to implement the TOM approach with the new PBIP and TMDL formats, or only with the old BIM style, where we build the master model in Tabular Editor and use TOM to generate child BIM files?