jaryszek
Impactful Individual

How to create a new semantic model based on an existing one on OneLake?

Hello,

Imagine that you have 1000 customers, and I have a baseline remote semantic model on OneLake.
Now I want to create a new model specific to one customer. There may be a schema change or a different data granularity.

1. How to make a copy of the current semantic model? What are the tools for this (without Semantic Link Labs)?

2. How to handle versioning of semantic models?

3. How to handle versioning for reports created on those semantic models?

4. How to build a common semantic model and migrate only the common data?

5. How to support a customer who wants me to use an app architecture on my side?

6. How to support a customer who has their own tenant and Fabric OneLake, but we need to populate it with our schemas?

Thank you,
Jacek

1 ACCEPTED SOLUTION
v-tsaipranay
Community Support

Hi @jaryszek ,

Thank you for reaching out to the Microsoft Fabric community forum.

 

Copying a semantic model:

Currently, Fabric does not offer a one-click feature to duplicate a semantic model. However, you can do this by downloading the semantic model as a .pbix file in Power BI Desktop and then re-publishing it to another workspace. Another option is to use deployment pipelines in Fabric to move semantic models between workspaces, which creates copies for various environments or customers. If your model uses OneLake data, you might also parameterize it so each copy references a different customer’s data.
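For scripting the copy, a rough sketch against the Fabric REST API is shown below. The endpoint shapes follow the public Items API, but all IDs are placeholders and the paths should be verified against the current Fabric REST reference.

```python
# Sketch: copy a semantic model between workspaces via the Fabric REST API.
# Workspace/model IDs are placeholders; an authenticated client (bearer
# token) would POST to these URLs in practice.
FABRIC_API = "https://api.fabric.microsoft.com/v1"

def get_definition_url(workspace_id: str, model_id: str) -> str:
    """POST here to export the model definition (TMDL/PBIR parts)."""
    return (f"{FABRIC_API}/workspaces/{workspace_id}"
            f"/semanticModels/{model_id}/getDefinition")

def create_item_url(target_workspace_id: str) -> str:
    """POST the exported definition here to create the copy."""
    return f"{FABRIC_API}/workspaces/{target_workspace_id}/semanticModels"

def copy_payload(display_name: str, definition: dict) -> dict:
    """Body for the create call: new name plus the exported definition."""
    return {"displayName": display_name, "definition": definition}
```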

 

Versioning of semantic models:

Semantic models do not include built-in version control features. It is best to use Fabric’s Git integration to manage these models in source control. You can export models with Tabular Model Definition Language (TMDL) or Tabular Editor, save them in a Git repository, and monitor changes as you would with code. Deployment pipelines are also useful for managing the lifecycle across Dev, Test, and Production environments.
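As a minimal sketch of this workflow (the folder and file names are illustrative, and the exported definition would really come from a PBIP save or a Tabular Editor TMDL export):

```shell
# Sketch: version an exported semantic-model definition with plain Git.
# "CustomerModel.SemanticModel" is an illustrative folder name.
set -e
rm -rf model-repo-demo
git init -q model-repo-demo
mkdir -p model-repo-demo/CustomerModel.SemanticModel/definition
# Stand-in for a real TMDL export from PBIP or Tabular Editor.
printf 'model Model\n' > model-repo-demo/CustomerModel.SemanticModel/definition/model.tmdl
git -C model-repo-demo add .
git -C model-repo-demo -c user.email=ci@example.com -c user.name=ci \
    commit -q -m "Baseline semantic model v1"
git -C model-repo-demo log --oneline
```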

Reference: Overview of Fabric Git integration - Microsoft Fabric | Microsoft Learn

 

Versioning for reports:

Just like semantic models, reports can be versioned by integrating with Git or storing .pbix files in source control. Deployment pipelines help ensure reports are consistently moved between environments with the correct version, keeping them aligned with the semantic models they rely on.

Reference: Overview of Fabric deployment pipelines - Microsoft Fabric | Microsoft Learn

 

Common semantic model with customer-specific extensions:

One effective strategy is to develop a shared or “hub” semantic model that includes all common elements such as measures, dimensions, and facts. For each customer, you can then build thin semantic models that reference the shared model and incorporate any customer-specific schema modifications or additional logic. This approach minimizes duplication, maintains consistent business logic across customers, and supports necessary customization.
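To illustrate the hub-and-thin split, a per-customer overlay of extra measures could be generated from a template. The customer names, measures, and the TMDL-style fragment below are all hypothetical:

```python
# Sketch: generate a customer-specific measure overlay for a thin model
# that sits on top of the shared "hub" model. Names and DAX are examples.
CUSTOMER_MEASURES = {
    "contoso": {"Net Revenue": "[Revenue] - [Rebates]"},
    "fabrikam": {"Net Revenue": "[Revenue] * 0.97"},
}

def thin_model_measures(customer: str) -> str:
    """Render the customer's extra measures as a TMDL-style fragment."""
    lines = []
    for name, expr in CUSTOMER_MEASURES[customer].items():
        lines.append(f"measure '{name}' = {expr}")
    return "\n".join(lines)
```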

 

Supporting customers via app architecture:

If customers would like you to handle everything from a central location, you can create and manage the semantic model within your workspace and share reports using Power BI apps. These apps offer a secure way to package and distribute semantic models and reports to customer groups. For better isolation, you can set up separate workspaces and apps for each customer, which also helps you manage permissions and updates more easily.

Reference: Publish an app in Power BI - Power BI | Microsoft Learn

 

Supporting customers with their own tenant and OneLake:

Currently, direct semantic model sharing is not available in cross-tenant scenarios. To address this, you can either export and deploy the semantic model into the other tenant, or use OneLake shortcuts to share data between tenants, enabling customers to build their own semantic models using your data. Alternatively, you can provide your model definitions (such as .pbix or TMDL files) so they can be deployed in the customer’s Fabric environment under a managed service agreement.

Reference: Unify data sources with OneLake shortcuts - Microsoft Fabric | Microsoft Learn

Hope this helps; please feel free to reach out with any further questions.

 

Thank you.


7 REPLIES
v-tsaipranay
Community Support

Hi @jaryszek ,

 

You're correct, thin semantic models only support adding new measures, not changing the schema or relationships. If you need to modify relationships or structure for each customer, it's best to create a separate semantic model for every customer. You can do this by exporting the baseline model as a .pbix or .pbip file and re-publishing it, or by managing definitions using TMDL or Tabular Editor in Git.

Each customer-specific model can still connect to your OneLake Lakehouse or Warehouse (including shortcuts), so you keep the benefits of OneLake while being able to customize the schema as needed.

 

Thank you.

v-tsaipranay
Community Support

Hi @jaryszek ,

 

At present, semantic models built directly in Fabric do not expose parameters in the same way as Power BI Desktop. To introduce parameterization, you typically need to define query parameters in Power Query within Power BI Desktop or apply them through Tabular Editor/TMDL by creating partitions or filters that can mimic parameter behavior. This allows you to customize models per customer.
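As a sketch of that per-customer templating, each copy's partition source can be rendered from a template before publishing. The M expression and workspace names below are placeholders, not a real partition definition:

```python
# Sketch: mimic a query parameter by templating the partition's M source
# per customer before publishing each copy. The M code and workspace
# names are placeholders.
M_TEMPLATE = (
    'let\n'
    '    Source = Lakehouse.Contents(null),\n'
    '    Data = Source{{[workspaceName = "{workspace}"]}}[Data]\n'
    'in\n'
    '    Data'
)

def partition_expression(customer_workspace: str) -> str:
    """Return the M expression bound to one customer's workspace."""
    return M_TEMPLATE.format(workspace=customer_workspace)
```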

Reference: https://learn.microsoft.com/en-us/power-bi/transform-model/desktop-what-if

 

Deployment pipelines in Fabric are designed around three stages (Dev → Test → Prod). For multi-customer scenarios, you can:

  • Create separate pipelines for each customer, or
  • Maintain a baseline pipeline for shared development and then replicate or clone workspaces per customer.

While pipelines do not branch into multiple customer environments automatically, structuring them this way allows you to control versions and ensure isolation across customers.
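A small sketch of that per-customer bookkeeping; the pipeline naming is illustrative, and the deployAll endpoint follows the Power BI deployment-pipelines REST API (pipeline IDs are placeholders):

```python
# Sketch: one deployment pipeline per customer, plus the deployAll
# endpoint used to promote items between stages. IDs are placeholders.
PBI_API = "https://api.powerbi.com/v1.0/myorg"

def pipeline_names(customers: list) -> dict:
    """One deployment pipeline per customer, named from a common baseline."""
    return {c: f"Baseline-Pipeline-{c}" for c in customers}

def deploy_all_url(pipeline_id: str) -> str:
    """POST here to promote all items from one pipeline stage to the next."""
    return f"{PBI_API}/pipelines/{pipeline_id}/deployAll"
```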

 

To build a shared or “hub” semantic model on OneLake:

  1. Save your main data, such as fact and dimension tables, in a Fabric Lakehouse or Warehouse on OneLake. If necessary, use OneLake shortcuts to bring together data from different sources.
  2. In Fabric, set up a new semantic model that references these Lakehouse or Warehouse tables. Define all shared measures, hierarchies, and relationships, and publish the model in a central workspace for your team to manage as the standard model.
  3. For customer-specific requirements, build thin semantic models that link to the shared model. These thin models let you add extra measures or schema changes for each customer without repeating the baseline logic.

This approach ensures consistent business definitions across customers, while still allowing flexibility.
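For step 1, a shortcut can also be created programmatically. A sketch of the OneLake Create Shortcut call follows, with all IDs, names, and paths as placeholders:

```python
# Sketch: payload and endpoint for the Fabric OneLake "Create Shortcut"
# REST call, used to pull external data under the hub Lakehouse.
# All IDs and paths are placeholders.
def shortcut_payload(name: str, source_workspace: str,
                     source_item: str, source_path: str) -> dict:
    """Shortcut in the Tables area pointing at another OneLake location."""
    return {
        "name": name,
        "path": "Tables",
        "target": {
            "oneLake": {
                "workspaceId": source_workspace,
                "itemId": source_item,
                "path": source_path,
            }
        },
    }

def create_shortcut_url(workspace_id: str, lakehouse_id: str) -> str:
    """POST the payload here to create the shortcut."""
    return (f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
            f"/items/{lakehouse_id}/shortcuts")
```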

 

Thank you.

Thank you.

One more question:

  1. For customer-specific requirements, build thin semantic models that link to the shared model. These thin models let you add extra measures or schema changes for each customer without repeating the baseline logic.

    Can you point me to the docs on creating a link between the OneLake model and a new one? How to do this so as not to lose the OneLake traits?

    And one more:

  2. What if the customer wants to have the data on their tenant? How to link my tenant's lakehouse with their tenant and build a deployment pipeline for them?
    So I have OneLake and create a link to the customer's tenant. But how to move semantic models and reports to them? Is it possible to use a deployment pipeline in this case? How?

    Best,
    Jacek

Hi @jaryszek ,

 

1. Linking Thin Semantic Models to a Shared OneLake Model Without Losing OneLake Traits

To support customer-specific requirements while maintaining a centralized architecture, you can build thin semantic models that extend a shared OneLake-based model without losing OneLake traits. This is achieved by using composite models in Power BI Desktop, where the thin model connects to the shared semantic model via DirectQuery for Power BI datasets or Analysis Services.

This setup allows you to add customer-specific measures, calculated columns, or schema extensions in the thin layer while preserving the core logic and OneLake integration of the base model. To ensure OneLake traits are retained, the base model must be published using Direct Lake mode, and the thin model should avoid importing data. You can also use Tabular Editor or TMDL to define partitions or filters that simulate parameterization per customer. This layered modeling approach ensures consistency, reusability, and flexibility across customer implementations.

For more details, refer to Learn about Microsoft OneLake Delta table integration in Power BI and Microsoft Fabric - Microsoft F...

Semantic Link: OneLake integrated Semantic Models | Microsoft Fabric Blog | Microsoft Fabric

 

2. Linking to a Customer’s Tenant Lakehouse and Deploying Semantic Models and Reports

If a customer prefers to keep their data within their own Microsoft Fabric tenant, you can architect a solution that links your tenant to their Lakehouse using OneLake shortcuts. These shortcuts allow you to reference data stored in the customer’s tenant without duplicating it, provided Fabric-to-Fabric authentication is configured (note: Entra B2B guest access is not supported for this).

To deploy semantic models and reports into the customer’s tenant, you’ll need to export them as .pbip or .pbix files and either share them securely or deploy them under a managed service agreement. After import, the models must be rebound to the customer’s Lakehouse or shortcut path to ensure proper data connectivity. While Fabric deployment pipelines do not natively support cross-tenant automation, you can simulate this using REST APIs and Power Automate flows authenticated via service principals. This approach enables you to maintain version control and deployment consistency across tenants.
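A sketch of the service-principal setup for that cross-tenant call; tenant, client, and workspace IDs are placeholders, and the real token acquisition would use msal.ConfidentialClientApplication(...).acquire_token_for_client(...):

```python
# Sketch: client-credentials config for an app-only token in the customer
# tenant, plus the Fabric item-create call used to push a model definition
# there. All IDs are placeholders; msal would perform the actual auth.
def token_request_config(tenant_id: str, client_id: str) -> dict:
    """Settings to pass to msal.ConfidentialClientApplication."""
    return {
        "authority": f"https://login.microsoftonline.com/{tenant_id}",
        "client_id": client_id,
        "scopes": ["https://api.fabric.microsoft.com/.default"],
    }

def create_model_url(workspace_id: str) -> str:
    """Fabric endpoint for creating a semantic model from a definition."""
    return (f"https://api.fabric.microsoft.com/v1/workspaces/"
            f"{workspace_id}/semanticModels")
```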

For implementation guidance, see External Data Sharing in Microsoft Fabric - Microsoft Fabric | Microsoft Learn

Optimizing for CI/CD in Microsoft Fabric | Microsoft Fabric Blog | Microsoft Fabric

Hope this helps; please feel free to reach out with any further questions.

 

Thank you.

Thank you. So a thin model will not address my case, since I do not want to create only measures. I want to change relationships, for example, which I cannot do, and I do not want to use composite models.

jaryszek
Impactful Individual

Thank you! 

So: 

Copying a semantic model:

"If your model uses OneLake data, you might also parameterize it so each copy references a different customer’s data." 

How to parameterize it if parameters are not available in OneLake models? Only by adding parameters through Tabular Editor?

"Another option is to use deployment pipelines in Fabric to move semantic models between workspaces, which creates copies for various environments or customers." 
Can you please point me to a source on how to set them up for different customers?

Common semantic model with customer-specific extensions:
"One effective strategy is to develop a shared or “hub” semantic model that includes all common elements" 

How to create shared semantic model based on OneLake?
Best,
Jacek



