ShijuKuchelan
Frequent Visitor

Deploy Master Semantic Model to Multiple Workspaces with Schema Change Using Microsoft Fabric Mirror

I have a source workspace that contains a master semantic model and reports. I want to automate the process of:

  1. Copying the semantic model and reports to multiple workspaces (around 1000).
  2. Updating the connection and schema name because I use tenant-per-schema within the mirrored database in OneLake.
  3. Switching the data source to Delta Lake via Microsoft Fabric’s Mirroring feature.

What is the ideal approach to achieve this?

Options Considered

  1. Import PBIX files using the Import API:

    POST https://api.powerbi.com/v1.0/myorg/groups/{groupId}/imports?datasetDisplayName={datasetDisplayName}

    Then update the schema using:

    POST https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}/Default.UpdateDatasources

    (Hypothetical—open to better suggestions)

  2. Import PBIT files using the same Import API and then update schema similarly.

  3. Copy the semantic model and reports directly from the source folder to the destination workspaces and change the schema in the semantic model (unsure which API collection supports this).

What is the best approach to implement this? Please provide recommended APIs and steps.
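
For illustration, option 1's two calls might look like the following minimal Python sketch (the token, IDs, and connection details are placeholders; note that Update Datasources targets connection details such as server and database rather than schema names inside queries, which is part of why I'm unsure about it):

    import requests

    TOKEN = "<aad-access-token>"  # placeholder
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}
    BASE = "https://api.powerbi.com/v1.0/myorg"
    GROUP_ID = "<target-workspace-id>"

    # Option 1a: import the PBIX as a multipart/form-data upload.
    url = (
        f"{BASE}/groups/{GROUP_ID}/imports"
        "?datasetDisplayName=TenantModel&nameConflict=CreateOrOverwrite"
    )
    with open("master_model.pbix", "rb") as f:
        requests.post(url, headers=HEADERS, files={"file": f}).raise_for_status()

    # Option 1b: update the datasource. The documented body targets connection
    # details (server/database for a SQL source), not schema names in queries.
    dataset_id = "<imported-dataset-id>"  # from GET .../imports/{importId} once it succeeds
    body = {
        "updateDetails": [{
            "datasourceSelector": {
                "datasourceType": "Sql",
                "connectionDetails": {"server": "<old-server>", "database": "<old-db>"},
            },
            "connectionDetails": {"server": "<new-server>", "database": "<new-db>"},
        }]
    }
    requests.post(
        f"{BASE}/groups/{GROUP_ID}/datasets/{dataset_id}/Default.UpdateDatasources",
        headers=HEADERS,
        json=body,
    ).raise_for_status()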

1 ACCEPTED SOLUTION
v-karpurapud
Community Support

Hi @ShijuKuchelan 


Thank you for contacting the Microsoft Fabric Community Forum.

 

As mentioned by @GilbertQ, the recommended and industry-standard approach is to use a parameterized PBIT deployment together with the Update Parameters API. This is also the Microsoft-endorsed method for managing multi-workspace or multi-tenant Power BI environments.

 

Create a master Power BI Template (.pbit) that includes your model relationships, measures, and visuals, along with parameters for key data-source elements such as Server, Database, Schema, or Tenant ID. These parameters allow each workspace to dynamically connect to the correct tenant data.

Once the template is ready, use the Power BI REST APIs to automate deployment by importing the .pbit file into each workspace. After that, update the parameter values for each tenant as described in the Microsoft documentation.
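
For example, the import loop could be scripted roughly as follows (a minimal Python sketch; the token, workspace IDs, and template path are placeholders):

    import requests

    TOKEN = "<aad-access-token>"  # placeholder
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}
    TEMPLATE = "master_model.pbit"

    # Hypothetical mapping of target workspace ID -> tenant schema.
    TENANTS = {"<workspace-id-1>": "TENANT_1", "<workspace-id-2>": "TENANT_2"}

    for group_id in TENANTS:
        # The Imports API accepts .pbit files the same way as .pbix files.
        url = (
            f"https://api.powerbi.com/v1.0/myorg/groups/{group_id}/imports"
            "?datasetDisplayName=TenantModel&nameConflict=CreateOrOverwrite"
        )
        with open(TEMPLATE, "rb") as f:
            requests.post(url, headers=HEADERS, files={"file": f}).raise_for_status()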

 


 

This method fully supports Delta Lake and Microsoft Fabric Mirror connections. You can update parameters like SchemaName for each workspace so it connects to the appropriate tenant schema within the mirrored database.
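
Assuming the template defines a SchemaName parameter, the per-tenant update could then look like this sketch (dataset IDs are placeholders; Update Parameters requires the caller to own the dataset, hence the TakeOver call):

    import requests

    TOKEN = "<aad-access-token>"  # placeholder
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}
    BASE = "https://api.powerbi.com/v1.0/myorg"

    def set_schema(group_id: str, dataset_id: str, schema: str) -> None:
        # Take ownership of the dataset before changing its parameters.
        requests.post(
            f"{BASE}/groups/{group_id}/datasets/{dataset_id}/Default.TakeOver",
            headers=HEADERS,
        ).raise_for_status()
        # Datasets - Update Parameters In Group
        body = {"updateDetails": [{"name": "SchemaName", "newValue": schema}]}
        requests.post(
            f"{BASE}/groups/{group_id}/datasets/{dataset_id}/Default.UpdateParameters",
            headers=HEADERS,
            json=body,
        ).raise_for_status()

    # Example: point one tenant workspace's model at its own schema.
    set_schema("<workspace-id-1>", "<dataset-id-1>", "TENANT_1")

It is worth validating the parameter update on a single workspace before scaling out to all 1000, since some dataset types restrict this API.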
 

In short, parameterized PBIT deployment with automated parameter updates is the most scalable, flexible, and Microsoft-supported solution for multi-tenant semantic model management in Power BI and Fabric.


 

I hope this information is helpful. If you have any further questions, please let us know and we can assist you further.

 

Regards,

Microsoft Fabric Community Support Team.
 


5 REPLIES
ShijuKuchelan
Frequent Visitor

@v-karpurapud  @GilbertQ 
I created a semantic model in a source workspace (connected to Delta Lake via Fabric). Then, using Power BI Desktop, I built a report and saved it as a .pbit file.
I programmatically imported the .pbit into a target workspace using the Power BI Import API: POST https://api.powerbi.com/v1.0/myorg/groups/{workspaceId}/imports

The report was successfully imported into the target workspace, but it still logically points to the semantic model in the source workspace, which breaks the isolation between environments.
I'm not sure how to update parameters like SchemaName in the semantic model definition, so as a workaround I followed the approach below (a condensed code sketch follows the steps).
 
1. Export the model using POST /v1/workspaces/{workspaceId}/semanticModels/{semanticModelId}/getDefinition

   1.1 Parse the TMDL and extract the OneLake connection details:
       ├─ Workspace ID: xxx-xxx-xx-xx-xxxx
       ├─ Item ID (Lakehouse): xxx-xx-xx-xx-xxx
       └─ Found in: definition/expressions.tmdl

   1.2 Identify the DirectLake configuration:
       ├─ Mode: directLake
       ├─ Schema: TENANT_1
       ├─ Expression Source: DirectLake - DB_NAME
       └─ Found in: definition/tables/test.tmdl

   1.3 Replace tenant-specific schema names.

   1.4 Re-encode and create the semantic model in the target workspace using POST /v1/workspaces/{workspaceId}/semanticModels

2. PBIT import: validate and upload the .pbit using the /imports API.

3. Report rebinding: use POST /reports/{reportId}/Rebind to point the report to the new semantic model.

4. Semantic model refresh: trigger a DirectLake refresh via /datasets/{datasetId}/refreshes.
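
In code, the workaround looks roughly like this (a condensed Python sketch; the token and IDs are placeholders, long-running-operation polling and error handling are omitted, and the plain string replace on the schema name is intentionally naive):

    import base64
    import requests

    TOKEN = "<access-token>"  # placeholder; Fabric and Power BI scopes may need separate tokens
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}
    FABRIC = "https://api.fabric.microsoft.com/v1"
    POWERBI = "https://api.powerbi.com/v1.0/myorg"

    def clone_model(src_ws, model_id, dst_ws, old_schema, new_schema, name):
        """Steps 1.1-1.4: export the TMDL, rewrite the schema, recreate the model."""
        # getDefinition may answer 202 + a Location header for long-running calls.
        resp = requests.post(
            f"{FABRIC}/workspaces/{src_ws}/semanticModels/{model_id}/getDefinition",
            headers=HEADERS,
        )
        resp.raise_for_status()
        parts = resp.json()["definition"]["parts"]

        # Decode each definition part, swap the tenant schema, re-encode.
        for part in parts:
            text = base64.b64decode(part["payload"]).decode("utf-8")
            part["payload"] = base64.b64encode(
                text.replace(old_schema, new_schema).encode("utf-8")
            ).decode("ascii")

        requests.post(
            f"{FABRIC}/workspaces/{dst_ws}/semanticModels",
            headers=HEADERS,
            json={"displayName": name, "definition": {"parts": parts}},
        ).raise_for_status()

    def rebind_and_refresh(dst_ws, report_id, new_model_id):
        """Steps 3-4: point the imported report at the new model, then refresh."""
        requests.post(
            f"{POWERBI}/groups/{dst_ws}/reports/{report_id}/Rebind",
            headers=HEADERS,
            json={"datasetId": new_model_id},
        ).raise_for_status()
        requests.post(
            f"{POWERBI}/groups/{dst_ws}/datasets/{new_model_id}/refreshes",
            headers=HEADERS,
        ).raise_for_status()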
 
 
Can you provide a step-by-step guide to updating parameters such as SchemaName and the DirectLake connection via Fabric or Power BI Desktop?
Is this a valid workaround for migrating reports and semantic models between Fabric workspaces? 
Are there best practices for handling OneLake connections and schema transformations in TMDL?
v-karpurapud
Community Support

Hi @ShijuKuchelan 

I wanted to check whether you've had a chance to review the information provided. Has your issue been resolved? If not, or if you have any further questions, please share more details so we can assist you further.

Thank You.


ShijuKuchelan
Frequent Visitor

Thank you so much, @GilbertQ. What is the Power BI or industry standard for importing reports and the semantic model?
Does creating a parameter for the data sources work for a Delta Lake connection?
If I want to experiment with the remaining options, could you provide more details?

GilbertQ
Super User

Hi @ShijuKuchelan 

 

That looks like it could work well: import the PBIX file into the different workspaces rather than updating the data source directly. If you create a parameter that is used by your data sources, you can then update the parameters programmatically in each of your workspaces to point to the data source you require. Here is the API to use: Datasets - Update Parameters In Group - REST API (Power BI REST APIs) | Microsoft Learn




