VivienneVereen

Programmatically deploy Semantic Models and Reports via Semantic Link Labs

Looking for governance across semantic models and reports? Need a programmatic solution to deploy Direct Lake (and other) semantic models and reports across workspaces and Lakehouses? Semantic Link Labs is the answer. Thanks to Michael Kovalsky and Markus Cozowicz for their efforts in creating this incredible library! In this article, we will review the Power BI architecture and output of a Fabric notebook that leverages Semantic Link Labs to programmatically deploy semantic models and reports across workspaces and schema-enabled Lakehouses, providing central governance.

 

Steps:

 

  1. Deploy semantic model to new workspace and rename
  2. Update semantic model connection to new Lakehouse
  3. (Optional) Check semantic model Lakehouse connection
  4. Update Direct Lake table partition to new schema
  5. (Optional) Get Tabular Model Scripting Language (TMSL) to confirm lineage
  6. Clone report to new workspace and rebind to new semantic model
  7. (Optional) Launch report to preview

 

Architecture

 

[Figure: Power BI architecture diagram]

 

Partitioning semantic models and reports into smaller source tables, for example by store (Europe, Asia, North America), has many benefits, including:

 

  • Performance optimization: Reduced row counts improve query performance.
  • Decreased concurrent queries: Subsets of users generate fewer concurrent queries.
  • Simplified security management: Option to eliminate or simplify Row-Level Security (RLS) requirements.

 

In the example below, we have an architecture in which each store has its own schema with shared dimensions (shortcuts within the Gold Lakehouse). As a best practice, semantic models and reports live in separate workspaces, and reports are distributed via apps:

  • One Data Engineering – Gold workspace with Lakehouse
  • Three Data Hub workspaces with Lakehouses and Direct Lake Semantic Models
  • Three Reporting Hub workspaces with Reports and Apps

 

[Figure: workspace architecture]

 

Within the Data Engineering – Gold workspace we have a schema-enabled Lakehouse with schemas for each store:

 

[Figure: schemas in the Gold Lakehouse]

 

The fact tables in each schema are populated directly via ETL processes (in this case, a simple change to a WHERE clause), while dimensions are shared across schemas via shortcuts from the “master” dbo schema.
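For illustration, the per-store fact load can be a single parameterized query in which only the target schema and the WHERE clause change. The table and column names below are assumptions, not taken from this architecture:

```python
def fact_load_query(schema: str, store: str) -> str:
    """Build the per-store fact load; only the target schema and the
    WHERE clause differ between Europe, Asia, and North America."""
    return (
        f"INSERT INTO {schema}.FactSales "
        f"SELECT * FROM dbo.FactSales WHERE Store = '{store}'"
    )
```

For example, `fact_load_query("europe", "Europe")` produces the Europe load, and the same function covers the other stores by swapping the arguments.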

 

Each Data Hub workspace has its own Reporting Lakehouse with schema shortcuts to its respective schema in the Data Engineering – Gold workspace. The Europe example is shown below:

 

[Figures: Europe Reporting Lakehouse schema shortcuts]
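These schema shortcuts can also be created programmatically via the Fabric REST shortcuts endpoint (`POST /v1/workspaces/{workspaceId}/items/{itemId}/shortcuts`), for example with `sempy.fabric.FabricRestClient` from a notebook. Below is a sketch of just the request body; all IDs, paths, and names are placeholders, and the exact payload shape should be confirmed against the Fabric REST API docs:

```python
def shortcut_payload(name: str, source_workspace_id: str,
                     source_lakehouse_id: str, source_path: str,
                     target_path: str = "Tables") -> dict:
    """Request body for a OneLake schema shortcut (sketch).

    Placeholders: pass the Gold workspace/Lakehouse GUIDs and e.g.
    source_path="Tables/europe" to surface the 'europe' schema in the
    Reporting Lakehouse.
    """
    return {
        "path": target_path,          # where the shortcut lands in the Reporting Lakehouse
        "name": name,                 # e.g. the schema name, "europe"
        "target": {
            "oneLake": {
                "workspaceId": source_workspace_id,
                "itemId": source_lakehouse_id,
                "path": source_path,  # e.g. "Tables/europe" in the Gold Lakehouse
            }
        },
    }
```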

 

Now, the goal is to develop three Direct Lake semantic models and reports, each pointing to its own workspace, Lakehouse, and schema. Instead of doing this manually every time the semantic model or report changes (a governance nightmare that leaves a lot of room for error), we will leverage Semantic Link Labs in Fabric notebooks to automate the deployment.

 

Let’s treat the Europe store as the starting point and source of truth for our development. We have a working semantic model and report for Europe and would like to deploy them to Asia and North America.

 

[Figures: Europe semantic model and report]

 

Programmatic Deployment using Semantic Link Labs

 

The notebook below will programmatically deploy the Europe semantic model and report to Asia and North America across workspaces with different Lakehouses and lineage.

 

 

[Figures: notebook cells]
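The notebook’s logic can be sketched as follows, covering steps 1, 2, 4, and 6 from the list above. The function names come from the Semantic Link Labs (`sempy_labs`) API, though parameter names may differ slightly between versions; all workspace, Lakehouse, model, and table names are illustrative assumptions, not the actual names from this solution:

```python
# Deployment targets: each store gets its own workspaces, Lakehouse, and schema.
# (Names are assumptions for illustration.)
targets = {
    "Asia": {
        "model_workspace": "Data Hub - Asia",
        "report_workspace": "Reporting Hub - Asia",
        "lakehouse": "Reporting Lakehouse",
        "schema": "asia",
    },
    "North America": {
        "model_workspace": "Data Hub - North America",
        "report_workspace": "Reporting Hub - North America",
        "lakehouse": "Reporting Lakehouse",
        "schema": "northamerica",
    },
}

SOURCE = {
    "model": "Sales Model",
    "model_workspace": "Data Hub - Europe",
    "report": "Sales Report",
    "report_workspace": "Reporting Hub - Europe",
}

def deploy(store: str, cfg: dict) -> None:
    """Deploy the Europe semantic model and report to one target store."""
    # Imported lazily: sempy_labs is only available inside a Fabric notebook.
    import sempy_labs as labs
    from sempy_labs import directlake, report

    model_name = f"{SOURCE['model']} - {store}"

    # 1. Copy the source semantic model to the target workspace under a new name.
    labs.deploy_semantic_model(
        source_dataset=SOURCE["model"],
        source_workspace=SOURCE["model_workspace"],
        target_dataset=model_name,
        target_workspace=cfg["model_workspace"],
    )

    # 2. Repoint the Direct Lake model at the target workspace's Lakehouse.
    directlake.update_direct_lake_model_lakehouse_connection(
        dataset=model_name,
        workspace=cfg["model_workspace"],
        lakehouse=cfg["lakehouse"],
        lakehouse_workspace=cfg["model_workspace"],
    )

    # 4. Repoint each Direct Lake table partition at the store's schema.
    for table in ["FactSales", "DimProduct", "DimStore", "DimDate"]:
        directlake.update_direct_lake_partition_entity(
            dataset=model_name,
            table_name=table,
            entity_name=f"{cfg['schema']}.{table}",
            workspace=cfg["model_workspace"],
        )

    # 6. Clone the source report into the target workspace, rebinding it
    #    to the newly deployed semantic model.
    report.clone_report(
        report=SOURCE["report"],
        cloned_report=f"{SOURCE['report']} - {store}",
        workspace=SOURCE["report_workspace"],
        target_workspace=cfg["report_workspace"],
        target_dataset=model_name,
    )
```

In the notebook, the driver is then simply `for store, cfg in targets.items(): deploy(store, cfg)`. The optional checks (steps 3, 5, and 7) map to functions such as `directlake.get_direct_lake_lakehouse`, `sempy.fabric.get_tmsl`, and `report.launch_report`; confirm the exact names against the Semantic Link Labs documentation for your version.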

 

Reviewing the results after running the notebook

 

As seen below, we now have two new semantic models and reports for Asia and North America with updated workspaces and Lakehouses:

 

[Figures: new semantic models and reports in the Asia and North America workspaces]

 

Viewing the reports, everything works as expected:

 

[Figures: Asia and North America reports]

Note: a few manual steps remain.

 

  • After the initial model deployment, you still need to update security role members, switch the cloud connection from SSO (the default) to a fixed identity, and set access permissions.
  • Once deployed, if the model is overwritten, only the cloud connection needs to be updated.

However, these items should also be doable programmatically via Semantic Link Labs, and that is next on my to-do list 🙂

 

1. Adding security role members on the new semantic models:

 

[Figure: security role members]

 

2. Updating Cloud connection settings to fixed identity (for RLS purposes):

 

[Figure: cloud connection settings]

 

3. Updating permissions:

 

[Figure: access permissions]

 

4. Updating app:

 

[Figure: app update]

 

Reporting Hub – Asia App from TestUser1 perspective with dynamic Product RLS implemented:

 

[Figure: Reporting Hub – Asia app with dynamic Product RLS applied]

 

Final Thoughts

 

Coming from a Power BI development background, I’m by no means a Python expert, but Semantic Link makes data science and automation accessible to everyone! The extent of the capabilities within Semantic Link Labs is truly impressive. Tasks that would have taken hours, if not days (including testing), were done within minutes using minimal, scalable code. Give it a try and let me know your thoughts!

 

References