
Automating Microsoft Fabric Workspace Deployments Using Fabric CLI and Python

Microsoft Fabric Deployment using Fabric CLI 

 

Deploying Microsoft Fabric artifacts across environments can become complex, especially when manual deployment or traditional deployment pipelines are not an option.

Microsoft Fabric provides the following deployment approaches today:

 

Fabric Deployment Pipelines - promote content stage-to-stage (Dev → Test → Prod) inside the Fabric service.

Git Integration - sync a workspace to an Azure DevOps or GitHub repo for version control and branch-based collaboration.

Fabric CLI - a command-line tool that lets you create, import, export, and manage Fabric artifacts directly from a terminal or a script.

 

Each approach has its place. This blog focuses on the Fabric CLI and what makes it uniquely powerful: it can import artifacts into a workspace directly from local files; provision connections, shortcuts, Spark pools, and workspace access control; resolve cross-artifact dependencies automatically; and deploy to any number of workspaces.

 

The Fabric CLI provides a powerful and flexible alternative, enabling seamless migration of workspace assets from one Fabric workspace to another.

 

Why CLI Deployment?

 

hasrikak_0-1775664838808.png

 

This approach becomes particularly valuable in scenarios where:

 

  • Manual deployment is not feasible, or the team prefers automation-first processes.
  • Fabric Deployment Pipelines cannot be used due to environmental constraints, or access to the target Fabric workspace is unavailable, such as when deploying to customer-managed workspaces.
  • The target workspace does not yet exist and must be created and fully configured during deployment.
  • Connections, shortcuts, and access control: Deployment Pipelines do not create connections, OneLake shortcuts, or workspace role assignments. These must be set up separately. The CLI handles all of them as part of the same deployment flow.
  • Spark pool configurations: the CLI helps you manage Spark pool settings. Admin teams can define pool sizes, node counts, and auto-scale limits in the config file, ensuring every workspace is provisioned with the right compute guardrails.
  • Dynamic references between artifacts and resources: notebook parameters, semantic model source connections, and storage account sources for shortcuts are configured dynamically from the config file.

 

When Does the One‑Click Fabric CLI Shine?

 

The CLI is especially effective in real-world industry scenarios, such as:

 

✔ Self‑Deployable Shared Utilities at Scale

Teams building reusable Fabric utilities such as data‑quality frameworks, golden‑query generators, or standardized notebook libraries often need a clean way to share them without ongoing hand‑holding. Using the CLI, these assets can be packaged with configuration files and self‑deployed by other teams (or even across tenants) into their own workspaces.

 

✔ Automated Workspace Provisioning for Reusable Solutions

Teams building generic, configurable, or multi‑tenant Fabric solutions often need a consistent, repeatable way to provision a new workspace—complete with pipelines, notebooks, semantic models, reports, and cross‑artifact references. The CLI enables a “workspace-from-scratch” setup with full dependency wiring.

 

✔ Platform Engineering

Self-service workspace provisioning with guardrails—Spark pool sizes, shortcut policies, and RBAC are all config-driven.

 

✔ Large-Scale Rollouts Across Many Fabric Workspaces

A retail enterprise deploying an updated data pipeline across dozens of store-specific Fabric workspaces can leverage this tool to push pipelines, notebooks, and reports uniformly ensuring consistency, reducing manual overhead, and preventing configuration drift.

 

How is this done?

 

The deployment is driven by the Fabric CLI and Python automation (Python is the scripting language used here). All scripts are bundled together and executed with a single terminal command, standing up an entire workspace in one shot and providing the one-click experience.

 

A quick summary of steps -

 

Picture11.png

 

 

Key Features (Fabric CLI + Python Automation)

 

This solution goes beyond a simple sequence of CLI command executions. It reflects deliberate architectural choices that ensure production-grade standards — idempotent resource creation, dependency-aware ordering, and config-driven extensibility.

The same toolkit adapts to different needs without code changes: infrastructure-only (--skip-code), code-only redeployments (--skip-infra), verbose or minimal logging, and selective artifact targeting through config updates.

 

 

hasrikak_2-1775664838812.png

 

 

Repeatable - Same config, same result, every time. Run the deployment today, tomorrow, or six months from now - the output is identical. No human variance, no forgotten steps.

 

Config-Driven - One JSON file is the single source of truth. Workspace, Lakehouse, Shortcuts, Notebooks, Pipelines, Semantic Models, Reports, Spark Pools, Access Control - every artifact and its parameters are declared in config. Swap the file, deploy a new environment. Artifacts also stay environment-agnostic: hardcoded IDs and connection strings are replaced with ##parameterName## tokens. At deploy time, the scripts scan every artifact file, find matching placeholders, and replace them with actual values from the config. The same source code works everywhere - only the config changes.
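The token-replacement step can be sketched in a few lines of Python. This is a minimal illustration, not the repo's actual implementation; `resolve_placeholders` is a hypothetical name:

```python
import re

# Matches the ##parameterName## tokens described above.
TOKEN = re.compile(r"##(\w+)##")

def resolve_placeholders(text: str, config: dict) -> str:
    """Replace every ##key## token with the matching config value.
    Unknown tokens are left untouched, so missing config entries stay visible."""
    return TOKEN.sub(lambda m: str(config.get(m.group(1), m.group(0))), text)

# An environment-agnostic notebook line before deployment:
source = 'lakehouse_id = "##lakehouseId##"'
print(resolve_placeholders(source, {"lakehouseId": "1234-abcd"}))
```

The same function runs over every artifact file in the project, which is why the same source tree can be deployed to any environment.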

 

CI/CD Ready - The master script is a single python oneinstaller.py call. Drop it into an Azure DevOps pipeline or a GitHub Actions workflow and you have continuous deployment for Fabric - no portal clicks, no manual handoffs.

 

Auditable Logging - Every deployment action is logged twice: a detailed running log and a structured CSV file for an artifact-level audit trail. Each row captures the timestamp, artifact name, type, status, and the exact CLI command executed. Import the CSV into Power BI or Excel for deployment analytics across runs.
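A minimal sketch of writing such an artifact-level audit row with the stdlib `csv` module; the column names here are assumptions, not the toolkit's exact schema:

```python
import csv
import io
from datetime import datetime, timezone

# Assumed audit columns; the real CSV schema is defined by the toolkit.
FIELDS = ["timestamp", "artifact_name", "artifact_type", "status", "cli_command"]

def log_action(writer: csv.DictWriter, name: str, kind: str,
               status: str, cmd: str) -> None:
    """Append one audit row per deployment action."""
    writer.writerow({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "artifact_name": name,
        "artifact_type": kind,
        "status": status,
        "cli_command": cmd,  # the exact CLI command that was executed
    })

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_action(writer, "SalesNotebook", "Notebook", "SUCCESS", "<cli command>")
print(buf.getvalue())
```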

 

Dependency-Aware Ordering - Artifacts don't exist in isolation. A Shortcut needs a Connection. A Pipeline references Notebook IDs. A Report binds to a Semantic Model ID. The orchestrator deploys artifacts in the correct sequence - so every reference points to a real, already-provisioned resource.
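The ordering idea can be sketched with the stdlib's `graphlib`. The edges below mirror the examples in this paragraph; the real orchestrator derives its sequence from the config:

```python
from graphlib import TopologicalSorter

# Each artifact type maps to the set of types it depends on.
deps = {
    "Connection": set(),
    "Lakehouse": set(),
    "Shortcut": {"Connection", "Lakehouse"},  # a shortcut needs a connection
    "Notebook": {"Lakehouse"},
    "Pipeline": {"Notebook"},                 # pipelines reference notebook IDs
    "SemanticModel": {"Lakehouse"},
    "Report": {"SemanticModel"},              # reports bind to a model ID
}

# static_order() yields dependencies before their dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)
```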

 

Selective Deployment - Not every run needs to deploy everything. The script supports targeted execution.

 

SelectiveDeployment.png

 

The deployment approach is intentionally flexible. For notebook-only changes, infrastructure redeployment is unnecessary. A new workspace requires a complete deployment, while updates to Spark pool configurations can be handled by deploying infrastructure changes alone.

   

Intelligent Retry - The deployment process is designed to be safe, repeatable, and idempotent. Infrastructure components such as the Lakehouse, connections, shortcuts, folders, and ACLs are first checked for existence in Fabric and skipped if already present, while the Spark pool is updated in place when it exists. All code artifacts including notebooks, pipelines, models, and reports are always overwritten using a force flag to ensure the latest changes are applied. 

 

Graceful Degradation - Not every deployment needs every feature, so the script adjusts based on what's configured:

  • No storage account? Connection and shortcut creation are skipped entirely - no errors, no empty resources.
  • No SPN Object ID? Workspace RBAC assignment to application is skipped.
  • No folder configuration? Folder creation is skipped.
  • No shortcut configuration? Shortcut creation is skipped.

You don't need to comment out code or maintain separate scripts for different scenarios. The config drives what runs.
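A sketch of that config-driven behavior; function and key names are illustrative, not the repo's actual ones:

```python
def plan_steps(config: dict) -> list[str]:
    """Return the deployment steps implied by what the config declares."""
    steps = ["workspace", "lakehouse"]        # always provisioned
    if config.get("storageAccount"):
        steps += ["connection", "shortcuts"]  # skipped when no storage account
    if config.get("spnObjectId"):
        steps.append("workspace_rbac")        # skipped when no SPN Object ID
    if config.get("folders"):
        steps.append("folders")               # skipped when no folder config
    return steps

print(plan_steps({"storageAccount": "mystorageacct"}))
```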

 

Implementation Walkthrough

 

Prerequisites:

 

1. Python 3.10 and the Fabric CLI must be installed prior to deployment.

2. The Fabric artifacts to be deployed need to be placed in the respective project structure, and any parameters that should come from config need to be replaced with placeholders.

Detailed guidance is available in README.md.

 

Terminal based deployment:

 

This is a terminal-based deployment, triggered by the python oneinstaller.py command.

Every deployment starts with a config file.

 

Here is the structure that drives the entire automation -

 

Terminal1.png

 

This is available as fabric_config.json in the GitHub repo.

 

A clear and detailed explanation of how the configuration needs to be filled in and updated is available in README.md in the attached repo. Once the code is in place and the config is updated, the deployment is triggered by issuing the python oneinstaller.py command.
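For illustration only (the authoritative schema is the fabric_config.json in the repo, which may differ), a config of this general shape could look like:

```json
{
  "workspace": { "name": "Sales-Dev", "capacityId": "<capacity-guid>" },
  "lakehouse": { "name": "SalesLakehouse" },
  "connection": { "type": "AzureDataLakeStorage", "server": "<storage-account-url>" },
  "sparkPool": { "nodeSize": "Small", "minNodes": 1, "maxNodes": 4, "autoScale": true },
  "parameters": { "lakehouseId": "<resolved-at-deploy-time>" }
}
```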

 

Examples of CLI commands

 

1. Lakehouse Creation

 

The Lakehouse is the first artifact deployed. Shortcuts, notebooks, and pipelines all reference it.

The automation checks for existence first and creates the Lakehouse only if it does not already exist.

 

Terminal2.png
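The check-then-create pattern above can be sketched in Python. This assumes the Fabric CLI's filesystem-style item paths and its exists/mkdir commands; verify the exact syntax against the CLI docs, and note that `run_fab` and the output parsing are hypothetical:

```python
import subprocess

def run_fab(args: list[str]) -> subprocess.CompletedProcess:
    # Thin wrapper around the `fab` executable; captures output for logging.
    return subprocess.run(["fab", *args], capture_output=True, text=True)

def lakehouse_path(workspace: str, lakehouse: str) -> str:
    # Items are addressed as /<workspace>.Workspace/<item>.<ItemType> (assumed).
    return f"/{workspace}.Workspace/{lakehouse}.Lakehouse"

def ensure_lakehouse(workspace: str, lakehouse: str) -> None:
    path = lakehouse_path(workspace, lakehouse)
    # How `fab exists` reports its result is an assumption; adapt as needed.
    if run_fab(["exists", path]).stdout.strip().lower() == "true":
        return  # idempotent: reuse the existing Lakehouse
    run_fab(["mkdir", path])

print(lakehouse_path("SalesWS", "SalesLakehouse"))
```

Downstream steps then retrieve the Lakehouse ID for shortcuts, notebooks, and pipelines to reference.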

 

2. Creating a Connection

 

Connections are a prerequisite for shortcuts and external data access. The CLI creates them with the following commands:

 

Terminal3.png

 

The script reads connection details from the config (server name, auth type, privacy level), constructs the command dynamically, and checks existence first, so re-running never creates duplicates.

 

Order of deployment

 

Artifacts don't exist in isolation. A shortcut needs a connection. A pipeline references notebook IDs. A report binds to a semantic model GUID. Deploy them in the wrong order and references break silently.

The orchestrator enforces this sequence automatically.

 

Picture6.png

 

The sequence of steps that make up a full deployment: pre-flight checks ensure Python and Fabric CLI availability, the user confirms the target workspace, infrastructure deployment provisions the Lakehouse, Connections, Spark pool, and related resources, and code deployment follows.

 

Picture7.png

 

 

The options for the deployment

 

Subsequent runs need not deploy everything, and there may be project-specific requirements. The script supports targeted execution, with options to skip infrastructure, skip code, and so on.
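A sketch of how such switches are typically wired up with argparse; the --skip-infra and --skip-code flags appear earlier in this post, but treat the exact interface as an assumption:

```python
import argparse

parser = argparse.ArgumentParser(prog="oneinstaller.py")
parser.add_argument("--skip-infra", action="store_true",
                    help="code-only redeployment: reuse existing infrastructure")
parser.add_argument("--skip-code", action="store_true",
                    help="infrastructure-only: no artifact imports")
parser.add_argument("--minimal", action="store_true",
                    help="terse logging, e.g. for CI/CD build logs")

# Simulate a code-only run:
args = parser.parse_args(["--skip-infra"])
print(args.skip_infra, args.skip_code)
```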

 

 

Picture8.png

 

Retry-Safe (Idempotent) by Design

 

The deployment process is engineered to be safe, repeatable, and idempotent. Core infrastructure components, including lakehouses, connections, and shortcuts, are validated for existence and reused or skipped if already available, while configuration‑driven resources such as Spark pools are updated in place. Code artifacts such as notebooks, pipelines, models, and reports are re‑imported during each run so the deployed environment consistently reflects the latest source state.

 

Every component follows one of three strategies:

 

Skip if exists - Applies to: Lakehouse, Connection, Shortcuts, Folders. Checks existence first; if already there, reuses the existing resource and moves on. These resources are created once, so on re-run the script detects they already exist, skips creation, and retrieves their IDs for downstream use.

Update in place - Applies to: Spark Pool, Workspace Access. Applies the latest config settings even if the resource already exists. Configuration-driven components are expected to change over time (node sizes, auto-scale ranges, and so on), so on re-run the script applies the latest settings from config.

Always re-import - Applies to: Notebooks, Pipelines, Models, Reports. Force re-imports with the -f flag so the deployed version always matches your source. Code is expected to change frequently, so code artifacts are re-imported on every run.

 

This means you can re-run the deployment at any point - after a failure, after a code change, after a config tweak - and it does the right thing.
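The three strategies can be sketched as a simple dispatch table; the strategy names mirror the list above, while the exact artifact grouping is illustrative:

```python
# Maps each artifact type to its idempotency strategy.
STRATEGY = {
    "Lakehouse": "skip_if_exists",
    "Connection": "skip_if_exists",
    "Shortcut": "skip_if_exists",
    "Folder": "skip_if_exists",
    "SparkPool": "update_in_place",
    "WorkspaceAccess": "update_in_place",
    "Notebook": "always_reimport",
    "Pipeline": "always_reimport",
    "SemanticModel": "always_reimport",
    "Report": "always_reimport",
}

def import_flags(artifact_type: str) -> list[str]:
    # Only code artifacts get the force flag; infra items are checked first.
    return ["-f"] if STRATEGY[artifact_type] == "always_reimport" else []

print(import_flags("Notebook"))
```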

 

Flow-diagram:

 

Picture9.png

 

Wrapping Up

 

The One-Command Installer

The attached repo contains clear guidance on how to update the config file and how to prepare the code to make it deploy-ready. Code preparation is a one-time task, so the process does not need to be repeated for every deployment. However, if the code is updated, you may need to apply that change to the local copies prepared for deployment. Once the prerequisites are done, the entire deployment happens with a single command - python oneinstaller.py.

 

Code Repository

 

The config‑driven automation toolkit for deploying Microsoft Fabric workspaces using the Fabric CLI and Python is available here.
The script provisions infrastructure, manages dependencies, and deploys artifacts across environments with a single command.
It is ideal for scalable, repeatable Fabric deployments, and README.md in the repo has detailed guidance on how the script works.

HRDIUtilities/FabricCLI at main · microsoft/HRDIUtilities

 

What You Can Build On Top of This

This deployment kit is a starting point, a reference implementation that demonstrates what's possible with config-driven Fabric automation. Here are ways to extend it:

 

  • Multi-workspace rollouts - Loop over an array of config files to deploy across environments or tenants in a single run.
  • CI/CD integration - Wrap python oneinstaller.py in an Azure DevOps pipeline or GitHub Actions workflow. The --minimal flag gives clean output for build logs. CSV audit trails can be published as pipeline artifacts.
  • Deployment dashboards - Import the CSV logs into Power BI to visualize deployment trends, failure rates, and component-level status over time.

Example

 

Picture10.png

 

  • Custom artifact types - The placeholder system is open-ended. Add new parameters to config, use ##yourParameter## in your artifacts, and the deployment handles the replacement automatically.
  • Pre/post deployment hooks — Add validation scripts, trigger data refreshes, or send notifications before or after deployment.
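The multi-workspace rollout extension above can be sketched as a loop over an array of config files. The --config flag and file names here are assumptions; check README.md for the installer's actual interface:

```python
import subprocess

def rollout(config_files: list[str], dry_run: bool = True) -> list[list[str]]:
    """One installer run per config file; returns the commands it would issue."""
    commands = []
    for cfg in config_files:
        cmd = ["python", "oneinstaller.py", "--config", cfg, "--minimal"]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)  # halt the rollout on first failure
    return commands

cmds = rollout(["dev.json", "test.json", "prod.json"])
for cmd in cmds:
    print(" ".join(cmd))
```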

 

A Note on What This Is (and Isn't)

 

This is a guidance and reference implementation, a working example of how Fabric workspace deployments can be automated using the Fabric CLI and Python. Every organization's environment is different. The config structure, naming conventions, and deployment order work well for the scenarios described here. Your mileage may vary based on your Fabric capacity, tenant configuration, networking policies, and the specific artifacts you're deploying.

 

Use it as-is, adapt it to your conventions, or cherry-pick the patterns that fit. The Fabric CLI commands behind it are documented and stable - the automation is just orchestration on top.

 

Contributors: @hasrikak @kranthimeda