Microsoft Fabric Deployment using Fabric CLI
Deploying Microsoft Fabric artifacts across environments can become complex especially when manual deployment or traditional deployment pipelines are not an option.
Microsoft Fabric provides three deployment approaches today:
Fabric Deployment Pipelines - promote content stage-to-stage (Dev → Test → Prod) inside the Fabric service.
Git Integration - sync a workspace to an Azure DevOps or GitHub repo for version control and branch-based collaboration.
Fabric CLI - a command-line tool that lets you create, import, export, and manage Fabric artifacts directly from a terminal or a script.
Each approach has its place. This blog focuses on the Fabric CLI and what makes it uniquely powerful: it can import artifacts into a workspace directly from local files; provision connections, shortcuts, Spark pools, and workspace access control; resolve cross-artifact dependencies automatically; and deploy to any number of workspaces.
The Fabric CLI provides a powerful and flexible alternative, enabling seamless migration of workspace assets from one Fabric workspace to another.
Why CLI Deployment?
When Does the One-Click Fabric CLI Shine?
This approach becomes particularly valuable in real-world industry scenarios such as:
✔ Self‑Deployable Shared Utilities at Scale
Teams building reusable Fabric utilities such as data‑quality frameworks, golden‑query generators, or standardized notebook libraries often need a clean way to share them without ongoing hand‑holding. Using the CLI, these assets can be packaged with configuration files and self‑deployed by other teams (or even across tenants) into their own workspaces.
✔ Automated Workspace Provisioning for Reusable Solutions
Teams building generic, configurable, or multi‑tenant Fabric solutions often need a consistent, repeatable way to provision a new workspace—complete with pipelines, notebooks, semantic models, reports, and cross‑artifact references. The CLI enables a “workspace-from-scratch” setup with full dependency wiring.
✔ Platform Engineering
Self-service workspace provisioning with guardrails—Spark pool sizes, shortcut policies, and RBAC are all config-driven.
✔ Large-Scale Rollouts Across Many Fabric Workspaces
A retail enterprise deploying an updated data pipeline across dozens of store-specific Fabric workspaces can leverage this tool to push pipelines, notebooks, and reports uniformly, ensuring consistency, reducing manual overhead, and preventing configuration drift.
How is this done?
The deployment is driven by the Fabric CLI plus Python automation: all the scripts are bundled together and executed with a single terminal command, so an entire workspace stands up in one shot, providing the one-click experience.
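To make the "one command, whole workspace" idea concrete, here is a minimal sketch of what such an installer entry point could look like. The stage and function names are illustrative, not the repo's actual code:

```python
def deploy_infrastructure(config: dict) -> None:
    """Placeholder stage: lakehouse, connections, shortcuts, Spark pool, ACLs."""
    pass

def deploy_code(config: dict) -> None:
    """Placeholder stage: notebooks, pipelines, semantic models, reports."""
    pass

def run_installer(config: dict) -> list:
    """Run deployment stages in order and report which stages completed.

    A real installer would stop on the first failure and log each step;
    this sketch only shows the sequencing: infrastructure first, then code.
    """
    completed = []
    for name, stage in [("infrastructure", deploy_infrastructure),
                        ("code", deploy_code)]:
        stage(config)
        completed.append(name)
    return completed

print(run_installer({"workspace": "Sales-Dev"}))  # ['infrastructure', 'code']
```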
A quick summary of steps -
Key Features (Fabric CLI + Python Automation)
This solution goes beyond a simple sequence of CLI command executions. It reflects deliberate architectural choices that ensure production-grade standards — idempotent resource creation, dependency-aware ordering, and config-driven extensibility.
The same toolkit adapts to different needs without code changes: infrastructure-only (--skip-code), code-only redeployments (--skip-infra), verbose or minimal logging, and selective artifact targeting through config updates.
Repeatable - Same config, same result, every time. Run the deployment today, tomorrow, or six months from now; the output is identical. No human variance, no forgotten steps.
Config-Driven - One JSON file is the single source of truth. Workspace, Lakehouse, Shortcuts, Notebooks, Pipelines, Semantic Models, Reports, Spark Pools, Access Control: every artifact and its parameters are declared in config. Swap the file, deploy a new environment. Artifacts also stay environment-agnostic: hardcoded IDs and connection strings are replaced with ##parameterName## tokens. At deploy time, the scripts scan every artifact file, find matching placeholders, and replace them with actual values from the config. The same source code works everywhere; only the config changes.
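A minimal sketch of how ##parameterName## token substitution could work. The function name and config keys below are invented for illustration; the repo's implementation may differ:

```python
import re

def apply_placeholders(text: str, params: dict) -> str:
    """Replace ##parameterName## tokens with values from the config.

    Tokens with no matching config entry are left untouched, so the
    deployment script can flag them instead of silently emitting blanks.
    """
    def substitute(match):
        name = match.group(1)
        return str(params.get(name, match.group(0)))
    return re.sub(r"##(\w+)##", substitute, text)

# Example: a pipeline definition referencing a lakehouse by token
source = '{"lakehouseId": "##lakehouseId##", "server": "##sqlServer##"}'
config = {"lakehouseId": "abc-123", "sqlServer": "contoso.example.net"}
print(apply_placeholders(source, config))
```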
CI/CD Ready - The master script is a single python oneinstaller.py call. Drop it into an Azure DevOps pipeline or a GitHub Actions workflow and you have continuous deployment for Fabric: no portal clicks, no manual handoffs.
Auditable Logging - Every deployment action is logged twice: a detailed running log and a structured CSV file for an artifact-level audit trail. Each row captures the timestamp, artifact name, type, status, and the exact CLI command executed. Import the CSV into Power BI or Excel for deployment analytics across runs.
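An artifact-level audit row like the one described could be appended with a few lines of standard-library Python. The field names and file path here are assumptions for illustration, not the repo's actual schema:

```python
import csv
import os
from datetime import datetime, timezone

AUDIT_FIELDS = ["timestamp", "artifact_name", "artifact_type", "status", "command"]

def log_audit_row(path, artifact_name, artifact_type, status, command):
    """Append one artifact-level row to the deployment audit CSV."""
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=AUDIT_FIELDS)
        if write_header:                       # header only on first use
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "artifact_name": artifact_name,
            "artifact_type": artifact_type,
            "status": status,
            "command": command,
        })

log_audit_row("deployment_audit.csv", "SalesNotebook", "Notebook",
              "Succeeded", "fab import ... -f")
```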
Dependency-Aware Ordering - Artifacts don't exist in isolation. A Shortcut needs a Connection. A Pipeline references Notebook IDs. A Report binds to a Semantic Model ID. The orchestrator deploys artifacts in the correct sequence, so every reference points to a real, already-provisioned resource.
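Because the dependencies form a fixed chain, the ordering can be as simple as a ranked type list. The stage names below mirror the sequence described in this post; the repo's exact ordering may differ:

```python
# Deployment order: each stage only references resources provisioned
# by earlier stages.
DEPLOY_ORDER = [
    "Lakehouse",        # everything downstream references it
    "Connection",       # required before shortcuts can bind
    "Shortcut",         # points at a connection and a lakehouse
    "SparkPool",        # compute for notebooks
    "Notebook",         # referenced by pipelines
    "Pipeline",         # references notebook IDs
    "SemanticModel",    # built over the lakehouse
    "Report",           # binds to a semantic model ID
]

def ordered(artifacts):
    """Sort a flat artifact list into dependency-safe deployment order."""
    rank = {t: i for i, t in enumerate(DEPLOY_ORDER)}
    return sorted(artifacts, key=lambda a: rank[a["type"]])

batch = [{"name": "SalesReport", "type": "Report"},
         {"name": "Bronze", "type": "Lakehouse"},
         {"name": "Ingest", "type": "Pipeline"}]
print([a["name"] for a in ordered(batch)])  # ['Bronze', 'Ingest', 'SalesReport']
```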
Selective Deployment - Not every run needs to deploy everything. The script supports targeted execution.
The deployment approach is intentionally flexible. For notebook-only changes, infrastructure redeployment is unnecessary. A new workspace requires a complete deployment, while updates to Spark pool configurations can be handled by deploying infrastructure changes alone.
Intelligent Retry - The deployment process is designed to be safe, repeatable, and idempotent. Infrastructure components such as the Lakehouse, connections, shortcuts, folders, and ACLs are first checked for existence in Fabric and skipped if already present, while the Spark pool is updated in place when it exists. All code artifacts including notebooks, pipelines, models, and reports are always overwritten using a force flag to ensure the latest changes are applied.
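The check-then-create pattern behind this idempotency can be sketched in a few lines. The `exists_fn`/`create_fn` callables stand in for whatever CLI or API call performs the real check; they are placeholders, not Fabric CLI bindings:

```python
def ensure_resource(name, exists_fn, create_fn, log):
    """Create a resource only if it is not already present (skip-if-exists)."""
    if exists_fn(name):
        log.append(f"SKIP {name}: already exists")
        return "skipped"
    create_fn(name)
    log.append(f"CREATE {name}")
    return "created"

# Simulated run: the lakehouse already exists, the connection does not.
existing = {"Bronze_Lakehouse"}
log = []
ensure_resource("Bronze_Lakehouse", existing.__contains__, existing.add, log)
ensure_resource("Sql_Connection", existing.__contains__, existing.add, log)
print(log)  # ['SKIP Bronze_Lakehouse: already exists', 'CREATE Sql_Connection']
```

Re-running the same two calls would skip both resources, which is exactly the retry-safe behaviour the post describes.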
Graceful Degradation - As every deployment may not need every feature, the script adjusts based on what's configured:
You don't need to comment out code or maintain separate scripts for different scenarios. The config drives what runs.
Prerequisites:
1. Python 3.10 and the Fabric CLI must be installed prior to deployment.
2. The Fabric artifacts to be deployed must be placed in the respective project structure, and any parameters that should come from config must be replaced with placeholders.
Detailed guidance is available in README.md.
This is a terminal-based deployment, triggered by the python oneinstaller.py command.
Every deployment starts with a config file.
Here is the structure that drives the entire automation -
This is available as fabric_config.json in the github repo.
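The real fabric_config.json in the repo defines the full schema. Purely as an illustration of the shape such a config might take, a trimmed example could look like this (every key and value below is invented for the example, including the ##...## tokens):

```json
{
  "workspace": { "name": "Sales-Dev", "capacityId": "##capacityId##" },
  "lakehouse": { "name": "Bronze_Lakehouse" },
  "connections": [
    { "name": "Sql_Connection", "server": "##sqlServer##", "authType": "ServicePrincipal" }
  ],
  "sparkPool": { "name": "MediumPool", "nodeSize": "Medium", "autoScale": { "min": 1, "max": 4 } },
  "notebooks": ["Ingest.Notebook", "Transform.Notebook"],
  "reports": ["SalesReport.Report"]
}
```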
A clear and detailed explanation of how the configuration needs to be filled in or updated is available in README.md in the attached repo. Once the code is in place and the config is updated, the deployment is triggered by issuing the python oneinstaller.py command.
The Lakehouse is the first artifact deployed. Shortcuts, notebooks, and pipelines all reference it.
The automation checks for existence first and creates the Lakehouse only if it does not exist.
Connections are a prerequisite for shortcuts and external data access. The CLI creates them with the commands below.
The script reads connection details from the config (server name, auth type, privacy level), constructs the command dynamically, and checks existence first, so re-running never creates duplicates.
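A sketch of how such a command could be assembled dynamically from config. The path convention and flag names below are assumptions; consult the Fabric CLI's own help output for the syntax your CLI version actually expects:

```python
import shlex

def build_connection_cmd(conn: dict) -> list:
    """Assemble an illustrative Fabric CLI 'create connection' call from config.

    Flag names and the .connections path are assumed for this sketch, not
    taken from the repo; verify against your installed CLI version.
    """
    params = (f"server={conn['server']}"
              f",authType={conn['authType']}"
              f",privacyLevel={conn.get('privacyLevel', 'Organizational')}")
    return ["fab", "create", f".connections/{conn['name']}.Connection", "-P", params]

cmd = build_connection_cmd({
    "name": "Sql_Connection",
    "server": "##sqlServer##",       # placeholder token resolved earlier
    "authType": "ServicePrincipal",
})
print(shlex.join(cmd))
```

Building the argument list (rather than one shell string) avoids quoting bugs when the command is eventually passed to `subprocess.run`.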
Artifacts don't exist in isolation. A shortcut needs a connection. A pipeline references notebook IDs. A report binds to a semantic model GUID. Deploy them in the wrong order and references break silently.
The orchestrator enforces this sequence automatically.
The sequence of steps in a full deployment: pre-flight checks ensure Python and Fabric CLI availability, the user confirms the target workspace, then infrastructure deployment proceeds (Lakehouse, Connections, Spark pool, and so on), followed by code deployment.
The options for the deployment
Subsequent runs need not deploy everything, and there may be project-specific requirements. The script supports targeted execution, with options to skip infra, skip code, and more.
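The flags named earlier in the post (--skip-infra, --skip-code) could be wired up with standard argparse. This is a sketch of the interface only; the repo's actual option names and defaults may differ:

```python
import argparse

def parse_args(argv=None):
    """Deployment flags mirroring the options described in this post."""
    p = argparse.ArgumentParser(prog="oneinstaller.py")
    p.add_argument("--skip-infra", action="store_true",
                   help="code-only redeployment: notebooks, pipelines, models, reports")
    p.add_argument("--skip-code", action="store_true",
                   help="infrastructure-only: lakehouse, connections, Spark pool")
    p.add_argument("--config", default="fabric_config.json",
                   help="path to the deployment config (name assumed)")
    return p.parse_args(argv)

args = parse_args(["--skip-infra"])
print(args.skip_infra, args.skip_code)  # True False
```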
Retry-Safe (Idempotent) by Design
The deployment process is engineered to be safe, repeatable, and idempotent. Core infrastructure components, including lakehouses, connections, and shortcuts, are validated for existence and reused or skipped if already available, while configuration-driven resources such as Spark pools are updated in place. Code artifacts such as notebooks, pipelines, models, and reports are re-imported during each run to ensure the deployed environment consistently reflects the latest source state.
Every component follows one of three strategies:
| Strategy | Applies To | Behaviour | Justification |
|---|---|---|---|
| Skip if exists | Lakehouse, Connection, Shortcuts, Folders | Checks existence first. If already there, reuses the existing resource and moves on. | These resources are created once. On re-run, the script detects they already exist, skips creation, and retrieves their IDs for downstream use. |
| Update in place | Spark Pool, Workspace Access | Applies the latest config settings, even if the resource already exists. | Configuration-driven components like the Spark pool are expected to change over time (node sizes, auto-scale ranges, etc.). On re-run, the script applies the latest settings from config. |
| Always re-import | Notebooks, Pipelines, Models, Reports | Force re-imports with -f so the deployed version always matches your source. | Code changes frequently. On every run, code artifacts are force re-imported (-f flag) so the deployed version always matches your source. |
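The three strategies can be expressed as a simple lookup, which is one way such an orchestrator might decide what a re-run does per artifact. The type names and function below are illustrative, not the repo's code:

```python
# Artifact type -> strategy, following the three strategies described above.
STRATEGY = {
    "Lakehouse": "skip_if_exists",
    "Connection": "skip_if_exists",
    "Shortcut": "skip_if_exists",
    "Folder": "skip_if_exists",
    "SparkPool": "update_in_place",
    "WorkspaceAccess": "update_in_place",
    "Notebook": "always_reimport",
    "Pipeline": "always_reimport",
    "SemanticModel": "always_reimport",
    "Report": "always_reimport",
}

def plan_action(artifact_type: str, exists: bool) -> str:
    """Decide what a re-run does for one artifact of the given type."""
    strategy = STRATEGY[artifact_type]
    if strategy == "skip_if_exists":
        return "skip" if exists else "create"
    if strategy == "update_in_place":
        return "update" if exists else "create"
    return "force_reimport"        # code artifacts always overwrite (-f)

print(plan_action("Lakehouse", exists=True))   # skip
print(plan_action("SparkPool", exists=True))   # update
print(plan_action("Notebook", exists=True))    # force_reimport
```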
This means you can re-run the deployment at any point, after a failure, a code change, or a config tweak, and it does the right thing.
Flow-diagram:
Wrapping Up
The attached repo provides clear guidance on how to update the config file and how to prepare the code to make it deploy-ready. Code preparation is a one-time task; you do not need to repeat it for every deployment. However, if the code is updated, you may need to apply that change to the local copies prepared for deployment. Once the prerequisites are done, the entire deployment happens with a single command: python oneinstaller.py.
Code Repository
HRDIUtilities/FabricCLI at main · microsoft/HRDIUtilities
This deployment kit is a starting point, a reference implementation that demonstrates what's possible with config-driven Fabric automation. Here are ways to extend it:
Example
A note on what this is (and isn't)
This is a guidance and reference implementation, a working example of how Fabric workspace deployments can be automated using the Fabric CLI and Python. Every organization's environment is different. The config structure, naming conventions, and deployment order work well for the scenarios described here. Your mileage may vary based on your Fabric capacity, tenant configuration, networking policies, and the specific artifacts you're deploying.
Use it as-is, adapt it to your conventions, or cherry-pick the patterns that fit. The Fabric CLI commands behind it are documented and stable - the automation is just orchestration on top.
Contributors @hasrikak @kranthimeda