We are using an Azure DevOps pipeline to deploy Power BI (PBIP) reports from a Git repo to Microsoft Fabric workspaces through a Service Principal connection. Each workspace is mapped to a folder in a branch. The pipeline is functional and in use, and we are now looking to add a new feature:
Whenever a feature/release subbranch is created from a main (parent) branch (like Dev_November created from Dev), and only a subset of reports has changed, we want the pipeline to deploy only the changed reports to the corresponding subworkspace (different from the Dev workspace), instead of all the reports, in an automated, hands-off fashion.
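For illustration, this is the kind of change detection we have in mind. A rough sketch (Dev and Dev_November are our example branch names; the Workspaces/Dev folder path and the one-<name>.Report-folder-per-report PBIP layout are assumptions):

git fetch origin Dev Dev_November
# Report folders touched on the subbranch since it diverged from the parent
git diff --name-only origin/Dev...origin/Dev_November -- 'Workspaces/Dev/' \
  | grep -oE '.*\.Report' \
  | sort -u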
Here is the pipeline I have so far (the Stage 2 excerpt is included after the questions below). What I have tried so far leads me to the following questions:
Is there a way we can reliably deploy only the changed PBIP reports to a Fabric workspace when mapping to a new branch/folder, without triggering a full sync of all items? Is there a supported way to do this with Fabric’s Git integration or REST API? And are there any workarounds (e.g. creating a subfolder when this sync happens, or syncing all the reports and then removing the ones that haven't changed)?
pipeline.yml Stage 2:
- stage: FabricSync
  displayName: "Stage 2: Connect/Init + Status + UpdateFromGit (per workspace path)"
  dependsOn:
    - ComposeAndCommit
  condition: succeeded()
  variables:
    - name: WORK_UNITS_JSON_VAR
      value: $[ stageDependencies['ComposeAndCommit']['LoadConfig'].outputs['loadCfg.WORK_UNITS_JSON'] ]
    - name: CONNECTION_ID_VAR
      value: $[ stageDependencies['ComposeAndCommit']['LoadConfig'].outputs['loadCfg.CONNECTION_ID'] ]
    - name: ORGANIZATION_NAME_VAR
      value: $[ stageDependencies['ComposeAndCommit']['LoadConfig'].outputs['loadCfg.ORGANIZATION_NAME'] ]
    - name: PROJECT_NAME_VAR
      value: $[ stageDependencies['ComposeAndCommit']['LoadConfig'].outputs['loadCfg.PROJECT_NAME'] ]
    - name: REPOSITORY_NAME_VAR
      value: $[ stageDependencies['ComposeAndCommit']['LoadConfig'].outputs['loadCfg.REPOSITORY_NAME'] ]
    - name: BRANCH_NAME_VAR
      value: $[ stageDependencies['ComposeAndCommit']['LoadConfig'].outputs['loadCfg.BRANCH_NAME'] ]
    - name: TENANT_ID_VAR
      value: $[ stageDependencies['ComposeAndCommit']['LoadConfig'].outputs['loadCfg.TENANT_ID'] ]
    - name: CLIENT_ID_VAR
      value: $[ stageDependencies['ComposeAndCommit']['LoadConfig'].outputs['loadCfg.CLIENT_ID'] ]
  jobs:
    - job: SyncJob
      displayName: "Sync Fabric workspace(s) from Azure Repos"
      pool:
        vmImage: ubuntu-latest
      steps:
        # Ensure the same branch content is locally available for git diff
        - task: 6d15af64-176c-496d-b583-fd2ae21d4df4@1
          displayName: "Checkout self at $(BRANCH_NAME_VAR)"
          inputs:
            repository: self
            persistCredentials: true
            fetchDepth: 0
            ref: $(BRANCH_NAME_VAR)
        - task: Bash@3
          displayName: "Connect -> Initialize -> Status -> UpdateFromGit (Fabric REST) — per workspace"
          env:
            CLIENT_SECRET: $(CLIENT_SECRET)
            WORK_UNITS_JSON: $(WORK_UNITS_JSON_VAR)
            CONNECTION_ID: $(CONNECTION_ID_VAR)
            ORGANIZATION_NAME: $(ORGANIZATION_NAME_VAR)
            PROJECT_NAME: $(PROJECT_NAME_VAR)
            REPOSITORY_NAME: $(REPOSITORY_NAME_VAR)
            BRANCH_NAME: $(BRANCH_NAME_VAR)
            TENANT_ID: $(TENANT_ID_VAR)
            CLIENT_ID: $(CLIENT_ID_VAR)
          inputs:
            targetType: inline
            script: |
              set -euo pipefail
              echo "Installing jq..."
              sudo apt-get update -y >/dev/null && sudo apt-get install -y jq >/dev/null

              # Validate SP
              : "${TENANT_ID:?TENANT_ID missing}"
              : "${CLIENT_ID:?CLIENT_ID missing}"
              : "${CLIENT_SECRET:?CLIENT_SECRET missing}"

              base="https://api.fabric.microsoft.com/v1"
              org="${ORGANIZATION_NAME}"
              proj="${PROJECT_NAME}"
              repo="${REPOSITORY_NAME}"
              branch="${BRANCH_NAME}"
              connId="${CONNECTION_ID}"

              # Local git repo (for diffing)
              repoDir="$(Build.SourcesDirectory)"
              git -C "$repoDir" config --global --add safe.directory "$repoDir"
              git -C "$repoDir" fetch --all --tags --prune >/dev/null || true

              # Normalize work units: include deployMode
              wuJson="${WORK_UNITS_JSON:-}"
              [ -n "$wuJson" ] && [ "$wuJson" != "null" ] || { echo "ERROR: WORK_UNITS_JSON empty"; exit 1; }
              WORK_UNITS="$(echo "$wuJson" | jq -c '[ .[] | { id, name, sourceFolder, directoryName, deployMode } ]')"
              [ "$(echo "$WORK_UNITS" | jq -r 'length')" -gt 0 ] || { echo "ERROR: No work units"; exit 1; }
              echo "Work units to deploy:"; echo "$WORK_UNITS" | jq .

              # Acquire Fabric token
              token_resp_headers="$(mktemp)"
              token_resp_body="$(mktemp)"
              http_code=$(
                curl -sS -D "$token_resp_headers" -o "$token_resp_body" -w "%{http_code}" \
                  -X POST "https://login.microsoftonline.com/${TENANT_ID}/oauth2/v2.0/token" \
                  -H "Content-Type: application/x-www-form-urlencoded" \
                  --data-urlencode "grant_type=client_credentials" \
                  --data-urlencode "client_id=${CLIENT_ID}" \
                  --data-urlencode "client_secret=${CLIENT_SECRET}" \
                  --data-urlencode "scope=https://api.fabric.microsoft.com/.default"
              )
              token="$(jq -r '.access_token // empty' "$token_resp_body")"
              if [ "$http_code" != "200" ] || [ -z "$token" ] || [ "$token" = "null" ]; then
                echo "❌ Failed to acquire token. HTTP $http_code"
                echo "Response headers:"; sed -n '1,40p' "$token_resp_headers"
                echo "Response body:"; (jq . "$token_resp_body" 2>/dev/null || cat "$token_resp_body")
                exit 1
              fi
              authH="Authorization: Bearer ${token}"
              jsonH="Content-Type: application/json"

              # Helpers .................................................................
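              # Fabric long-running operations: many Git APIs return 202 Accepted with a
              # Location header to poll (plus an optional Retry-After interval); some only
              # return an x-ms-operation-id, pollable at ${base}/operations/{id}. The two
              # helpers below cover both shapes and fetch the final result when available.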
              poll_state_then_result () {
                local next="$1"
                while : ; do
                  local headers body status retry loc
                  headers="$(mktemp)"
                  body="$(curl -sS -D "$headers" -H "$authH" "$next")"
                  status="$(echo "$body" | jq -r '.status // empty' 2>/dev/null || true)"
                  if [ -n "$status" ] && [ "$status" != "Succeeded" ]; then
                    retry="$(awk -F': ' '/^Retry-After:/ {print $2}' "$headers" | tr -d '\r')"
                    sleep "${retry:-10}"
                    loc="$(awk -F': ' '/^Location:/ {print $2}' "$headers" | tr -d '\r')"
                    [ -n "$loc" ] && next="$loc"
                    continue
                  fi
                  loc="$(awk -F': ' '/^Location:/ {print $2}' "$headers" | tr -d '\r')"
                  if [ -n "$loc" ]; then curl -sS -H "$authH" "$loc"; else echo "$body"; fi
                  break
                done
              }
              poll_operation_id () {
                local opId="$1"
                local opUrl="${base}/operations/${opId}"
                while : ; do
                  local headers body status retry loc
                  headers="$(mktemp)"
                  body="$(curl -sS -D "$headers" -H "$authH" "$opUrl")"
                  status="$(echo "$body" | jq -r '.status // empty')"
                  # Stop on any terminal state (Succeeded or Failed), not just success,
                  # so a failed operation cannot keep this loop polling forever.
                  if [ "$status" = "Succeeded" ] || [ "$status" = "Failed" ]; then
                    loc="$(awk -F': ' '/^Location:/ {print $2}' "$headers" | tr -d '\r')"
                    if [ -n "$loc" ]; then curl -sS -H "$authH" "$loc"; else echo "$body"; fi
                    break
                  fi
                  retry="$(awk -F': ' '/^Retry-After:/ {print $2}' "$headers" | tr -d '\r')"
                  sleep "${retry:-10}"
                done
              }
              # Core sync ................................................................
              sync_one_workspace () {
                local wsId="$1" label="$2" dirName="$3" deployMode="$4"
                echo ""
                echo "====== Workspace: $label ($wsId) — directoryName=${dirName} (deployMode=${deployMode}) ======"

                # 1) Workspace sanity
                if ! curl -sS -H "$authH" "${base}/workspaces/${wsId}" >/dev/null; then
                  echo "❌ Workspace not accessible: ${wsId}"
                  return 1
                fi

                # 2) Connect (idempotent)
                connectBody="$(jq -n --arg org "$org" --arg proj "$proj" --arg repo "$repo" \
                  --arg branch "$branch" --arg dir "$dirName" --arg conn "$connId" '
                  {
                    gitProviderDetails: {
                      organizationName: $org,
                      projectName: $proj,
                      gitProviderType: "AzureDevOps",
                      repositoryName: $repo,
                      branchName: $branch,
                      directoryName: $dir
                    },
                    myGitCredentials: { source: "ConfiguredConnection", connectionId: $conn },
                    itemSyncMode: "Mirror"
                  }')"
                curl -sS -o /dev/null -X POST -H "$authH" -H "$jsonH" \
                  -d "$connectBody" "${base}/workspaces/${wsId}/git/connect" || true

                # Ensure Git credentials are configured
                curl -sS -X PATCH -H "$authH" -H "$jsonH" \
                  -d "$(jq -n --arg conn "$connId" '{ source: "ConfiguredConnection", connectionId: $conn }')" \
                  "${base}/workspaces/${wsId}/git/myGitCredentials" >/dev/null

                # 3) Initialize (PreferRemote)
                echo "Initialize (PreferRemote)…"
                initHeaders="$(mktemp)"
                initResp="$(curl -sS -D "$initHeaders" -H "$authH" -H "$jsonH" \
                  -X POST -d '{"initializationStrategy":"PreferRemote"}' \
                  "${base}/workspaces/${wsId}/git/initializeConnection" || true)"
                initCode="$(awk 'NR==1{print $2}' "$initHeaders")"
                remoteCommitHash=""; workspaceHead=""
                if [ "$initCode" = "200" ]; then
                  remoteCommitHash="$(echo "$initResp" | jq -r '.remoteCommitHash // empty')"
                  workspaceHead="$(echo "$initResp" | jq -r '.workspaceHead // empty')"
                else
                  loc="$(awk -F': ' '/^Location:/ {print $2}' "$initHeaders" | tr -d '\r')"
                  if [ -n "$loc" ]; then
                    initResult="$(poll_state_then_result "$loc")"
                    remoteCommitHash="$(echo "$initResult" | jq -r '.remoteCommitHash // empty')"
                    workspaceHead="$(echo "$initResult" | jq -r '.workspaceHead // empty')"
                  fi
                fi
                echo "Init remoteCommitHash=${remoteCommitHash:-<none>} workspaceHead=${workspaceHead:-<none>}"

                # 4) Status (if needed)
                if [ -z "${remoteCommitHash:-}" ] || [ "$remoteCommitHash" = "null" ]; then
                  stHeaders="$(mktemp)"
                  stBody="$(curl -sS -D "$stHeaders" -H "$authH" "${base}/workspaces/${wsId}/git/status")"
                  if grep -q "^HTTP/.* 202" "$stHeaders"; then
                    loc="$(awk -F': ' '/^Location:/ {print $2}' "$stHeaders" | tr -d '\r')"
                    echo "Status pending; polling $loc"
                    stBody="$(poll_state_then_result "$loc")"
                  fi
                  remoteCommitHash="$(echo "$stBody" | jq -r '.remoteCommitHash // empty')"
                  workspaceHead="$(echo "$stBody" | jq -r '.workspaceHead // empty')"
                  echo "Status remoteCommitHash=${remoteCommitHash:-<none>} workspaceHead=${workspaceHead:-<none>}"
                fi

                # 4.5) SHORT-CIRCUIT for deployMode == "changed"
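                # How the short-circuit works: workspaceHead is the commit the workspace
                # last synced to; remoteCommitHash is the tip of the mapped branch. If no
                # file under this workspace's directoryName changed between the two, there
                # is nothing to deploy here and UpdateFromGit can be skipped entirely.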
if [ "${deployMode:-all}" = "changed" ]; then
dirRel="${dirName#/}" # strip leading slash for git pathspec
# Ensure both commits exist locally (workspaceHead may not be fetched yet)
if [ -n "${workspaceHead:-}" ] && ! git -C "$repoDir" cat-file -e "${workspaceHead}^{commit}" 2>/dev/null; then
git -C "$repoDir" fetch --depth=0 origin >/dev/null || true
fi
# If workspaceHead missing (first-time), treat as "changed" => deploy
if [ -n "${workspaceHead:-}" ] && git -C "$repoDir" cat-file -e "${workspaceHead}^{commit}" 2>/dev/null; then
changes="$(git -C "$repoDir" diff --name-only "${workspaceHead}..${remoteCommitHash}" -- "$dirRel" || true)"
else
changes="__assume_changed__"
fi
if [ -z "${changes:-}" ]; then
echo "No file changes under '${dirRel}' between workspaceHead and branch HEAD; skipping UpdateFromGit for ${label} (deploy_mode=changed)."
return 0
fi
echo "Changed paths under '${dirRel}':"
echo "$changes" | sed 's/^/ • /'
fi
# 5) Guard — if nothing coming from Git, stop
if [ -z "${remoteCommitHash:-}" ] || [ "$remoteCommitHash" = "null" ]; then
echo "No incoming changes from Git; nothing to update for $label."
return 0
fi
                # 6) Update From Git
                echo "Updating from Git (remote=${remoteCommitHash}, head=${workspaceHead:-<null>})…"
                payloadCore="$(jq -n --arg r "$remoteCommitHash" --arg h "$workspaceHead" '
                  {
                    remoteCommitHash: $r,
                    options: { allowOverrideItems: true },
                    conflictResolution: { conflictResolutionType: "Workspace", conflictResolutionPolicy: "PreferRemote" }
                  }
                  | if ($h != null and $h != "") then .workspaceHead = $h else . end
                ')"
                upHeaders="$(mktemp)"; upBodyFile="$(mktemp)"
                curl -sS -D "$upHeaders" -o "$upBodyFile" \
                  -X POST -H "$authH" -H "Content-Type: application/json; charset=utf-8" \
                  --data-binary "$payloadCore" \
                  "${base}/workspaces/${wsId}/git/updateFromGit" || true
                upStatus="$(awk 'NR==1{print $2}' "$upHeaders")"
                upLoc="$(awk -F': ' '/^Location:/ {print $2}' "$upHeaders" | tr -d '\r')"
                opId="$(awk -F': ' '/^x-ms-operation-id:/ {print $2}' "$upHeaders" | tr -d '\r')"
                echo "UpdateFromGit HTTP $upStatus"
                (jq . "$upBodyFile" 2>/dev/null || cat "$upBodyFile")
if [ "$upStatus" = "400" ] && grep -qi "updateFromGitRequest" "$upBodyFile"; then
payloadWrapped="$(jq -n --argjson core "$payloadCore" '{ updateFromGitRequest: $core }')"
upHeaders2="$(mktemp)"; upBodyFile2="$(mktemp)"
curl -sS -D "$upHeaders2" -o "$upBodyFile2" \
-X POST -H "$authH" -H "Content-Type: application/json; charset=utf-8" \
--data-binary "$payloadWrapped" \
"${base}/workspaces/${wsId}/git/updateFromGit" || true
upStatus="$(awk 'NR==1{print $2}' "$upHeaders2")"
upLoc="$(awk -F': ' '/^Location:/ {print $2}' "$upHeaders2" | tr -d '\r')"
opId="$(awk -F': ' '/^x-ms-operation-id:/ {print $2}' "$upHeaders2" | tr -d '\r')"
(jq . "$upBodyFile2" 2>/dev/null || cat "$upBodyFile2")
mv "$upHeaders2" "$upHeaders" 2>/dev/null || true
mv "$upBodyFile2" "$upBodyFile" 2>/dev/null || true
fi
updateResult="{}"
if [ "$upStatus" = "202" ]; then
if [ -n "$upLoc" ]; then
echo "Update accepted; polling $upLoc"
updateResult="$(poll_state_then_result "$upLoc")"
elif [ -n "$opId" ]; then
echo "Update accepted (no Location). Polling by operation id: $opId"
updateResult="$(poll_operation_id "$opId")"
else
echo "❌ 202 Accepted but neither Location nor x-ms-operation-id present; cannot poll."
return 1
fi
elif [ "$upStatus" = "200" ]; then
updateResult="$(cat "$upBodyFile")"
else
echo "❌ UpdateFromGit did not start (HTTP $upStatus). See response above."
return 1
fi
echo "Update result (final):"
(echo "$updateResult" | jq . 2>/dev/null) || echo "$updateResult"
finalStatus="$(echo "$updateResult" | jq -r '.status // empty')"
if [ -n "$finalStatus" ] && [ "$finalStatus" != "Succeeded" ]; then
echo "❌ UpdateFromGit final status for $label: ${finalStatus}"
return 1
fi
# 7) Post-update status
st2Headers="$(mktemp)"
st2Body="$(curl -sS -D "$st2Headers" -H "$authH" "${base}/workspaces/${wsId}/git/status")"
if grep -q "^HTTP/.* 202" "$st2Headers"; then
loc="$(awk -F': ' '/^Location:/ {print $2}' "$st2Headers" | tr -d '\r')"
echo "Status pending; polling $loc"
st2Body="$(poll_state_then_result "$loc")"
fi
postRemote="$(echo "$st2Body" | jq -r '.remoteCommitHash // empty')"
postHead="$(echo "$st2Body" | jq -r '.workspaceHead // empty')"
changesCount="$(echo "$st2Body" | jq -r '.changes | length // 0')"
echo "Post-update ($label): remoteCommitHash=${postRemote:-<none>} workspaceHead=${postHead:-<none>} changes=${changesCount}"
}
              # Iterate all work units (now with deployMode)
              failures=0
              # Use process substitution, not a pipe: a piped while-loop runs in a
              # subshell, so increments to "failures" would be lost to the check below.
              while read -r wu; do
                wsId="$(echo "$wu" | jq -r '.id')"
                wsName="$(echo "$wu" | jq -r '.name')"
                dirName="$(echo "$wu" | jq -r '.directoryName')"
                deployMode="$(echo "$wu" | jq -r '.deployMode // "all"')"
                if ! sync_one_workspace "$wsId" "$wsName" "$dirName" "$deployMode"; then
                  failures=$((failures+1))
                fi
              done < <(echo "$WORK_UNITS" | jq -c '.[]')
              if [ "${failures:-0}" -gt 0 ]; then
                echo "❌ Stage 2 encountered ${failures} failure(s)."
                exit 1
              fi
              echo "✔ Stage 2 completed for all workspace(s)."
Hi @yazdanb,
We are following up once again regarding your query. Could you please confirm whether you have raised this as an idea with Microsoft? If not, please raise an idea and upvote it. Should you need further assistance in the future, we encourage you to reach out via the Microsoft Fabric Community Forum and create a new thread. We’ll be happy to help.
Thanks,
Prashanth
You can do this in Azure DevOps by using the fabric-cicd Python library and its parameterization functionality to filter for certain items. Just be aware that this will take some development effort on your part.
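For example, a minimal sketch of what that pipeline step could look like (untested; the workspace id, repository_directory, and regex are placeholders, and you should confirm that the fabric-cicd version you install supports item_name_exclude_regex):

pip install fabric-cicd
python - <<'PY'
# Sketch: publish only a subset of PBIP items to a target Fabric workspace.
# Authentication comes from the environment (fabric-cicd resolves Azure
# credentials, e.g. service principal env vars); not shown here.
from fabric_cicd import FabricWorkspace, publish_all_items

ws = FabricWorkspace(
    workspace_id="00000000-0000-0000-0000-000000000000",  # placeholder target
    repository_directory="Workspaces/Dev",                # placeholder PBIP folder
    item_type_in_scope=["Report", "SemanticModel"],
)
# Exclude every item whose name is NOT in the changed set (placeholder names);
# an earlier git-diff step would build this regex from the changed folders.
publish_all_items(ws, item_name_exclude_regex=r"^(?!(SalesReport|FinanceReport)$).*")
PY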
Hi @yazdanb,
Please refer to the useful links below:
Limitations:
The first-time sync of a workspace (e.g., when you connect a new branch/folder) will typically pull all items in that folder/branch. The docs describe it as: “If called after the Connect and Initialize Connection APIs, it will perform a full update of the entire workspace.”
When you switch branches or change the directoryName mapping, items that are present in the old branch/folder but not in the new one get deleted from the workspace.
There’s no documented parameter in updateFromGit for “deploy only these paths/files” (at least in the public docs as of now), so control is indirect (via the folder/file structure) rather than a native include/exclude list. A workaround sketch follows this list.
Permissions: make sure your Service Principal has all required scopes and the workspace has capacity, etc. Permission issues show up when the scope is insufficient.
You’ll need good CI logging/tracking to confirm that the items you expect to deploy were actually deployed.
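If you do pursue the workaround from the question (full first-time sync, then remove the unchanged reports), a rough sketch against the Items API could look like this. Caveat: deleting items makes the workspace diverge from Git (Git status will report them as differences), so treat it as a last resort. $token, $wsId, and the branch names are assumed to come from earlier steps like the ones in the pipeline above:

# List all reports in the workspace, then delete those whose source folder
# shows no diff between the parent branch and the subbranch.
changed="$(git diff --name-only origin/Dev...origin/Dev_November | grep -oE '[^/]+\.Report' | sort -u)"
curl -sS -H "Authorization: Bearer $token" \
  "https://api.fabric.microsoft.com/v1/workspaces/$wsId/items?type=Report" \
  | jq -r '.value[] | "\(.id)\t\(.displayName)"' \
  | while IFS=$'\t' read -r itemId name; do
      if ! echo "$changed" | grep -qx "${name}.Report"; then
        echo "Removing unchanged report: $name"
        curl -sS -X DELETE -H "Authorization: Bearer $token" \
          "https://api.fabric.microsoft.com/v1/workspaces/$wsId/items/$itemId"
      fi
    done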
I’d encourage you to submit your detailed feedback and ideas via Microsoft's official feedback channels, such as Microsoft Fabric Ideas.
Feedback submitted there is reviewed by the product teams and can lead to meaningful improvements. This helps the product group prioritize, and the more customers that raise an issue, the faster it gets addressed.
Thanks,
Prashanth Are
MS Fabric community