I am creating a deployment of a solution containing some workspaces with different resources. I started from this sample project: https://github.com/Azure-Samples/modern-data-warehouse-dataops/tree/main/single_tech_samples/fabric/....
It has helped me a lot, but I haven't found a good way to handle data pipelines that reference each other.
When a data pipeline "a" is created, it gets a new object ID that has to be referenced by any other pipeline "b" that calls "a": the pipeline-content.json of "b" needs an updated "referenceName" value that matches the "objectId" of "a".
Has anyone found an approach for handling this?
I ended up creating a bash script containing the IDs to be replaced. When I created new resources, I added the new IDs to the file and ran it to update my pipelines.
updateIdsInPipelines.sh
#!/bin/bash
# Old workspace ID, as it currently appears in the pipeline definitions
o_workspaceId=someId
# New workspace ID (filled in by bootstrap.sh)
workspace_id=""

# Replace the old ID with the new one in every file named pipeline-content.json
if [ -n "$workspace_id" ]; then
  find . -name "pipeline-content.json" -exec sed -i "s/$o_workspaceId/$workspace_id/g" {} +
fi
In bootstrap.sh:
#!/bin/bash
# Fill the newly created lakehouse ID into updateIdsInPipelines.sh, then run it
if [ -n "$lakehouse_id" ]; then
  find . -name "updateIdsInPipelines.sh" -exec sed -i "s/lakehouse_id=\"\"/lakehouse_id=$lakehouse_id/g" {} +
fi
echo "Update ids"
./updateIdsInPipelines.sh
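The same idea can be generalized so that new ID pairs don't require editing the script itself. A minimal sketch, assuming a lookup file (here called id-map.txt, a hypothetical name) with one "oldId newId" pair per line:

```shell
#!/bin/bash
# Hypothetical generalization: drive all replacements from a lookup file
# instead of hard-coding each old/new pair as a variable in the script.
apply_id_map() {
  # $1 is a file with one "oldId newId" pair per line
  while read -r old_id new_id; do
    # Skip pairs whose new ID has not been filled in yet
    [ -n "$new_id" ] || continue
    # Replace the old ID with the new one in every pipeline-content.json
    find . -name "pipeline-content.json" \
      -exec sed -i "s/$old_id/$new_id/g" {} +
  done < "$1"
}
```

Usage would then be `apply_id_map id-map.txt`, with bootstrap.sh appending new pairs to the map file rather than rewriting the updater script.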
Thank you for your reply! I was thinking about replacing values in pipeline-content.json. I guess that's what I'll have to do in bootstrap.sh eventually...
Hi @GusC_SWE ,
Here are a few approaches you might consider:
1. Use parameters in your pipelines to dynamically pass the object IDs. When you create pipeline “a”, capture its object ID and pass it as a parameter to pipeline “b”. This way, you can update the referenceName in pipeline-content.json dynamically.
2. Automate the process using Azure DevOps. After creating pipeline “a”, use a script or task in your Azure DevOps pipeline to update the pipeline-content.json of pipeline “b” with the new object ID of “a”. This can be done using Azure CLI or PowerShell scripts.
3. Write custom scripts to handle the update. After deploying pipeline “a”, run a script that fetches the object ID of “a” and updates the pipeline-content.json of “b”. This script can be integrated into your CI/CD pipeline.
4. Utilize Azure Data Factory’s REST API to programmatically update the pipeline definitions. After creating pipeline “a”, make an API call to fetch its object ID and another call to update the pipeline-content.json of pipeline “b”.
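For options 3 and 4, a minimal sketch of the lookup step, assuming the Fabric REST "List Items" endpoint returns a `value` array of items with `id` and `displayName` fields, and that a bearer token is available in `$FABRIC_TOKEN` (the function name and filter are illustrative):

```shell
#!/bin/bash
# Sketch under assumptions: look up a pipeline's object ID by display name
# via the Fabric REST List Items endpoint, so it can be patched into the
# pipeline-content.json of the calling pipeline.
pipeline_object_id() {  # usage: pipeline_object_id WORKSPACE_ID DISPLAY_NAME
  curl -s -H "Authorization: Bearer $FABRIC_TOKEN" \
    "https://api.fabric.microsoft.com/v1/workspaces/$1/items?type=DataPipeline" |
    jq -r --arg name "$2" '.value[] | select(.displayName == $name) | .id'
}

# Then, for example (illustrative variable names):
# new_id=$(pipeline_object_id "$workspace_id" "a")
# sed -i "s/$old_pipeline_a_id/$new_id/g" b/pipeline-content.json
```

This keeps the ID resolution in one place, so a CI/CD stage can re-resolve and patch references after every deployment instead of tracking IDs by hand.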
Finally, you may find this document helpful: Build a data pipeline by using Azure Pipelines - Azure Pipelines | Microsoft Learn
Best Regards
Yilong Zhou
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.