Hi, I want to create a Python notebook to submit pipeline runs and check their status programmatically. Two APIs I plan to use are:
1. Run on demand pipeline job: https://community.fabric.microsoft.com/t5/Data-Pipeline/Execute-Data-Pipeline-Via-API/m-p/3740462
2. Get pipeline job instance: https://learn.microsoft.com/en-us/fabric/data-factory/pipeline-rest-api-capabilities#get-pipeline-jo...
I initially assumed the two APIs would integrate seamlessly. However, during implementation, I realized that the 'run on demand' API only returns a status code and not the job run ID. Is there a way to retrieve the IDs of jobs submitted through the 'run on demand' API?
After syncing internally with domain experts, we discovered that, although the official documentation does not mention it, the API call does return some useful metadata. For example, the job run URL can be fetched via response.headers['location'] (see the sketch after the sample headers below).
Sample Response Headers:
cache-control: no-store, must-revalidate, no-cache
pragma: no-cache
content-type: application/octet-stream
retry-after: 60
x-ms-job-id: 38fd5429-136b-4041-9353-211d561039f8
strict-transport-security: max-age=31536000; includeSubDomains
x-frame-options: deny
x-content-type-options: nosniff
requestid: 1372206e-f857-4e10-9a98-fbbcc5ce0c55
access-control-expose-headers: RequestId,Location,Retry-After
date: Wed, 27 Aug 2025 05:06:33 GMT
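
For anyone hitting the same issue, here is a minimal Python sketch of the trigger-and-extract step. It assumes the run-on-demand endpoint shape POST .../workspaces/{workspaceId}/items/{itemId}/jobs/instances?jobType=Pipeline from the Fabric REST docs linked above; TOKEN, WORKSPACE_ID, and PIPELINE_ID are placeholders you would substitute, so verify the details against your tenant.

import requests

TOKEN = "<bearer-token>"              # placeholder: AAD token with Fabric API scope
WORKSPACE_ID = "<workspace-guid>"     # placeholder
PIPELINE_ID = "<pipeline-item-guid>"  # placeholder

# Run-on-demand endpoint for a pipeline item (shape per the Fabric
# job scheduler REST API; check against the docs linked above).
url = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
       f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline")
resp = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()  # expect 202 Accepted with an empty body

# The job instance URL comes back in the Location header; its last path
# segment is the job run id (it appears to match x-ms-job-id above).
location = resp.headers["Location"]
job_run_id = location.rstrip("/").split("/")[-1]
print(job_run_id, location)

With job_run_id in hand, the Get Pipeline Job Instance API can be polled directly, with no need to filter runs by pipeline and time range.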
Filtering by PipelineId and time range isn't a sufficiently reliable solution, especially since I plan to submit pipeline runs in batches. At this point, the only viable workaround seems to be implementing custom logging that captures an exhaustive list of parameter values along with the PipelineRunId and PipelineId. However, this feels unnecessarily complex for such a fundamental need. I have proposed an idea at "Run on demand pipeline run" API returns job run I... - Microsoft Fabric Community
Thanks for submitting the idea in the Ideas forum. Feedback submitted there is often reviewed by the product teams and can lead to meaningful improvements.
Thanks
Prashanth
MS Fabric community
Please refer to the blog below for trigger and status API integration:
The Run on Demand API currently returns only an HTTP status code, such as 202 Accepted, without providing the runId needed to directly query the status of the submitted pipeline run. To work around this limitation:
1. After triggering the job, call the Get Pipeline Jobs endpoint to retrieve a list of recent pipeline runs.
2. Filter the results by pipelineId and a timestamp close to your submission time to identify the specific run and extract its runId.
3. Poll the Get Pipeline Job Instance API with that runId to check the run status and execution details, such as start time, end time, and current status.
If you need a guaranteed way to link the pipeline trigger to its run, consider implementing a logging mechanism that records metadata during the trigger process, or explore triggering through the Fabric SDK when it becomes available for more seamless integration. A rough sketch of this flow follows.
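For illustration, here is a rough Python sketch of that list-filter-poll flow. It assumes the Fabric item job instance endpoints (GET .../jobs/instances to list, GET .../jobs/instances/{id} to fetch one) and the documented payload fields id, status, startTimeUtc, and endTimeUtc; TOKEN, WORKSPACE_ID, and PIPELINE_ID are placeholders, so treat this as a starting point rather than a verified implementation.

import time
import requests
from datetime import datetime, timedelta, timezone

TOKEN = "<bearer-token>"              # placeholder: AAD token with Fabric API scope
WORKSPACE_ID = "<workspace-guid>"     # placeholder
PIPELINE_ID = "<pipeline-item-guid>"  # placeholder

BASE = (f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
        f"/items/{PIPELINE_ID}/jobs/instances")
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Step 1: list recent job instances for this pipeline.
submitted_at = datetime.now(timezone.utc)  # capture this when you trigger the run
runs = requests.get(BASE, headers=HEADERS).json().get("value", [])

# Step 2: filter by start time close to the submission timestamp.
# startTimeUtc is truncated to whole seconds to keep parsing portable.
def started_near(run, window=timedelta(minutes=2)):
    start = datetime.fromisoformat(run["startTimeUtc"][:19]).replace(tzinfo=timezone.utc)
    return abs(start - submitted_at) <= window

candidates = [r for r in runs if r.get("startTimeUtc") and started_near(r)]
run_id = candidates[0]["id"] if candidates else None

# Step 3: poll the job instance until it reaches a terminal status.
while run_id:
    run = requests.get(f"{BASE}/{run_id}", headers=HEADERS).json()
    if run["status"] in ("Completed", "Failed", "Cancelled", "Deduped"):
        print(run["status"], run.get("startTimeUtc"), run.get("endTimeUtc"))
        break
    time.sleep(60)  # matches the retry-after hint from the trigger response

As noted earlier in the thread, the time-window filter can misattribute runs when jobs are submitted in batches, which is why extracting the id from the Location header is preferable whenever it is available.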
Please 'Kudos' and 'Accept as Solution' if this answered your query.