Hi, I want to create a Python notebook to submit pipeline runs and check their status programmatically. Two APIs I plan to use are:
1. Run on demand pipeline job: https://community.fabric.microsoft.com/t5/Data-Pipeline/Execute-Data-Pipeline-Via-API/m-p/3740462
2. Get pipeline job instance: https://learn.microsoft.com/en-us/fabric/data-factory/pipeline-rest-api-capabilities#get-pipeline-jo...
I initially assumed the two APIs would integrate seamlessly. However, during implementation, I realized that the 'run on demand' API only returns a status code and not the job run ID. Is there a way to retrieve the IDs of jobs submitted through the 'run on demand' API?
Filtering by PipelineId and time range isn't a sufficiently reliable solution, especially since I plan to submit pipeline runs in batches. At this point, the only viable workaround seems to be implementing custom logging that captures an exhaustive list of parameter values along with the PipelineRunId and PipelineId. However, this feels unnecessarily complex for such a fundamental need. I have proposed an idea at "Run on demand pipeline run" API returns job run I... - Microsoft Fabric Community
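For context, here is a minimal sketch of my submission call. It assumes the jobs/instances endpoint described in the pipeline REST API documentation, with placeholder IDs and token; the point is that the response carries no run ID to correlate with:

```python
# Minimal sketch: trigger a pipeline run on demand.
# Assumes the jobs/instances endpoint from the pipeline REST API docs;
# workspace/pipeline IDs and the bearer token are placeholders.
import requests

workspace_id = "<workspace-id>"
pipeline_id = "<pipeline-id>"
headers = {"Authorization": "Bearer <token>"}

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/items/{pipeline_id}/jobs/instances",
    params={"jobType": "Pipeline"},
    headers=headers,
)

print(resp.status_code)  # 202 Accepted on success
print(resp.text)         # no job run ID in the body to correlate with
```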
Please refer to the guidance below for integrating the trigger and status APIs:
The 'run on demand' API currently returns only an HTTP status code, such as 202 Accepted, without providing the runId needed to directly query the status of the submitted pipeline run. To work around this limitation, you can use the Get Pipeline Jobs endpoint to retrieve a list of recent pipeline runs after triggering the job. By filtering the results by pipelineId and a timestamp close to your submission time, you can identify the specific run and extract its runId. Once you have the runId, you can poll the Get Pipeline Job Instance API to check the run status and execution details, such as start time, end time, and current status; see the sketch below.

If you need a guaranteed way to link the pipeline trigger to its run, consider implementing a logging mechanism that records metadata during the trigger process, or explore triggering through the Fabric SDK when it becomes available for more seamless integration.
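As a rough illustration, here is a minimal sketch of that list-filter-poll pattern. It assumes the List Item Job Instances and Get Item Job Instance endpoints, placeholder IDs and token, and response fields named id, status, and startTimeUtc; verify these against the current Fabric REST API reference before relying on them:

```python
# Sketch of the workaround: trigger, list recent job instances,
# pick the run nearest the submission time, then poll until done.
# Endpoint paths, field names, and status values are assumptions;
# check them against the current Fabric REST API reference.
import time
from datetime import datetime, timedelta, timezone

import requests

BASE = "https://api.fabric.microsoft.com/v1"
workspace_id = "<workspace-id>"                # placeholder
pipeline_id = "<pipeline-id>"                  # placeholder
headers = {"Authorization": "Bearer <token>"}  # placeholder token


def parse_utc(ts: str) -> datetime:
    # Timestamps are ISO 8601; normalize a trailing 'Z' for fromisoformat.
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return dt if dt.tzinfo else dt.replace(tzinfo=timezone.utc)


# 1. Trigger the run; only a status code comes back.
submitted_at = datetime.now(timezone.utc)
requests.post(
    f"{BASE}/workspaces/{workspace_id}/items/{pipeline_id}/jobs/instances",
    params={"jobType": "Pipeline"},
    headers=headers,
).raise_for_status()
time.sleep(5)  # give the service a moment to register the new run

# 2. List recent job instances and take the earliest one that started
#    around or after submission. Fragile for parallel batch submissions.
listing = requests.get(
    f"{BASE}/workspaces/{workspace_id}/items/{pipeline_id}/jobs/instances",
    headers=headers,
).json()
candidates = sorted(
    (
        job
        for job in listing.get("value", [])
        if job.get("startTimeUtc")
        and parse_utc(job["startTimeUtc"]) >= submitted_at - timedelta(minutes=1)
    ),
    key=lambda job: parse_utc(job["startTimeUtc"]),
)
run_id = candidates[0]["id"] if candidates else None

# 3. Poll the job instance until it reaches a terminal state
#    (exact status values should be verified in the docs).
while run_id:
    detail = requests.get(
        f"{BASE}/workspaces/{workspace_id}/items/{pipeline_id}"
        f"/jobs/instances/{run_id}",
        headers=headers,
    ).json()
    if detail.get("status") in ("Completed", "Failed", "Cancelled"):
        print(run_id, detail["status"])
        break
    time.sleep(15)
```

Note that step 2 is the unreliable part called out in the question; for batch submissions, per-run logging at trigger time remains the safer way to correlate runs.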
Please 'Kudos' and 'Accept as Solution' if this answered your query.