As illustrated by the picture above, is it possible to pass the following to the 'PL_monitor' activity:
- the whole '@pipeline().parameters' content wrapped as an object (or anything actionable)
- the whole '@pipeline().variables' content wrapped as an object (or anything actionable)
PS: I know (or think I know :D) that 'parameters' and 'variables' are not actual existing '@pipeline()' attributes, but I named them that way because I thought they would make sense.
I would love this to be possible so that I could pass a complex object whose content is "not yet processed" into my "PL_monitor" activity. That way, if I want to add a feature to my monitoring activity, I do not have to come back to every Data Pipeline using the PL_monitor activity to add the variable name and value...
Hi @MathieuSGA ,
We would like to confirm if you've successfully resolved this issue or if you need further help. If you still have any questions or need more support, please feel free to let us know. We are more than happy to continue to help you.
Thank you for your patience; we look forward to hearing from you.
Best Regards,
Chaithra E.
Hi,
I have to say that I did not get what I expected, at least from my understanding.
Or if I did, it might be that what I want is not possible.
The root of my search was to be able to pass more information than is needed, at the moment, about the context of the pipeline being monitored. The goal is to avoid the need to go back into every "observability_pipeline" (the one displayed in my initial question) to add a parameter that might be specific to one pipeline, and to allow backward compatibility while planning the "migration" of the observability pipelines to the new parameter signature.
In a nutshell, the constraint was to not have to "Standardize a single complex parameter (monitoringContext) that all pipelines agree on", but rather to deal with that in the single PL_monitor pipeline, so as to confine changes to that specific location and allow for quick releases.
I'm going to accept @Vinodh247's answer since it did raise some interesting ways of working.
Thanks for your help
Hi @MathieuSGA ,
The optimal approach you can use is to store the variables and parameters in JSON format. You can then pass these parameters to the monitoring activity.
Example:
{
"param_1" : "value1",
"param_2" : "value2"
}
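As a sketch (an assumption for illustration, not something stated above): if that JSON is passed as a single string parameter named monitoringContext on PL_monitor, the monitoring pipeline can read individual fields back with the json() expression function:
@json(pipeline().parameters.monitoringContext).param_1
New keys can then be added to the JSON without changing the PL_monitor parameter signature.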
To do that, instead of using parameters & variables in Fabric Data Factory, you could use notebooks. They use Apache Spark and Delta Lake together (the data lake and then the data lakehouse; Databricks first implemented Apache Spark/Delta Lake, and Microsoft then followed Databricks).
You should use notebook1, notebook2, ...etc. Each notebook has its own separate table.
You should always use notebooks instead of Fabric Data Factory / Fabric Data Warehouse. (Recommended)
Then use the code below:
df = spark.createDataFrame([(14, "Tom"), (23, "Alice"), (16, "Bob")], ["age", "name"])
df.collect()  # pull the rows back from Apache Spark (the data lake)
Then use Delta Lake (the data lakehouse) to get the final results.
Hi @MathieuSGA ,
To standardize a single complex parameter (monitoringContext) across Azure Data Factory or Microsoft Fabric Data Pipelines, follow this structured approach:
Define a common object structure (monitoringContext)
Decide upfront what fields all pipelines will send. Example format:
{
  "executionId": "abc123",
  "pipelineName": "PL_Sample",
  "status": "Success",
  "triggerTime": "2025-07-07T10:00:00Z",
  "parameters": {
    "env": "DEV",
    "source": "Blob",
    "target": "SQL"
  }
}
You can define this as a JSON object parameter called monitoringContext.
Add it as a parameter to all pipelines
Populate it in the trigger or parent pipeline
Pass it to the monitor pipeline or logging step (see the sketch below)
Keep monitor logic generic
Optionally log it in a central store
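For the populate/pass steps above, here is a minimal sketch of an expression the parent pipeline could use as the monitoringContext value on the Invoke Pipeline activity (assuming the pipeline().RunId, pipeline().Pipeline and pipeline().TriggerTime system variables and an env parameter are available in your environment):
@json(concat('{"executionId":"', pipeline().RunId, '","pipelineName":"', pipeline().Pipeline, '","status":"Success","triggerTime":"', pipeline().TriggerTime, '","parameters":{"env":"', pipeline().parameters.env, '"}}'))
Only the pipelines that call the monitor need to know how to build this value; PL_monitor itself stays generic.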
Best Regards,
Chaithra E.
No, as far as I know, neither ADF nor Microsoft Fabric pipelines natively support passing the entire set of pipeline parameters or variables as a single object to another pipeline in the Invoke Pipeline activity. Parameters and variables are not exposed as built-in collections or objects that you can reference as a whole (like @pipeline().parameters or @pipeline().variables).
You can only reference each parameter or variable individually by its name (e.g., @pipeline().parameters.param_a).
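To illustrate the limitation, here is a minimal sketch of the manual assembly this forces on the calling pipeline, assuming (purely for illustration) that it has two parameters named param_a and param_b and that PL_monitor exposes an Object-typed parameter to receive them; json() and concat() are standard expression functions:
@json(concat('{"param_a":"', pipeline().parameters.param_a, '","param_b":"', pipeline().parameters.param_b, '"}'))
Every new parameter still has to be added to this expression by hand, which is exactly the maintenance burden you are trying to avoid.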
Did you try the options below already?
Store configuration or monitoring context centrally (in Azure Table Storage, a Key Vault secret, or a configuration file in the data lake).
Have pipelines push monitoring events to a central store, or call an API that handles monitoring, instead of calling a dedicated monitoring pipeline (see the sketch after this list).
(OR)
Standardize a single complex parameter (monitoringContext) that all pipelines agree on.
Instead of trying to pass everything, define upfront which key pieces of context (executionId, status, any key params) need to be sent for monitoring.
This way you decouple the pipelines and the monitor stays generic.
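As a sketch of the central store / API option, each pipeline could push a small monitoring event (for example from a Web activity); the field names below are just an assumption for illustration:
{
  "eventType": "PipelineRun",
  "pipelineName": "PL_Sample",
  "runId": "abc123",
  "status": "Succeeded",
  "timestamp": "2025-07-07T10:00:00Z",
  "details": { "env": "DEV" }
}
Whatever reads that store can then evolve independently of the pipelines that emit the events.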
Your solution seems very advanced (to me at least); I feel like it's way over my head.
I like the idea of decoupling things, but I have to honestly say that your suggestions leverage aspects/concepts that I'm not familiar with or experienced in.
Would you have any articles to recommend on those topics specifically?