When using Dataflow Gen2 in Microsoft Fabric, parameters behave differently depending on how the Dataflow is executed, even when the same parameters and values are used. After detailed investigation and confirmation with Microsoft Support, we observed the following inconsistent behavior:

- Direct Dataflow execution (via the Dataflow UI): When a query or query-related logic is passed as a Dataflow parameter, the Dataflow correctly re-evaluates the query and adapts the schema accordingly (see the sketch at the end of this post for an example of such a parameter-driven query).
- Pipeline-triggered Dataflow execution: When the same parameter is passed from a Pipeline to the Dataflow, the schema is not re-evaluated or adapted, even though the parameter values are identical.

Internally, the Pipeline appears to execute the Dataflow through a different execution path / mashup handling mechanism than a direct Dataflow run. This leads to schema mismatches, mashup document errors, or an inability to support dynamic query logic when the Dataflow is triggered via a Pipeline.

Why This Is a Problem

From a user perspective, Dataflow parameters should behave consistently, regardless of whether the Dataflow is executed directly or via a Pipeline. Currently, solutions that work perfectly in a direct Dataflow run break when orchestrated via Pipelines, even though Pipelines are the recommended orchestration mechanism in Fabric.

This limits the ability to build generic, parameter-driven Dataflows that are orchestrated by Pipelines. It also creates confusion, as there is no clear documentation explaining that schema changes are only supported in direct Dataflow runs but not in Pipeline-triggered runs.

Expected Behavior

Parameter handling and schema evaluation should be harmonized between:

- Direct Dataflow Gen2 runs
- Pipeline-triggered Dataflow Gen2 runs

If a Dataflow supports schema adaptation based on parameters, this behavior should be identical regardless of the execution method. Alternatively, if there are technical limitations, these should be explicitly documented, including guidance on supported design patterns.
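To make the scenario concrete, here is a minimal Power Query M sketch of the kind of parameter-driven query referred to above. The parameter name (SourceQuery) and the connection details are hypothetical; any query whose result schema depends on a parameter value hits the same issue.

```
let
    // SourceQuery is a Dataflow Gen2 text parameter (hypothetical name) that
    // carries the SQL statement to run against the source. The resulting
    // schema is determined entirely by the parameter value.
    Source = Sql.Database("contoso.database.windows.net", "SalesDb", [Query = SourceQuery])
in
    Source
```

When this Dataflow is run from its own UI with a new SourceQuery value, the output schema follows the query; when the same value is passed from a Pipeline's Dataflow activity, the previously bound schema is not re-evaluated, which produces the mismatch described above.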