Title: How to Parameterize Source Database, Schema, and Table Name in Dataflow Gen2 (Databricks Source)
Message:
Hi Community,
I'm working with Dataflow Gen2 in Microsoft Fabric and need to parameterize the source connection details for a Databricks-based setup. Specifically, I want to dynamically pass:
Source Database
Source Schema
Source Table Name
The goal is to load data from Databricks into a Lakehouse, and I’d like to make the dataflow reusable across environments (Dev/Test/Prod) by using parameters.
Has anyone implemented this successfully?
Would appreciate guidance or examples on how to structure the Power Query M code and use parameters effectively for Databricks sources.
Thanks in advance!
Hi @Ganjikunta,
Thank you for reaching out to the Microsoft Fabric Community Forum and sharing the details of your scenario. Also, thanks to @AntoineW for those valuable insights on this thread.
I understand that you want to parameterize the source database, schema, and table in a Dataflow Gen2 for Databricks, so that your dataflow can be reused across Dev/Test/Prod environments.
Create Parameters in Dataflow Gen2: Open your Dataflow in Power Query. Go to Manage Parameters and create parameters for Database, Schema, and Table.
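Behind the scenes, each parameter is just a small M query. A minimal sketch of what the three parameters could look like once created (the default values shown are placeholder examples, not values from your environment):
// DatabaseParam
"dev_catalog" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]
// SchemaParam
"sales" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]
// TableParam
"orders" meta [IsParameterQuery = true, Type = "Text", IsParameterQueryRequired = true]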
Reference Parameters in Power Query M Code: The Databricks connector takes the server hostname and HTTP path as fixed connection details; the database, schema, and table are then reached by navigating the returned catalog, and that is where the parameters come in. For example (hostname and HTTP path are placeholders):
let
    Source = Databricks.Catalogs("adb-<workspace-id>.azuredatabricks.net", "/sql/1.0/warehouses/<warehouse-id>", null),
    SourceDatabase = Source{[Name = DatabaseParam, Kind = "Database"]}[Data],
    SourceSchema = SourceDatabase{[Name = SchemaParam, Kind = "Schema"]}[Data],
    SourceTable = SourceSchema{[Name = TableParam, Kind = "Table"]}[Data]
in
    SourceTable
Enable Public Parameters: If you want to override these parameters externally (like from a pipeline), make sure the parameters are set as public/discoverable.
Bind Parameters in Pipelines: When using this dataflow in a pipeline, you can pass values to the parameters dynamically from variables or other sources.
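For instance, in the Dataflow activity's settings you can typically map each public parameter to a pipeline expression such as @pipeline().parameters.SourceDatabase or @variables('SourceDatabase') (the names here are purely illustrative), so the same dataflow picks up different values per environment.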
Note: Currently, Dataflow Gen2 allows parameterizing schema and table names, but changing the connection details (like server/host) dynamically isn’t supported. Make sure your parameter usage aligns with Databricks SQL syntax.
Refer to this link: https://learn.microsoft.com/en-us/fabric/data-factory/dataflow-parameters
Hope this helps you move forward. Please give it a try and let us know how it goes.
Thanks for using the Microsoft Fabric Community Forum.
Hi @Ganjikunta,
Just checking in to see if the issue has been resolved on your end. If the earlier suggestions helped, that's great to hear! And if you're still facing challenges, feel free to share more details; we're happy to assist further.
Thank you.
Hi @Ganjikunta,
Hope you had a chance to try out the solution shared earlier. Let us know if anything needs further clarification or if there's an update from your side; we're always here to help.
Thank you.
Hi @Ganjikunta,
Just wanted to follow up one last time. If the shared guidance worked for you, that's wonderful, and hopefully it also helps others looking for similar answers. If there's anything else you'd like to explore or clarify, don't hesitate to reach out.
Thank you.
Hello @Ganjikunta,
Yes — you can parameterize Databricks sources in Dataflow Gen2 using Power Query parameters (or a Variable Library if you want central management).
Create Parameters in your Dataflow Gen2
In the Dataflow Gen2 editor, go to Manage parameters.
Define parameters such as:
pDatabase
pSchema
pTable
Use parameters in the Databricks connector
When connecting to Databricks, reference your parameters in the M query.
Example Power Query M code:
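A minimal sketch of that query, assuming the pDatabase, pSchema, and pTable parameters above and a Databricks SQL warehouse (the hostname and HTTP path are placeholders fixed at design time):
let
    // Connection details stay fixed; only the navigation steps below are parameterized
    Source = Databricks.Catalogs("adb-<workspace-id>.azuredatabricks.net", "/sql/1.0/warehouses/<warehouse-id>", null),
    SourceDatabase = Source{[Name = pDatabase, Kind = "Database"]}[Data],
    SourceSchema = SourceDatabase{[Name = pSchema, Kind = "Schema"]}[Data],
    SourceTable = SourceSchema{[Name = pTable, Kind = "Table"]}[Data]
in
    SourceTable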
This way, the same dataflow can load from different databases/schemas/tables just by switching parameter values.
Switching across environments (DEV / TEST / PROD)
Instead of manually editing parameter values, you can connect them to a Variable Library.
Define a value set per environment (DEV, TEST, PROD) in the Variable Library.
When orchestrating execution with a Data Pipeline, import the variable values and pass them to the Dataflow activity.
This lets you control source Database, Schema, and Table dynamically based on the environment without manual changes.
⚠️ Limitation on Destination (Sink)
The destination Lakehouse in a Dataflow Gen2 cannot be parameterized because the M code only controls the source — the sink is set at design time.
This means when you deploy from DEV → TEST → PROD, you must manually change the destination Lakehouse if it’s different.
A common recommendation is to keep Gold (final curated tables) in a separate workspace, so you don’t have to update every environment’s sink manually.
Hope it can help you!
Best regards,
Antoine