Shanthan118
Frequent Visitor

Parameterize Database Name in SparkSQL Notebook

Hi,

I am using a SparkSQL notebook in Fabric. I created the notebook to load data from one lakehouse to another using SparkSQL.

So far the notebook has used only a single database, but now I want to use it for other databases. Instead of changing the database name manually, is there a way to parameterize it?

For that, I have created a notebook that holds the database names, so when I want to change a database name I can change it there and call this database-names notebook from the notebook where the tables are created and the data is transferred.

NOTE: I want to use the notebook in SparkSQL only.

Can anyone please suggest a solution for this?


7 REPLIES
frithjof_v
Super User
Super User
By SparkSQL, do you mean %%sql or spark.sql()?

 

What do you mean by Database? Do you mean Lakehouse?

 

Is everything inside the same workspace or in separate workspaces?

 

You can probably parameterize the Lakehouse name and use the parameter when creating a dataframe, and then run spark.sql() against that dataframe.

 

If you want to use %%sql cell then you could create a temp view from the dataframe first.

 

https://www.reddit.com/r/MicrosoftFabric/s/5jWsdCEJNy

 

You can get abfss path for a Lakehouse by using Notebookutils.
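
To sketch that idea (untested; the lakehouse and table names are hypothetical, and it assumes notebookutils.lakehouse.get() exposes the lakehouse's abfss path in its properties):

# Hypothetical names; assumes notebookutils.lakehouse.get() returns an
# object whose properties include the lakehouse's abfss path.
lakehouse_name = "source_lakehouse"  # this could come from a parameter

abfss_path = notebookutils.lakehouse.get(lakehouse_name).properties["abfsPath"]

# Load the table into a dataframe and expose it to %%sql via a temp view.
df = spark.read.load(f"{abfss_path}/Tables/my_table")
df.createOrReplaceTempView("my_table_view")

# A later %%sql cell can then query it:
#   SELECT * FROM my_table_view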

Hi frithjof_v,

Yes, by database I mean lakehouse. By SparkSQL I mean %%sql; I am using SparkSQL as the global language for the notebook. As of now, everything is in the same workspace.

The screenshot below shows how I am getting the data from one lakehouse into another.

[screenshot: %%sql cell copying tables between lakehouses]

So we have to parameterize the lakehouse name here.

We don't want to use PySpark; the requirement is SparkSQL, i.e. %%sql.

So can you provide a solution for this? I have tried using a dataframe and it worked, but not when using SparkSQL as the global language. For this project we have to use only SparkSQL (%%sql) as the global language.



I don't know how to pass variables into a %%sql cell.

I only know how to pass variables into spark.sql().

You could wrap your SQL code inside spark.sql().

I think you can do it something like this (I don't have Fabric opened, so I cannot verify the syntax):

spark.sql(f"""your
SQL
code
use a
{variable}
here
""")

 

ChatGPT suggested this code instead:

# Define a variable
variable = 'some_value'

# Use the variable inside spark.sql()
query = f"""
SELECT *
FROM some_table
WHERE some_column = '{variable}'
"""

result = spark.sql(query)
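
Applied to the scenario in this thread, it might look something like this (just a sketch; the lakehouse and table names are hypothetical):

# Parameterize the lakehouse names and build the statement with an f-string.
source_db = "lakehouse_a"
target_db = "lakehouse_b"

spark.sql(f"""
CREATE OR REPLACE TABLE {target_db}.dim_customer AS
SELECT * FROM {source_db}.dim_customer
""")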

Thank you @frithjof_v for the response and suggestion.

When I checked on the internet and with ChatGPT, I got the same solution too. So we created the notebook using spark.sql() and it worked.

I've never tried, but perhaps this shows a way to pass variables into %%sql?

 

https://learn.microsoft.com/en-us/answers/questions/419296/spark-sql-passing-variables-synapse-(spar...)

 

https://community.databricks.com/t5/data-engineering/how-to-set-a-variable-and-use-it-in-a-sql-query...

 

Disclaimer: I don't know anything about these methods. I just found them by googling briefly.
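
If those methods carry over to Fabric, the pattern would be roughly this (untested; it relies on Spark's variable substitution, spark.sql.variable.substitute, which is enabled by default):

# Set a substitutable variable, then reference it as ${db_name}.
spark.sql("SET db_name = source_lakehouse")
spark.sql("SELECT * FROM ${db_name}.my_table").show()

# The same ${...} reference should also work inside a %%sql cell:
#   SELECT * FROM ${db_name}.my_table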

Anonymous
Not applicable

Hi @Shanthan118 

 

If you want to store the database name in a notebook and then reference it from another notebook, you can use Microsoft Spark Utilities (MSSparkUtils) in Fabric. MSSparkUtils is a built-in package available in PySpark (Python), Scala, and SparkR notebooks, as well as Fabric pipelines. It is not available in SparkSQL, so if you want to use the notebook in SparkSQL only, I'm afraid that is a big challenge currently.

Microsoft Spark Utilities (MSSparkUtils) for Fabric - Microsoft Fabric | Microsoft Learn
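
As a rough sketch of that cross-notebook pattern (it needs a PySpark cell, per the limitation above; the notebook name "DatabaseNames" is hypothetical):

# In the "DatabaseNames" notebook, return the name when it finishes:
#   mssparkutils.notebook.exit("target_lakehouse")

# In the loading notebook, a PySpark cell picks up the exit value:
db_name = mssparkutils.notebook.run("DatabaseNames")
spark.sql(f"SELECT COUNT(*) FROM {db_name}.my_table").show()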

 

Another alternative is to use a pipeline to trigger a notebook activity and edit the parameter values in the pipeline, as suggested by SachinNandanwar. You could refer to the "Integrate a notebook" section of Develop, execute, and manage notebooks - Microsoft Fabric | Microsoft Learn

 

Best Regards,
Jing
If this post helps, please Accept it as Solution to help other members find it. Appreciate your Kudos!

SachinNandanwar
Super User

How would you be executing the notebook that transfers the data across lakehouses? Will it be a Notebook activity? If yes, then go through this post.

https://community.fabric.microsoft.com/t5/Data-Pipelines/Is-it-possible-to-pass-parameter-from-noteb...
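
As a sketch, the notebook would hold a parameter cell whose defaults the pipeline's Notebook activity can override (the names are hypothetical):

# Parameter cell (mark it via "Toggle parameter cell" in Fabric).
# A pipeline Notebook activity can override these defaults through its
# base parameters:
source_db = "lakehouse_a"
target_db = "lakehouse_b"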




Regards,
Sachin
Check out my Blog
