I have a Python variable created in a notebook cell and want to access it from a SQL cell in the same notebook to write into a warehouse. Can anyone share a method to do this? Is there any way to run Python-prepared SQL, like an upsert into a warehouse table, using synapsesql? What I see now is that we can write a DataFrame into a warehouse and read from a warehouse into a DataFrame.
Hi @woldea ,
This is a very common question in Microsoft Fabric notebooks, and the short answer is:
You cannot directly access a Python variable from a SQL cell. Python and SQL cells run in different execution engines.
However, there are supported patterns to achieve what you want, including UPSERT/MERGE into a Warehouse.
The recommended pattern is: write the DataFrame to the Warehouse, then MERGE (UPSERT supported).
If your Python variable can be represented as a DataFrame, this is the best practice.
Step 1: Python – prepare data
from pyspark.sql import Row

# Represent the Python values as rows and build a Spark DataFrame
data = [
    Row(id=1, name="Alice", amount=100),
    Row(id=2, name="Bob", amount=200)
]
df = spark.createDataFrame(data)
Step 2: Write to Warehouse staging table
df.write \
    .format("synapsesql") \
    .mode("overwrite") \
    .save("WarehouseName.dbo.stg_mydata")
Step 3: SQL cell – MERGE (UPSERT)
MERGE dbo.target_table AS tgt
USING dbo.stg_mydata AS src
    ON tgt.id = src.id
WHEN MATCHED THEN
    UPDATE SET
        tgt.name = src.name,
        tgt.amount = src.amount
WHEN NOT MATCHED THEN
    INSERT (id, name, amount)
    VALUES (src.id, src.name, src.amount);
This is the Fabric-supported UPSERT pattern.
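If you want to sanity-check the staged rows from Python, the connector also supports the reverse direction the question mentions. A minimal sketch, assuming spark.read.synapsesql as the connector's read method and reusing the illustrative table name from Step 2:

# Read the staging table back into a Spark DataFrame to verify the load
# (WarehouseName.dbo.stg_mydata is the illustrative name from Step 2)
df_check = spark.read.synapsesql("WarehouseName.dbo.stg_mydata")
display(df_check)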
If this post helps, please consider giving a Kudos or accepting it as a Solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!
Hi @woldea
In a PySpark cell, try this first:
# Example Python variables
target_date = "2026-01-01"
batch_id = 42
status = "Complete"
spark.conf.set("p_date", target_date)
spark.conf.set("p_batch_id", str(batch_id))
spark.conf.set("p_status", status)
In the second cell, try this:
%%sql
SELECT
    to_date('${p_date}', 'yyyy-MM-dd') AS LoadDate,
    CAST('${p_batch_id}' AS INT) AS BatchId,
    '${p_status}' AS Status;
This will allow you to access a Python variable in one cell from another SQL cell in the same notebook.
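One detail worth noting: spark.conf values round-trip as strings, which is why the SQL cell casts p_batch_id back to INT. A quick sketch to confirm this from Python, using the same keys set above:

# spark.conf stores everything as a string, hence the CAST in the SQL cell
print(spark.conf.get("p_batch_id"))  # '42' (a string, not an int)
print(spark.conf.get("p_status"))    # 'Complete'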
Hope this helps - please appreciate by leaving a Kudos or accepting as a Solution!
Thank you, deborshi_nag.
Your strategy and example were helpful, and I was able to accomplish my goal.
Hi @woldea,
You can execute SQL queries inside notebooks using the spark.sql() function.
# Cell 1 (PySpark)
col_1 = 'Name'
# Cell 2 (PySpark)
df = spark.sql(f"""
    SELECT {col_1} FROM dbo.table1
""")
# Cell 3 (PySpark)
display(df)
In this example, the Python variable is referenced using curly braces ({ }), and the result of the SQL query wrapped inside spark.sql() is stored as a DataFrame, which can then be displayed or processed further.
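One caution if you build SQL this way: string-valued variables must be quoted inside the SQL text, unlike identifiers such as column names. A hypothetical follow-on cell (name_filter is illustrative; dbo.table1 is the table from the example above):

# Cell 4 (PySpark)
# Identifiers like {col_1} are interpolated bare; string values need quotes
name_filter = "Alice"
df2 = spark.sql(f"""
    SELECT {col_1} FROM dbo.table1 WHERE {col_1} = '{name_filter}'
""")
display(df2)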
If this isn't what you are looking for, please share more details on your use case.
Thank you, stoic-harsh!