woldea
Regular Visitor

%%tsql magic in spark notebook

I am using a Fabric notebook to dynamically ingest tables from a lakehouse into a warehouse. I need PySpark as the notebook's default language for all cells, but I also want to execute SQL against the warehouse from the same notebook. I tried to bind a variable from a Spark cell into a SQL cell like this:

%%tsql -artifact warehouse_name -type Warehouse -bind upsert_query
EXEC(@upsert_query);
The %%tsql magic is not available to select when PySpark is the notebook's default language: UsageError: Cell magic '%%tsql' not found. If I change the default language to Python, I get the T-SQL functionality, but then import com.microsoft.spark.fabric fails in the other Spark cells, which are forced to run as PySpark via the %%pyspark magic. Does anyone have a workaround to handle both Spark and SQL warehouse operations in the same notebook?
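For reference, the bound variable is defined in a preceding PySpark cell along these lines (the table and column names here are hypothetical):

```python
# Build the T-SQL upsert statement in a Python cell; the %%tsql cell
# can then reference it via "-bind upsert_query".
target_table = "dbo.dim_customer"    # hypothetical warehouse table
staging_table = "dbo.stg_customer"   # hypothetical staging table

upsert_query = (
    f"MERGE INTO {target_table} AS tgt "
    f"USING {staging_table} AS src "
    "ON tgt.customer_id = src.customer_id "
    "WHEN MATCHED THEN UPDATE SET tgt.name = src.name "
    "WHEN NOT MATCHED THEN INSERT (customer_id, name) "
    "VALUES (src.customer_id, src.name);"
)
```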
Thanks
1 ACCEPTED SOLUTION
chetanhiwale
Resolver I

Hi @woldea,
You can use the Spark Synapse SQL connector for your use case.
Adding sample code for your reference:


# Imports
import com.microsoft.spark.fabric
from com.microsoft.spark.fabric.Constants import Constants

# Read from the warehouse/lakehouse with a T-SQL query
df = spark.read.option(Constants.DatabaseName, "<warehouse/lakehouse name>").synapsesql("<T-SQL query>")

You can check these docs for reference:
https://learn.microsoft.com/en-us/fabric/data-engineering/spark-data-warehouse-connector?tabs=pyspar...


Hope this helps. Let me know if you have any other questions.
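Since the goal is ingesting tables into the warehouse, note that the same connector also supports writes. A minimal sketch under the same assumptions (the lakehouse table, warehouse, schema, and table names below are placeholders); this runs only on the Fabric Spark runtime:

```python
import com.microsoft.spark.fabric  # enables the synapsesql extension
from com.microsoft.spark.fabric.Constants import Constants

# Read a table from the attached lakehouse, then write it into the
# warehouse using the three-part name <warehouse>.<schema>.<table>.
df = spark.read.table("my_lakehouse_table")
df.write.mode("overwrite").synapsesql("MyWarehouse.dbo.my_table")
```

This keeps both the Spark reads and the warehouse writes in PySpark cells, so no change of notebook default language is needed.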


7 REPLIES
v-achippa
Community Support

Hi @woldea,

 

Thank you for reaching out to Microsoft Fabric Community.

 

Thank you @chetanhiwale, @deborshi_nag, and @tayloramy for the prompt responses.

 

As we haven't heard back from you, we wanted to kindly follow up and check whether the solutions provided by the users resolved your issue, or let us know if you need any further assistance.

 

Thanks and regards,

Anjan Kumar Chippa

Hi @woldea,

 

We wanted to kindly follow up and check whether the solutions provided by the users resolved your issue, or let us know if you need any further assistance.

 

Thanks and regards,

Anjan Kumar Chippa

tayloramy
Super User

Hi @woldea

What you can do is use synapseSQL to interact with the warehouse directly from a Spark notebook. 

 

Though I strongly recommend doing the transformations that require Spark entirely in a lakehouse, and then copying the final table over using a pipeline or a copy job.

 

 





If you found this helpful, consider giving some Kudos.
If I answered your question or solved your problem, mark this post as the solution!

Proud to be a Super User!





deborshi_nag
Resident Rockstar

Hello @woldea 

 

You're not meant to mix Spark and T-SQL in the same notebook. If you're applying transformations in a Lakehouse, you can use a Spark notebook with the %%pyspark or %%sql magic commands.

 

On the other hand, if you're applying transformations in a Warehouse, you can use a Python notebook with the %%tsql magic command.
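As a rough illustration, in a Python-default notebook a T-SQL cell against the warehouse would look something like this, using the magic syntax shown in the original question (the warehouse and table names are placeholders):

```
%%tsql -artifact MyWarehouse -type Warehouse
SELECT TOP 10 * FROM dbo.my_table;
```

Variables defined in Python cells can be passed in with the -bind option, as in the original post.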

 

I trust this will be helpful. If you found this guidance useful, you are welcome to acknowledge with a Kudos or by marking it as a Solution.

tayloramy
Super User

Hi @woldea

 

I believe you cannot mix T-SQL and PySpark in the same notebook.

You can, however, mix PySpark and Spark SQL. I recommend using a Spark SQL cell to do your SQL.
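For illustration, in a PySpark-default notebook a Spark SQL cell is simply a %%sql magic cell (the table name here is hypothetical); keep in mind it runs on the Spark engine against the lakehouse, not against the warehouse:

```
%%sql
SELECT customer_id, COUNT(*) AS order_count
FROM my_lakehouse_table
GROUP BY customer_id
```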

 

Spark connector for Microsoft Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn

 

 

Though generally, I find it best to do all the transformations in a lakehouse, which is easier to work with, and then move the data to a warehouse once it is all nice and pretty.

 

 










Thank you tayloramy!
I was able to convert my transformation into SQL and accomplish my goal in the warehouse cell. My remaining question: I want to run the Python-driven transformation in the same notebook, but through a T-SQL command, since the transformation affects the warehouse. From what I read, this can be achieved using %%tsql; Spark SQL still uses the Spark engine and cannot execute transformations in the warehouse. So what is the use of %%tsql, how do I implement it in the notebook, and what changes are needed in the notebook settings?
 
Thanks
