I am using a Fabric notebook to dynamically ingest tables from a lakehouse into a warehouse. I need PySpark as the notebook's default language for all cells, but I also want to execute SQL against the warehouse from the same notebook. I tried to bind a variable from a Spark cell to a SQL cell like this:
Hi @woldea,
You can use the Spark Synapse SQL connector for your use case.
Adding sample code for your reference:
# Imports for the Fabric Spark connector
import com.microsoft.spark.fabric
from com.microsoft.spark.fabric.Constants import Constants

# Run a T-SQL query against the warehouse and load the result into a DataFrame
df = spark.read.option(Constants.DatabaseName, "<warehouse/lakehouse name>").synapsesql("<T-SQL query>")
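For completeness, here is a minimal sketch of a full round trip with this connector, assuming a Fabric Spark session where spark is predefined; MyWarehouse, dbo.source_table and dbo.target_table are placeholder names:

# Imports for the Fabric Spark connector
import com.microsoft.spark.fabric
from com.microsoft.spark.fabric.Constants import Constants

# Read a whole table using the three-part name <warehouse>.<schema>.<table>
df = spark.read.synapsesql("MyWarehouse.dbo.source_table")

# Or read the result of an arbitrary T-SQL query
df_query = (
    spark.read
    .option(Constants.DatabaseName, "MyWarehouse")
    .synapsesql("SELECT TOP 10 * FROM dbo.source_table")
)

# Write a DataFrame into a warehouse table (overwrite and append modes are supported)
df.write.mode("overwrite").synapsesql("MyWarehouse.dbo.target_table")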
You can refer to these docs for reference:
https://learn.microsoft.com/en-us/fabric/data-engineering/spark-data-warehouse-connector?tabs=pyspark
Hope this helps. Let me know if you have any other questions.
Hi @woldea,
Thank you for reaching out to Microsoft Fabric Community.
Thank you @chetanhiwale, @deborshi_nag and @tayloramy for the prompt responses.
As we haven't heard back from you, we wanted to kindly follow up to check whether the solution provided by the other users worked for you, or let us know if you need any further assistance.
Thanks and regards,
Anjan Kumar Chippa
Hi @woldea,
We wanted to kindly follow up to check whether the solution provided by the other users worked for you, or let us know if you need any further assistance.
Thanks and regards,
Anjan Kumar Chippa
Hi @woldea,
What you can do is use synapsesql to interact with the warehouse directly from a Spark notebook.
Though I strongly recommend doing the transformations that require Spark entirely in a lakehouse, and then copying the final table over using a pipeline or copy job.
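If you do use the connector for that final copy, a minimal sketch (the table and warehouse names below are placeholders, and the notebook is assumed to have the lakehouse attached as its default):

# Importing the package registers synapsesql on the Fabric Spark runtime
import com.microsoft.spark.fabric

# Read the finished table from the attached lakehouse
final_df = spark.read.table("my_final_table")

# Copy it into the warehouse through the Fabric Spark connector
final_df.write.mode("overwrite").synapsesql("MyWarehouse.dbo.my_final_table")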
Proud to be a Super User!
Hello @woldea
You're not meant to mix Spark and T-SQL. If you're applying transformations in a Lakehouse, you can use a Spark notebook with the %%pyspark or %%sql magic commands.
On the other hand, if you're applying transformations in a Warehouse, you can use a Python notebook with the %%tsql magic command.
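For illustration, hedged sketches of both kinds of cell (each magic command goes at the top of its own cell; the table names are made up):

%%sql
-- Spark SQL cell in a Spark notebook: query a lakehouse table
SELECT COUNT(*) AS row_count FROM my_lakehouse_table

%%tsql
-- T-SQL cell in a Python notebook: runs against the connected warehouse
SELECT TOP 10 * FROM dbo.my_warehouse_table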
Hi @woldea,
You cannot mix T-SQL and PySpark notebooks, I believe.
You can, however, mix PySpark and Spark SQL. I recommend using a Spark SQL cell for your SQL, for example as sketched below.
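One way to get a Python value across to SQL, sketched with made-up names: register a temp view from PySpark and query it from a %%sql cell, or bind the variable through spark.sql in Python:

# PySpark cell: expose a DataFrame to Spark SQL as a temp view
table_name = "my_table"
df = spark.read.table(table_name)
df.createOrReplaceTempView("staged")

# PySpark cell: bind a Python variable into a query with an f-string
threshold = 100  # placeholder value; "amount" below is a placeholder column
result = spark.sql(f"SELECT * FROM staged WHERE amount > {threshold}")

%%sql
-- Separate Spark SQL cell: the temp view is visible here too
SELECT COUNT(*) FROM staged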
Spark connector for Microsoft Fabric Data Warehouse - Microsoft Fabric | Microsoft Learn
Though generally, I find it best to do all your transformations in a lakehouse, which is easier to work with, and then move your data to a warehouse after it is all nice and pretty.
Proud to be a Super User!