Hi folks,
I am trying to query data from a Fabric SQL Database using Python/PySpark, but I can't find the proper syntax for the query. If anyone has a working example, could you please share it?
Thanks and regards.
Hi @chetanhiwale ,
You can query a Fabric SQL Database from a Spark notebook using JDBC. Here's a working example:
# JDBC URL for the Fabric SQL Database endpoint (replace the placeholders)
jdbc_url = "jdbc:sqlserver://<your-server>.database.fabric.microsoft.com:1433;database=<your-db>"

# Read the table over JDBC using Microsoft Entra ID interactive authentication
df = spark.read \
    .format("jdbc") \
    .option("url", jdbc_url) \
    .option("dbtable", "dbo.YourTable") \
    .option("authentication", "ActiveDirectoryInteractive") \
    .load()
df.show()
Alternatively, if your SQL DB is in the same workspace, you can use the synapsesql connector:
df = spark.read.synapsesql("YourDatabase.dbo.YourTable")
Did this help?
Drop a kudo so others can find it!
😉
Hi @Tamanchu ,
I tried both options, but they are not working for me. Have you tried using the MS-SQL Python module from Microsoft?
Hi @chetanhiwale,
If the JDBC and synapsesql approaches didn't work, you can try using pyodbc directly in a Fabric notebook:
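A minimal sketch of the pyodbc route follows. It has not been verified against a live database: the server and database names are placeholders, and "ODBC Driver 18 for SQL Server" plus Entra ID interactive authentication are assumptions based on a typical Fabric setup, so adjust them to match your environment.

```python
def build_connection_string(server: str, database: str) -> str:
    """Assemble an ODBC connection string for a Fabric SQL Database.

    Assumes ODBC Driver 18 and Microsoft Entra ID interactive
    authentication; both are assumptions, not confirmed settings.
    """
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={server},1433;"
        f"Database={database};"
        "Encrypt=yes;"
        "Authentication=ActiveDirectoryInteractive;"
    )

# Placeholder names; substitute your own server and database.
conn_str = build_connection_string(
    "<your-server>.database.fabric.microsoft.com", "<your-db>"
)

# In the notebook you would then connect and query, for example:
#   import pyodbc
#   conn = pyodbc.connect(conn_str)
#   rows = conn.cursor().execute("SELECT TOP 10 * FROM dbo.YourTable").fetchall()
#   conn.close()
```

Keeping the connection-string assembly in a small helper makes it easy to swap in a different driver or authentication mode if interactive sign-in is not available in your notebook session.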
Also, could you share the error messages you got with the JDBC and synapsesql methods? That would help narrow down the root cause.
Thanks!