Can a PySpark notebook in Fabric connect to the Lakehouse SQL endpoint?
Please let me know.
Thanks
Solved! Go to Solution.
Hi @msprog,
A Fabric PySpark notebook can’t “see” T-SQL views that live in a Lakehouse’s SQL analytics endpoint via the Spark catalog. Those views are objects of the SQL endpoint (TDS/T-SQL world), not Spark. But you can query them from a notebook by connecting to the SQL endpoint (via JDBC/TDS or the built-in Fabric Spark TDS reader). Alternatively, re-create the logic as a Spark view/table if you want native Spark access.
Query the view from a notebook
Get your Workspace ID and the SQL endpoint name (Lakehouse’s SQL endpoint).
In the notebook, use the Fabric Spark TDS reader (Scala cell) to run a T-SQL query and bring the result back as a Spark DataFrame.
// Scala cell
import com.microsoft.spark.fabric.tds.implicits.read.FabricSparkTDSImplicits._
import com.microsoft.spark.fabric.Constants
val wsId = "<your-workspace-guid>"
val lakehouseSqlEndpointName = "<your-lakehouse-sql-endpoint-name>"
// Query the view
val df = spark.read
.option(Constants.WorkspaceId, wsId)
.option(Constants.DatabaseName, lakehouseSqlEndpointName)
.synapsesql("select * from dbo.YourViewName");
display(df)
Notes:
This uses the built-in Fabric Spark TDS integration outlined in community write-ups like this walkthrough: https://www.red-gate.com/simple-talk/blogs/fabric-query-a-sql-endpoint-from-a-notebook/
If your query is complex and you hit parser quirks, the same article shows a prepareQuery pattern to send part of the query “as-is”.
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution
Maybe I don't fully understand the issue, so correct me if I'm heading in the wrong direction, but in a notebook you can choose to use Spark SQL. A view is basically a stored SQL script that you call as a view in the SQL endpoint (or any SQL database, for that matter).
Have you tried running the actual SQL code that makes up the view in a Spark SQL notebook cell, writing it to a dataframe, or displaying it? Then whatever steps you need after this.
A notebook can't read the SQL endpoint, but it can run Spark SQL, which is quite similar to T-SQL.
Example:
Cheers
Hans
(if my answer is useful, please give it a kudo or mark it as a solution)
Hi @msprog
Lakehouse = Fabric Data Warehouse = Power BI Semantic Model (Direct Lake) = Power BI Dataflow Gen2
You can connect to the lakehouse using Power BI Dataflow Gen2, and once the table is in the Delta Lake Lakehouse, you can take a similar approach in a Fabric Data Warehouse. Always prefer the Delta Lake Lakehouse (Spark) approach.
Yes, once the data is in the Lakehouse, you can use the view in the Fabric Data Warehouse to write SQL, or
in the Lakehouse, use SHOW VIEWS and SHOW TABLES in notebooks.
Hello @msprog
Yes—use a Fabric notebook attached to the Lakehouse. You don’t need (and can’t directly “bind”) the Lakehouse SQL endpoint from PySpark; instead you query the same Delta tables via Spark.
Two simple ways:
Spark tables (recommended)
In the notebook, attach the Lakehouse (left pane → “Add lakehouse”).
Then query its tables:
Delta path (Files/Delta): alternatively, read a table's underlying Delta folder directly, e.g. spark.read.format("delta").load("Tables/<table_name>").
Notes
%%sql in Fabric notebooks runs Spark SQL, not T-SQL. The Lakehouse SQL analytics endpoint is for T-SQL tools (SQL editor, SSMS, Fabric items using T-SQL).
For programmatic T-SQL against a Warehouse/SQL endpoint you’d use JDBC/ODBC from outside; inside Fabric notebooks, stick to Spark/Spark SQL for Lakehouse data.
Thanks for this, but I want to invoke a view that is already defined; I can see the view when I am on the SQL endpoint. Hence I was hoping that if the notebook could see the endpoint, I would be able to fire a query using the view.
Hi @msprog ,
Thanks for reaching out to the Microsoft Fabric community forum.
I would also like to take a moment to thank @tayloramy for actively participating and for the solutions you've been sharing in the community. Your contributions make a real difference.
I hope the above details help you fix the issue. If you still have any questions or need more help, feel free to reach out. We’re always here to support you.
Best Regards,
Community Support Team.
Hello @msprog,
I am also part of CST Team and we’d like to confirm whether your issue has been successfully resolved. If you still have any questions or need further assistance, please don’t hesitate to reach out. We’re more than happy to continue supporting you.
Regards,
B Manikanteswara Reddy
No, you can't.
Lakehouse data is natively accessible from PySpark notebooks, without going through the SQL endpoint.