msprog
Advocate III

PySpark notebook: Lakehouse SQL endpoint

Can a PySpark notebook in Fabric connect to the Lakehouse SQL endpoint?

Please let me know

 

thanks

 

1 ACCEPTED SOLUTION
tayloramy
Community Champion

Hi @msprog

 

A Fabric PySpark notebook can’t “see” T-SQL views that live in a Lakehouse’s SQL analytics endpoint via the Spark catalog; those views are objects of the SQL endpoint (the TDS/T-SQL world), not of Spark. You can, however, query them from a notebook by connecting to the SQL endpoint (via JDBC/TDS or the built-in Fabric Spark TDS reader), or re-create the logic as a Spark view/table if you want native Spark access.

 

 

Query the view from a notebook

  1. Get your Workspace ID and the SQL endpoint name (Lakehouse’s SQL endpoint).

  2. In the notebook, use the Fabric Spark TDS reader (Scala cell) to run a T-SQL query and bring the result back as a Spark DataFrame.

 

// Scala cell
import com.microsoft.spark.fabric.tds.implicits.read.FabricSparkTDSImplicits._
import com.microsoft.spark.fabric.Constants

val wsId = "<your-workspace-guid>"
val lakehouseSqlEndpointName = "<your-lakehouse-sql-endpoint-name>"

// Query the view
val df = spark.read
  .option(Constants.WorkspaceId, wsId)
  .option(Constants.DatabaseName, lakehouseSqlEndpointName)
  .synapsesql("select * from dbo.YourViewName");

display(df)
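If you prefer to stay in PySpark, the JDBC route mentioned above is an alternative. A minimal sketch, assuming the endpoint hostname is copied from the SQL endpoint's connection settings and that a "pbi"-audience token is accepted by your tenant (both are assumptions to verify; the view name is a placeholder):

# PySpark cell - query the SQL endpoint over JDBC (a sketch, not verified)
# Assumption: notebookutils.credentials.getToken("pbi") returns a token the
# endpoint accepts; copy the real server name from the endpoint's settings.
token = notebookutils.credentials.getToken("pbi")

jdbc_url = (
    "jdbc:sqlserver://<your-endpoint>.datawarehouse.fabric.microsoft.com:1433;"
    "database=<your-lakehouse-name>;encrypt=true;"
)

df = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("query", "select * from dbo.YourViewName")
    .option("accessToken", token)
    .load()
)

display(df)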

 


If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.

 


9 REPLIES
smeetsh
Responsive Resident

Maybe I don't fully understand the issue, so correct me if I am headed in the wrong direction, but in a notebook you can choose to use Spark SQL. A view is basically a stored SQL script that you call as a view in the SQL endpoint (or any SQL database, for that matter).

 

Have you tried running the actual SQL code that makes up the view in a Spark SQL notebook, writing the result to a dataframe, or displaying it? Then whatever steps you need after this.


A notebook can't read the SQL endpoint, but it can run Spark SQL, which is quite similar to T-SQL.

 

Example:

[screenshot: Spark SQL example in a notebook]
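A minimal sketch of that idea (the view logic, table, and column names here are hypothetical):

# PySpark cell - re-create the T-SQL view's logic as a Spark SQL temp view
# (hypothetical table/column names; substitute your own)
spark.sql("""
    CREATE OR REPLACE TEMP VIEW sales_summary AS
    SELECT CustomerId, SUM(Amount) AS TotalAmount
    FROM Sales
    GROUP BY CustomerId
""")

df = spark.sql("SELECT * FROM sales_summary")
display(df)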

 

Cheers,
Hans
(If my answer is useful, please give it kudos or mark it as a solution.)

BhaveshPatel
Community Champion

 Hi @msprog 

 

A Lakehouse, a Fabric Data Warehouse, a Power BI semantic model (Direct Lake), and Power BI Dataflow Gen 2 all work over the same Delta tables.

 

You can connect to the Lakehouse using Power BI Dataflow Gen 2, and once the table is in the Delta Lake Lakehouse, you can take a similar approach in the Fabric Data Warehouse. Always use the Delta Lake Lakehouse approach (Spark).

 

Yes, once data is in the Lakehouse, you can use the view in the Fabric Data Warehouse to write SQL. Or, in the Lakehouse, use SHOW VIEWS and SHOW TABLES in notebooks.

 

[screenshot: SHOW TABLES / SHOW VIEWS output in a notebook]
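A quick sketch of those commands in a notebook cell:

# PySpark cell - list what the Spark catalog can see for the attached Lakehouse.
# Note: this shows Spark tables and views only; views defined solely in the
# SQL analytics endpoint will not appear here.
spark.sql("SHOW TABLES").show()
spark.sql("SHOW VIEWS").show()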

 

 

Thanks & Regards,
Bhavesh

Love the Self Service BI.
Please use the 'Mark as answer' link to mark a post that answers your question. If you find a reply helpful, please remember to give Kudos.
AntoineW
Solution Sage

Hello @msprog

 

Yes: use a Fabric notebook attached to the Lakehouse. You don’t need (and can’t directly “bind”) the Lakehouse SQL endpoint from PySpark; instead, you query the same Delta tables via Spark.

Two simple ways:

  1. Spark tables (recommended)

  • In the notebook, attach the Lakehouse (left pane → “Add lakehouse”).

  • Then query its tables:

 
# read a table registered in the Lakehouse
df = spark.read.table("lakehouse.default.MyTable")   # or "lakehouse.<schema>.MyTable"
display(df)

# Spark SQL
result = spark.sql("SELECT col1, col2 FROM lakehouse.default.MyTable WHERE col3 > 0")
display(result)

  2. Delta path (Files/Delta)

 
# direct Delta path under /Tables
df = spark.read.format("delta").load("Tables/MyTable")
display(df)

Notes

  • %%sql in Fabric notebooks runs Spark SQL, not T-SQL. The Lakehouse SQL analytics endpoint is for T-SQL tools (SQL editor, SSMS, Fabric items using T-SQL).

  • For programmatic T-SQL against a Warehouse/SQL endpoint you’d use JDBC/ODBC from outside; inside Fabric notebooks, stick to Spark/Spark SQL for Lakehouse data.
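For that external route, a minimal pyodbc sketch (the server and database names are placeholders; copy the real connection string from the SQL endpoint's settings, and note that pyodbc with ODBC Driver 18 is an assumption about your local environment):

# Outside Fabric (e.g., local Python): T-SQL against the SQL endpoint via ODBC.
# Assumption: ODBC Driver 18 for SQL Server is installed and your account has
# access to the endpoint; server and database names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com;"
    "Database=<your-lakehouse-name>;"
    "Authentication=ActiveDirectoryInteractive;"
    "Encrypt=yes;"
)
rows = conn.execute("SELECT TOP 5 * FROM dbo.YourViewName").fetchall()
print(rows)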

 
 
Hope it can help you!
Best regards,
Antoine

Thanks for this, but I want to invoke a view that is already defined; I can see the view when I am on the SQL endpoint. Hence I was hoping that if the notebook could see the endpoint, I would be able to fire a query using the view.


 

Hi @msprog,
Thanks for reaching out to the Microsoft Fabric community forum.

 

I would also like to take a moment to thank @tayloramy for actively participating in the forum and for the solutions you've been sharing. Your contributions make a real difference.

I hope the above details help you fix the issue. If you still have any questions or need more help, feel free to reach out. We're always here to support you.

Best Regards,
Community Support Team.

Hello @msprog,

 

I am also part of the CST team, and we'd like to confirm whether your issue has been successfully resolved. If you still have any questions or need further assistance, please don't hesitate to reach out. We're more than happy to continue supporting you.

 

Regards,

B Manikanteswara Reddy


NaveenUpadhye
Regular Visitor

No, you can't.

Lakehouse data is natively accessible from PySpark notebooks, without going through the SQL endpoint.
