
UdaySutar
Frequent Visitor

Read view data using notebook

We want to use a view defined in a SQL analytics endpoint as the source for a DataFrame in a notebook.

When we query it, we get an error like [TABLE_OR_VIEW_NOT_FOUND].

The code looks like this; the view is under the sales schema.

2 ACCEPTED SOLUTIONS
nilendraFabric
Super User

Try this

 

# PySpark read via the Fabric Spark connector. The original snippet mixed
# Scala imports with Python syntax; these are the Python-style imports.
import com.microsoft.spark.fabric
from com.microsoft.spark.fabric.Constants import Constants

t_sql_query = """
SELECT * FROM sales.your_view_name
"""

# Assumes these Spark conf keys were set earlier in the notebook;
# otherwise hard-code the workspace ID and SQL endpoint name directly.
wsid = spark.conf.get("ws_id")
lh_name = spark.conf.get("lh_name")

df = (spark.read
      .option(Constants.WorkspaceId, wsid)
      .option(Constants.DatabaseName, lh_name)
      .synapsesql(t_sql_query))

df.createOrReplaceTempView("view_data")


V-yubandi-msft
Community Support

Hi @UdaySutar ,

It appears that this approach might not work as expected because views created in a SQL Analytics Endpoint aren’t visible on the Lakehouse side. The metadata sync only happens from the Lakehouse to the SQL Endpoint, not the other way around.

As an alternative:

1. Use a JDBC connector in your notebook to directly query the view from the SQL Endpoint, or

2. Recreate the view within the Lakehouse using Spark so that it becomes accessible in your notebook.
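A minimal sketch of option 1, assuming a hypothetical endpoint host, database, and view name (substitute your own; authentication is environment-specific and omitted here):

```python
# Sketch of option 1: query the SQL analytics endpoint view over JDBC.
# The endpoint host, database, and view names below are placeholders.

def build_jdbc_url(server: str, database: str) -> str:
    """Build a SQL Server JDBC URL for a Fabric SQL analytics endpoint."""
    return (
        f"jdbc:sqlserver://{server}:1433;"
        f"database={database};encrypt=true;trustServerCertificate=false"
    )

def read_view(spark, server: str, database: str, view: str):
    """Read a view from the SQL endpoint into a DataFrame.

    Authentication is environment-specific (for example, an Azure AD access
    token passed via the JDBC "accessToken" option) and is not shown here.
    """
    return (
        spark.read.format("jdbc")
        .option("url", build_jdbc_url(server, database))
        .option("dbtable", view)
        .load()
    )

# Example wiring (placeholders, not real values):
jdbc_url = build_jdbc_url(
    "<endpoint>.datawarehouse.fabric.microsoft.com", "<lakehouse_name>"
)
```

Because the JDBC read goes to the SQL endpoint itself rather than through the Lakehouse catalog, it sees views that exist only on the SQL side.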

 

 If my response resolved your query, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.

 


6 REPLIES
V-yubandi-msft
Community Support

Hi @UdaySutar ,

We noticed we haven't received a response from you yet, so we wanted to follow up and ensure the solution we provided addressed your issue. If you require any further assistance or have additional questions, please let us know.

Your feedback is valuable to us, and we look forward to hearing from you soon.

V-yubandi-msft
Community Support

Hi @UdaySutar ,

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions. If my response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.

Thank you.

V-yubandi-msft
Community Support

Hi @UdaySutar ,

May I ask if you have resolved this issue? If so, please mark the helpful reply and accept it as the solution. This will be helpful for other community members who have similar problems to solve it faster.

Thank you.



Vinodh247
Resolver III

What do you get when you try this?

spark.sql("SHOW TABLES IN sales").show()

 

Things to check:

 

Case Sensitivity or Incorrect Naming:

  • Identifier matching in Spark depends on configuration (spark.sql.caseSensitive); use the exact case of schema and view names to be safe.

  • For example: if the view is Sales.DailySummary, you must use that exact casing.

Default Database Context Missing:

  • If you are not specifying the fully qualified name (schema.viewname or database.schema.viewname), Spark may not know which catalog or schema to search.

View is Not Accessible from Spark:

  • The view must be created or published in the SQL analytics endpoint tied to the lakehouse or warehouse.

  • Make sure the view is materialized or created in the same SQL analytics endpoint that your notebook is connected to.

View is a Virtual/External View Not Backed by Delta Lake or Tables:

  • Only views over actual physical tables (Delta format or supported file-backed sources) are accessible from Spark.

  • Avoid views built purely on unsupported or dynamic external sources.
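The checks above can be pulled together in a few lines; a sketch assuming the hypothetical view sales.DailySummary (substitute your own names):

```python
# Sketch: diagnose [TABLE_OR_VIEW_NOT_FOUND] by checking what Spark can see.
# "sales" / "DailySummary" are placeholder names.

def qualified_name(schema: str, view: str) -> str:
    """Return the fully qualified name, preserving exact casing."""
    return f"{schema}.{view}"

def diagnose(spark, schema: str, view: str):
    # 1. List everything Spark sees in the schema.
    spark.sql(f"SHOW TABLES IN {schema}").show(truncate=False)
    # 2. Query with the fully qualified name instead of relying on the
    #    default database context.
    return spark.sql(f"SELECT * FROM {qualified_name(schema, view)} LIMIT 10")

target = qualified_name("sales", "DailySummary")
```

If the view does not appear in the SHOW TABLES output at all, the problem is visibility (see the metadata-sync point above), not naming.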

 
