DataEnjoyer
Regular Visitor

Unable to read SQL Endpoint view into a Notebook dataframe

Hi All,

We are trying to use a view in a SQL analytics endpoint as the source for a dataframe in a notebook.

When we query it, we get an error like [TABLE_OR_VIEW_NOT_FOUND].

The code looks like this (the view is under the dbo schema):

 

df = spark.sql("SELECT * FROM lakehousename.viewname LIMIT 1000")
display(df)

 



Is this possible? I feel like it won't be :*(  but staying positive.

Thanks! 

1 ACCEPTED SOLUTION
AndyDDC
Super User

Hi @DataEnjoyer, no, I don't believe this will work. The view was created using the SQL analytics endpoint, and there is no metadata sync from the SQL endpoint to the Lakehouse (it's the other way round).

 

You could use the JDBC connector in the notebook to query the view in the SQL endpoint, or create the view in the Lakehouse using Spark.

 

edit: I believe pyodbc would work here 
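A sketch of the pyodbc route under stated assumptions: the server name and view name below are placeholders (the real connection string comes from the Lakehouse's SQL analytics endpoint settings), ODBC Driver 18 is assumed to be installed, and the token-based connect shown in the comments is the common pyodbc access-token pattern, not run here.

```python
def build_odbc_connection_string(server: str, database: str) -> str:
    """Build an ODBC connection string for a Fabric SQL analytics endpoint.

    Assumes ODBC Driver 18 for SQL Server is installed; authentication
    is handled separately (e.g. via an access-token connect attribute).
    """
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={server};"
        f"Database={database};"
        "Encrypt=yes;"
    )

conn_str = build_odbc_connection_string(
    "yourendpoint.datawarehouse.fabric.microsoft.com",  # placeholder server
    "lakehousename",                                    # placeholder database
)

# Hypothetical usage with pyodbc and an Entra access token (not run here):
# import struct, pyodbc
# token_bytes = access_token.encode("utf-16-le")
# token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)
# conn = pyodbc.connect(conn_str, attrs_before={1256: token_struct})  # 1256 = SQL_COPT_SS_ACCESS_TOKEN
# rows = conn.cursor().execute("SELECT * FROM dbo.viewname").fetchall()
```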


6 REPLIES
daanblinde
New Member

Having the same issue here. A lot of people use SQL statements stored in views to build their ETL logic, so it would be very nice if Microsoft supported reading from views in a notebook. I came across the following example using the Spark connector, which works perfectly for me as a workaround: https://fabric.guru/querying-sql-endpoint-of-fabric-lakehousewarehouse-in-a-notebook-with-t-sql


Hi, 

I can read data from the endpoint using the pyodbc package. Now I would like to store these results in a delta table in the Lakehouse. Is this possible?

 

Thank you 😄 
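One way to bridge pyodbc results into a Lakehouse delta table, as a minimal sketch: the column names and rows below are placeholder data standing in for a real cursor result, and the Spark write in the comments assumes a Fabric notebook session with a default Lakehouse attached.

```python
def rows_to_records(columns, rows):
    """Convert pyodbc-style row tuples plus a list of column names
    into a list of dicts that spark.createDataFrame() accepts."""
    return [dict(zip(columns, row)) for row in rows]

# Hypothetical pyodbc fetch (not run here):
# cursor = conn.cursor()
# cursor.execute("SELECT id, name FROM dbo.viewname")
# columns = [c[0] for c in cursor.description]
# rows = cursor.fetchall()

columns = ["id", "name"]          # placeholder result shape
rows = [(1, "a"), (2, "b")]       # placeholder rows

records = rows_to_records(columns, rows)

# In a Fabric notebook, the records could then be written as delta
# (table name is a placeholder):
# df = spark.createDataFrame(records)
# df.write.format("delta").mode("overwrite").saveAsTable("lakehousename.target_table")
```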

Hi,

Do you know how to get authentication to work against the SQL endpoint? I have tried different combinations of the statement below, but it is not working:

df = spark.read\
    .format("jdbc") \
    .option("url", f"jdbc:sqlserver://***.datawarehouse.pbidedicated.windows.net:1433;database=***") \
    .option("dbtable", "someTable") \
    .option("authentication", "ActiveDirectoryIntegrated") \
    .option("encrypt", "true") \
    .option("clientid", "***") \
    .option("hostNameInCertificate", "*.pbidedicated.windows.net") \
    .load()
display(df)
 
Thanks 🙂

Hi, 

Yes, that helped, thanks 🙂

I had to add the service principal to the workspace (CREATE USER is not a supported statement type), but after that, I can now read views in a Lakehouse from a notebook.
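For reference, the service-principal variant of the earlier JDBC attempt might look like this sketch: the server name, client ID, and secret are placeholders, and it assumes the Microsoft JDBC driver's ActiveDirectoryServicePrincipal authentication mode, where the client ID goes in `user` and the secret in `password`.

```python
def jdbc_options(server: str, database: str, client_id: str, client_secret: str) -> dict:
    """Assemble spark.read JDBC options for service-principal auth
    against a Fabric SQL analytics endpoint (names are placeholders)."""
    return {
        "url": f"jdbc:sqlserver://{server}:1433;database={database};encrypt=true",
        "authentication": "ActiveDirectoryServicePrincipal",
        "user": client_id,          # the app registration's client ID
        "password": client_secret,  # the app registration's client secret
    }

opts = jdbc_options(
    "yourendpoint.datawarehouse.fabric.microsoft.com",  # placeholder server
    "lakehousename",
    "<client-id>",
    "<client-secret>",
)

# Hypothetical usage in a notebook (the service principal must be
# granted access to the workspace first, as noted above):
# df = (spark.read.format("jdbc")
#       .options(**opts)
#       .option("dbtable", "dbo.viewname")
#       .load())
```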
