Nicolas_DHONT
Regular Visitor

Not able to SELECT from a view in Fabric Notebooks with spark.sql()

Hello,

I need to retrieve data from a Lakehouse in a Fabric Notebook. I am using the following code:

df = spark.sql(f"SELECT * FROM {sql_view_name}")


This works fine if I put a table name there... but if I put a SQL view name in the {sql_view_name} variable, it no longer works (error: "Table or view not found").

This issue happens even in the dbo schema.

If I run the same SQL from the SQL analytics endpoint, the statement works fine.

Could you help me with this?


Besides, as a workaround, I tried:

df = spark.sql(f"CREATE OR ALTER TABLE dbo.TempExportTable AS SELECT * FROM {sql_view_name}")
df = spark.sql("SELECT * FROM dbo.TempExportTable")


But CREATE OR ALTER doesn't seem to be supported either 😭

Thank you in advance for your help!

Best regards,
Nicolas

 

1 ACCEPTED SOLUTION
yvaubourg
Resolver I

Hi @Nicolas_DHONT,

 

Spark cannot query the SQL analytics endpoint directly, and since views exist only at that level, you cannot read them with the normal notebook connector.

 

I put together a quick workaround to get your data, based on this example written in Spark Scala. It connects directly to your SQL analytics endpoint over JDBC, so you can read the data from the view.

 

[screenshot: yvaubourg_3-1747996790865.png]

[screenshot: yvaubourg_0-1747996754221.png]
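The screenshots are Scala, but the same idea can be sketched in PySpark. This is only a sketch under assumptions: the endpoint host, database, and view names are placeholders, and it assumes the Fabric runtime provides mssparkutils for token acquisition and ships the Microsoft SQL Server JDBC driver.

```python
def make_jdbc_url(server, database):
    """Build a SQL Server JDBC URL for a Fabric SQL analytics endpoint."""
    return f"jdbc:sqlserver://{server};databaseName={database};encrypt=true"

def read_view_via_jdbc(spark, server, database, view_name):
    """Read a SQL view through the SQL analytics endpoint over JDBC.

    Assumes it runs inside a Fabric notebook, where mssparkutils can issue
    an Entra ID access token and the SQL Server JDBC driver is on the cluster.
    """
    from notebookutils import mssparkutils  # Fabric runtime utility
    token = mssparkutils.credentials.getToken("https://database.windows.net/")
    return (
        spark.read.format("jdbc")
        .option("url", make_jdbc_url(server, database))
        .option("dbtable", view_name)
        .option("accessToken", token)
        .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
        .load()
    )

# Example call (all values are placeholders):
# df = read_view_via_jdbc(spark,
#                         "your-endpoint.datawarehouse.fabric.microsoft.com",
#                         "YourLakehouse", "dbo.YourView")
```

You can copy the real endpoint host from the SQL analytics endpoint's connection settings in the workspace.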

 

Hope this helps.

 

Regards,

Yohann

 


5 REPLIES
v-sdhruv
Community Support

Hi @Nicolas_DHONT ,
You're right.
T-SQL notebooks in Microsoft Fabric don't support exporting to CSV or interacting with external storage the way PySpark notebooks do. You get the "Table or view not found" error because the view doesn't exist from the Lakehouse's Spark perspective.
As a workaround, you can read the view via JDBC and then write it out:
df.write.mode("overwrite").csv("/lakehouse/default/Files/output.csv", header=True)

If the response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank You


Srisakthi
Super User

Hi @Nicolas_DHONT ,

 

Do you want to use only PySpark? If you use a T-SQL notebook you can do it, but that's the SQL way, not PySpark.

 

[screenshot: Srisakthi_0-1748004843107.png]

 

Regards,

Srisakthi

Hi @Srisakthi,

The goal is to export the content of the view as CSV and upload the file to Azure Storage. I guess this is not possible in a T-SQL notebook... 🙁

Thanks and best regards,


 

Hello @yvaubourg

Thank you for your help.


I started the notebook with a PySpark block to retrieve the parameters from the pipeline that launches it, then I created a temp text file to pass the variables to the next block, where I extract the data, generate my CSV, and store it in a temp folder. Finally, a last PySpark block uploads the file from the temp folder to Azure Storage.

It feels DIY because the product is not mature enough... but it works!

Thank you for your help 😁
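The block-to-block hand-off and the final upload described above can be sketched like this. It is only an illustration under assumptions: the JSON temp-file layout is my own choice, the upload uses the azure-storage-blob SDK, and all names and paths are placeholders.

```python
import json

def save_params(params, path):
    """Write pipeline parameters to a temp file so the next block can read them."""
    with open(path, "w") as f:
        json.dump(params, f)

def load_params(path):
    """Read the parameters back in a later notebook block."""
    with open(path) as f:
        return json.load(f)

def upload_file_to_azure(local_path, connection_string, container, blob_name):
    """Upload the generated CSV to Azure Blob Storage (needs azure-storage-blob)."""
    from azure.storage.blob import BlobServiceClient
    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(container=container, blob=blob_name)
    with open(local_path, "rb") as f:
        blob.upload_blob(f, overwrite=True)
```

The parameter round-trip keeps the blocks decoupled; only the last block needs the storage connection string.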
