Hello,
I need to retrieve data from a Lakehouse in a Fabric Notebook. I am using the following code:
df = spark.sql(f"SELECT * FROM {sql_view_name}")
This works fine if I put a table name there... but if I set a SQL view name in the {sql_view_name} variable, it no longer works (Error: "Table or view not found").
This issue happens even in the dbo schema.
If I run the same SQL from the SQL Endpoint, the statement works fine.
May I ask for your help?
As a workaround, I also tried:
df = spark.sql(f"CREATE OR ALTER TABLE dbo.TempExportTable AS SELECT * FROM {sql_view_name}");
df = spark.sql(f"SELECT * FROM dbo.TempExportTable");
But CREATE OR ALTER does not seem to be supported either 😭
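(For reference, CREATE OR ALTER is T-SQL syntax; the closest Spark SQL equivalent on a Delta table would be CREATE OR REPLACE TABLE, sketched below, though even that would fail here because Spark cannot resolve the view.)

# Sketch of the Spark SQL / Delta syntax -- this still fails in this scenario,
# since the view is not visible to the Spark engine:
spark.sql(f"CREATE OR REPLACE TABLE dbo.TempExportTable AS SELECT * FROM {sql_view_name}")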
Thank you in advance for your help!
Best regards,
Nicolas
Hi @Nicolas_DHONT ,
You're right.
T-SQL notebooks in Microsoft Fabric don't support exporting to CSV or interacting with external storage the way PySpark notebooks do. You get the "Table or view not found" error because the view doesn't exist from the Lakehouse's (Spark) perspective; views are defined only at the SQL analytics endpoint level.
As a workaround, you can read the view via JDBC and then write it out:
df.write.mode("overwrite").csv("/lakehouse/default/Files/output.csv", header=True)
If the response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank You
Hi @Nicolas_DHONT ,
Do you want to use only PySpark? If you use a T-SQL notebook you can do it, but that is the SQL way, not PySpark.
Regards,
Srisakthi
Hi @Srisakthi,
The goal is to export the content of the view as CSV and upload the file to Azure Storage. I guess this is not possible in a T-SQL notebook... 🙁
Thanks and best regards,
Hi @Nicolas_DHONT,
You cannot query your SQL Analytics Endpoint directly from Spark, and since the views exist only at that level, you cannot use the normal notebook connector.
I made a quick workaround to get your data, based on this example written in Spark Scala. It connects directly to your SQL Analytics Endpoint so you can read the data from the view.
Hope this helps.
Regards,
Yohann
Hello @yvaubourg
Thank you for your help.
I started the notebook with a PySpark cell to retrieve parameters from the pipeline launching it, then I created a temp txt file to pass the variables to the next cell, where I extract the data, generate my CSV, and store it in a temp folder. Finally, a last PySpark cell uploads the file from the temp folder to Azure Storage.
It looks DIY because the product is not mature enough... but it works!
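For anyone landing here later, that last upload cell could look roughly like this (a sketch assuming the notebookutils file utilities and placeholder storage paths; the identity running the notebook needs write access to the container):

# Hedged sketch of the final cell: copy the CSV from the notebook's local
# temp folder to Azure Storage. Container, account, and paths are placeholders.
from notebookutils import mssparkutils

local_path = "file:/tmp/export/output.csv"  # temp file written by the previous cell
remote_path = "abfss://<container>@<account>.dfs.core.windows.net/exports/output.csv"

mssparkutils.fs.cp(local_path, remote_path)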
Thank you for your help 😁