I am trying to reference a temp view that I created, but I am unable to do so even when qualifying it with the catalog schema global_temp in the statement:
Hi @AyusmanBasu0604 ,
May I ask if you have resolved this issue? Please let us know if you have any further issues, we are happy to help.
Thank you.
Hi @AyusmanBasu0604 ,
I hope the information provided is helpful. I wanted to check whether you were able to resolve the issue with the provided solutions. Please let us know if you need any further assistance.
Thank you.
@AyusmanBasu0604 Hey,
The error is related to limitations on Spark views in Lakehouse environments.
Please work through the steps below to verify:
1) Ensure that the schema and table formats are compatible with Spark.
2) Sometimes, issues arise from unsupported data types or formats in the Lakehouse environment.
3) Instead of creating a temporary view, you might consider using a global temporary view, which persists across all sessions until the Spark application terminates:
Try this code -
```python
from pyspark.sql import SparkSession

# Create a global temporary view (visible across sessions in the same Spark application)
spark.sql("CREATE OR REPLACE GLOBAL TEMP VIEW temp_grades AS SELECT * FROM Bronze.tbl_grades")

# Global temp views are resolved under the global_temp schema
df = spark.sql("SELECT * FROM global_temp.temp_grades")
df.show()
```
4) If you can't use views, consider caching the DataFrame. This approach won't directly replace a materialized view but can improve performance
```python
df = spark.sql("SELECT * FROM Bronze.tbl_grades")
df.cache()  # keep the DataFrame in memory for reuse
# Perform transformations
df.show()
```
5) If the only reason to avoid DataFrames is due to materialized views, consider saving the transformed DataFrame to a temporary location and then recreating the view/table. While this approach uses more I/O, it might work:
```python
df = spark.sql("SELECT * FROM Bronze.tbl_grades")
# Perform transformations
df.write.mode("overwrite").save("/path/to/temp/location")
# Load back from the temp location into a new table/view
spark.sql("CREATE OR REPLACE VIEW new_table AS SELECT * FROM delta.`/path/to/temp/location`")
```
Thanks
Harish M
First of all, thanks for your detailed suggestions @HarishKM
point# 3 - With global temp views the issue is the same; I had tried this multiple times earlier.
point# 5 - I tried saving the DataFrame to a temp location. The save worked, but when referencing it I got the error: Py4JJavaError: An error occurred while calling o348.sql.
: java.lang.AssertionError: assertion failed: Only the following table types are supported: MANAGED, MATERIALIZED_LAKE_VIEW
I tried using a Materialized Lake View, and I get the original error again, the one I was getting while trying to reference a global/temp view directly in a Materialized Lake View: [TABLE_OR_VIEW_NOT_FOUND]
Hi @AyusmanBasu0604,
Thank you for your response. You're encountering known limitations in Microsoft Fabric's Lakehouse when working with temporary views and Materialized Lake Views (MLVs) using Spark SQL.
Reference: Lakehouse schemas (Preview) - Microsoft Fabric | Microsoft Learn
Fabric Lakehouse does not support reading from temporary or global temporary views in certain operations, especially when working with Materialized Lake Views, which have stricter limitations. This includes referencing views created with Spark SQL in another SQL statement or notebook cell, and using a TEMP VIEW or GLOBAL TEMP VIEW in MLV creation or inside CREATE OR REPLACE VIEW.
You can try this workaround: use a staging Delta table. Instead of a TEMP VIEW, create a real physical Delta table (possibly under a temporary/staging schema), then reference it in your MLV. This works because you're using a physical Delta table, and Spark and Fabric support referencing those in MLVs.
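A minimal sketch of that workaround, with illustrative table/schema names (Silver.stg_grades, Silver.mlv_grades are assumptions, not from the thread), and the MLV DDL wording based on Fabric's preview Spark SQL syntax, so please verify it against your runtime:

```python
def mlv_ddl(view_name: str, staging_table: str) -> str:
    """Build the DDL that defines an MLV over a physical staging table.

    The MATERIALIZED LAKE VIEW syntax follows Fabric's preview docs and
    may differ in your environment.
    """
    return (
        f"CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS {view_name} "
        f"AS SELECT * FROM {staging_table}"
    )


# In a Fabric notebook (where the `spark` session is predefined):
# 1) Persist the intermediate result as a physical Delta table
#    df = spark.sql("SELECT * FROM Bronze.tbl_grades")
#    df.write.mode("overwrite").format("delta").saveAsTable("Silver.stg_grades")
# 2) Reference the physical table, not a temp view, in the MLV
#    spark.sql(mlv_ddl("Silver.mlv_grades", "Silver.stg_grades"))
```

The key point is step 2: the MLV definition names only a managed Delta table, so it avoids the [TABLE_OR_VIEW_NOT_FOUND] error that temp views trigger.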
Hope this helps.
Best Regards
Chaithra E.