I am trying to reference a temp view that I have created, but I am unable to do so even when qualifying it with the global_temp catalog schema in my statement.
Hi @AyusmanBasu0604 ,
We’d like to follow up regarding the recent concern. Kindly confirm whether the issue has been resolved, or if further assistance is still required. We are available to support you and are committed to helping you reach a resolution.
Best Regards,
Chaithra E.
Hi @AyusmanBasu0604 ,
May I ask if you have resolved this issue? Please let us know if you have any further issues; we are happy to help.
Thank you.
Hi @AyusmanBasu0604 ,
I hope the information provided is helpful. I wanted to check whether you were able to resolve the issue with the provided solutions. Please let us know if you need any further assistance.
Thank you.
@AyusmanBasu0604 Hey,
The error relates to limitations on Spark views in Lakehouse environments. Please work through the steps below to verify:
1) Ensure that the schema and table formats are compatible with Spark.
2) Sometimes issues arise from unsupported data types or formats in the Lakehouse environment; inspecting the table's schema (see the sketch below) can rule this out.
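For example, a quick sanity check for points 1 and 2 - a minimal sketch assuming the Bronze.tbl_grades table from this thread and the spark session that Fabric notebooks pre-create:
# Inspect the table's declared schema and column types
spark.sql("DESCRIBE TABLE Bronze.tbl_grades").show(truncate=False)
# Or load it into a DataFrame and print the inferred schema
df = spark.sql("SELECT * FROM Bronze.tbl_grades")
df.printSchema()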
3) Instead of creating a temporary view, you might consider a global temporary view, which persists across all sessions until the Spark application terminates. Try this code:
from pyspark.sql import SparkSession

# spark is the SparkSession pre-created in Fabric notebooks
spark.sql("CREATE OR REPLACE GLOBAL TEMP VIEW temp_grades AS SELECT * FROM Bronze.tbl_grades")
df = spark.sql("SELECT * FROM global_temp.temp_grades")
df.show()
4) If you can't use views, consider caching the DataFrame. This won't directly replace a materialized view, but it can improve performance:
df = spark.sql("SELECT * FROM Bronze.tbl_grades")
df.cache()
# Perform transformations
df.show()
5) If the only reason to avoid DataFrames is due to materialized views, consider saving the transformed DataFrame to a temporary location and then recreating the view/table. While this approach uses more I/O, it might work:
df = spark.sql("SELECT * FROM Bronze.tbl_grades")
# Perform transformations
df.write.mode("overwrite").save("/path/to/temp/location")
# Load back from the temp location into a new view
spark.sql("CREATE OR REPLACE VIEW new_table AS SELECT * FROM delta.`/path/to/temp/location`")
Thanks
Harish M
First of all, thanks for your detailed suggestions @HarishKM.
Point #3 - With global temp views the issue is the same; I had tried this multiple times earlier.
Point #5 - I tried saving the DataFrame to a temp location. The save worked, but when I called it back I got the error: Py4JJavaError: An error occurred while calling o348.sql.
: java.lang.AssertionError: assertion failed: Only the following table types are supported: MANAGED, MATERIALIZED_LAKE_VIEW
I also tried using a Materialized Lake View, and it came back to the original error I was getting when trying to reference a global/temp view directly in a Materialized Lake View: [TABLE_OR_VIEW_NOT_FOUND]
@AyusmanBasu0604 Hey,
Please work through the steps below to troubleshoot.
- Ensure that the global/temp views are being created correctly, with the proper scope and namespaces. Verify the default database context, and make sure you aren't trying to access views across different sessions without the necessary context (see the verification sketch after this list).
- Double-check that you are in the same session or context when you reference these views in your SQL operations. Scoping issues can cause views not to be found if they aren't global or session-persistent.
- Since saving to a temporary location seems to work, build on that approach with efficient file storage. You might store intermediate results in universally accessible storage, such as HDFS or cloud storage, instead of relying solely on temp views.
- Ensure that the Materialized Lake View is being created with the correct SQL schema declarations and verify that the errors are not stemming from syntax or schema mismatch issues.
- Use consistent and clear namespace practices, ensuring that all views/tables are accessible in your SQL environment. Re-declare or map contexts if needed.
- Enable detailed logging of PySpark operations to uncover hidden discrepancies in SQL calls. It can pinpoint exactly where the table or view is not found, helping you adjust your code accordingly.
- Sometimes platform-specific solutions might be available in official documentation or user forums. Consult these resources for additional pointers on handling scope and view references.
If the issues persist, examine the logs for the root cause of the TABLE_OR_VIEW_NOT_FOUND error and review the SQL statements involved.
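As a starting point for the first two checks, here is a minimal sketch, assuming the temp_grades view from earlier in this thread and the pre-created spark session:
# Raise log verbosity so failed lookups show more detail
spark.sparkContext.setLogLevel("DEBUG")
# List the views registered under the global_temp database
for view in spark.catalog.listTables("global_temp"):
    print(view.name, view.isTemporary)
# Confirm the current database context and whether the view resolves
print(spark.catalog.currentDatabase())
print(spark.catalog.tableExists("global_temp.temp_grades"))
If the view is missing from that listing, it was created in a different session or never registered under global_temp.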
Thanks
Harish M
Hi @AyusmanBasu0604,
Thank you for your response. You're encountering known limitations in Microsoft Fabric's Lakehouse when working with temporary views and Materialized Lake Views (MLVs) using Spark SQL.
Reference: Lakehouse schemas (Preview) - Microsoft Fabric | Microsoft Learn
Fabric Lakehouse does not support reading from temporary or global temporary views in certain operations, especially when working with Materialized Lake Views, which have stricter limitations: referencing views created with Spark SQL from another SQL statement or notebook cell, or using a TEMP VIEW or GLOBAL TEMP VIEW in MLV creation or inside CREATE OR REPLACE VIEW, is not supported.
You can try this workaround: use a staging Delta table. Instead of a TEMP VIEW, create a real physical Delta table (possibly under a temporary/staging schema) and then reference it in your MLV. This works because you're using a physical Delta table, which Spark and Fabric support referencing in MLVs.
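A minimal sketch of that workaround, assuming the Bronze.tbl_grades source from this thread; the staging and Silver names are illustrative, and the CREATE MATERIALIZED LAKE VIEW statement should be verified against the current Fabric MLV documentation:
# Create an illustrative staging schema if it doesn't exist yet
spark.sql("CREATE SCHEMA IF NOT EXISTS staging")
# Materialize the (transformed) data as a physical Delta table
df = spark.sql("SELECT * FROM Bronze.tbl_grades")
# ... perform transformations here ...
df.write.mode("overwrite").format("delta").saveAsTable("staging.tbl_grades_stg")
# Reference the physical Delta table, not a temp view, in the MLV
spark.sql("""
    CREATE MATERIALIZED LAKE VIEW IF NOT EXISTS Silver.mlv_grades
    AS SELECT * FROM staging.tbl_grades_stg
""")
Because staging.tbl_grades_stg is a managed Delta table rather than a session-scoped view, the MLV definition can resolve it at creation and refresh time.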
Hope this helps.
Best Regards
Chaithra E.