Hello Fabric Community,
I am reaching out to see if anyone else is encountering a similar issue with SparkSQL behavior in Fabric.
I have noticed a change in how SparkSQL interacts with schemas. A few days ago, I executed the following script without any issues:
USE SCHEMA RawStore;
CREATE OR REPLACE TEMPORARY VIEW castTypes AS
SELECT
CAST(FLOOR(toto) AS STRING) AS toto_r,
CAST(FLOOR(rara) AS STRING) AS rara_t,
CAST(FLOOR(rite) AS STRING) AS rite_ligne,
CAST(FLOOR(vvsf) AS STRING) AS vvsf_fiche
FROM mvst;
SELECT * FROM castTypes LIMIT 1000;
This worked perfectly fine. However, I am now receiving the following error:
[SCHEMA_NOT_FOUND] The schema `rawstore` cannot be found. Verify the spelling and correctness of the schema and catalog. If you did not qualify the name with a catalog, verify the current_schema() output, or qualify the name with the correct catalog. To tolerate the error on drop use DROP SCHEMA IF EXISTS.
Interestingly, when I prefix the schema name with the workspace and lakehouse name, like workspace.lakehouse.rawstore:
USE SCHEMA workspace.lakehouse.rawstore;
CREATE OR REPLACE TEMPORARY VIEW castTypes AS
SELECT
CAST(FLOOR(toto) AS STRING) AS toto_r,
CAST(FLOOR(rara) AS STRING) AS rara_t,
CAST(FLOOR(rite) AS STRING) AS rite_ligne,
CAST(FLOOR(vvsf) AS STRING) AS vvsf_fiche
FROM mvst;
SELECT * FROM castTypes LIMIT 1000;
The script works again. This indicates that there may be a change in schema context management in SparkSQL within Fabric.
Despite my lakehouse being set up with schema options enabled, and the schemas existing properly within my lakehouse, I am puzzled by this change in behavior.
Has anyone else faced this issue? If so, how did you resolve it? Any insights or suggestions would be greatly appreciated!
Thank you for your help!
Hi SKONA,
Thank you for contacting the Microsoft Fabric Community Forum and for the detailed explanation.
Based on my understanding, the behavior you are observing is related to how Spark SQL resolves catalog and schema context in Fabric, particularly when using Lakehouse Schemas (Preview). Fabric has recently aligned with the standard Spark/Unity Catalog namespace resolution, so your Spark session is no longer automatically scoped to your lakehouse's catalog.
When you execute USE SCHEMA RawStore;, Spark searches for RawStore in the current catalog, which in your session is not the lakehouse catalog, causing the error [SCHEMA_NOT_FOUND] The schema 'rawstore' cannot be found. When you run USE SCHEMA workspace.lakehouse.rawstore;, you explicitly specify the correct catalog and schema, so the query succeeds. This issue may occur after runtime updates or when switching between notebook and pipeline sessions.
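As a quick sanity check, you can inspect where the session is currently scoped and then set the context explicitly. This is a sketch; workspace and lakehouse are placeholders for your actual item names:

```sql
-- Inspect the session's current catalog and schema
SELECT current_catalog(), current_schema();

-- Explicitly scope the session to the lakehouse schema
-- (workspace and lakehouse are placeholders for your actual names)
USE SCHEMA workspace.lakehouse.rawstore;
```

If current_schema() does not return the schema you expect, the unqualified USE SCHEMA RawStore; will fail exactly as you observed.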
To resolve the issue, explicitly set the schema context at the start of each session using the fully qualified name, for example: USE SCHEMA workspace.lakehouse.rawstore;
Alternatively, use fully qualified table paths as shown below. This approach works for pipelines and production workloads:
FROM workspace.lakehouse.rawstore.mvst
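Applied to your original script, the fully qualified form would look like this (workspace and lakehouse are placeholders for your actual workspace and lakehouse names):

```sql
CREATE OR REPLACE TEMPORARY VIEW castTypes AS
SELECT
  CAST(FLOOR(toto) AS STRING) AS toto_r,
  CAST(FLOOR(rara) AS STRING) AS rara_t,
  CAST(FLOOR(rite) AS STRING) AS rite_ligne,
  CAST(FLOOR(vvsf) AS STRING) AS vvsf_fiche
FROM workspace.lakehouse.rawstore.mvst;

SELECT * FROM castTypes LIMIT 1000;
```

Because every table reference carries its own catalog and schema, this version does not depend on the session's current context at all, which is why it is the more robust choice for pipelines.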
For further reference, please see the link below:
Lakehouse schemas (Preview) - Microsoft Fabric | Microsoft Learn
We hope the information helps resolve the issue. If you have any further queries, please feel free to contact the Microsoft Fabric community.
Thank you.
Hi SKONA,
We would like to follow up and see whether the details we shared have resolved your problem. If you need any more assistance, please feel free to connect with the Microsoft Fabric community.
Thank you.
Thank you for your kind words, I'm glad the information was helpful!