Hello Everyone,
I am new to Microsoft Fabric and need help as I am not sure what I am missing.
So I have created a Lakehouse and loaded several tables into it using 'Copy Activity'. I can now see these tables in the Lakehouse and can query them through the SQL analytics endpoint.
The issue I am facing is when I try to load these tables into a Fabric Notebook using Spark for transformation.
When I create a new Notebook, I am able to select my Lakehouse as the source and I can see all the tables in the left-hand side explorer. However, when I try to read these tables using spark.read.table("<table_name>") or spark.sql("select * from <table_name>"), I get the error "AnalysisException: [TABLE_OR_VIEW_NOT_FOUND] The table or view `DTX_DATA`.`ATTEMPTS` cannot be found."
When I run spark.catalog.listTables(), I get [] as the output.
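For reference, here is a minimal sketch of what I am running in the notebook (the bare table name is just for illustration):

# the reads that fail for me (illustrative table name)
df = spark.read.table("ATTEMPTS")          # raises [TABLE_OR_VIEW_NOT_FOUND]
df = spark.sql("SELECT * FROM ATTEMPTS")   # same error

# diagnostics
print(spark.catalog.currentDatabase())     # which database is the session pointing at?
print(spark.catalog.listTables())          # returns [] in my case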
Can someone explain what I am missing here?
Any help will be appreciated.
Thanks in advance 🙂
@Krishna_Maganti, as we haven't heard back from you, we wanted to check in and see if the resolution provided helped.
If you’re still facing any issues or have additional questions, please don’t hesitate to let us know.
We’re here to help and would be happy to assist further if needed. Looking forward to your feedback!
Thanks,
Prashanth Are
MS Fabric community support.
Did we answer your question? Mark post as a solution, this will help others!
If our response(s) assisted you in any way, don't forget to drop me a "Kudos"
Hi @Krishna_Maganti,
I tried to repro your scenario and I'm able to load tables without any issues. Please find the below screenshot for reference.
As of now, there are no known issues for the mentioned scenario. Please let me know if you're still blocked on this; it may be worth raising a support ticket so that a dedicated team can look into the error you're facing.
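For reference, a rough sketch of the repro steps (with the Lakehouse attached as the notebook default; the table name here is a placeholder):

# read a table from the attached Lakehouse by name
df = spark.read.table("my_table")
df.show(5)

# equivalent SQL read
spark.sql("SELECT COUNT(*) FROM my_table").show()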
Thanks,
Prashanth Are
MS Fabric community support.
Did we answer your question? Mark post as a solution, this will help others identify similar issues easily!
If our response(s) assisted you in any way, don't forget to drop me a "Kudos"
Thank you so much for your reply Prashanth.
As you can see from the screenshot below, I am still having the same issue.
However, after trying different things, one thing that worked was reading the table using the following code.
df = spark.read.format("delta").load('Tables/RAW_DATA/ATTEMPTS')
Once loaded into 'df', I then wrote the table back to the Lakehouse:
df.write.mode("overwrite").format("delta").saveAsTable('RAW_DATA.attempts_df')
After doing these steps, I can then read the new 'RAW_DATA.attempts_df' directly.
This works for now, but I am still not sure why I cannot read the tables directly. It also does not feel like good practice in terms of efficiency, as I have many tables and would have to duplicate each one.
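In case it helps anyone else, one way to at least script this instead of duplicating tables by hand is a loop like the sketch below. It assumes the Tables/<schema>/<table> folder layout from my Lakehouse and that mssparkutils is available in the Fabric notebook; the _df suffix is just my naming from above.

from notebookutils import mssparkutils  # Fabric notebook utility (assumed available)

schema = "RAW_DATA"
for entry in mssparkutils.fs.ls(f"Tables/{schema}"):   # each subfolder should be a Delta table
    if entry.isDir:
        df = spark.read.format("delta").load(entry.path)
        df.write.mode("overwrite").format("delta").saveAsTable(f"{schema}.{entry.name}_df")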
Hi @Krishna_Maganti,
Thanks for actively engaging with MS Fabric community support.
The spark.read.table() method looks for a registered table name, while spark.read.format("delta").load() reads directly from the storage path without requiring registration.
Using spark.read.format("delta").load('Tables/RAW_DATA/ATTEMPTS') bypasses the need for metastore registration and reads the Delta table directly from the specified storage path.
Writing it back with saveAsTable registers the table in the metastore, making it accessible to spark.read.table() in future operations. Using registered tables is the best practice for scalability and maintainability.
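A short sketch contrasting the two access patterns, using the paths and names from your example:

# path-based read: goes straight to the Delta folder, no metastore lookup
df = spark.read.format("delta").load("Tables/RAW_DATA/ATTEMPTS")

# register the data once so the name resolves through the metastore
df.write.mode("overwrite").format("delta").saveAsTable("RAW_DATA.attempts_df")

# name-based read: works now that the table is registered
df2 = spark.read.table("RAW_DATA.attempts_df")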
Thanks,
Prashanth Are
MS Fabric community support
Did we answer your question? Mark post as a solution, this will help others!
If our response(s) assisted you in any way, don't forget to drop me a "Kudos"
Thank you so much for clarifying this!
If you have multiple lakehouses, your notebook can only have one default; all non-default lakehouses must be referenced with two- or three-part naming, like lakehouse.table or lakehouse.schema.table.
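For example (the lakehouse, schema, and table names below are placeholders):

# default lakehouse: a bare table name resolves
df = spark.read.table("attempts")

# non-default lakehouse: qualify the name
df = spark.read.table("other_lakehouse.attempts")            # two-part
df = spark.read.table("other_lakehouse.raw_data.attempts")   # three-part (schema-enabled lakehouse)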
Proud to be a Super User!
Thanks for your reply. I have only one Lakehouse in my OneLake.
@Krishna_Maganti, Thanks for reaching out to MS Fabric community support.
@nilendraFabric, Thanks for your prompt response.
In addition to what @nilendraFabric mentioned, please refer to the generic resource below, which covers similar error details, and let me know if it helps resolve your case in Fabric.
TABLE_OR_VIEW_NOT_FOUND error class - Azure Databricks | Microsoft Learn
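If it helps with debugging, here is a small sketch for surfacing the error class described on that page (assuming PySpark 3.4+, where AnalysisException is exposed in pyspark.errors):

from pyspark.errors import AnalysisException

try:
    spark.read.table("ATTEMPTS")   # the name from your error, for illustration
except AnalysisException as e:
    print(e)                       # message includes [TABLE_OR_VIEW_NOT_FOUND]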
Thanks,
Prashanth Are
MS Fabric community support
If my answer helps please Mark post as a solution, this will help others! If our response(s) assisted you in any way, don't forget to drop me a "Kudos".
Hello @Krishna_Maganti
Try these options (a combined sketch follows the list):
Detach and reattach the Lakehouse to your notebook. This can often resolve issues where Spark cannot recognize tables in the Lakehouse.
Verify the exact case of the table and schema names in your Lakehouse and use them accordingly in your queries (e.g., `spark.sql("SELECT * FROM <schema_name>.<table_name>")`).
Fully qualify the table name with its catalog and schema (e.g., `spark.read.table("catalog.schema.table")`). Alternatively, set the default schema using `spark.sql("USE SCHEMA <schema_name>")`.
Refresh the metadata by running a command like `REFRESH TABLE <table_name>`.
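A rough sketch putting these options together (schema and table names are placeholders):

# after detaching/reattaching the Lakehouse in the notebook UI:
spark.sql("USE SCHEMA raw_data")                      # set the default schema
spark.sql("REFRESH TABLE raw_data.attempts")          # refresh cached metadata
df = spark.read.table("raw_data.attempts")            # schema-qualified read
spark.sql("SELECT * FROM raw_data.attempts").show()   # equivalent SQL read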