Py4JJavaError: An error occurred while calling o324.sql. : com.microsoft.fabric.spark.metadata.DoesNotExistException: Artifact not found: workspaceABC.bronze_lakehouse
Hi @eddy1980,
In fact, I can run this code on both lakehouses, with and without schemas enabled. (I modified the code to remove the workspace and lakehouse prefixes, set the default lakehouse, and queried tables that exist in the current default lakehouse.)
df = spark.sql("SELECT * FROM Question")
display(df)
Snapshots: (screenshots omitted.)
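For reference, a minimal sketch of the two naming styles; the qualified form reuses workspaceABC and bronze_lakehouse from the error message above and is only my assumption about the shape of the failing query:
# Fully qualified form (workspace.lakehouse.schema.table) - the style the
# "Artifact not found" error in this thread points at (assumed, commented out):
# df = spark.sql("SELECT * FROM workspaceABC.bronze_lakehouse.dbo.Question")
# Unqualified form, resolved against the notebook's default lakehouse:
df = spark.sql("SELECT * FROM Question")
display(df)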
BTW, was the SQL analytics endpoint of the lakehouse you used generated successfully, and did it sync the tables? If not, I'd suggest reporting this to the dev team to help them check the root cause and fix it more quickly.
Regards,
Xiaoxin Sheng
Hi @eddy1980. I know this is almost a year old, but I just ran into this exact same error this week and found a solution. My sandbox environment ran fine using the schema-enabled lakehouse; however, when I ran the exact same code in dev on the schema-enabled lakehouse, the code failed. Same naming conventions and everything across objects. I got the same "Artifact not found" error shown above. The same code attached to a regular lakehouse (without schemas) worked just fine.
Digging in a little more, I ran a script in the dev environment to try to figure out if something looked funny there, and I saw that the default workspace had a leading space in its name.
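The exact script isn't shown here, but a minimal diagnostic along these lines would expose stray whitespace; notebookutils.runtime.context and its key name are my assumption about the Fabric runtime, and repr() is plain Python:
# Hypothetical check, not the original script. repr() makes leading or
# trailing whitespace visible, e.g. ' Prod Workspace'.
ctx = notebookutils.runtime.context  # Fabric notebook session context
print(repr(ctx.get("currentWorkspaceName")))  # key name assumed; may differ
for db in spark.catalog.listDatabases():  # schemas appear as databases here
    print(repr(db.name))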
As it turns out, somebody fat-fingered a space in there when creating the workspace. There is a bug (or feature!) in schema-enabled lakehouses where names with special characters, including spaces, don't work. This is actually mentioned near the top and again at the bottom of the documentation for lakehouse schemas, which is somehow still in public preview after a year.
I verified this by looking at the name side-by-side with the sandbox environment. Sure enough, there was the space shifting it over slightly. I can't believe I didn't notice it earlier.
I had our admin change the workspace name to remove the space, and bingo - the code ran fine afterwards.
I hope your problem has long since been solved, but in case you've been up at night randomly thinking about this problem from a year ago, maybe this will help!
Thanks @Anonymous. I ran your script and it didn't work. This issue only happens in our prod workspace; it works fine in the UAT and dev workspaces.
In addition, when I ran the script below in the notebook, it failed with the same error.
spark.sql("SHOW SCHEMAS").show(truncate=False)
I guess it is caused by a setting in the prod workspace. However, when I compared the settings between UAT and prod, they are the same. Could you please advise further?
Hi @eddy1980,
Any update on this? Did the above help? If not, feel free to post here.
Regards,
Xiaoxin Sheng
Hi @Anonymous, thanks for following up on this. I couldn't resolve it; I had to create a lakehouse without the schema feature.
Hi @eddy1980,
Currently, it seems that switching directly between a common lakehouse and a schema-enabled lakehouse is not supported.
If you are working with common lakehouse tables, you can use the pinned lakehouse table name directly, without any prefix:
df = spark.sql("SELECT * FROM Question")
display(df)
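For a schema-enabled lakehouse, by contrast, the documented naming is lakehouse.schema.table; a small sketch reusing names from this thread (dbo is the default schema a schema-enabled lakehouse starts with):
# Schema-qualified reference for a schema-enabled lakehouse (names assumed
# from this thread; dbo is the default schema):
df = spark.sql("SELECT * FROM bronze_lakehouse.dbo.Question")
display(df)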
In addition, I would not recommend adding any special characters to lakehouse, schema, or table names; they may affect queries.
Regards,
Xiaoxin Sheng
Thanks @Anonymous. I tried your script and it doesn't work. This issue only happens in our prod workspace in Fabric; it works fine in the UAT workspace.
In addition, if I run the script below to show the schemas of the lakehouse, I get the same "Artifact not found" error.
spark.sql("SHOW SCHEMAS").show(truncate=False)
I guess it could be caused by a workspace or lakehouse setting. I compared the workspace settings between prod and UAT, and they are the same. Is there a command I can run in the notebook to enable schemas on a lakehouse?
Hi @eddy1980,
I tested your script and it works well on my side. Have you set the default lakehouse of the notebook so it can quickly reference this resource? In addition, you can try using the table name directly in the query string if the table is hosted in the default dbo schema; a sketch of pinning the default lakehouse is below.
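A minimal sketch of pinning a default lakehouse from the notebook itself, assuming the %%configure magic that Fabric notebooks support; the name is taken from this thread and the IDs are placeholders:
%%configure
{
    "defaultLakehouse": {
        "name": "bronze_lakehouse",
        "id": "<lakehouse-id>",
        "workspaceId": "<workspace-id>"
    }
}
Run this in the first cell before the Spark session starts; afterwards, unqualified names like Question resolve against this lakehouse.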
Regards,
Xiaoxin Sheng