Py4JJavaError: An error occurred while calling o324.sql. : com.microsoft.fabric.spark.metadata.DoesNotExistException: Artifact not found: workspaceABC.bronze_lakehouse
Solved!
Hi @eddy1980,
In fact, I can run this code against both kinds of lakehouse, schema-enabled and not. (I modified the code to remove the workspace and lakehouse prefixes, set the notebook's default lakehouse, and queried tables that exist in the current default lakehouse.)
df = spark.sql("SELECT * FROM Question")
display(df)
Snapshots:
BTW, was the lakehouse's SQL endpoint generated successfully, and did it sync the tables from the lakehouse you are using? If not, I'd suggest reporting this to the dev team so they can check the root cause and fix it more quickly.
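For anyone landing here with the same `Artifact not found` error: the workaround above amounts to dropping the `workspace.lakehouse` prefix and relying on the notebook's pinned default lakehouse. A minimal sketch of how the identifier forms compose (illustrative helper only, with hypothetical names; the actual resolution is done by Fabric/Spark):

```python
def qualified_name(table, schema=None, lakehouse=None, workspace=None):
    """Compose a Spark SQL table identifier from its optional prefixes.

    Illustrative only: a schema-enabled Fabric lakehouse resolves up to
    workspace.lakehouse.schema.table, while a bare table name resolves
    against the notebook's default (pinned) lakehouse.
    """
    parts = [p for p in (workspace, lakehouse, schema, table) if p]
    return ".".join(f"`{p}`" for p in parts)

# Fully qualified form, like the one that raised the error in prod:
print(qualified_name("Question", "dbo", "bronze_lakehouse", "workspaceABC"))
# → `workspaceABC`.`bronze_lakehouse`.`dbo`.`Question`

# Bare name against the pinned default lakehouse (the workaround above):
print(qualified_name("Question"))
# → `Question`
```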
Regards,
Xiaoxin Sheng
Thanks @Anonymous . I ran your script and it didn't work. This issue only happened at our prod workspace. It works fine in uat and dev workspace.
In addition, when I ran below script in the notebook. It also got same error. Please see the screenshto below.
spark.sql("SHOW SCHEMAS").show(truncate=False)
I suspect it is caused by a setting in the prod workspace. However, when I compared the settings between UAT and prod, they are the same. Could you please advise further?
Hi @eddy1980,
Any update on this? Did the above help? If not, feel free to post here.
Regards,
Xiaoxin Sheng
Hi @Anonymous, thanks for following up on this. I couldn't resolve it, so I had to create a lakehouse without the schema feature.
Hi @eddy1980,
Currently, it does not seem possible to switch directly between a common lakehouse and a schema-enabled lakehouse.
If you are working with common lakehouse tables, you can use the table name of the pinned lakehouse directly, without any prefix.
df = spark.sql("SELECT * FROM Question")
display(df)
In addition, I would not recommend adding any special characters to lakehouse, schema, or table names; they may affect queries.
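To make the naming difference concrete (a sketch with hypothetical names; the actual resolution is done by Fabric/Spark):

```python
# A common (non-schema) lakehouse has no schema level: with the lakehouse
# pinned as the notebook default, a bare table name resolves directly.
common_form = "SELECT * FROM Question"

# A schema-enabled lakehouse inserts a schema (dbo by default) between the
# lakehouse and the table, so the qualified form has one more level.
schema_form = "SELECT * FROM my_lakehouse.dbo.Question"  # hypothetical names

# Identifier depth is the only difference between the two forms:
for q in (common_form, schema_form):
    print(q)
```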
Regards,
Xiaoxin Sheng
Thanks @Anonymous. I tried your script and it doesn't work. This issue only happens in our prod workspace in Fabric; it works fine in the UAT workspace.
In addition, if I run the script below to show the schemas of the lakehouse, I get the same "Artifact not found" error. Please see the screenshot below.
spark.sql("SHOW SCHEMAS").show(truncate=False)
I guess it could be caused by a setting of the workspace or the lakehouse. I compared the workspace settings between prod and UAT; they are the same. Is there a command I can run in the notebook to enable schemas on a lakehouse?
Hi @eddy1980,
I tested with your script and it works well on my side. Have you set the notebook's default lakehouse so it can quickly resolve this resource? In addition, you can try using the table name directly in the query string if the table is hosted in the default dbo schema.
Regards,
Xiaoxin Sheng