Hello all,
I am struggling to resolve an error arising from what I thought would be a trivial task in my workflow. In my Bronze lakehouse, I have loaded a raw file as a Spark dataframe and would now like to save it as a table in the Tables section of the lakehouse in a newly created schema. I tried the command:
df.write.mode("overwrite").saveAsTable("dbo.test_table")for my dataframe df, and many other variations, but get thrown the following error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
Cell In[62], line 1
----> 1 test_df.write.mode("overwrite").saveAsTable("dbo.test_table")
File /opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py:1586, in DataFrameWriter.saveAsTable(self, name, format, mode, partitionBy, **options)
1584 if format is not None:
1585 self.format(format)
-> 1586 self._jwrite.saveAsTable(name)
File ~/cluster-env/trident_env/lib/python3.11/site-packages/py4j/java_gateway.py:1322, in JavaMember.__call__(self, *args)
1316 command = proto.CALL_COMMAND_NAME +\
1317 self.command_header +\
1318 args_command +\
1319 proto.END_COMMAND_PART
1321 answer = self.gateway_client.send_command(command)
-> 1322 return_value = get_return_value(
1323 answer, self.gateway_client, self.target_id, self.name)
1325 for temp_arg in temp_args:
1326 if hasattr(temp_arg, "_detach"):
File /opt/spark/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py:179, in capture_sql_exception.<locals>.deco(*a, **kw)
177 def deco(*a: Any, **kw: Any) -> Any:
178 try:
--> 179 return f(*a, **kw)
180 except Py4JJavaError as e:
181 converted = convert_exception(e.java_exception)
File ~/cluster-env/trident_env/lib/python3.11/site-packages/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
331 "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
332 format(target_id, ".", name, value))
Py4JJavaError: An error occurred while calling o8037.saveAsTable.
: com.microsoft.fabric.spark.catalog.metadata.DoesNotExistException: Artifact not found: `<no-lakehouse-workspace-specified>`.`My_Bronze_LH`
at com.microsoft.fabric.spark.catalog.metadata.NamespaceResolver.getArtifact(pathResolvers.scala:280)
at com.microsoft.fabric.spark.catalog.metadata.NamespaceResolver.inferRealNamespace(pathResolvers.scala:287)
at com.microsoft.fabric.spark.catalog.metadata.NamespaceResolver.$anonfun$toRealNamespace$1(pathResolvers.scala:264)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1705)
at com.microsoft.fabric.spark.catalog.metadata.NamespaceResolver.toRealNamespace(pathResolvers.scala:264)
at com.microsoft.fabric.spark.catalog.metadata.NamespaceResolver.toNamespace(pathResolvers.scala:254)
at com.microsoft.fabric.spark.catalog.metadata.DefaultSchemaMetadataManager.createSchema(DefaultSchemaMetadataManager.scala:52)
at com.microsoft.fabric.spark.catalog.metadata.MetadataManager.createSchema(MetadataManager.scala:213)
at com.microsoft.fabric.spark.catalog.metadata.InstrumentedMetadataManager.super$createSchema(MetadataManager.scala:345)
at com.microsoft.fabric.spark.catalog.metadata.InstrumentedMetadataManager.$anonfun$createSchema$1(MetadataManager.scala:345)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.microsoft.fabric.spark.catalog.metadata.Helpers$.timed(Helpers.scala:89)
at com.microsoft.fabric.spark.catalog.metadata.InstrumentedMetadataManager.createSchema(MetadataManager.scala:345)
at com.microsoft.fabric.spark.catalog.OnelakeExternalCatalog.createDatabase(OnelakeExternalCatalog.scala:59)
at com.microsoft.fabric.spark.catalog.InstrumentedExternalCatalog.$anonfun$createDatabase$1(OnelakeExternalCatalog.scala:416)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.microsoft.fabric.spark.catalog.metadata.Helpers$.timed(Helpers.scala:89)
at com.microsoft.fabric.spark.catalog.InstrumentedExternalCatalog.createDatabase(OnelakeExternalCatalog.scala:416)
at org.apache.spark.sql.internal.SharedState.liftedTree2$1(SharedState.scala:215)
at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:187)
at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:169)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:54)
at org.apache.spark.sql.hive.HiveSessionStateBuilder.$anonfun$catalog$1(HiveSessionStateBuilder.scala:69)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog$lzycompute(SessionCatalog.scala:145)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.externalCatalog(SessionCatalog.scala:145)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.databaseExists(SessionCatalog.scala:375)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.requireDbExists(SessionCatalog.scala:292)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.$anonfun$getTableRawMetadata$2(SessionCatalog.scala:635)
at org.apache.spark.microsoft.onesecurity.OneSecurityTelemetry$.executeAndLogMetric(OneSecurityTelemetry.scala:425)
at org.apache.spark.microsoft.onesecurity.OneSecurityTelemetry$.executeAndLogMetricMs(OneSecurityTelemetry.scala:455)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableRawMetadata(SessionCatalog.scala:634)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.getTableMetadata(SessionCatalog.scala:618)
at org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.loadTable(V2SessionCatalog.scala:81)
at org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension.loadTable(DelegatingCatalogExtension.java:73)
at org.apache.spark.sql.delta.catalog.DeltaCatalog.super$loadTable(DeltaCatalog.scala:192)
at org.apache.spark.sql.delta.catalog.DeltaCatalog.$anonfun$loadTable$1(DeltaCatalog.scala:192)
at org.apache.spark.sql.delta.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:169)
at org.apache.spark.sql.delta.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:167)
at org.apache.spark.sql.delta.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:65)
at org.apache.spark.sql.delta.catalog.DeltaCatalog.loadTable(DeltaCatalog.scala:191)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:606)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:593)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:374)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:829)

I cannot understand why it complains about <no-lakehouse-workspace-specified>, as I just want to save the table in the current workspace and lakehouse that I am working from. On the left of the screen, under "Explorer/Data items", I can see my lakehouse and its existing tables.
Any help on how to save a Spark dataframe from a lakehouse notebook as a table in a given schema would be very much appreciated. Many thanks in advance!
Hi @bertanyilmaz,
That error means Spark cannot resolve a default Lakehouse context for the managed table write. In Fabric, saveAsTable writes a managed Delta table that is bound to the notebook’s default Lakehouse. If no default is set (or the wrong one is pinned), you get messages like <no-lakehouse-workspace-specified>. Also, if you are using schemas, you must create the schema first and reference it correctly.
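If the Bronze Lakehouse is not pinned as the notebook's default, attach it from the Explorer pane and mark it as default, or declare it up front with the %%configure session magic. A minimal sketch, assuming the Lakehouse is named My_Bronze_LH; the id and workspaceId values are placeholders taken from the item URLs and, as I understand it, can be omitted when the Lakehouse sits in the current workspace (run this as the first cell, before the Spark session starts):

%%configure
{
    "defaultLakehouse": {
        "name": "My_Bronze_LH",
        "id": "<lakehouse-id>",
        "workspaceId": "<workspace-id>"
    }
}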
spark.sql("CREATE SCHEMA IF NOT EXISTS sales")
df.write.format("delta").mode("overwrite").saveAsTable("sales.test_table") Docs on schemas: Lakehouse schemas. Fabric Lakehouse tables are Delta: Lakehouse and Delta tables.# For schema-enabled lakehouse, include schema in the pathCommunity note about saveAsTable requiring a default Lakehouse: Using "Save As Table" Without a default lakehouse.
df.write.format("delta").mode("overwrite").save("Tables/sales/test_table")
spark.sql("CREATE SCHEMA IF NOT EXISTS sales")
spark.sql("CREATE TABLE IF NOT EXISTS sales.test_table USING delta LOCATION 'Tables/sales/test_table'")
spark.sql("REFRESH TABLE sales.test_table")
If you found this helpful, consider giving some Kudos. If I answered your question or solved your problem, mark this post as the solution.