woldea
New Member

Need Help

I am trying to ingest Lakehouse tables into a Fabric Warehouse in the same workspace. In my config file, I have the table name as dbo.tableName, and in the PySpark notebook I set the target warehouse name as a variable: warehouse = "DatabaseName". I got this error when I run the notebook: [REQUIRES_SINGLE_PART_NAMESPACE] spark_catalog requires a single-part namespace, but got `DatabaseName`.`dbo`.
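
For context, here is a minimal sketch of how this error can arise. The notebook code is not shown in the post, so the read call and variable names below are illustrative assumptions only.

# Hypothetical reconstruction (not the original notebook code).
warehouse = "DatabaseName"   # target warehouse name from the notebook
table = "dbo.tableName"      # table name as stored in the config file

# Combining the two yields the three-part identifier DatabaseName.dbo.tableName.
# The default spark_catalog accepts only database.table, so it rejects the
# two-part namespace `DatabaseName`.`dbo` with [REQUIRES_SINGLE_PART_NAMESPACE].
df = spark.read.table(f"{warehouse}.{table}")   # spark is the notebook's built-in session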

2 ACCEPTED SOLUTIONS
dinesh_7780
Resolver IV

Hi @woldea,

That error is actually pointing to a namespace mismatch between how Spark expects to reference tables and how Fabric Warehouse uses schemas.

 

Follow the steps below to fix the issue.

 

You need to align the namespace and table references:

 

1. Set the warehouse variable correctly. Instead of including both the database and the schema, use just the database name:

 

warehouse = "DatabaseName"

 

2. Reference the table with the schema inside the table name. When you specify the table, include the schema as part of the identifier:

 

table = "dbo.tableName"

 

3. Use the correct catalog path. For example:

 

df.write.saveAsTable(f"{warehouse}.{table}")

 

This resolves to DatabaseName.dbo.tableName, which is interpreted as database = DatabaseName and table = dbo.tableName.
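
Putting the three steps together, a minimal sketch of the pattern described above might look like this; the DataFrame df and the Lakehouse source table name are placeholders, not names from the thread.

# Sketch of the approach above (placeholder names for illustration).
warehouse = "DatabaseName"    # step 1: database name only, no schema
table = "dbo.tableName"       # step 2: schema kept inside the table identifier

# Assumed source: a Lakehouse table in the same workspace.
df = spark.read.table("lakehouseTableName")

# Step 3: write using the {database}.{table} reference.
df.write.saveAsTable(f"{warehouse}.{table}")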

 

Alternative approach:

 

You can create the table in the Fabric Warehouse without a schema prefix (the default schema is dbo).

 

Then reference it simply as:

 

df.write.saveAsTable(f"{warehouse}.tableName")

 

If my response has resolved your issue, please mark it as the solution and give kudos.

 


v-hjannapu
Community Support

Hi @woldea,
I would also like to take a moment to thank @dinesh_7780 for actively participating in the community forum and for the solutions you've been sharing. Your contributions make a real difference.
 

I wanted to check if you had the opportunity to review the information provided. Please feel free to contact us if you have any further questions.

Regards,
Community Support Team.


3 REPLIES

Hi @dinesh_7780,

Thank you so much for your workaround; it fixed the main culprit, and I was able to read from and write to the warehouse using PySpark. My primary goal was to use the synapsesql method to process incremental loads to the warehouse, but this method only allows overwriting the target tables and does not allow upserts from a notebook. Have you ever achieved this in a notebook, or is this something Microsoft plans to address in future releases?

Thanks
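
For reference, below is a minimal sketch of the synapsesql read/write described above, assuming the Fabric Spark connector for Data Warehouse is available in the notebook. The import line and the warehouse/table names are assumptions based on the connector's documented usage, not code from this thread.

# Assumed usage of the Fabric Spark connector for Data Warehouse (synapsesql).
# The import below follows the connector's documented PySpark usage and is an
# assumption here, as are the warehouse and table names.
import com.microsoft.spark.fabric

# Read a warehouse table into a DataFrame.
df = spark.read.synapsesql("DatabaseName.dbo.tableName")

# Write back: as noted above, the connector can overwrite the target table,
# but it offers no built-in upsert/merge from the notebook side.
df.write.mode("overwrite").synapsesql("DatabaseName.dbo.tableName")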
