sushiat
Regular Visitor

Lakehouse partitioned table with schema

I created a new lakehouse with schemas enabled and copied a table from an Oracle source into it, specifying a schema name. I also partitioned the table on two keys.

 

In the SQL endpoint the schema isn't shown (I think that's not supported yet), but the table is available in the dbo schema and the data can be read.

 

When opening the lakehouse explorer, the table isn't inside the dbo or custom schema; it shows up as a new schema with the name of the table!? Expanding that tree node only shows an "Unidentified" folder, which then contains the delta folders:

 

sushiat_0-1732205566502.png

I looked through both the limitations section of the schemas public preview documentation and the known issues list, but can't find any mention of this.

1 ACCEPTED SOLUTION

Hi @sushiat ,

 

The preview feature can be a bit limiting for now, so we use alternatives when they are available.

 

In this case, you can create a lakehouse without the preview feature enabled, create a new schema in its corresponding SQL endpoint, and then move the new table into that schema.
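Concretely, the workaround described here could look like the following on the lakehouse's SQL analytics endpoint (a sketch only; the schema and table names are taken from this thread and may differ in your workspace):

```sql
-- Create the target schema on the SQL analytics endpoint
CREATE SCHEMA banner;
GO

-- Move the table (landed in dbo by the copy activity) into the new schema
ALTER SCHEMA banner TRANSFER dbo.b_donegal_students;
```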

 

I'm sure the preview feature will improve in the near future, so let's keep our fingers crossed.

 

Best Regards,
Yang
Community Support Team

 

If any post helps, please consider accepting it as the solution to help other members find it more quickly.
If I misunderstood your needs or you still have problems, please feel free to let us know. Thanks a lot!


7 REPLIES 7
v-huijiey-msft
Community Support

Hi @sushiat ,

 

Thanks for the reply from FabianSchut .

 

I ran a test, and creating a new table in a schema other than dbo causes problems.

 

Try copying the table from the Oracle source while specifying the dbo schema instead of the new schema you created, and check whether you still have problems.
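In the copy activity's sink settings, that change would look something like the fragment below (illustrative only; the table name is taken from this thread):

```json
"datasetSettings": {
  "type": "LakehouseTable",
  "typeProperties": {
    "schema": "dbo",
    "table": "b_donegal_students"
  }
}
```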

 

Alternatively, don't use a lakehouse with schemas enabled (preview); use a regular lakehouse and see if it works.

 

If you want to create a new schema and create new tables in it, you can do so with SQL statements on the SQL endpoint corresponding to the lakehouse, as illustrated below:

 

CREATE SCHEMA test_schema

 

vhuijieymsft_0-1732698069990.png

 

For example, I have a table named test in dbo. Using the following statement, you can move it to the schema named test_schema.

 

ALTER SCHEMA test_schema TRANSFER dbo.test

 

vhuijieymsft_1-1732698069994.png
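
To verify the transfer, you can list tables with their schemas using the standard T-SQL catalog views on the SQL endpoint:

```sql
-- List each table with its schema to confirm where the table now lives
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
ORDER BY s.name, t.name;
```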

 

If you have any other questions please feel free to contact me.

 

Best Regards,
Yang
Community Support Team

 


Hi Yang,

 

I ran those tests now. Using the data pipeline to write the table to dbo.b_donegal_students still fails: it creates a schema with that table name next to dbo, instead of inside dbo:

 

sushiat_0-1732790169841.png

Using a non-schema lakehouse works fine; that's what we used in our current prototype implementation. But I was hoping the schema preview was advanced enough to adopt it when developing the final implementation rather than having to redo it later, as organising all those tables into schemas would be hugely beneficial for our distributed systems.

 

sushiat_1-1732790322604.png

I was trying to anticipate what could cause us issues when trying out the schema-enabled lakehouses, but I'm just a bit surprised that it's the basic data pipeline copy activity.


sushiat
Regular Visitor

Sure, here is a screenshot of the destination config of the copy activity from the pipeline:

 

sushiat_0-1732270121789.png

EDIT: I also tried removing the partitioning; that didn't solve it either, it still created the table as a schema. I checked the file committed to GitHub and it looks fine to me:

 

"sink": {
  "type": "LakehouseTableSink",
  "tableActionOption": "OverwriteSchema",
  "partitionOption": "PartitionByKey",
  "partitionNameList": [
    "TERM_CODE",
    "DEPT_CODE"
  ],
  "datasetSettings": {
    "type": "LakehouseTable",
    "typeProperties": {
      "schema": "banner",
      "table": "b_donegal_students"
    },
    "schema": [],
    "linkedService": {
      "name": "lh_bs_banner",
      "properties": {
        "type": "Lakehouse",
        "typeProperties": {
          "artifactId": "056ac282-842a-4960-bf6b-0f528210a889",
          "workspaceId": "00000000-0000-0000-0000-000000000000",
          "rootFolder": "Tables"
        },
        "annotations": []
      }
    },
    "annotations": []
  }
},

 

Just to be sure, I can see a banner schema in your screenshot too. Is your table created there? 

Afraid not; I created that empty schema when I created the lakehouse, before setting up the data pipeline.

 

I also tried deleting it and running the pipeline again, no change.

FabianSchut
Super User

Hi, can you show us your copy script/activity? 
