
Kostas
Helper IV

LakeHouse Delta Tables not found

Hello, 

 

I am having an issue and cannot tell whether I am doing something wrong or whether it is a known Microsoft issue.

I want to consolidate my reporting universe data under a single lakehouse so I can use both notebooks and the SQL analytics endpoint. To do that in a structured way, I need my delta tables to live in different schemas.

Loading tables with PySpark directly into a schema is relatively easy and straightforward (a sketch of what I mean follows below). For the Gen2 Dataflow, though, similar to the process for warehouses, I have to load the table under the dbo schema (I cannot select a schema when choosing the destination), move the table from dbo to the new schema, then go back to the Gen2 Dataflow and map it to the existing table under that schema.
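
For context, a minimal sketch of the PySpark path I mean, run in a Fabric notebook where the spark session is predefined (the schema name, table name, and source path here are just placeholders):

    # Create the target schema if it does not exist yet (placeholder name).
    spark.sql("CREATE SCHEMA IF NOT EXISTS sales")

    # Read some source data (placeholder path) and write it as a delta table
    # directly under the target schema.
    df = spark.read.format("csv").option("header", "true").load("Files/raw/orders.csv")
    df.write.format("delta").mode("overwrite").saveAsTable("sales.Orders")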
For the lakehouse, though, when I try to identify the existing table through the Data Destination in the Gen2 Dataflow, expanding the lakehouse folder shows none of the otherwise available tables, so I cannot select them as the destination table.
Do I need to follow a different approach? Is this a known issue, or am I doing something wrong?

1 ACCEPTED SOLUTION
Avyaktha
Frequent Visitor

Hi Kostas 

 

Yes, using a Fabric Pipeline is the recommended way to automate and schedule the post-processing step (e.g., moving tables from dbo to the target schema).

You can set this up as follows:

  1. Keep the Gen2 Dataflow without its own schedule.

  2. Create a Fabric Pipeline that has two activities:

    • Activity 1: Run the Gen2 Dataflow (this ingests data into dbo).

    • Activity 2: Run a Notebook or T-SQL script to rename/move the tables into their target schemas (see the sketch after this list).

  3. Schedule the pipeline, not the Dataflow.
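
For Activity 2, a minimal PySpark notebook sketch of the rename step. The target schema name and the use of spark.catalog.listTables to enumerate the dbo tables are assumptions for illustration, not part of the original answer:

    # Runs in a Fabric notebook, where the spark session is predefined.
    target_schema = "reporting"  # placeholder schema name
    spark.sql(f"CREATE SCHEMA IF NOT EXISTS {target_schema}")

    # Move every table the Dataflow landed in dbo into the target schema.
    for t in spark.catalog.listTables("dbo"):
        spark.sql(f"ALTER TABLE dbo.{t.name} RENAME TO {target_schema}.{t.name}")

Scheduling the pipeline then covers both steps, since the Dataflow itself runs as Activity 1.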

Thank you 

Avyaktha 




v-kpoloju-msft
Community Support

Hi @Kostas,

Thank you for reaching out to the Microsoft Fabric Community Forum. Also, thanks to @Avyaktha for their inputs on this thread.

Has your issue been resolved? If the response provided by the community member @Avyaktha addressed your query, could you please confirm? It helps us ensure that the solutions provided are effective and beneficial for everyone.

Hope this helps clarify things. Let me know what you find after giving these steps a try; happy to help you investigate this further.

Thank you for using the Microsoft Community Forum.

Hi @Kostas,

Just wanted to follow up one last time. If the shared guidance worked for you, that's wonderful; hopefully it also helps others looking for similar answers. If there's anything else you'd like to explore or clarify, don't hesitate to reach out.

Thank you.


Avyaktha
Frequent Visitor

Hi Kostas

I hope you are doing well.

 

This isn’t something you’re doing wrong — it’s a known limitation of Gen2 Dataflows in Microsoft Fabric.
Currently, when using a Lakehouse as the destination, Gen2 Dataflows only write to the default dbo schema and don’t support schema selection. They also don’t display tables under custom schemas in the “Data Lake” destination picker, which is why you can’t see or select them.

You can refer to this document: https://learn.microsoft.com/en-us/fabric/data-factory/dataflow-gen2-data-destinations-and-managed-se...

 

The recommended approach is to:

  1. Ingest data using Gen2 Dataflows into the dbo schema (default Lakehouse tables folder).

  2. After the dataflow run, use a Notebook or SQL script (via a Fabric pipeline activity) to move or rename the table into the desired schema.

     
    ALTER TABLE dbo.TableName RENAME TO targetschema.TableName;
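
One detail the rename depends on (an addition for clarity, not from the original answer): the target schema has to exist before the ALTER TABLE runs, e.g.:

    -- "targetschema" is a placeholder name
    CREATE SCHEMA IF NOT EXISTS targetschema;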

This is the officially suggested workaround until schema selection for Lakehouse destinations is supported, which Microsoft has on their roadmap but hasn’t released yet.

Thank you 
Avyaktha 

Hello @Avyaktha,

 

Thanks for taking the time to respond to my query.

 

Although that doesn't seem efficient at all, I will implement that solution. 
If I am to schedule and automate that approach, would you recommend doing so by creating a pipeline to run the processes?
Also, if I go with that approach, do I need to schedule a refresh in both the pipeline and the Gen2 Dataflow, or will scheduling the refresh in the pipeline alone refresh the Dataflow as well?
