We are evaluating Microsoft Fabric and have encountered what appears to be a tenant‑level capability issue. Our Dataflows Gen2 environment cannot access Lakehouse Files. Symptoms:
- Lakehouse (Files) connector does not appear
- `/lakehouse/<name>/Files/...` paths are treated as external and require a gateway
- Folder and Parquet connectors require a gateway even for internal OneLake paths
- Lakehouse connector only shows SQL tables, not the Files hierarchy
- Cannot drill into `Files → Raw Data → <company> → <table>`
- Cannot combine Parquet files
- Cannot configure incremental refresh
This behavior indicates that the OneLake internal provider is not enabled for our tenant.
Tenant ID: accb73ee-bf3a-4bad-a36f-5bbcf7e0524d
Region: United States (Fabric 60‑day trial; exact Fabric region not visible in this tenant, domain hosted via GoDaddy)
We need confirmation of:
1. Whether our tenant is missing this capability
2. Whether our region is fully rolled out
3. Whether this can be manually enabled
4. Whether this limitation is expected for Fabric trial tenants and/or GoDaddy‑backed tenants
We are planning multi‑company ingestion and need Dataflows Gen2 to access Lakehouse Files. Any help would be appreciated!
Thank you.
Hello @pbiadminjenhill,
What you’re seeing is expected behavior based on the current design of Dataflows Gen2 in Microsoft Fabric.
Dataflows Gen2 connects to a Lakehouse through the SQL analytics endpoint, which exposes Delta tables only. It does not surface the Lakehouse Files folder structure. As a result:
The Lakehouse connector shows tables, not /Files/...
File paths are treated as external sources and may prompt for a gateway
Combining Parquet files directly from Lakehouse Files isn’t supported
Incremental refresh applies to table sources, not raw file paths
This is a product capability boundary, not a tenant, region, trial, or GoDaddy limitation.
If you need to transform Parquet files stored in Files, first materialize them as Delta tables (via Notebook or Pipeline). Once the data exists as a Lakehouse table, Dataflows Gen2 can access it and support transformations and incremental refresh.
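To make the "materialize Files into Tables" step concrete, here is a small pure-Python sketch of one piece of such a Notebook: deriving a target table name from a landing-zone file path. This is illustrative only; the `Files/Raw Data/<company>/<entity>/...` layout and the naming convention are assumptions based on the thread, not Fabric APIs (in a real Notebook you would pass the derived name to a Delta write such as `saveAsTable`).

```python
# Illustrative helper: map a landing-zone file path to a Delta table name.
# The folder layout (Files/Raw Data/<company>/<entity>/<file>) is an
# assumption from this thread, not a Fabric convention.
from pathlib import PurePosixPath

def target_table_name(file_path: str) -> str:
    """Derive a Lakehouse table name such as 'contoso_accounts' from a
    path like 'Files/Raw Data/Contoso/Accounts/part-0001.parquet'."""
    parts = PurePosixPath(file_path).parts
    # Expect: ('Files', 'Raw Data', <company>, <entity>, <file>)
    if len(parts) < 5 or parts[0] != "Files":
        raise ValueError(f"Unexpected landing-zone path: {file_path}")
    company, entity = parts[2], parts[3]
    return f"{company}_{entity}".lower().replace(" ", "_")

print(target_table_name("Files/Raw Data/Contoso/Accounts/part-0001.parquet"))
# contoso_accounts
```

One table per company/entity pair keeps the later Dataflows Gen2 connection simple, since each dataflow query can target exactly one Delta table.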
The files are all Parquet files. I did try clicking the [Table] link under the Content column, but nothing happened; it's like the sub-folder capability is not available for some reason?
Hi @pbiadminjenhill,
Thank you for reaching out to the Microsoft Fabric Community Forum and sharing the screenshot and the details. Also, thanks to @Olufemi7 and @deborshi_nag for their inputs on this thread.
Based on your scenario, this behaviour is expected and not related to your tenant, region, trial status, or GoDaddy licensing. Your Dataflows Gen2 is already connected correctly to OneLake. Currently, when accessing the Lakehouse Files section through the OneLake Catalog connector, Dataflows Gen2 may show files in a flattened view and does not always allow deeper folder-level navigation or direct transformation of Parquet files from nested folders.
To work with these files in Dataflows Gen2, the recommended approach is to first load the Parquet files into Lakehouse Delta tables. You can do this by going to your Lakehouse, navigating to Files, and selecting Load to Table, or by using a Fabric Notebook or Pipeline. Once the data is available under the Tables section, you can connect to it from Dataflows Gen2 without gateway prompts and use features like transformations and incremental refresh.
Refer to these links:
1. https://learn.microsoft.com/en-us/fabric/data-factory/dataflows-gen2-overview
2. https://learn.microsoft.com/en-us/fabric/onelake/onelake-overview
3. https://learn.microsoft.com/en-us/fabric/data-factory/connector-lakehouse
4. https://learn.microsoft.com/en-us/fabric/data-engineering/lakehouse-and-delta-tables
Thank you for using the Microsoft Fabric Community Forum.
Thank you for the guidance. I appreciate it!
Hi @pbiadminjenhill,
Just checking in to see if the issue has been resolved on your end. If the earlier suggestions helped, that's great to hear! And if you're still facing challenges, feel free to share more details and I'll be happy to assist further.
Thank you.
Thank you for this information. What I am doing is bringing data into a data lake for multiple subsidiaries of a master company, so I put each subsidiary in its own folder in a landing zone, and within each folder we have files like accounts, customers, etc. I am using a third-party connector to extract the data, and only incremental data gets added to the raw files in the data lake. These files already arrive in Parquet format.

I was hoping I could then use Dataflows Gen2 on top of these raw files, make some transformations, and move them to the Tables area. Today I do this via a pipeline; however, I have to overwrite the data every day rather than do an incremental refresh. It is my understanding that incremental refresh is much easier to do in Dataflows Gen2. Will I have to figure out a way to do it only in pipelines, and only once the data has moved into a table can I use Dataflows Gen2 for additional transformations?
Hi @deborshi_nag,
Thank you for explaining your ingestion process. Your approach of landing incremental Parquet files into the Lakehouse Files section is correct and follows a proper landing zone pattern. However, incremental refresh in Dataflows Gen2 is supported when working with structured Lakehouse tables (Delta tables), not directly with raw Parquet files in the Files section. Because of this, Dataflows Gen2 cannot manage incremental logic directly on those landing files. This is why you currently need to overwrite data when using pipelines, and it is expected behaviour when the incremental logic is not yet applied at the table level.
To resolve this, the recommended approach is to continue landing incremental Parquet files in the Files section but configure your Pipeline or Notebook to append or merge only new data into a Lakehouse Delta table instead of overwriting it. Once the data is maintained incrementally in the Tables section, you can connect Dataflows Gen2 to that table and perform further transformations efficiently without full reloads. In summary, Pipelines or Notebooks should handle incremental ingestion into Tables, and Dataflows Gen2 can then be used on top of those Tables for transformation and downstream processing.
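The append/merge pattern described above can be sketched in pure Python as a keyed upsert: rows whose key already exists are updated, and new keys are appended. This is only a local simulation of the logic; in Fabric you would express it as a Delta `MERGE` in a Notebook or an upsert in a Pipeline, and the dict-based "table" and `id` key here are assumptions for illustration.

```python
# Illustrative upsert: merge an incremental batch into an existing "table"
# keyed by a business key, instead of overwriting the whole table.
def merge_incremental(table: dict, new_rows: list, key: str) -> dict:
    """Return a new table with new_rows upserted: existing keys are
    updated, unseen keys are appended. The input table is not mutated."""
    merged = dict(table)
    for row in new_rows:
        merged[row[key]] = row
    return merged

existing = {1: {"id": 1, "name": "Alice"}, 2: {"id": 2, "name": "Bob"}}
incoming = [{"id": 2, "name": "Bobby"}, {"id": 3, "name": "Cara"}]
result = merge_incremental(existing, incoming, key="id")
```

After the merge, `result` holds three rows: the untouched row 1, the updated row 2, and the appended row 3, which is exactly the state an incremental Delta table should reach without a full reload.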
Thank you again for using the Microsoft Fabric Community Forum.
Here is a screenshot of what I see: you can see I can go into my raw data, but it shows all the files together instead of letting me choose one. My hope was to take this raw data, do some transformations like adding a column for the company name, and then establish incremental refresh in the dataflow.

I can do it via a pipeline as well and have done so for now, but pipelines feel clunkier and need more coding to do incremental refresh. Also, since I need to make some transformations like adding a column (I know I can add a custom column in pipelines as well), my thought was to use Dataflows Gen2 to improve and cleanse the data so it can move from a raw zone to a cleansed data zone. Some online searching/Copilot help is suggesting this is a GoDaddy limitation, but I would love feedback to see if anyone else has had this issue as well.
Hello @pbiadminjenhill,
Thanks for the screenshot that behavior is consistent with how Dataflows Gen2 currently works in Microsoft Fabric.
Although the UI lets you browse into Lakehouse → Files → Raw Data, Dataflows Gen2 still treats this as a generic file source, not a native Lakehouse Files source. That’s why:
All Parquet files in the folder are shown/combined together
You can’t select individual files or use folder structure as partitions
Incremental refresh isn’t available on this source
Gateway prompts can appear
The Lakehouse connector itself only surfaces Delta tables (via the SQL analytics endpoint)
So your approach of using Pipelines/Notebooks to add the company column and materialize the data as Delta tables in the Lakehouse is the correct first step. Once the data is in tables, Dataflows Gen2 is the right tool for cleansing, shaping, and configuring incremental refresh.
This isn’t a GoDaddy, tenant, trial, or region limitation — it’s the current product design boundary for Dataflows Gen2.
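The "add the company column from the folder name" transformation mentioned above can be sketched in pure Python as a simple row-tagging step. This is an illustration of the logic only; in Fabric it would typically be a `withColumn` call in a PySpark Notebook, or a custom column in Dataflows Gen2 once the data is in a table, and the row/column names here are made up.

```python
# Illustrative transformation: tag each row with the company name taken
# from its landing-zone folder, before writing to a cleansed zone.
def tag_with_company(rows: list, company: str) -> list:
    """Return a copy of each row dict with a 'company' column added;
    the input rows are left unmodified."""
    return [{**row, "company": company} for row in rows]

raw = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
tagged = tag_with_company(raw, "Contoso")
```

Tagging at ingestion time means every downstream table already carries its company lineage, so the multi-company data can later be combined or filtered without re-deriving which folder each row came from.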
Hello @pbiadminjenhill Using Dataflow Gen2 can you perform a Get Data > OneLake Catalog > Select your Lakehouse using your Organisational Account? This usually creates a connection to your OneLake and shows all your managed tables as well as all files underneath a folder named Files.
That is what I thought as well. Here is a screenshot: you can see that when I go to Files I have a folder for each company, and underneath that are a bunch of files. For some reason, when I go into a company folder it does not let me go one level lower; it just shows all of them together. At least, online research suggested it might be an issue with the license being bought through GoDaddy vs directly from Microsoft. I really appreciate your help. Thank you!
Hello @pbiadminjenhill
Thank you for the screenshot and for confirming that the files are in Parquet. I have replicated this in my Fabric workspace and can tell you how to load the content of the files using Dataflow Gen2. But bear in mind, incremental refresh won't work, as the files are in Parquet format.
The first change you'd have to do is to use Azure Data Lake Storage Gen2 connector instead of OneLake connector when you do Get Data step. In the Azure Data Lake Storage connection specify the https URI of the sub folder name inside your Raw Data folder. You will get this from your OneLake if you look at the Properties menu of the folder in Files/ section. Press Create to load them into Power Query. You will see one record for each parquet file in the folder. At this stage you can remove _SUCCESS record if present. Press Combine files from the ribbon menu and it will load all the data inside those parquet files.
Thank you for the suggestion. I will definitely try this solution. I do have the option of bringing the file in as CSV, Avro, or Parquet. Should I be thinking of bringing it in as one of those formats so that incremental refresh could work, or are none of those options going to work for incremental refresh? If not incremental refresh, how about a full rewrite? The amount of data is not too crazy: overall across all tables maybe 500k rows. Thank you so much for your advice!
Hello @pbiadminjenhill
Unfortunately, Delta tables (i.e., files stored in Delta format) are what Dataflow Gen2 expects for incremental refresh: CSV, Avro, and Parquet won't work!
This is because Incremental Refresh with Dataflow Gen2 works on Query Folding, and those file types don't allow query folding.
Hello @pbiadminjenhill what is the format of files in those company folders? Did you try clicking the [Table] link under the Content column?