Hello @all
We need to transfer data from an on-prem Oracle table to the Lakehouse. As of today, in Fabric, this is only possible using Dataflow Gen2 (the copy pipeline doesn't support the on-prem gateway yet).
We are using the on-premises data gateway.
The problem is that we cannot import tables with more than 2 million rows and 150 columns (compressed Parquet size is about 500 MB) in Dataflow Gen2.
The flow ran twice and finished with errors:
1 - timed out after ~ 1h 15m
2 - capacity exhausted 🙂
We are using F2 for testing.
The worst part is that Dataflow Gen2 burned almost 140k CUs doing nothing. All the work was done on the on-prem gateway, not in Fabric.
Compared with ADF v2, we were able to import the same table via a self-hosted IR in 10 minutes, using only 0.2 hours of data movement activity (€0.02).
Is there any way to import on-prem data into Fabric directly? Should I wait until Copy pipelines support the on-prem gateway/IR?
Do you have any experience with that?
This is my first post - so greeting Everyone 🙂
All the best,
prom
Hi @prom ,
Thanks for using Fabric Community.
Unfortunately, at the moment the only way to connect to on-premises data is via Dataflow Gen2.
The feature "Connecting to on-premises gateways using pipelines" is still on the roadmap. I will keep you posted on any updates.
Docs to refer to:
What's new and planned for Data Factory in Microsoft Fabric - Microsoft Fabric | Microsoft Learn
How to access on-premises data sources in Data Factory - Microsoft Fabric | Microsoft Learn
Hope this helps. Please let me know if you have any further questions.
Hello @prom ,
We haven't heard back from you on the last response and wanted to check whether you have a resolution yet.
If not, please reply with more details and we will try to help.
Hello,
Thanks for the quick answer. I'll probably wait for Copy pipelines to support the on-prem gateway in Fabric. In the meantime I'll try ADF or a custom Spark cluster as a workaround.
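If the custom Spark cluster route works out, the usual approach is to partition the JDBC read on a numeric key so the ~2 million rows come over several parallel connections instead of one. A rough sketch of how the partition predicates could be built (the function and column names are made up here, and the stride logic only approximates what Spark's `partitionColumn` / `lowerBound` / `upperBound` / `numPartitions` options do):

```python
def jdbc_partition_predicates(column, lower, upper, num_partitions):
    """Split the key range [lower, upper) into num_partitions strides.

    The first partition is open-ended downward (and catches NULL keys)
    and the last is open-ended upward, so no rows outside the sampled
    bounds are lost -- mirroring how Spark's JDBC reader behaves.
    """
    stride = (upper - lower) // num_partitions
    predicates = []
    current = lower
    for i in range(num_partitions):
        if i == 0:
            predicates.append(f"{column} < {current + stride} OR {column} IS NULL")
        elif i == num_partitions - 1:
            predicates.append(f"{column} >= {current}")
        else:
            predicates.append(f"{column} >= {current} AND {column} < {current + stride}")
        current += stride
    return predicates
```

Each predicate then becomes one parallel `SELECT ... WHERE <predicate>` against Oracle, e.g. via `spark.read.jdbc(..., predicates=...)`.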
Best Regards
@prom one thing you could try is disabling staging for your entities; it should at least complete without running twice.
We are aware of an issue that may slow down ingestion into the Lakehouse if your data contains large strings (>4k characters in a cell) or columns that are primarily nulls. If that is what you are running into, the next gateway patch will have a fix that should speed ingestion up by another 40%.
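If you want to check whether your table matches that pattern before the patch ships, a quick scan over a sample of rows fetched from Oracle would do. A rough, stand-alone sketch (the helper name and thresholds are made up for illustration):

```python
def flag_slow_columns(rows, large_cell=4000, null_ratio=0.9):
    """Flag columns that may hit the known gateway slowdown:
    string cells longer than `large_cell` characters, or columns
    that are almost entirely NULL. `rows` is a list of dicts
    representing a sample of the source table."""
    counts = {}   # rows seen per column
    nulls = {}    # NULL count per column
    large = set() # columns with at least one oversized string
    for row in rows:
        for col, val in row.items():
            counts[col] = counts.get(col, 0) + 1
            if val is None:
                nulls[col] = nulls.get(col, 0) + 1
            elif isinstance(val, str) and len(val) > large_cell:
                large.add(col)
    mostly_null = {c for c, n in nulls.items() if n / counts[c] >= null_ratio}
    return large, mostly_null
```

Run it on a few thousand sampled rows; if either set comes back non-empty, the table likely exhibits the pattern described above.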
Hi @prom ,
We haven't heard back from you on the last response and wanted to check whether we answered your query.
If not, please reply with more details and we will try to help.
Hello,
I'm using an ADF copy pipeline with a Lakehouse destination for now. It works pretty well with the on-prem IR, but I'm still waiting for native Fabric functionality with equal throughput and performance.
Regards
Prom