If the data source is MS SQL Server (on-premises), can I perform ETL in Microsoft Fabric without using a Lakehouse or Warehouse (no cloud storage)?
After the ETL is done, the data needs to be stored back in the on-premises MS SQL Server.
Hello @rajasekaro,
Thank you for reaching out to the Microsoft Fabric Forum Community.
Yes, it’s possible to perform ETL in Microsoft Fabric with an on-premises MS SQL Server as both the source and destination without using a Lakehouse, Warehouse, or cloud storage. Here’s how you can achieve this:
- Use Fabric Pipelines and Dataflow Gen2: connect to the on-premises SQL Server through the on-premises data gateway and use it as both the source and the destination of your copy and transformation logic.
- Use a Notebook for more control: extract, transform, and load the data in code. Connect to the SQL Server via the gateway, process the data in-memory, and write back to the target table without persisting to OneLake (see the sketch below).
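To make the Notebook option more concrete, here is a minimal sketch, assuming the notebook environment can actually reach the on-premises SQL Server and that pyodbc plus a SQL Server ODBC driver are available; every server, database, table, and column name is a placeholder, not a verified configuration.

```python
# Minimal sketch: extract from on-premises SQL Server, transform in memory,
# and load the result back to the same server. Names below are placeholders.
import pandas as pd
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=my-onprem-server;"      # placeholder: your SQL Server host
    "DATABASE=SalesDB;"             # placeholder database name
    "UID=etl_user;PWD=<secret>;"    # in practice, pull the secret from a key vault
    "Encrypt=yes;TrustServerCertificate=yes;"
)

conn = pyodbc.connect(conn_str)
try:
    # Extract: read the source table into a pandas DataFrame (in memory only).
    df = pd.read_sql("SELECT OrderID, Amount, OrderDate FROM dbo.SourceOrders", conn)

    # Transform: example cleanup and a derived column; replace with real logic.
    df = df.dropna(subset=["Amount"])
    df["OrderYear"] = pd.to_datetime(df["OrderDate"]).dt.year

    # Load: write the result back to a target table on the same SQL Server,
    # without persisting anything to OneLake.
    rows = [
        (int(r.OrderID), float(r.Amount), int(r.OrderYear))
        for r in df.itertuples(index=False)
    ]
    cursor = conn.cursor()
    cursor.fast_executemany = True  # batch the inserts for speed
    cursor.executemany(
        "INSERT INTO dbo.TargetOrders (OrderID, Amount, OrderYear) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()
finally:
    conn.close()
```

Because the transformation happens in pandas, the data stays in the notebook session's memory and is never written to OneLake.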
If this information is helpful, please "Accept as solution" and give kudos so other community members can resolve similar issues more efficiently.
Thank you.
Hello @rajasekaro,
Hope you're doing well. Is the issue resolved or still ongoing? An update would help others in the community facing similar problems.
Thank you.
Hello @rajasekaro,
Hope everything's going great on your end! Just checking in: has the issue been resolved, or are you still running into problems? Sharing an update can really help others facing the same thing.
Thank you.
Hello @rajasekaro,
We hope you're doing well. Could you please confirm whether your issue has been resolved or if you're still facing challenges? Your update will be valuable to the community and may assist others with similar concerns.
Thank you.
Hi @rajasekaro ,
Fabric does not support a direct on-prem SQL → transformation → on-prem SQL ETL pipeline without leveraging cloud storage (Lakehouse/Warehouse). Microsoft Fabric requires data to be ingested into its OneLake storage (via a Lakehouse or Warehouse) first.