<p>Hi FabCon Family!</p>
<p>I am currently diving into the <b>Data Warehouse</b> capabilities within Microsoft Fabric. I am really interested in the performance of the T-SQL engine and how it handles cross-database queries seamlessly with the Lakehouse.</p>
<p>I am also exploring how to best leverage the <b>Synapse Data Warehouse</b> for serving high-performance reporting layers while keeping data management unified.</p>
<p>If anyone is working on migrating legacy warehouses or testing out the performance of the Warehouse endpoint vs. the Lakehouse SQL endpoint, I would love to connect and compare notes.</p>
<p>Happy coding! 💻</p>
Hi @Krishna_11 ,
Welcome to the Fabric community. You have picked one of the most exciting areas to explore.
Since you are looking to compare notes, here are a few key architectural distinctions I have found while testing the Synapse Data Warehouse (DW) versus the Lakehouse SQL Endpoint:
1. The "Read/Write" Divide (Crucial for Migration)
Warehouse: This is your traditional T-SQL engine. It supports full DML (INSERT, UPDATE, DELETE) and DDL. If your legacy migration relies heavily on Stored Procedures for data transformation, the Warehouse is your natural landing zone.
Lakehouse SQL Endpoint: This is strictly Read-Only for user tables. You cannot run an UPDATE statement here. It is designed to serve data that was already engineered (usually via Spark/Notebooks or Dataflows).
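To make the divide concrete, here is a minimal T-SQL sketch; the table and column names (dbo.DimCustomer, Region, CustomerKey) are hypothetical:

```sql
-- Connected to a Fabric Warehouse: full DML is supported.
UPDATE dbo.DimCustomer
SET    Region = 'EMEA'
WHERE  CustomerKey = 1001;

-- Connected to a Lakehouse SQL Endpoint: the same statement is
-- rejected, because user tables are read-only there. Writes have
-- to happen upstream, e.g. in a Spark notebook or a Dataflow.
```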
2. Performance and The Engine
Warehouse: It uses a specialized, fully managed SQL engine designed for high concurrency and strict ACID compliance. It handles complex joins across large datasets very well because it manages its own transaction logs and distribution.
Lakehouse: The SQL Endpoint is fantastic for "Direct Lake" scenarios where you want to read Delta Parquet files instantly without importing them. However, for heavy-duty reporting with complex logic, the Warehouse often produces more predictable query plans.
3. Cross-Database Queries
You hit the nail on the head; this is the "superpower."
In Fabric, you can write a query in your Warehouse that joins a table from a Lakehouse and a view from another Warehouse using simple 3-part naming (database.schema.table).
Tip: Since data doesn't move (Zero-Copy), performance is generally constrained only by the size of the data and the complexity of the join, not by network latency between servers.
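For instance, a sketch of such a cross-item join, using hypothetical item names (SalesWarehouse, BronzeLakehouse, FinanceWarehouse) and hypothetical tables:

```sql
-- One query spanning two Warehouses and a Lakehouse via
-- three-part naming; no data is copied between the items.
SELECT  o.OrderID,
        c.CustomerName,
        f.ExchangeRate
FROM    SalesWarehouse.dbo.FactOrders       AS o
JOIN    BronzeLakehouse.dbo.DimCustomer     AS c
          ON c.CustomerKey  = o.CustomerKey
JOIN    FinanceWarehouse.dbo.vCurrentRates  AS f
          ON f.CurrencyCode = o.CurrencyCode;
```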
Question for you: What is your legacy source? Are you migrating from an on-premises SQL Server or a cloud appliance like a Synapse Dedicated Pool? That usually dictates which path (DW vs. Lakehouse) is smoother.
Happy coding!
If my response provided a good starting point, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
This response was assisted by AI for translation and formatting purposes.
Hello @Krishna_11
Hope this helps. I would appreciate a Kudos, or your accepting this as a Solution.
Hi @Krishna_11,
Thank you for reaching out to the Microsoft Fabric Community Forum. Also, thanks to @Thomaslleblanc, @deborshi_nag, and @burakkaragoz for their inputs on this thread.
Has your issue been resolved? If the responses provided by @Thomaslleblanc, @deborshi_nag, or @burakkaragoz addressed your query, could you please confirm? It helps us ensure that the solutions provided are effective and beneficial for everyone.
Hope this helps clarify things. Let me know what you find after giving these steps a try; I'm happy to help you investigate further.
Thank you for using the Microsoft Community Forum.
Hi @Krishna_11,
Just wanted to follow up. If the shared guidance worked for you, that's wonderful; hopefully it also helps others looking for similar answers. If there's anything else you'd like to explore or clarify, don't hesitate to reach out.
Thank you.
In Fabric, the Warehouse's T-SQL engine (built on the same SQL runtime as Synapse) gives you strong performance for BI-ready serving, while the Lakehouse SQL endpoint is ideal for flexible, ELT-friendly exploration. For cross-database querying with a Lakehouse, use OneLake shortcuts and three-part naming to keep data unified and push joins and filters down efficiently.