Krishna_11
Advocate I

<p><b>Exploring the Data Warehouse experience in Fabric 🏭</b></p>

<p>Hi FabCon Family! 👋</p>

<p>I am currently diving into the <b>Data Warehouse</b> capabilities within Microsoft Fabric. I am really interested in the performance of the T-SQL engine and how it handles cross-database queries seamlessly with the Lakehouse.</p>

<p>I am also exploring how to best leverage the <b>Synapse Data Warehouse</b> for serving high-performance reporting layers while keeping data management unified.</p>

<p>If anyone is working on migrating legacy warehouses or testing out the performance of the Warehouse endpoint vs. the Lakehouse SQL endpoint, I would love to connect and compare notes.</p>

<p>Happy coding! 💻</p>

2 ACCEPTED SOLUTIONS
burakkaragoz
Super User

Hi @Krishna_11 ,

Welcome to the Fabric community. You have picked one of the most exciting areas to explore.

Since you are looking to compare notes, here are a few key architectural distinctions I have found while testing the Synapse Data Warehouse (DW) versus the Lakehouse SQL Endpoint:

1. The "Read/Write" Divide (Crucial for Migration)

  • Warehouse: This is your traditional T-SQL engine. It supports full DML (INSERT, UPDATE, DELETE) and DDL. If your legacy migration relies heavily on Stored Procedures for data transformation, the Warehouse is your natural landing zone.

  • Lakehouse SQL Endpoint: This is strictly Read-Only for user tables. You cannot run an UPDATE statement here. It is designed to serve data that was already engineered (usually via Spark/Notebooks or Dataflows).
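To make the divide concrete, here is a minimal T-SQL sketch (the table and column names are hypothetical, for illustration only):

```sql
-- Runs fine in a Warehouse: full DML is supported.
UPDATE dbo.DimCustomer
SET    Segment = 'Enterprise'
WHERE  AnnualRevenue > 1000000;

-- The same statement against a Lakehouse SQL analytics endpoint fails,
-- because user tables there are read-only. You would instead modify the
-- underlying Delta table upstream, e.g. from a Spark notebook or Dataflow.
```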

2. Performance and The Engine

  • Warehouse: It uses a specialized, fully managed SQL engine designed for high concurrency and strict ACID compliance. It handles complex joins across large datasets very well because it manages its own transaction logs and distribution.

  • Lakehouse: The SQL Endpoint is fantastic for "Direct Lake" scenarios where you want to read Delta Parquet files instantly without importing them. However, for heavy-duty reporting with complex logic, the Warehouse often provides a more predictable query plan optimization.

3. Cross-Database Queries

You hit the nail on the head: this is the "superpower."

  • In Fabric, you can write a query in your Warehouse that joins a table from a Lakehouse and a view from another Warehouse using simple 3-part naming (database.schema.table).

  • Tip: Since data doesn't move (Zero-Copy), performance is generally constrained only by the size of the data and the complexity of the join, not by network latency between servers.
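A sketch of what such a query can look like, assuming hypothetical items named SalesWarehouse (the current Warehouse), BronzeLakehouse, and FinanceWarehouse, all in the same workspace:

```sql
-- Join a local Warehouse table, a Lakehouse table, and a view from
-- another Warehouse in one statement via three-part names.
SELECT  o.OrderID,
        o.Amount,
        c.Region,
        f.FiscalPeriod
FROM    dbo.FactOrders                  AS o   -- local Warehouse table
JOIN    BronzeLakehouse.dbo.DimCustomer AS c   -- Lakehouse table
        ON o.CustomerID = c.CustomerID
JOIN    FinanceWarehouse.dbo.vwCalendar AS f   -- view in another Warehouse
        ON o.OrderDate = f.CalendarDate;
```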

Question for you: What is your legacy source? Are you migrating from an on-premises SQL Server or a cloud appliance like Synapse Dedicated Pool? That usually dictates which path (DW vs. Lakehouse) is smoother.

Happy coding!


If my response provided a good starting point, kindly mark it as the Accepted Solution to assist others. Additionally, I would be grateful for a 'Kudos' if you found my response helpful.
This response was assisted by AI for translation and formatting purposes.


deborshi_nag
Memorable Member

Hello @Krishna_11 

 

In Microsoft Fabric Warehouses, cross-database / cross-warehouse querying is a built-in T-SQL capability that lets you join and select from other Warehouses (and SQL analytics endpoints of Lakehouses) within the same workspace, all over the shared OneLake storage.
 
1) Use three-part naming and add the target database to your Explorer
From the SQL query editor of a Warehouse, you first add other Warehouses (or Lakehouse SQL endpoints) via + Warehouses in the Explorer. Once added, you can reference objects using database.schema.table (three-part names) in ordinary T-SQL.
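As a minimal sketch (the Lakehouse and table names below are hypothetical), once the item has been added in Explorer:

```sql
-- Reference a Lakehouse SQL endpoint table with a three-part name
-- from the Warehouse's query editor.
SELECT TOP 10 CustomerID, Region
FROM   BronzeLakehouse.dbo.DimCustomer;
```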
 
2) Scope: current workspace
Cross-database querying is supported within the active Fabric workspace. You reference the added items in Explorer and write joins across them in a single query tab.
 
3) Cross-workspace? Use a shortcut workaround
Direct cross-workspace Warehouse-to-Warehouse queries aren't supported today. A common workaround is to create a Lakehouse shortcut in your local workspace that points to the remote tables, then query the Lakehouse SQL endpoint from your Warehouse.
 

What you can (and canโ€™t) do

  • Read across databases/warehouses: You can SELECT and JOIN across Warehouses and Lakehouse SQL endpoints via three-part names.
  • Write semantics: Write operations are fully supported inside a Warehouse; however, when you reference external items (e.g., another Warehouse or a Lakehouse SQL endpoint), operations are generally read-only. Community guidance highlights using SELECT … INTO to materialize data locally if you need repeated access or DML on the combined result.
  • Tooling: You can run cross-database queries in the Fabric SQL query editor (and Visual query editor). SSMS / VS Code (mssql) can also connect via T-SQL connection strings to the Warehouse surface.
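The SELECT … INTO pattern mentioned above can be sketched as follows (all item, table, and column names are hypothetical):

```sql
-- Materialize a cross-database join into a local Warehouse table so
-- that further DML can run against the combined result.
SELECT  o.OrderID, o.Amount, c.Region
INTO    dbo.StagedOrders                       -- created in the local Warehouse
FROM    dbo.FactOrders                  AS o
JOIN    BronzeLakehouse.dbo.DimCustomer AS c   -- external, read-only source
        ON o.CustomerID = c.CustomerID;

-- DML now works, because dbo.StagedOrders is a local Warehouse table:
UPDATE dbo.StagedOrders
SET    Region = 'EMEA'
WHERE  Region = 'Europe';
```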

 

Performance and governance tips

  • Filter early: Use WHERE, TOP, and selective columns to reduce data movement when joining external objects.
  • Stage for repeatability: For recurrent workloads, stage external data (SELECT … INTO), then index/cluster locally to improve latency and concurrency.
  • Direct Lake for BI: For downstream Power BI, consider Direct Lake semantic models that can pull from warehouses and lakehouses without duplicating data, while still delivering in-memory speed (VertiPaq) for visuals.


I trust this will be helpful. If you found this guidance useful, you are welcome to acknowledge with a Kudos or by marking it as a Solution.


5 REPLIES
v-kpoloju-msft
Community Support

Hi @Krishna_11

Thank you for reaching out to the Microsoft Fabric Community Forum. Also, thanks to @Thomaslleblanc, @deborshi_nag, and @burakkaragoz for their inputs on this thread.

Has your issue been resolved? If the responses provided by the community members @Thomaslleblanc, @deborshi_nag, and @burakkaragoz addressed your query, could you please confirm? It helps us ensure that the solutions provided are effective and beneficial for everyone.

Hope this helps clarify things. Let me know what you find after giving these steps a try; I'm happy to help you investigate this further.

Thank you for using the Microsoft Community Forum.

Hi @Krishna_11,

Just wanted to follow up. If the shared guidance worked for you, that's wonderful; hopefully it also helps others looking for similar answers. If there's anything else you'd like to explore or clarify, don't hesitate to reach out.

Thank you.

Thomaslleblanc
Super User

In Fabric, the Warehouse's T-SQL engine (built on the same SQL runtime as Synapse) gives you strong performance for BI-ready serving, while the Lakehouse SQL endpoint is ideal for flexible, ELT-friendly exploration. For cross-database querying with the Lakehouse, use OneLake shortcuts + external tables to keep data unified and push joins/filters down efficiently.

Practical tips:

  • Use Warehouse for semantic, curated reporting layers; Lakehouse SQL for broad data access and staging.
  • Enable result set caching and materialize key views (e.g., CTAS into Warehouse) for hot paths.
  • Test with TPC-DS style workloads: compare join performance, concurrency, and cost of complex aggregations on both endpoints.
  • For migrations: land in Delta in OneLake, map to Warehouse tables (or external tables), then incrementally replace legacy star schemas.
  • Monitor with Query Insights and workload management to tune concurrency and memory.
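The CTAS materialization mentioned in the tips above can be sketched like this (view and table names are hypothetical):

```sql
-- CREATE TABLE AS SELECT (CTAS): materialize a hot reporting view
-- into a Warehouse table so repeated dashboard queries hit a
-- precomputed result instead of re-running the aggregation.
CREATE TABLE dbo.SalesByRegionSnapshot
AS
SELECT  Region,
        SUM(Amount) AS TotalSales,
        COUNT(*)    AS OrderCount
FROM    dbo.vwSalesByRegion
GROUP BY Region;
```

A scheduled pipeline or notebook would typically refresh such a snapshot table on the cadence the reports need.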

 

