We have tables loaded into a Fabric Lakehouse. We have three requirements below and need help identifying the correct approach to achieve them.
(1) Build data models by joining tables (in sql or other options if available).
(2) The data model/joined query should be able to be used in the notebook in another workspace.
(3) Ideally the query should do the join on the fly so that the output is close to real-time.
We researched a few options but still have not been able to find the perfect way.
We are wondering if there is any option to achieve what we need. Could you please let us know if we are missing something?
Hi @1maximax1,
Thanks for reaching out to the Microsoft forum community.
We appreciate your efforts and thank you for sharing the update on the issue. Could you please confirm if your query has been resolved? If so, kindly mark the helpful response and accept it as the solution. This will help other community members resolve similar issues more efficiently.
Thank you.
Hi @nilendraFabric ,
Thank you very much for the reply and help on this topic! I want to get more clarification for the option #1 in your response.
In option 1, when you say SQL views, do you mean the view in the screenshot below?
Question 1. I found that I can't use the view in a PySpark notebook. Is it possible to read the view into a PySpark DataFrame?
Question 2. When I set up a shortcut from another Lakehouse, I am not able to create a shortcut for the view; the only shortcut targets available are tables and files. Is it possible to create a shortcut to a view?
hello @1maximax1
You can query a view using Spark SQL and load the results into a DataFrame:
df = spark.sql("SELECT * FROM your_view_name")
If you need to access a view from another Lakehouse, you might need to consider one of these:
1. Creating a shortcut to the underlying table(s) that the view is based on.
2. Recreating the view logic in your current Lakehouse.
3. Using Spark SQL to query the view in the source Lakehouse and then working with the resulting DataFrame
Please accept this as the solution and give kudos if it resolved your query.
Thank you very much for the quick response!
Your suggestion for accessing a view from another Lakehouse makes sense!
For reading the view into a Spark DataFrame, I don't think the code below works. This approach only works for tables.
df = spark.sql("SELECT * FROM your_view_name")
I found a way to read the view into a DataFrame in this blog:
https://fabric.guru/querying-sql-endpoint-of-fabric-lakehousewarehouse-in-a-notebook-with-t-sql
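For anyone landing here later, this is roughly the shape of that approach: connect to the Lakehouse's SQL analytics endpoint over T-SQL and pull the view into a pandas DataFrame. This is a hedged sketch, not the blog's exact code — the server/database/view names are placeholders you copy from the Lakehouse settings page, and it assumes you are running inside a Fabric notebook where `notebookutils.mssparkutils` can mint an Entra ID token (elsewhere you would use `azure-identity` instead).

```python
# Sketch: read a Lakehouse SQL-endpoint view into a pandas DataFrame via T-SQL.
# Server, database, and view names below are placeholders, not real values.
import struct


def build_connection_string(server: str, database: str) -> str:
    """Assemble an ODBC connection string for the SQL analytics endpoint."""
    return (
        "Driver={ODBC Driver 18 for SQL Server};"
        f"Server={server},1433;"
        f"Database={database};"
        "Encrypt=yes;TrustServerCertificate=no;"
    )


def read_view(server: str, database: str, view: str):
    """Connect with an Entra ID access token and return the view as a DataFrame."""
    import pandas as pd
    import pyodbc
    # Assumption: running in a Fabric notebook, where this utility exists.
    from notebookutils import mssparkutils

    token = mssparkutils.credentials.getToken(
        "https://database.windows.net/"
    ).encode("utf-16-le")
    SQL_COPT_SS_ACCESS_TOKEN = 1256  # pyodbc pre-connect attribute for AAD tokens
    conn = pyodbc.connect(
        build_connection_string(server, database),
        attrs_before={
            SQL_COPT_SS_ACCESS_TOKEN: struct.pack(
                f"<I{len(token)}s", len(token), token
            )
        },
    )
    return pd.read_sql(f"SELECT * FROM {view}", conn)
```

Because the view is evaluated by the SQL endpoint at query time, the DataFrame you get back reflects the current state of the underlying tables, which also helps with the near real-time requirement.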
Thanks for getting back, @1maximax1.
If this all works, could you please accept the solution so other users in the community can benefit?
Hello @1maximax1
There isn't a perfect solution that accommodates all your needs, but please try these:
Option 1: Cross-Workspace Lakehouse Queries
You can use cross-workspace queries to access data from a Lakehouse in one workspace from another workspace. This approach allows you to:
1. Build data models by joining tables using SQL in the source Lakehouse.
2. Access these joined tables from notebooks in another workspace.
3. Perform joins on-the-fly for near real-time results.
To implement this:
1. Create your data model (joined tables) in the source Lakehouse using SQL views.
2. In the destination workspace, create a shortcut to the source Lakehouse.
3. Use the shortcut in your notebooks to query the views from the source Lakehouse.
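To make concrete why step 1 keeps results near real-time, here is a self-contained sketch of the "join on the fly" behavior of a SQL view, using sqlite3 from the Python standard library as a stand-in for the Lakehouse SQL endpoint (all table and column names are invented for illustration):

```python
# A view stores only the join logic, not the joined data, so every query
# against it runs the join against the current base tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders    (order_id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (customer_id INTEGER, name TEXT);

    -- The "data model": a view joining the two tables. No data is copied.
    CREATE VIEW order_details AS
    SELECT o.order_id, c.name, o.amount
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id;
""")
conn.execute("INSERT INTO customers VALUES (1, 'Contoso')")
conn.execute("INSERT INTO orders VALUES (100, 1, 25.0)")

print(conn.execute("SELECT * FROM order_details").fetchall())
# -> [(100, 'Contoso', 25.0)]

# A later insert into a base table is visible through the view immediately:
conn.execute("INSERT INTO orders VALUES (101, 1, 40.0)")
print(conn.execute("SELECT COUNT(*) FROM order_details").fetchall())
# -> [(2,)]
```

The same principle applies in Fabric: querying the view through the SQL endpoint (directly or via a shortcut) computes the join at query time, so the output tracks the source tables.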
Pros:
• Allows cross-workspace data access
• Supports on-the-fly joins for near real-time data
• No need to duplicate data
Cons:
• Performance may be impacted for complex joins
• Requires proper access permissions across workspaces
Option 2: Fabric SQL Database with Cross-Workspace Access
Another approach is to use a Fabric SQL Database:
1. Create a SQL Database in your source workspace.
2. Build your data models using SQL views or stored procedures in this database.
3. Use the SQL analytics endpoint to query this database from other workspaces.
Pros:
• Supports complex SQL logic
• Can be queried in near real-time
• Accessible across workspaces
Cons:
• Requires setting up and managing a SQL Database
• May have some performance overhead for very large datasets
Given your requirements, Option 1 (Cross-Workspace Lakehouse Queries) seems to be the best fit. It allows you to build data models using SQL, access them from notebooks in other workspaces, and perform joins on the fly for near real-time results.
Please give kudos and accept this solution if it resolves your query.
thanks