girishtharwani2
Helper II

Serverless views in Fabric

Hi,

I have loaded data from a source into the Data Lake (ADLS) using a Fabric pipeline. In Synapse, we used to query ADLS data using serverless views. Is there a similar way to utilize serverless views in Fabric?

 

Thanks,
Girish

1 ACCEPTED SOLUTION

Hello @girishtharwani2,

Thank you for your detailed explanation regarding your query.

 

I would also like to take a moment to personally thank @nilendraFabric for actively participating in the community forum and for the solutions you’ve been sharing. Your contributions make a real difference.

 

In Fabric Lakehouse, while there isn't a direct equivalent to Synapse Serverless SQL for dynamically querying partitioned data, you can achieve similar functionality using Lakehouse Shortcuts with Delta tables or dynamic queries with metadata tables.

 

Here are some approaches to consider:

  • If your data in ADLS follows the Delta format, Fabric Lakehouse natively supports querying Delta tables, allowing you to use SQL to filter the latest partition dynamically.
  • If you are working with Parquet files, you can use Fabric Notebook with PySpark to create a metadata table that dynamically tracks partition locations.
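To make the second bullet concrete, here is a minimal pure-Python sketch (not from the original thread; the folder layout and date format are assumptions) of the "pick the latest partition" logic a Fabric notebook could apply to folder names, for example those listed with `mssparkutils.fs.ls()`, before registering or querying that path:

```python
from datetime import datetime

def latest_partition(folders, fmt="%Y-%m-%d"):
    """Return the folder name that parses as the most recent date.

    `folders` is a list of partition folder names (assumed to be
    date-formatted, e.g. '2025-01-31'); non-date entries such as
    '_delta_log' are skipped.
    """
    dated = []
    for name in folders:
        try:
            dated.append((datetime.strptime(name, fmt), name))
        except ValueError:
            continue  # not a date-named partition folder
    if not dated:
        raise ValueError("no date-formatted partition folders found")
    # max() compares the parsed datetime first, so this is the newest folder
    return max(dated)[1]

# Example: pick the newest daily partition from a listing
folders = ["2025-01-29", "2025-01-30", "2025-01-31", "_delta_log"]
print(latest_partition(folders))  # → 2025-01-31
```

The resolved folder name can then be appended to the shortcut or table path that the rest of this thread describes.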

I hope this resolves your issue. If you need any further assistance, feel free to reach out.

 

If this post helps, then please give it Kudos and consider accepting it as a solution to help other members find it more quickly.

 

Thank you. 


6 REPLIES 6
girishtharwani2
Helper II

@nilendraFabric It seems the feature is yet to be released. Is there any other alternative I can leverage?

 

Thanks,

Girish

There are two options.

The easiest way is to create a shortcut to ADLS:

Select New Shortcut → Azure Data Lake Storage Gen2 → Provide ADLS URL.

Then query it like this:

 

SELECT * FROM Lakehouse.DemoShortcut.sales_data;

 

The other way is to use the `COPY INTO` command, the most direct method for bulk-loading data into a Fabric Warehouse or Lakehouse table:

 

COPY INTO dbo.RetailSales
FROM 'https://<storage-account>.dfs.core.windows.net/<container>/sales/*.parquet'
WITH (
    FILE_TYPE = 'PARQUET',
    CREDENTIAL = (IDENTITY = 'Managed Identity')
);

 

Hope this helps

 

Please accept this solution if it is helpful and give kudos.

 

Thanks for the inputs.

In Synapse serverless, I partitioned the files by date and used to pick the latest folder, as the date was handled dynamically.

[Screenshot: date-partitioned folder structure in ADLS]

I see I can manually create a table on ADLS, but that would be a one-time setup. Is there a way I can achieve in the Lakehouse functionality similar to what I get in serverless?

 

Thanks,

Girish

 

Anonymous
Not applicable

Hi @girishtharwani2 
Thank you for reaching out to the Microsoft Fabric Community Forum.
I wanted to check if you had the opportunity to review the information provided by @nilendraFabric. Please feel free to contact us if you have any further questions. If his response has addressed your query, please accept it as a solution and give a 'Kudos' so other members can easily find it.
Thank you.

nilendraFabric
Super User

Hello @girishtharwani2 


You can use the SQL endpoint with OPENROWSET.

This functionality will be available through the OPENROWSET function.

Estimated release timeline: Q1 2025

Release Type: Public preview

Fabric DW enables users to use the OPENROWSET function to read data from files in the lake. A simple example of the OPENROWSET function is:

 
SELECT *
FROM OPENROWSET ( BULK '<file path>' )
WITH ( <column definition> )

The OPENROWSET function will read the content of the file(s) at the given <file path> and return it. This function enables easy browsing and previewing of files before ingestion.
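Since the original question was about picking the latest date partition dynamically, here is a small pure-Python sketch (an illustration, not from the thread; the path layout, table-free query shape, and helper name are assumptions) of how a notebook or pipeline step could build the OPENROWSET query text for the newest date-named folder and then submit it through the SQL endpoint:

```python
def build_openrowset_query(base_url, partitions):
    """Build an OPENROWSET query targeting the newest date-named partition.

    `partitions` is a list of ISO-date folder names ('YYYY-MM-DD');
    ISO dates sort chronologically, so max() picks the latest one.
    """
    latest = max(partitions)
    return (
        "SELECT * FROM OPENROWSET ( "
        f"BULK '{base_url}/{latest}/*.parquet' )"
    )

# Example with the placeholder URL style used earlier in this thread
partitions = ["2025-01-29", "2025-01-30", "2025-01-31"]
sql = build_openrowset_query(
    "https://<storage-account>.dfs.core.windows.net/<container>/sales",
    partitions,
)
print(sql)
```

The generated statement would then be run against the Lakehouse SQL endpoint once the OPENROWSET preview is available.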

 

https://learn.microsoft.com/en-us/fabric/release-plan/data-warehouse

 

Please see if this is helpful, and try running OPENROWSET.

 
