rgsalido
Frequent Visitor

Direct Lake limitations

Hello everyone,
I have a table in the lakehouse with 1.5 billion rows at a minute-level granularity. I can't create a report due to the Direct Lake limitation. What do you think would be the best way to approach this from Power BI Desktop?
Any reference links I could read on the subject?

Thanks

1 ACCEPTED SOLUTION
Ritaf1983
Super User

Hi @rgsalido 

Direct Lake does support very large tables, but it still has hard guardrails that depend on the Fabric capacity SKU. Even on high SKUs (F64/F128), a fact table with 1.5B minute-level rows is at the edge of the supported limit and often won’t load in Direct Lake. When the engine detects too many rowgroups/files or an oversized table, it will either fall back to DirectQuery or simply refuse to load if fallback is disabled.
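If you want to see how close the table is to those guardrails before changing the design, you can inspect the Delta table's metadata from a notebook. A minimal sketch, assuming a Fabric notebook attached to the lakehouse and a table named fact_events (the table name is just a placeholder):

# DESCRIBE DETAIL is standard Delta Lake SQL; it reports the file count and
# total size that the Direct Lake guardrails are sensitive to.
detail = spark.sql("DESCRIBE DETAIL fact_events").select("numFiles", "sizeInBytes")
detail.show()

# Row count, to compare against the per-table row limit for your capacity SKU.
row_count = spark.table("fact_events").count()
print(f"rows: {row_count:,}")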

For this type of workload (telemetry / high-frequency events), the recommended pattern is:

Create an aggregated table (hourly/daily) and use that table in your Direct Lake semantic model (a PySpark sketch follows this list).

Keep the raw 1.5B-row minute-level table in the lakehouse and access it via DirectQuery only for detailed pages or drill-through.

Optionally keep dimensions in Import for better performance (composite model).
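
To illustrate the first step, here is a minimal PySpark sketch of the hourly aggregation. The table and column names (fact_events, event_time, device_id, value) are placeholders for your actual schema:

from pyspark.sql import functions as F

# Raw 1.5B-row, minute-level fact table in the lakehouse (placeholder name).
raw = spark.table("fact_events")

# Roll minutes up to hours and pre-aggregate the measures you report on.
hourly = (
    raw
    .withColumn("event_hour", F.date_trunc("hour", F.col("event_time")))
    .groupBy("event_hour", "device_id")
    .agg(
        F.sum("value").alias("value_sum"),
        F.count("*").alias("event_count"),
    )
)

# Persist as a Delta table; this is the table the Direct Lake semantic model uses.
hourly.write.mode("overwrite").format("delta").saveAsTable("fact_events_hourly")

Depending on your reporting needs, a daily grain may shrink the table even further.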

This pattern aligns with Microsoft’s guidance for Direct Lake: use it for analytical models and aggregated facts, not for raw high-granularity event streams.

So even with Fabric capacity, the best approach is:
Direct Lake for the aggregated table + DirectQuery for the raw detail table.

If this post helps, please consider accepting it as the solution to help other members find it more quickly.

Regards,
Rita Fainshtein | Microsoft MVP
https://www.linkedin.com/in/rita-fainshtein/
Blog : https://www.madeiradata.com/profile/ritaf/profile


5 REPLIES
rgsalido
Frequent Visitor

We are using an F64 capacity. I am going to test in DQ mode and create aggregated tables.

v-prasare
Community Support

Hi @rgsalido,

We would like to confirm whether our community member's answer resolves your query or if you need further help. If you still have any questions or need more support, please feel free to let us know. We are happy to help you.

Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support

v-prasare
Community Support

Hi @rgsalido,

We would like to confirm whether our community member's answer resolves your query or if you need further help. If you still have any questions or need more support, please feel free to let us know. We are happy to help you.

@Ritaf1983 & @amitchandak, thanks for your prompt responses.

Thank you for your patience; we look forward to hearing from you.
Best Regards,
Prashanth Are
MS Fabric community support

amitchandak
Super User

@rgsalido, what capacity are you using? I think from F64 onward, you should not have an issue.


You can use DirectQuery mode. Import mode should also work well from F64 onward in this case.

Share with Power BI Enthusiasts: Full Power BI Video (20 Hours) YouTube
Microsoft Fabric Series 60+ Videos YouTube
Microsoft Fabric Hindi End to End YouTube
