mmace1
Impactful Individual

Direct Query to Dataflows: What would the downsides be? | Seems to solve any dataset size limits

As of the June 2021 release of Power BI, DirectQuery to dataflows is now generally available (in Premium). Just offhand:

Pros:

  • Your dataset can stay small if everything you pull in is a dataflow - so even with 100s of millions of fact rows and 100s of thousands of entries in your dimensions, no problem.

Cons:

  • DirectQuery, so no adding columns, whether in Power Query or in DAX.
  • Performance?

Seems like a big deal. Then again, I'm not sure how many places have the volume of (actually relevant) high-cardinality data that my shop does, and/or whether I'm missing something.

1 ACCEPTED SOLUTION
jeffshieldsdev
Solution Sage

The data goes into a table in a managed SQL store. I'm not sure whether any indexes are created automatically.

We've found performance slow, but only in POC testing, and that was with a table with high cardinality and lengthy text (URLs).

I imagine you have to build dataflows that precisely fit your use case, and probably leverage aggregations in your model too. Creating lookup tables and importing those in a composite model may also improve performance.
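As a sketch of the pre-aggregation idea: a dataflow entity can group the detailed fact data before the model ever queries it, so DirectQuery hits a much smaller table. The entity and column names here (`Sales`, `DateKey`, `ProductKey`, `SalesAmount`) are hypothetical stand-ins, not anything from this thread:

```m
let
    // Reference the detailed fact entity in the same workspace (hypothetical name)
    Source = Sales,
    // Pre-aggregate to the grain your reports actually query
    Grouped = Table.Group(
        Source,
        {"DateKey", "ProductKey"},
        {
            {"SalesAmount", each List.Sum([SalesAmount]), type number},
            {"RowCount", each Table.RowCount(_), Int64.Type}
        }
    )
in
    Grouped
```

You'd then point the model's aggregation table (or the DirectQuery source itself) at this grouped entity instead of the raw facts.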


5 REPLIES

@jeffshieldsdev 

Thanks for the feedback on testing speed.

Our biggest, covers-most-things model, which is at the most granular level, just went over 1 GB. So given we're on P1 Premium, we're doing *OK* at the moment, but the stupid company is growing quickly.

It's the nature of our business as well - we have 2 different dimensions that are in the millions of rows and growing, so even grouping the facts still leaves a ton.

By lookup table, do you mean just the dimensions in a proper denormalized star schema, or...?




Yes, that's right--put the dimensions you'll want in slicers into their own table in Import mode and relate to your fact table.
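For instance, an Import-mode dimension entity can be derived from the same dataflow by selecting the slicer columns and de-duplicating. Again a sketch with hypothetical names (`Sales`, `ProductKey`, `ProductName`):

```m
let
    // Reference the same detailed fact entity (hypothetical name)
    Source = Sales,
    // Keep only the columns you want available in slicers
    Selected = Table.SelectColumns(Source, {"ProductKey", "ProductName"}),
    // One row per key value, suitable for the one-side of the relationship
    Dimension = Table.Distinct(Selected)
in
    Dimension
```

In the composite model, this entity would be set to Import storage mode and related to the DirectQuery fact table on `ProductKey`.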


And importing them - got it.  Thanks. 
