MaxItaly
Helper III

Best Practices with very large SQL Server Views

Hello everyone,

I'd like to ask your opinion about how to handle very large SQL Server Views.

We use SQL Server, and we have some views that operate on a large fact table (15+ million rows).

We'd like to create some reports with Power BI based on them.

My first thought was to create a table into which I import the results of the view, and then use that table as the data source (to limit possible problems).
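
Roughly speaking, I materialize the view with something like this (the view and table names here are just placeholders):

-- Rebuild the staging table from the view
IF OBJECT_ID('dbo.FactFromView', 'U') IS NOT NULL
    DROP TABLE dbo.FactFromView;

SELECT *
INTO dbo.FactFromView
FROM dbo.vw_LargeFact;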

The data import was quite slow, but it worked (~250 MB .pbix file), and I also managed to publish the dataset.

I thought I'd keep working this way, but then I realized that the table I created in SQL Server was something like 15+ GB.

I can't afford to create tables this big, and, furthermore, I discovered that Power BI can't automatically refresh the dataset because of the 10+ GB of raw data.

How would you act in such a situation?
If I try to import directly from the view, I run into errors (like timeouts), and even if the import worked, I guess I still wouldn't be able to refresh the data.

If I use DirectQuery against the view, it takes way too long to render a report, making Power BI unusable (SQL Server takes 40+ minutes to process the view).

Do I have to use DirectQuery against the table created from the view?

Do I have other possibilities, like trying to reduce the raw data by creating "normalized keys"?

I've never encountered this problem before, so I'd like to hear your best practices about it...

Many thanks...!


4 REPLIES
Greg_Deckler
Community Champion

Well, you could post the SQL here and see if it can be optimized. Otherwise, the general recommendation is that you create a cube and hook Power BI to that. 




Yes, that's a possibility... but I'd like to avoid it, since the data is quite dirty and I wanted to move the calculation work to Power BI.
If I created a cube on my data and imported it into Power BI, would the cube weigh much less than importing the raw data?

That might partially solve my problems: I could create the cubes from the tables created from the views. That way I could import the cubes into Power BI, but those tables would still take up a lot of space in the SQL Server database 😞.

ImkeF
Community Champion
(Accepted solution)

Normalizing is an option worth trying, not only because it reduces the total amount of data, but also because the Tabular engine likes it better. Simply put: Tabular has no problem with long (narrow) tables, but it tends to slow down with (even short) wide tables.

 

So everything that will reduce the number of columns (and their cardinality) will help you here.
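
For illustration, the idea could look something like this in T-SQL (all table and column names below are made up, just to show the pattern):

-- 1) Build a small dimension from the distinct values of a wide text column
SELECT
    ROW_NUMBER() OVER (ORDER BY CustomerName) AS CustomerKey,
    CustomerName
INTO dbo.DimCustomer
FROM (SELECT DISTINCT CustomerName FROM dbo.vw_LargeFact) AS d;

-- 2) Rebuild the fact with the small integer key instead of the text column,
--    keeping only the columns that are really needed for reporting
SELECT
    dc.CustomerKey,
    f.OrderDate,
    f.Amount
INTO dbo.FactSales_Narrow
FROM dbo.vw_LargeFact AS f
JOIN dbo.DimCustomer AS dc
    ON dc.CustomerName = f.CustomerName;

In Power BI you then relate the narrow fact table to the dimension on the integer key, so the 15+ million row fact only carries keys and measures, which usually compresses a lot better.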

Imke Feldmann (The BIccountant)


I guess I'll try normalizing the data...

20+ columns for a fact table are way too many, and I guess normalizing is easier than creating a cube and using it through DirectQuery.
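
To figure out which columns to tackle first, I'll probably just check the cardinality of each one with something like this (column names are only examples):

-- Count distinct values per column; high-cardinality text columns are the
-- first candidates for moving into dimension tables
SELECT
    COUNT(DISTINCT CustomerName) AS DistinctCustomers,
    COUNT(DISTINCT ProductName)  AS DistinctProducts,
    COUNT(DISTINCT OrderNumber)  AS DistinctOrders
FROM dbo.vw_LargeFact;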

Thanks.
