Hi, I have a large (50M+ rows) data model stored in BigQuery. I created a view to develop my model in Power BI Desktop, and it worked fine, even importing all the rows into my Power BI Desktop file; it took a while, but it worked. I then published the model to the Power BI Service, but the data refresh is taking too long, and only on Power BI's side. The query ran successfully on Google BigQuery in just 45 seconds, and now, more than an hour later, Power BI is still refreshing the dataset. Is there a way to improve the connection between Google and Microsoft?
Time spent on BigQuery: (screenshot)
Power BI refresh still in progress: (screenshot)
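For context on where the time goes: the 45 seconds reported by BigQuery is just the query execution time, while an import refresh in Power BI also has to download every row over the network. A minimal Python sketch (using the google-cloud-bigquery client; the project, dataset, and view names are hypothetical stand-ins) that separates the two phases:

```python
import time
from google.cloud import bigquery

# Hypothetical view name, standing in for the BigQuery view behind the model.
SQL = "SELECT * FROM `my-project.my_dataset.my_large_view`"

client = bigquery.Client()

# Phase 1: run the query and wait for the job to finish on BigQuery's side.
# This corresponds roughly to the ~45 seconds shown in the BigQuery console.
t0 = time.perf_counter()
job = client.query(SQL)
job.result()
print(f"Query execution: {time.perf_counter() - t0:.1f}s")

# Phase 2: stream the full result set down to the client. An import-mode
# refresh has to do the equivalent of this for all 50M+ rows, which is
# where the hours go if the effective transfer rate is low.
t0 = time.perf_counter()
row_count = sum(1 for _ in job.result())  # iterates every row over the wire
print(f"Download: {time.perf_counter() - t0:.1f}s for {row_count:,} rows")
```

If the second phase dominates, the bottleneck is the transfer from BigQuery to the Power BI Service rather than anything on the BigQuery side.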
Hi @FelipeDBA, why is the transfer speed so low? Transferring 20 GB in an hour is very slow 😞 @otravers we have this kind of use case as well, where we can't use incremental refresh because historical data changes every day.
Hi @FelipeDBA ,
You can use incremental refresh and real-time data for datasets. With it, datasets with potentially billions of rows can grow without having to fully refresh the entire dataset on every refresh operation.
You can also manage the storage mode. Storage mode lets you control whether Power BI Desktop caches table data in memory for reports. Tables that aren't cached don't consume memory for caching, which enables interactive analysis over datasets that are too large or too expensive to cache completely in memory. You can choose which tables are worth caching and which aren't. A rough sketch of the incremental refresh idea follows below.
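To make the incremental refresh idea concrete: Power BI filters each partition between its RangeStart and RangeEnd boundaries, so only a recent window is re-queried from BigQuery instead of the full history. A rough Python sketch of the kind of filtered query this folds down to (the table name, date column, and 7-day window are hypothetical):

```python
from datetime import date, timedelta
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical: refresh only the last 7 days instead of the full history,
# which is roughly what the RangeStart/RangeEnd filter folds into.
range_start = date.today() - timedelta(days=7)
range_end = date.today() + timedelta(days=1)

sql = """
    SELECT *
    FROM `my-project.my_dataset.my_large_view`
    WHERE event_date >= @range_start
      AND event_date <  @range_end
"""
job = client.query(
    sql,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("range_start", "DATE", range_start),
            bigquery.ScalarQueryParameter("range_end", "DATE", range_end),
        ]
    ),
)
print(f"Rows in the incremental window: {job.result().total_rows:,}")
```

In Power BI itself the filter comes from the RangeStart/RangeEnd datetime parameters; the key point is that it folds back to BigQuery, so only the filtered rows are transferred on each refresh.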
For more details, please refer to:
Incremental refresh and real-time data for datasets
Configure incremental refresh and real-time data
Manage storage mode in Power BI Desktop
Best Regards,
Jianbo Li
If this post helps, then please consider accepting it as the solution to help other members find it more quickly.
Did the refresh eventually complete? I'd look at incremental refresh to avoid reloading 50M+ rows each time, assuming most of these rows are now historical records.
Why not focus on solving the real issue, which is the transfer speed?
Is the dataset in a Pro or Premium workspace?